Darth_Infernus
Contributor
sunbiz_3000 said:
I have been reading AV-Comparatives for the past 3 hours and studied their different sample sets and techniques. It's a test mainly based on zoo samples, which IMO is a good way, but isn't the best way. They could in fact try modifying some existing virus source code and writing a few new samples, then test... but they say it's unethical!
Also, I couldn't find their sample set anywhere to try and replicate their test results... Do you know where I can find it? I found AV-Test.org · Tests of Anti-Virus- and Security-Software to be a very good resource while reading this site... They are a lot more scientific and have provided their sample set... I'll be trying it out and matching their results!
Modifying the code of a piece of malware essentially creates new malware. This is looked down upon by the AV industry and serves very little purpose in the real world, except when you are trying to study the heuristic engines or variant detection of the various AVs.
Regarding AV-Comparatives, you cannot get their sample set. They do not base it on the WildList or any other such organization, and only the vendors who participate in the AV-Comparatives tests receive the samples. Every sample is run and verified to be executable in order to prevent false detections; files that do not execute correctly are discarded.
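Just to illustrate the idea of pre-filtering a sample set (this is not AV-Comparatives' actual tooling, and they run the samples rather than only inspecting headers), here is a rough Python sketch that keeps only files with a valid PE structure; the directory names and the header check are my own assumptions:

```python
# Hypothetical pre-filter for a malware test set: keep only files that at
# least have a structurally valid MZ/PE header, and set everything else aside.
# A real test lab actually executes the samples instead of just checking headers.
import os
import shutil
import struct

SAMPLE_DIR = "incoming_samples"   # assumed folder layout
KEEP_DIR = "verified_samples"
DISCARD_DIR = "discarded_samples"

def looks_like_pe(path):
    """Return True if the file starts with MZ and points to a PE signature."""
    try:
        with open(path, "rb") as f:
            if f.read(2) != b"MZ":
                return False
            f.seek(0x3C)
            e_lfanew = struct.unpack("<I", f.read(4))[0]
            f.seek(e_lfanew)
            return f.read(4) == b"PE\x00\x00"
    except (OSError, struct.error):
        return False

os.makedirs(KEEP_DIR, exist_ok=True)
os.makedirs(DISCARD_DIR, exist_ok=True)

for name in os.listdir(SAMPLE_DIR):
    src = os.path.join(SAMPLE_DIR, name)
    if not os.path.isfile(src):
        continue
    dest = KEEP_DIR if looks_like_pe(src) else DISCARD_DIR
    shutil.move(src, os.path.join(dest, name))
```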
For the heuristic (Retrospective/Proactive) tests, the products are tested with three-month-old updates against the new malware samples received during that three-month period. Since those samples appeared after the last signature update, any detection must come from heuristics rather than signatures, so simply counting the detections gives an effective gauge of which AV has the best heuristics.
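To make the counting part concrete, here is a tiny sketch of how such a retrospective score could be tallied once you have per-sample verdicts from the frozen product; the result format here is entirely made up:

```python
# Hypothetical tally of a retrospective/proactive test: because the product's
# updates are frozen, every hit on malware newer than the freeze date must
# come from heuristic/generic detection rather than a fresh signature.
results = {
    # sample name -> True if the three-month-old product flagged it
    "sample_001.exe": True,
    "sample_002.exe": False,
    "sample_003.exe": True,
}

detected = sum(1 for hit in results.values() if hit)
total = len(results)
rate = 100.0 * detected / total if total else 0.0
print(f"Proactive detection: {detected}/{total} ({rate:.1f}%)")
```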
Rest assured that I place a lot of faith in AV-Comparatives, as well as in the individual tester who prepares the test results.
When I prepare a list of favourite AVs, I do not list them in any specific order. I take the following things into consideration:
- Price and licensing
- Support
- Detection rates according to tests
- Features
I also have my own sample set, which I use only for sending undetected samples to various AV companies for analysis. I do not judge any AV's performance by how much it detects in my sample set.
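For anyone curious what that workflow looks like in practice, it boils down to scanning the set, collecting whatever the product missed, and packaging it for the vendor. A minimal sketch, where "scanner-cli" is a placeholder command and exit code 0 is assumed to mean "clean":

```python
# Hypothetical workflow: scan a private sample set and archive only the files
# the product missed, so they can be submitted to the vendor for analysis.
# "scanner-cli" is a placeholder scanner; exit code 0 is assumed to mean clean.
import os
import subprocess
import zipfile

SAMPLE_DIR = "my_samples"        # assumed location of the private sample set
SUBMISSION_ZIP = "undetected.zip"

undetected = []
for name in sorted(os.listdir(SAMPLE_DIR)):
    path = os.path.join(SAMPLE_DIR, name)
    result = subprocess.run(["scanner-cli", path], capture_output=True)
    if result.returncode == 0:   # scanner reported no detection
        undetected.append(path)

with zipfile.ZipFile(SUBMISSION_ZIP, "w") as zf:
    for path in undetected:
        zf.write(path, arcname=os.path.basename(path))

print(f"{len(undetected)} undetected samples packaged for submission")
```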
@sunbiz: KAV has been known to have poor heuristics. Besides, repacking should not have affected KAV at all, since KAV's unpacking engine and signature engine are independent of each other. The engine should detect the packer first, strip it, and only then look up the signature database, so the detection rate should not change, because the signatures do not depend on the packer. Your findings suggest either a misconfiguration or that KAV simply did not support some of the packers you used.
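To show why repacking alone should not move the needle for an unpack-then-match engine, here is a toy model where "packing" is just zlib compression standing in for UPX and friends; this is an illustration of the argument, not Kaspersky's real engine:

```python
# Toy model of an unpack-then-match engine: if the unpacker supports the
# packer, the signature lookup always sees the same underlying payload, so
# repacking does not change the verdict. zlib stands in for a real packer.
import hashlib
import zlib

SIGNATURES = set()  # SHA-256 hashes of known (unpacked) payloads

def identify_packer(blob):
    """Pretend packer identification: 'zlib' stands in for a real packer."""
    try:
        zlib.decompress(blob)
        return "zlib"
    except zlib.error:
        return None

def unpack(blob, packer):
    return zlib.decompress(blob) if packer == "zlib" else blob

def scan(blob):
    packer = identify_packer(blob)
    payload = unpack(blob, packer) if packer else blob
    return hashlib.sha256(payload).hexdigest() in SIGNATURES

payload = b"malicious payload bytes"
SIGNATURES.add(hashlib.sha256(payload).hexdigest())
assert scan(payload)                   # plain sample is detected
assert scan(zlib.compress(payload))    # repacked sample, same verdict
```

If detection does drop after repacking, the simplest explanation is that the packer layer was never stripped, which is exactly the unsupported-packer case described above.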