Hi Vlk, I disagree with you.

  1. Only about half of the cases point directly to binaries/files; the rest are exploits. Among your misses you surely also encountered some exploits, not only direct links. The “problem” is (and it is even written in the report) that practically all products (including, of course, Avast) are good at blocking/detecting exploits/drive-by downloads. That is also why the percentages are so high. If you look at the latest research from Microsoft, the biggest issue for users is not 0-day exploits (according to their paper, their share is close to 0%) but socially engineered malware, which also includes tricking users into clicking links that point to files. If you miss malware from the web, the test will and does reflect that. But I am glad to hear that the next version will improve further in this regard.
  2. Too few samples: others use 10 samples for such a test and base their ratings on that. We usually use about 50 times that size. Arguing that the sample size is too small doesn't sound fair. If it were 1 million, someone would say "who surfs to 1 million malicious sites…?", missing the whole point.
  3. How the user-dependent cases are interpreted is up to the reader. I do not believe that a product which asks the user about everything should get the same score as a product which is able to distinguish between malware and goodware without leaving the decision to the user. Anyway, only in chart 2 can you sort based on the green bar; in chart 3 you can combine blocked + user-dependent.
  4. I expected that Whole Product Dynamic Tests would also be criticized in the future (like any other test) if the scores are unfavorable for someone, despite the internal promotion of such sophisticated tests.