I have recently studied the results of the Proactive Security Challenge. I am surprised that Avast Internet Security scored 3%, which is near the bottom of the pack.
I would appreciate some help from others in understanding these results.
Totally disregard it; the results are flawed. In one test, I believe they crippled AIS to run its firewall as a standalone component, when it is designed to work together with the other AIS components.
Not to mention that this is a totally unrealistic test, as no regular user runs AIS that way.
Today I studied coffee, comparing black java to coffee with cream and sugar added.
The results of this study were much more fulfilling than Matousec's.
Matousec essentially tests HIPS, unless something has changed since I last checked. And Avast! has no HIPS. So perhaps the better question isn’t “Why did Avast! score so low?” but rather “Why is Avast! even being tested for something it does not have?”
It is not mere firewall testing but rather HIPS testing. That said, the Avast firewall has a long way to go to become a very strong one such as Outpost or Online Armor.
Why are you stating that as a fact when you are not even familiar with avast’s firewall?
The avast firewall was designed for easy usability. It does its job quietly, without any user interaction (unless you prefer to have more control, in which case you can set it to Ask mode). It defends against hacker attacks very well, has its own application control (so no constant popups), and strong self-protection. Here’s a review that talks about it and gives you a good idea of how it works.
Actually, the Matousec tests aren’t that bad if you understand what they are about.
The source is included, so if there is a real need to incorporate such protection, it’s just a question of will and time …
Of course, suites and products which aren’t complex HIPS and variants will fail …
Sadly, it failed again yesterday, but there are signs of improvement, so I’m sure that will change rapidly after the final v6 hits …
Re-tests are paid if a company wants one outside the free test periods.
IMO the only thing wrong is the methodology, which uses “incremental level” passes:
an application can fail at a low level even while being perfect on all higher-level tests.
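To illustrate the complaint about incremental levels, here is a toy sketch (my own hypothetical model, not Matousec’s actual code or exact scoring formula) of how level-gated scoring can punish a single low-level failure, assuming a product must pass every test at one level before the next level is run:

```python
def incremental_score(results_by_level):
    """Toy model of level-gated scoring.

    results_by_level: list of lists of booleans, ordered from the
    lowest test level to the highest. A product only advances to the
    next level after passing every test at the current one.
    Returns the percentage of all tests credited, rounded.
    """
    total = sum(len(level) for level in results_by_level)
    passed = 0
    for level in results_by_level:
        passed += sum(level)
        if not all(level):  # one failure at this level ...
            break           # ... and no higher-level tests are run
    return round(100 * passed / total)

# Hypothetical product: fails one low-level test but would ace
# everything above it -- the higher levels are never counted.
results = [[True, False], [True, True], [True, True]]
print(incremental_score(results))  # prints 17, not 83
```

This is why a product can look far weaker in the final percentage than its overall protection would suggest: the score reflects how far it got through the gate, not how many tests it could pass in total.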