0xDEADBEEF

Question about AVC real-world test

29 posts in this topic

I have noticed that ESET has ranked relatively low in AVC's real-world tests (from Feb. to Jun.). I am just wondering if this is due to ESET's relatively conservative detection strategy.

Of course, the number of samples they use in the real-world test is pretty low (~400), and many times the detection rates of different products are pretty close (so the number of missed samples is actually very small). I have read David Harley's article about AV tests and understand that sampling bias and many other factors might affect a product's detection results in a test. But several months of similar rankings still make me wonder if ESET missed more than other vendors (of course, the FP rate is always very decent compared to others). Another thing I noticed is that ESET seems to be a bit conservative in detecting macro malware. Is this because ESET prefers to deal with it at a later defense layer (like when the payload is actually downloaded)?

Another mysterious thing is that the performance test results from AV-TEST and AV-Comparatives are utterly opposite. Is this because of differences in their testing strategies?


Posted (edited)

Could be AV-Comparatives' use of Win 7 in these latest tests.

Eset employs all the latest security enhancements - ELAM, AMSI, etc. - built into Win 10. Therefore, AV-Test and Virus Bulletin tests on Win 10 show better scores. Additionally, ESET is consistently a top performer in SE Labs (UK) and AVLab (Poland) tests.

Edited by itman


Posted (edited)

20 minutes ago, itman said:

Could be AV-Comparatives' use of Win 7 in these latest tests.

Eset employs all the latest security enhancements - ELAM, AMSI, etc. - built into Win 10. Therefore, AV-Test and Virus Bulletin tests on Win 10 show better scores. Additionally, ESET is consistently a top performer in SE Labs (UK) and AVLab (Poland) tests.

The OS might be a factor. But if this is the case, since there is still a decent number of people using Win 7, it does not make sense to provide weaker protection on one system but not the other.

Region (sampling bias) might also be a factor... but if that is the case, it is not a good explanation for North American users.

 

VB employs a quite different testing methodology (the sample count is small, and it only tests static detection, while AVC's real-world test seems to exercise all protection layers, which is where ESET's dynamic detection comes in). I am not familiar with the other EU tests; I will take a look.

 

As for the performance impact score, ESET has a very bad score in AV-TEST but a very good score in AVC... This is really funny. My experience is that ESET is very lightweight (supported by the very low power consumption shown on the energy meter of a Windows tablet), but apparently some tests disagree with this.

Edited by 0xDEADBEEF


I thought AV-Comparatives was the only one that was impartial with its reviews.

@0xDEADBEEF Likewise, I don't think the OS version should really dictate the outcome of the test, if the same sample sets are being used on all the products tested.

 


Posted (edited)

4 hours ago, cyberhash said:

I thought AV-Comparatives was the only one that was impartial with its reviews.

@0xDEADBEEF Likewise, I don't think the OS version should really dictate the outcome of the test, if the same sample sets are being used on all the products tested.

 

I don't either. Actually, I care more about what kinds of samples ESET missed each time. Even though it is a very small portion of the whole sample set AVC uses each time, the consistent miss ratio makes me curious whether they are of the same type or not.

I personally don't care too much about the AMTSO org results unless they look too bad. But I have indeed heard some people post negative comments about ESET, saying "it performs even worse than the free Windows Defender"... so I think some reasonable explanations are good to have.

Edited by 0xDEADBEEF

50 minutes ago, 0xDEADBEEF said:

But I have indeed heard some people post negative comments about ESET, saying "it performs even worse than the free Windows Defender"...

These comments usually originate from "sources" that also quote MSE's "miraculous" improvement in detection scores in recent AV-C tests - scores not registered in other recent AV lab tests of MSE on Win 7.

Starting to see a "pattern" here? Draw your own conclusions.


I think "real world experiences" paint a better picture than any tests can ever achieve. From my own personal experience, I have never been infected by anything major since using ESET's products, which I have done since their first version of NOD32.

Then again, the weak point in any security app is more likely to be the user - opening an email with a PDF/Word document attached, trying to download pirated products, or visiting bad sites/links.

Back when everyone was a member of Wilders Security Forums it was easier to draw a conclusion as to what performed better, as you had users of every security product giving opinions and feedback to a wider audience in one place.

Then on top of detection, you have to look at other matters that affect each product, like false positives, system impact, borked updates, and bad definitions, including flagging of Windows files as bad.

ESET has done consistently well in these areas too, and that should always be taken into consideration when making a choice/purchase.

I don't doubt that Microsoft is improving, as you now have forced telemetry built into your OS that was never there before, in addition to them now trying to draw more attention to their own security products.

But I still get the feeling that you won't beat a company whose sole business is security and that has been in that business for a long time.

 


Thanks for reading my mind, CH ;). This topic is ancient but keeps rearing its head yearly.

Test results can be easily skewed. "Seat of the pants" daily use is the true test.

I quit chasing test results long ago and am happy where I am with ESS.


Posted (edited)

2 hours ago, cyberhash said:

I think "real world experiences" paint a better picture than any tests can ever achieve. From my own personal experience, I have never been infected by anything major since using ESET's products, which I have done since their first version of NOD32.

Then again, the weak point in any security app is more likely to be the user - opening an email with a PDF/Word document attached, trying to download pirated products, or visiting bad sites/links.

Back when everyone was a member of Wilders Security Forums it was easier to draw a conclusion as to what performed better, as you had users of every security product giving opinions and feedback to a wider audience in one place.

Then on top of detection, you have to look at other matters that affect each product, like false positives, system impact, borked updates, and bad definitions, including flagging of Windows files as bad.

ESET has done consistently well in these areas too, and that should always be taken into consideration when making a choice/purchase.

I don't doubt that Microsoft is improving, as you now have forced telemetry built into your OS that was never there before, in addition to them now trying to draw more attention to their own security products.

But I still get the feeling that you won't beat a company whose sole business is security and that has been in that business for a long time.

 

Personal experience is never representative. As some articles have pointed out, no test can directly guide a particular user's choice of AV product, especially when correlated with his/her usage pattern. My computer got infected by web trojans several times while using NOD32 with full protection (back at version 3.0), but that doesn't mean anything to other people, nor does any other individual's personal experience.

That's why we need third-party tests. Generally, a test assumes a single user facing all possible (sampled) threats and measures the probability of infection. I personally view it as an evaluation of "the worst possible case."

Well, ESET indeed does exceptionally well in balancing FPs and detection rate. But when a certified third-party test consistently shows it ranking relatively low in detection rate compared to other products, there should be some explanation. On the tester's side, this could be due to bad samples, biased samples, inappropriate testing methodology, etc. But it might also be due to real missing pieces on the vendor's side. I personally tend to suspect the sample quality first, before I question the security product. But since I can get no more info from their reports (especially for the system performance impact evaluation), I can only post here and hope someone can give a more convincing explanation.

My view is: if any test raises an issue, there should be some explanation. It could be the issue with the test itself, it could be the issue with the product, or it could be both.

P.S. I've personally played with Microsoft's heur engine, and I know how funnily it performs :P 

Edited by 0xDEADBEEF


Posted (edited)

Like @TomFace said above, this topic has frequented every security forum since the start of time, and there is no definitive answer on testing methodology. They don't even give the names of the samples used in the tests.

Wilders was a good place, as it was "numerous" personal experiences which added up to a more informed and collective point of view. It's a bit like security "politics": personally, I think there is more information to be drawn from a large user base of different products than from a single PDF/spreadsheet written up by one person/organisation.

Just because one person likes orange juice in their test does not mean everyone will. Give the orange juice to 20,000 people and you get back a less biased result.

I've seen it happen before: people see a datasheet of some test results, jump ship to another product, and then regret it.

Not just in the security world - it happens with phones, TVs, and cars too.

There is probably very little difference in the capability of the top AV suites out there if it was all looked at from a balanced point of view.

Hence why all vendors offer a free trial period to allow all people from all places to evaluate the software before making a choice to purchase.

If you install something and use it for a long period of time and it's trouble-free, does the job it was intended to do, and works the way you want it to, then there really is no need to look for alternatives. This rule goes for any vendor's product, and not just for users of ESET products, in case you think I am biased in any way.

If at any point along the way my system was infected and I found out that another product could have prevented it, then I would certainly re-evaluate my position. Plus, I would also consider the way my system was compromised. If I opened an email that had an attachment promising me $/£1,000,000, I should really blame myself.

Realtime "user interaction" protection will never be available, and the user is still the biggest threat by far if you want to go down the bar-chart/pie-chart/stats route.





 

Edited by cyberhash

Posted (edited)

"Personal experience is never representative.".........but it's the biggest factor in making a decision as to what service to use.

Some folks require repeated validation of known (past) experiences. Those folks most likely subscribe to magazines that tell them what to buy (like cars).

Other folks just go buy the car they want (after a test drive).
 

I choose not to suffer from analysis paralysis.

 

 

Edited by TomFace

13 minutes ago, TomFace said:

"Personal experience is never representative.".........but it's the biggest factor in making a decision as to what service to use.

Some folks require repeated validation of known (past) experiences. Those folks most likely subscribe to magazines that tell them what to buy (like cars).

Other folks just go buy the car they want (after a test drive).
 

I choose not to suffer from analysis paralysis.

 

 

Personal experience can be helpful, but not all the time. When you do a short-term test drive, you never know if there is some hidden engine problem that might surface and lead to disaster later. For a layman, finding a dealer with a good reputation might be the safer option.

Similarly, encountering cyber-threats might be rare, and the threats are usually stealthy. Trying things out is definitely necessary, but the loss induced by these attacks might be too much to afford. Relying on trustworthy reviews with a systematic testing approach is a natural way to help make decisions.

We see similar things in testing processor performance: product A could outperform B in metric 1, but could also be beaten in metric 2. This is natural, because many things are just too complex to be measured by a single standard. I don't think security products are an exception. I'd prefer to gradually learn and interpret these results and metrics (which might contradict each other if one only looks at the numbers) in an objective way that fits my needs, instead of simply dismissing them.


It is also about time to explain recent policy changes in the AV lab industry in regard to testing, which were vividly illustrated in this recent AV-C test.

AMTSO is a policy-making organization that all the major AV labs are members of; it is charged with making AV product security recommendations. AMTSO has recently decided that the "ultimate evil" in regard to AV products is the false positive (FP). They base this conclusion on studies showing that, in the majority of instances where a security status decision is left to the average consumer PC user, the user will make the wrong decision: allow a malicious process or deny valid process execution.

The AV labs have responded accordingly to the AMTSO FP guidelines and are presently implementing them. It should be noted that AV labs have penalized AV products for FPs before, but it appears this has been amplified by the recent AMTSO recommendation. Nowhere is this more evident than in the number of FPs Emsisoft received in the recent AV-C test and the resulting penalty to its product ranking score.

For those not familiar with Emsisoft, it uses both conventional signature detection and an aggressive behavior blocker as its malware detection mechanisms. The behavior blocker performs conventional reputational scanning but goes beyond that to continuously monitor any process with unknown or low reputational status for known malware behavior. If such activity is detected, it alerts the user with a recommendation as to what action should be performed: block or allow.

It is fairly obvious that "cherry picking" of test samples was performed by AV-C in the recent test in regard to FPs. This conclusion is based on the relatively small test sample size in proportion to the number of FPs Emsisoft received. It is obvious to me that AV-C specifically searched for new, unknown or relatively unknown valid applications whose processing characteristics involve modification of critical system directories, registry areas, and the like - exactly what would generate an alert from Emsisoft's behavior blocker.

The above is the quandary AV vendors face in their approach to reputation-based behavior analysis detection. Vendors covet high AV lab test scores, since they are a major evaluation criterion used by individual and enterprise security purchasers. Eset has consistently scored the lowest for FPs. However, that FP score comes at the price of missing detection of the low-reputation malicious processes noted above.

Finally, one might ask why products like Bitdefender, Kaspersky, and others had high test scores. The reason is that those products use a HIPS plus local and cloud-based behavior analysis with sandboxing in their detection processing.

The bottom line is that AV security, like most security mechanisms, is contradictory in nature. The greater the security measures employed, the more restrictive those measures become, with a corresponding impact on the subjects encountering them. I personally subscribe to the philosophy of "it is better to err on the side of caution than to pay the price of not doing so."

-EDIT-

One additional comment.

In "a perfect security detection world," all malware would be detected automatically, without any user interaction necessary. The AI/next-generation solutions are making such claims. Recent scientific research has debunked most of those claims. Further evidence is given by CrowdStrike's scores in the AV-C test. The reality is that such perfection will never be achievable, due to the uncorrectable and permanent security deficiencies that exist in the Windows desktop OS architecture.

Edited by itman

3 hours ago, itman said:

Finally, one might ask why products like Bitdefender, Kaspersky, and others had high test scores. The reason is that those products use a HIPS plus local and cloud-based behavior analysis with sandboxing in their detection processing.

Thanks for the info. I understand that one should always balance detection against FPs, and sometimes sacrificing a bit of detection rate is unavoidable for usability. However, I also see that some products like Bitdefender or Kaspersky achieved a good detection rate while also maintaining a decent FP rate in AVC (Kaspersky's FP count is on par with ESET's, and Bitdefender's is slightly higher). Doesn't this imply that sacrificing less detection while achieving a similar FP rate is doable? Especially if the FP test samples are cherry-picked toward the gray zone, it implies their ways of suppressing FPs also work pretty well.

They have something that ESET currently doesn't. Kaspersky has a cloud-based HIPS and a rollback mechanism. Bitdefender has a complex weight-based process and inter-process scoring algorithm to deal with post-execution scenarios. Although AMS is a similar technique, it still suffers in some cases that other behavior blockers may handle well. I know it is much easier said than done to implement additional protection layers without introducing more FPs, but I just hope ESET can get better. It has a decent HIPS module and a good reputation system; maybe it is a good idea to build more on these infrastructures.
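For what it's worth, the weight-based scoring idea mentioned above can be sketched roughly as follows. This is purely illustrative, not Bitdefender's (or ESET's) actual algorithm: the action names, weights, and threshold are all invented. The idea is simply that each suspicious behavior adds weight, and a process is flagged once its cumulative score crosses a threshold.

```python
# Toy sketch of a weight-based behavior scorer. All names and numbers
# below are hypothetical, chosen only to illustrate the mechanism.
WEIGHTS = {
    "writes_to_system32": 40,        # modifies a critical system directory
    "modifies_run_key": 30,          # adds a registry autostart entry
    "spawns_script_host": 25,        # e.g. launches a script interpreter
    "reads_browser_credentials": 50, # touches stored credentials
}
THRESHOLD = 80  # cumulative score at which the process is flagged

def score_process(observed_actions):
    """Return (score, verdict) for a list of observed action names."""
    score = sum(WEIGHTS.get(a, 0) for a in observed_actions)
    return score, ("flag" if score >= THRESHOLD else "allow")

print(score_process(["modifies_run_key", "spawns_script_host"]))        # (55, 'allow')
print(score_process(["writes_to_system32", "reads_browser_credentials"]))  # (90, 'flag')
```

The appeal of this scheme is that no single benign-looking action triggers an alert; only a suspicious combination does, which is one way to keep FPs down while still catching post-execution behavior.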

oh, but I really love ESET's low perf impact and good compatibility with sandboxes

Edited by 0xDEADBEEF


With Microsoft "fanboys" salivating over these AV-C test results, especially the stand-alone June test in which MSE scored 100%, I guess it should be stated what this test is about, per the AV-C test methodology:

We aim to use visible and relevant malicious websites/malware that are currently out there, and present a risk to ordinary users. We usually try to include as many working drive-by exploits as we find - these are usually well covered by practically all major security products, which may be one reason why the scores look relatively high. The rest are URLs that point directly to malware executables; this causes the malware file to be downloaded, thus replicating a scenario in which the user is tricked by social engineering into following links in spam mails or websites, or installing some Trojan or other malicious software.

I see no mention of ransomware, in which both MSE and WD detection are sorely deficient, nor of advanced persistent threat detection capability, etc. With both MSE and WD now performing cloud scanning, this is an ideal test platform for them.

Given that true 0-day malware is extremely difficult to obtain, I again state that I believe most of the test malware Eset failed to detect was packed and obfuscated script-based malware. It appears Microsoft's cloud scanning engine employs the same interface used in Win 10 that sandboxes these scripts after they decrypt, for cloud engine scanning.

22 hours ago, cyberhash said:

Like @TomFace said above, this topic has frequented every security forum since the start of time, and there is no definitive answer on testing methodology. They don't even give the names of the samples used in the tests.

Wilders was a good place, as it was "numerous" personal experiences which added up to a more informed and collective point of view. It's a bit like security "politics": personally, I think there is more information to be drawn from a large user base of different products than from a single PDF/spreadsheet written up by one person/organisation.

Just because one person likes orange juice in their test does not mean everyone will. Give the orange juice to 20,000 people and you get back a less biased result.

I've seen it happen before: people see a datasheet of some test results, jump ship to another product, and then regret it.

Not just in the security world - it happens with phones, TVs, and cars too.

There is probably very little difference in the capability of the top AV suites out there if it was all looked at from a balanced point of view.

Hence why all vendors offer a free trial period to allow all people from all places to evaluate the software before making a choice to purchase.

If you install something and use it for a long period of time and it's trouble-free, does the job it was intended to do, and works the way you want it to, then there really is no need to look for alternatives. This rule goes for any vendor's product, and not just for users of ESET products, in case you think I am biased in any way.

If at any point along the way my system was infected and I found out that another product could have prevented it, then I would certainly re-evaluate my position. Plus, I would also consider the way my system was compromised. If I opened an email that had an attachment promising me $/£1,000,000, I should really blame myself.

Realtime "user interaction" protection will never be available, and the user is still the biggest threat by far if you want to go down the bar-chart/pie-chart/stats route.





 

You hit on one of the biggest issues I see a lot. When people get infected, they don't ask the right questions - where did the infection come from, how did I get infected, etc. They ask how it got through their security program. For example, if you constantly visit dangerous sites, even with protection, the risk of infection rises. I've seen people move from security program to security program because they keep getting themselves infected.

4 hours ago, peteyt said:

You hit on one of the biggest issues I see a lot. When people get infected, they don't ask the right questions - where did the infection come from, how did I get infected, etc. They ask how it got through their security program. For example, if you constantly visit dangerous sites, even with protection, the risk of infection rises. I've seen people move from security program to security program because they keep getting themselves infected.

This is why I stick by my explanation based on the average web user, and real user experiences being a far better indication of a product's reliability. I think of myself as being quite active on the internet, and I probably visit at most 30 different websites per day - clean websites, if you would like to call them that (e.g. my bank, credit card, energy provider, forums, security and news sites, to name a few).

I would never achieve visiting 400 sites per day like these AVC tests are doing, and neither would I purposely visit 400 sites that are hosting some form of malware. Of course detection is important, but risk-taking plays just as big a role.

A cut-and-paste of their own words:

Preparation for every testing day

Every morning, any available security software updates are downloaded and installed, and a new base image is made for that day. Before each test case is carried out, the products have some time to download and install newer updates which have just been released, as well as to load their protection modules (which in several cases takes some minutes). If a major signature update for a product is made available during the day, but fails to download/install before each test case starts, the product will at least have the signatures that were available at the start of the day. This replicates the situation of an ordinary user in the real world.

Sounds cynical, but I don't think their "ordinary user in the real world" is either ordinary or in the real world, as knowingly visiting 400 malware-laden websites is something only someone on a path of self-created destruction would do.

This in turn leads to over-inflated and artificial results.

@0xDEADBEEF

The HIPS that ESET uses is more powerful than most people give it credit for, but it's set up by default for a home user (one who's not on the path to destruction as above). Everyone can go in and edit the HIPS settings for themselves and make their system more robust than at its basic (safe and non-intrusive) settings.

ESET products have also been at the front line of many major ransomware outbreaks before many of the other major vendors, and this is something that is very much overlooked. ThreatSense was also the first of its kind applied to any home-user product when it was first introduced, and I don't doubt that their cloud services will also improve over time.

Just as threats evolve and change, so do the methods of detection and protection. All the vendors in these tests will probably look at the details of them properly, and that may influence their methods of detection. This is a good thing for every end user, but .......

I think the fact that these forums are not flooded with "Help, I'm infected" posts actually shows that the products ESET makes and sells are working and protecting "ordinary" users very well.

Which leads me back to my first post, where "real user feedback and experiences" are a better indicator of something's performance than a chart.

 

15 hours ago, itman said:

see no mention of ransomware, in which both MSE and WD detection are sorely deficient

Show me ONE detection initiated by the Antiransomware module in ESET v10. In the recent ransomware attack, some other modules reacted.

 

4 hours ago, cyberhash said:

The HIPS that ESET uses is more powerful

In several years of using ESET, I have never had an alert initiated by the HIPS module.

 

15 hours ago, itman said:

June test in which MSE scored 100%,

At the end of the day, this is the reality: MSE, a free antivirus, scored 100% - better than some paid products.

6 hours ago, cyberhash said:

I would never achieve visiting 400 sites per day like these AVC tests are doing, and neither would I purposely visit 400 sites that are hosting some form of malware. Of course detection is important, but risk-taking plays just as big a role.

It is hard for testers to correlate the test cases with each individual user's pattern. They have to assume a virtual user who faces all possible threats equally (and based on the prevalence of each threat). I see no problem with doing this simplification in reality.

6 hours ago, cyberhash said:

The HIPS that ESET uses is more powerful than most people give it credit for, but it's set up by default for a home user(that's not on the path to destruction as above). Everyone can go in and edit the HIPS settings for themselves and make their system more robust, than at its basic (safe and non intrusive) settings.

An antivirus that needs the user's frequent intervention is not an antivirus, but more of a system-control tool. Since detecting malware is, in general, an undecidable problem, it would be ironic to let this responsibility fall back on users, who already paid money to let experts do the job through their product. A good HIPS should not be per-step popups, and plenty of vendors have implemented more intelligent ones and achieved good results.

6 hours ago, cyberhash said:

ESET products have also been at the front line of many major ransomware outbreaks before many of the other major vendors, and this is something that is very much overlooked.

All I can say is that ESET could have done better.

 

Finally, I feel that the statement "the user should blame themselves if their computer got infected" is really weird. Ideally, it is the antivirus's job to help users distinguish good from bad. If users have to force themselves to behave like an "ordinary" user who never visits suspicious websites, I don't think they would bother to pay for the protection. Antivirus gives users more freedom, not shackles. If users get infected, it is their right to question the service they have paid for.

Edited by 0xDEADBEEF

1 hour ago, MSE said:

Show me ONE detection initiated by the Antiransomware module in ESET v10. In the recent ransomware attack, some other modules reacted.

I have seen antiransomware module popups several times when I was testing a batch of lockers/Kovters that had bypassed AMS. I found that the module is tuned to be rather conservative and only reacts to certain types of ransomware, perhaps due to concern about false positives.

1 hour ago, MSE said:

At the end of the day, this is the reality: MSE, a free antivirus, scored 100% - better than some paid products.

That might be true at the end of the day. But MSE still has a long way to go... at least its engine is currently not as strong as you think.

1 hour ago, 0xDEADBEEF said:

I was testing a batch of lockers/Kovters that had bypassed AMS

Ahh, Kovter. One nasty bugger! Per this detailed Malwarebytes analysis: https://blog.malwarebytes.com/threat-analysis/2016/07/untangling-kovter/ with excerpts shown below, HIPS rules are really needed to monitor mshta.exe startup. Most clever is Kovter's startup of it using WMI. It appears Kovter used memory-based reflective DLL injection to inject the DLL payload into WMIPrvSE.exe and run it from there; it appears to be process-hollowing based.

Behavioral analysis

During the initial assessment of some of the Kovter samples we could notice that it is signed by valid Comodo certificate (it got revoked later).

After the sample gets deployed, Kovter runs PowerShell and installs itself in the system.


Observing it via Process Explorer, we can find the command passed to PowerShell. Its purpose is to execute code stored in an environment variable (names are random, new on each run), i.e.:

$env:nvwisqng

Content of the variable is a base64 encoded PowerShell script:


After that initialization phase, we can see PowerShell deploying regsvr32.exe (via which Kovter runs its modules).


Examining the network activity, we can notice many new connections from regsvr32.exe appearing and disappearing.
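The env-variable trick described in the excerpt - stashing a base64-encoded script in a randomly named environment variable and decoding it at run time - can be sketched in Python. This is a harmless illustration of the mechanism, not Kovter's code: the payload is a stand-in, and the variable name just mirrors the random-looking one from the excerpt.

```python
# Sketch of hiding a script in an environment variable as base64 and
# recovering it later, as the excerpt describes PowerShell doing.
import base64
import os

script = 'Write-Output "payload"'  # harmless stand-in for the real script

# "Dropper" side: PowerShell's -EncodedCommand expects UTF-16LE base64,
# so encode accordingly and stash it under a random-looking name.
os.environ["NVWISQNG"] = base64.b64encode(
    script.encode("utf-16-le")
).decode("ascii")

# "Execution" side: the equivalent of powershell.exe decoding $env:nvwisqng.
recovered = base64.b64decode(os.environ["NVWISQNG"]).decode("utf-16-le")
print(recovered)  # Write-Output "payload"
```

Because nothing suspicious is written to disk (the script lives only in the process environment and registry), file-based scanning alone misses it, which is why behavioral monitoring of the script hosts themselves matters here.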

-EDIT- Microsoft has announced that with the fall release of the Win 10 CE update, Windows Defender ATP will protect against process hollowing. So Eset needs to do the same or "be left in the Microsoft dust cloud."

Edited by itman

2 hours ago, MSE said:

In several years of using ESET, I never had an alert initiated by the HIPS module.

Is your HIPS set to automatic mode? You wouldn't get HIPS alerts if it is.

16 minutes ago, 0xDEADBEEF said:

It is hard for testers to correlate the test cases with each individual user's usage pattern. They have to assume a virtual user who faces all possible threats equally (weighted by each threat's prevalence). I see no problem with this simplification in practice.

An antivirus that needs frequent user intervention is not an antivirus, but more of a system control tool. Since detecting malware is itself an undecidable problem, it would be ironic to push that responsibility back onto users, who have already paid to let experts do the job. A good HIPS should not rely on per-step popups; plenty of vendors have implemented more intelligent ones and achieved good results.

All I can say is that ESET could have done better

 

Finally, I feel that the statement "the user should blame themselves if their computer got infected" is really weird. Ideally, it is the antivirus's job to help users distinguish good from bad. If users have to force themselves to behave like an "ordinary" user who never visits suspicious websites, I don't think they would bother paying for protection. Antivirus gives users more freedom, not shackles. If users get infected, it is their right to question the service they have paid for.

 

Yes, protection is what people pay for and why it sells, as it's overall very effective at doing its job. But no antivirus comes with a guarantee against infection, as the goalposts are continually moving, and therefore people should remain cautious while online.

If there were no need to be cautious, there would be no need for any online security whatsoever. Likewise, if antivirus were 100% effective, there would be no need to be cautious. That is why employing good habits online and using an antivirus are both critical to online safety.

It's simple risk reduction and nothing very complex.

I used the term "ordinary" specifically, as I very much doubt even your "baddest of internet bad boys" would ever manage to land on 400 malware-infected sites in a single month, or the 1,955 tested over five months.

 









 


Whereas this current AV-C test primarily assessed effectiveness against web-borne malware, AV-C conducted an earlier test along conventional real-time scanning detection lines here: https://www.av-comparatives.org/wp-content/uploads/2017/04/avc_mpt_201703_en.pdf .

In this test, 37,999 malware samples were used. AV-C also differentiated between detection - both offline, with no Internet connection, and online - and overall protection capability; that is, whether a threat was blocked upon execution even though it was not detected upon creation.

A couple of observations:

1. All vendors scored 99+% in overall protection capability - the most important factor.

2. Eset's scores for detection and protection were identical. This indicates that Eset has no behavioral protection capability, whether conditioned by reputational means or by a process's ongoing execution characteristics. That is, if a process passes Eset's initial heuristic analysis, it is deemed safe.

-EDIT- Now, the problem: malware is increasingly employing sandbox detection and evasion tactics.

It will:

1. Not install itself. Great - problem solved.

2. Employ anti-sandbox evasion techniques, such as entering a program loop executing NOP instructions, or lying dormant until heuristic scanning with sandboxing is no longer detected or expected - for example, executing the malicious code only after a few days or weeks.
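One widely documented variant of the "lie dormant" trick checks whether the environment has fast-forwarded its sleep calls, since many analysis sandboxes accelerate sleeps to save time. A benign Python sketch of the idea (the function name and thresholds are illustrative, not taken from any real sample):

```python
import time

def sleep_was_accelerated(seconds: float = 2.0, tolerance: float = 0.5) -> bool:
    """Return True if a requested sleep returned suspiciously early,
    suggesting a sandbox that fast-forwards timers."""
    start = time.monotonic()
    time.sleep(seconds)
    elapsed = time.monotonic() - start
    # On a normal host, elapsed >= seconds (sleep never returns early);
    # a sleep-skipping sandbox reports far less wall-clock time.
    return elapsed < seconds - tolerance

print(sleep_was_accelerated(0.2, 0.1))  # → False on a normal host
```

Defending against this is hard precisely because the check looks like innocuous timing code, which is why post-execution behavioral monitoring matters.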

Edited by itman

2 hours ago, itman said:

Employ anti-sandboxing evasion techniques such as going into a program loop executing NOP instructions or, just lay dormant until heuristic scanning with sandboxing is no longer detected or expected such as executing malicious code after a few days, weeks, etc..

Well, this is just the ultimate undecidable problem proved in Cohen's work. The attacker can manipulate the control flow of the program so that it triggers only under certain circumstances. If well crafted and aimed at a specific set of targets, this will evade even automatic sandboxes (like Cuckoo) or vendor-side automated sample-processing pipelines.

One solution is to constantly monitor the behavior of untrusted programs even while they are in flight. Many security products have this mechanism. ESET relies heavily on AMS (which is also "constantly monitoring", but is still pre-execution examination of the code). It lacks constant monitoring in HIPS automatic mode (or it is very weak, even counting the anti-ransomware module). The AV-C test you quoted showed a similar result.

Edited by 0xDEADBEEF

