
AV-Comparatives: Real-World Protection Test - November 2017


TomD


34 minutes ago, John Alex said:

MSE/WD was just an example of a free AV getting better detection than a paid one.

There are at least 9 others performing better than ESET, with 1-3 FPs, and some of them are free (Avast!, Avira, Bitdefender, Panda).

This is not the right approach to justifying ESET's failure over and over again.

I would expect ESET developers to contact AV-Comparatives, ask for the undetected samples, analyze them, and provide a documented answer.

"I have been using ESET for X years, and never got infected" is not an answer.

I completely agree with that - a documented answer would help. As I said, it's not just one month; the poor results have been consistent for a while now.


  • Most Valued Members
53 minutes ago, John Alex said:

MSE/WD was just an example of a free AV getting better detection than a paid one.

There are at least 9 others performing better than ESET, with 1-3 FPs, and some of them are free (Avast!, Avira, Bitdefender, Panda).

This is not the right approach to justifying ESET's failure over and over again.

I would expect ESET developers to contact AV-Comparatives, ask for the undetected samples, analyze them, and provide a documented answer.

"I have been using ESET for X years, and never got infected" is not an answer.

The problem is that every testing company will show different results. If you kept switching to different AVs based on test results, you would be switching constantly. Surely real user results are a better gauge than these tests.


1 hour ago, John Alex said:

I would expect ESET developers to contact AV-Comparatives, ask for the undetected samples, analyze them, and provide a documented answer

Actually, they do this on a regular basis.

In another security forum, @Marcos replied that Eset's non-detections in the October test were due to AV-C's use of a handful of malicious coin-miner samples in that test. Eset didn't detect them because PUA protection is not enabled by default at install time. The "fix" was to reclassify these coin miners as Trojans in the real-time scan engine. In other words, Eset is basically accommodating the AV Lab, since most users would have enabled PUA protection at install time.

Remember that AV Labs generate revenue by being "creative" in their sample selection - that is, by finding ways around AV solutions' protections, including their default configurations.

Edited by itman

1 hour ago, itman said:

PUA protection is not enabled at install time

PUA protection is not enabled by default in most antivirus products, if not all of them.


Overall, security products' scores for protection and false-positive capability vary by AV Lab. This is due to the samples used and variations in the testing procedures employed.

In the last quarterly comparative by SE Labs: https://selabs.uk/en/reports/consumers , MSE scored poorly in the protection category but received a 100% score in the false-positive category. On the other hand, Eset received a score of 100% in both categories. This is why I repeatedly state that multiple AV Lab reports need to be reviewed and an average score per product calculated when evaluating security product effectiveness. Additionally, it is common knowledge that security product effectiveness can vary based on the OS version it is installed on, so that also has to be factored in.
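For illustration, here is a minimal sketch of the cross-lab averaging I have in mind. The lab names are real, but every score in it is a made-up placeholder, not a figure from any actual report:

```python
# Hypothetical protection scores (percent) collected by hand from several lab reports.
# Every number here is a placeholder for illustration only, not data from the actual tests.
reports = {
    "AV-Comparatives": {"Eset": 98.9, "Product B": 99.6, "Product C": 97.5},
    "SE Labs":         {"Eset": 100.0, "Product B": 94.0, "Product C": 98.0},
    "MRG Effitas":     {"Eset": 99.5, "Product B": 97.0, "Product C": 99.0},
}

def average_scores(reports):
    """Average each product's protection score across every lab that tested it."""
    totals = {}
    for lab_scores in reports.values():
        for product, score in lab_scores.items():
            totals.setdefault(product, []).append(score)
    return {product: sum(scores) / len(scores) for product, scores in totals.items()}

for product, avg in sorted(average_scores(reports).items(), key=lambda kv: -kv[1]):
    print(f"{product}: {avg:.1f}% average protection across {len(reports)} lab reports")
```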

 

Edited by itman

  • Most Valued Members
50 minutes ago, itman said:

This is why I repeatedly state that multiple AV Lab reports need to be reviewed and an average score per product calculated when evaluating security product effectiveness. Additionally, it is common knowledge that security product effectiveness can vary based on the OS version it is installed on, so that also has to be factored in.

 

Thank you for the clarity.


  • Administrators

We kindly ask you all to stay on topic and refrain from personal attacks.

I think that all who know more about malware will agree that ~99% detection in tests is excellent, and even products that constantly achieve 100% detection in tests do not protect against 100% of malware in reality. Not detecting about 4 out of the several hundred million malware samples that exist is just a tiny fraction. In reality, what matters is whether you have been hit by malware or not. If you get hit by malware, you don't care whether the AV you had installed constantly achieved 100% detection. It is a matter of fact that what one vendor detects, another may miss and vice versa. That said, a product with, let's say, a 70-80% detection rate in tests might have protected and saved you from the malware that you got infected with despite having the most awarded AV installed.
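As a back-of-the-envelope illustration of how small that fraction is (the counts below are hypothetical round numbers, not figures from the actual AV-Comparatives test):

```python
# All counts are hypothetical, chosen only to show the orders of magnitude involved.
test_cases    = 350            # assumed size of a monthly real-world test set
missed        = 4              # samples not blocked in the test
known_malware = 600_000_000    # rough assumed order of magnitude of existing samples

protection_rate = 100 * (test_cases - missed) / test_cases
print(f"Protection rate in the test: {protection_rate:.1f}%")            # ~98.9%
print(f"Missed samples as a share of all known malware: {missed / known_malware:.1e}")
```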

Against "poor" results speaks also the award level achieved in many AV-Comparatives tests. A product with a poor detection would not be awarded the maximum Award+ (***) prize.


I am opposed to the attitude of dismissing everything one doesn't want to see as "not helpful".

And I don't think simply averaging results from different AV labs is appropriate, as they can have drastically different testing methodologies.

I believe the AV-C real-world test better reflects real-world cyber attack and defense because the test is performed on a daily basis with fresh new samples. All products are tested thoroughly across all protection layers with cloud enabled. In contrast, I don't think tests that are performed on a monthly basis and perhaps only scan a large number of samples, or only measure offline detection, are still helpful nowadays. In most cases, the life cycle of a particular sample has been severely shortened because of the better reaction speed of security vendors. And to be honest, ESET's ranking in this test is about right when cross-compared with my own testing experience.

In addition, their detailed report indicates that most products' FPs come from the URL tests rather than local binaries. ESET sometimes also has FPs in URL blacklisting. Of course, AV-C's local-binary FP testing methodology is way too limited and perhaps overlaps with the whitelisted training set of "next-gen" vendors :rolleyes:

Edited by 0xDEADBEEF

  • Most Valued Members

If people used these test results for what they are intended to be - "information" - then these threads would never exist on the forum. Like I said before, it's a fraction of a single percent difference that people are complaining about. If it was 5 or 10%, then yes, I think there would be a reason to ask why there was such a large difference.

These forums provide proof that people are not getting infected; otherwise they would be flooded with annoyed people.

The tests are valuable as a general measurement of "on the spot" performance, but they don't actually reflect when people get infected with samples that have NOT been tested by AV-C. They are useful but don't paint the whole picture, as some people believe they do.

 


AV Labs are performing what can best be described as "stress testing." Their job is to find security weaknesses in the products they are testing and report those to the test participants. Most AV Lab tests are not free; the participants pay the labs to have their products tested. They do so not only to prove the product's security "worthiness" but, again, to find its deficiencies. If an AV vendor is not satisfied with a lab's methods and the like, they will no longer use the services of that lab.

So how does the above relate to an actual end user's PC use? In reality, it doesn't for the most part. The likelihood of an end user encountering all the threats used in an AV Lab test is slim to none. So keep that in mind, folks. This is also one reason I personally tend to give more weight in my evaluations to specialty malware tests done by outfits like Malware Research Group, AV Labs in Poland, and the like. MRG, for example, does quarterly tests on AV software's banking protection capability. AV Labs has done tests of exploit protection in regards to drive-by download detection. Finally, both MRG and AV Labs have run tests on ransomware detection capability. I might add that in these specialized tests, Eset wins "hands down" against any Microsoft-offered security solution.


51 minutes ago, itman said:

The likelihood of an end user encountering all the threats used in an AV Lab test is slim to none

Based on your statement, an average free antivirus would be more than sufficient for a common end user. Why pay for a sophisticated security solution when "the likelihood of an end user encountering all the threats used in an AV Lab test is slim to none"?

 

 


  • Most Valued Members
3 minutes ago, John Alex said:

Based on your statement, an average free antivirus would be more than sufficient for a common end user. Why pay for a sophisticated security solution when "the likelihood of an end user encountering all the threats used in an AV Lab test is slim to none"?

 

 

And there are some good free ones, but as someone who has used Eset for years and never had any issues, while having had issues with other vendors in the past, I will stick with what is working for me. The reality is that no user should be downloading multiple viruses the way these tests do. There's an old saying: if you keep looking under rocks, you will eventually find a serpent.

Eset isn't perfect because, as Marcos, itman and others have mentioned, no AV is. These results are supposed to give just a rough idea, nothing more. If there was a massive difference in results I would have to reconsider using Eset, but to me there isn't.


On 12/17/2017 at 3:45 AM, itman said:

Overall, security products' scores for protection and false-positive capability vary by AV Lab. This is due to the samples used and variations in the testing procedures employed.

In the last quarterly comparative by SE Labs: https://selabs.uk/en/reports/consumers , MSE scored poorly in the protection category but received a 100% score in the false-positive category. On the other hand, Eset received a score of 100% in both categories. This is why I repeatedly state that multiple AV Lab reports need to be reviewed and an average score per product calculated when evaluating security product effectiveness. Additionally, it is common knowledge that security product effectiveness can vary based on the OS version it is installed on, so that also has to be factored in.

 

OMG, have those quoting the "AV Lab tests" seen the result @itman posted here? 100% detection, dude.


15 hours ago, John Alex said:

Based on your statement, an average free antivirus would be more than sufficient for a common end user. Why pay for a sophisticated security solution when "the likelihood of an end user encountering all the threats used in an AV Lab test is slim to none"?

Perhaps a real world example will help.

Microsoft made a "big deal" about its rapid detection of the recent Bad Rabbit ransomware incident in this blog posting: https://blogs.technet.microsoft.com/mmpc/2017/12/11/detonating-a-bad-rabbit-windows-defender-antivirus-and-layered-machine-learning-defenses/ . Noteworthy in this article is the following. The installation where the ransomware was originally detected was using Windows Defender Advanced Threat Protection(ATP). This is an extra cost solution only available on Win 10 Enterprise versions and as such, is used for the most part by corps.. Microsoft noted that the Bad Rabbit ransomware was not positively identified as malicious until at least nine other installations had been infected with the ransomware. Microsoft tried to justify this situation due to the fact that WD ATP sensitivity AI detection threshold was set to its default 80% confidence level. This by the way is the level Microsoft deemed adequate to both provide adequate malware detection and minimize false positives.

Eset likewise has a blog posting on Bad Rabbit here: https://www.welivesecurity.com/2017/10/24/bad-rabbit-not-petya-back/ . Subsequent forensic analysis by Eset and other security vendors confirmed that Bad Rabbit was spread within the network using the NSA EternalRomance exploit. As such, anyone with Eset IS/SS installed would have been protected against this ransomware when it was in the 0-day state. Why? Because Eset was the only AV vendor able to detect the EternalRomance exploit after it was discovered, as verified by this Malware Research Group ad hoc test done in July 2017: https://www.mrg-effitas.com/eternalromance-vs-internet-security-suites-and-nextgen-protections/ . How did Eset detect it? Via its Network Protection Intrusion Detection module, a feature Microsoft Windows Defender/Firewall does not have.

Edited by itman

  • Administrators

Speaking about network protocol exploits, next year all our products will receive network protection, not only those that contain a firewall (ESET Internet Security, ESET Smart Security Premium, ESET Endpoint Security). We also plan to significantly improve updates in all new products to react to new threats even more quickly and effectively than the current LiveGrid system. Administrators will also be given new tools and methods to prevent and combat malware and to make management easier, and ESMC, EDTD, ECMP, EIS, ECA, EBA, etc. will become more than just letters in the next few months.


I don't think that the websites comparing different AV software are unbiased, or that they show much difference between most of the software out there.

I will say, though, that these forums or others should not serve as "proof" that people are not getting infected while using ESET. I do not believe it would be appropriate or wise to post here were my company to be infected with something while ESET was installed. It would be far more appropriate and wise to contact support via phone or ticket, and it would never be mentioned here. I would assume the same goes for an individual/personal license holder as well.

Though maybe there is some benefit i'm not seeing?

Jdashn


  • Most Valued Members
1 hour ago, Marcos said:

Speaking about network protocol exploits, next year all our products will receive network protection, not only those that contain a firewall (ESET Internet Security, ESET Smart Security Premium, ESET Endpoint Security). We also plan to significantly improve updates in all new products to react to new threats even more quickly and effectively than the current LiveGrid system. Administrators will also be given new tools and methods to prevent and combat malware and to make management easier, and ESMC, EDTD, ECMP, EIS, ECA, EBA, etc. will become more than just letters in the next few months.

Surprised about the network protection. That's great news for NOD32 users and a great move from Eset in general. Curious about the LiveGrid improvements.


4 hours ago, itman said:

Perhaps a real world example will help.

How did Eset detect it? Via its Network Protection Intrusion Detection module, a feature Microsoft Windows Defender/Firewall does not have.

Well, there are many "examples":

and here:

"Well at the same time BD and KIS reacted very rapidly after the initial exposure. ESET should be compared with top tier products. I would understand the slow response if there are some nuances in this sample though. Otherwise, two days after receiving the phishing mail is not very responsive anyway. It is even after the source of the malicious file, Mediafire, withdrawing the file from sharing due to malicious content. "


Malware Research Group also does quarterly real-time AV product comparative testing. Unlike many AV Labs, they show, via a percentage pie chart, what types of malware were tested. They additionally break out the following categories for the products tested: banking malware, ransomware, and PUA detection rates. Also somewhat unique in their testing procedures, they show the percentage of malware samples that failed initial detection but were blocked within a 24-hour time period. Finally, MRG always includes a few simulated malware tests, such as botnet effectiveness; product failures on these tests are not counted negatively in the product effectiveness scoring.
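As a rough sketch of how those two figures (detection on first exposure vs. blocked within 24 hours) relate, with invented per-sample records rather than anything from an MRG report:

```python
from datetime import timedelta

# Each record: (category, detected on first exposure?, time until blocked, or None if never).
# The sample data is invented purely to illustrate the reporting breakdown.
samples = [
    ("banking",    True,  timedelta(0)),
    ("ransomware", False, timedelta(hours=6)),
    ("ransomware", True,  timedelta(0)),
    ("PUA",        False, None),                     # never blocked
]

detected_initially = sum(1 for _, hit, _ in samples if hit)
blocked_within_24h = sum(1 for _, hit, t in samples
                         if not hit and t is not None and t <= timedelta(hours=24))

print(f"Detected on first exposure: {detected_initially}/{len(samples)}")
print(f"Missed initially but blocked within 24 h: {blocked_within_24h}/{len(samples)}")
```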

The last test Microsoft participated in was the Quarter 2, 2017 test. It appears they have decided to "go AV Lab shopping" for more "favorable" lab vendors. In any case, the Q2 test was unique in that WD was tested both with SmartScreen, WD's reputation scanner, enabled and with it disabled. MRG had encountered criticism in various security forums for not testing WD with SmartScreen enabled. Regardless of the SmartScreen protection factor, WD overall scored the second lowest of the products tested in the comparative:

https://www.mrg-effitas.com/wp-content/uploads/2017/08/MRG-Effitas-360-Assessment_2017_Q2_v2.pdf 

Edited by itman

12 minutes ago, John Alex said:

"Well at the same time BD and KIS reacted very rapidly after the initial exposure. ESET should be compared with top tier products.

I see we are now expanding the scope to other "paid" software solutions.

While it is your prerogative to post endless comments about Eset's, in your opinion, "lack of protection," I really believe you have made your point on the issue.


1 hour ago, itman said:

I see we are now expanding the scope to other "paid" software solutions

Hi itman,

ESET is in line with BD, KIS, Avira. I expressed my surprise that ESET performed worse than the free MSE.

BD and Avira both have a free version, if that makes you feel better somehow.

 

Edited by John Alex

5 minutes ago, John Alex said:

BD and Avira both have a free version, if that makes you feel better somehow.

Avira free was tested by SE Labs; it didn't do very well.


  • Administrators

Re. 19eee9336a4527eb76cd2ac69321727f159ad057, 0xDEADBEEF reported it on Nov 10. The detection was indeed added on Nov 10 in update 16388, which was the next update after he reported the sample, if I remember correctly. Moreover, it's a jar archive, i.e. Java needs to be installed in order for the file to run, and therefore jar files have a lower potential to do harm. Also, the jar file in question has been seen on 5 computers worldwide so far.


1 hour ago, Marcos said:

Re. 19eee9336a4527eb76cd2ac69321727f159ad057, 0xDEADBEEF reported it on Nov 10. The detection was indeed added on Nov 10 in update 16388, which was the next update after he reported the sample, if I remember correctly. Moreover, it's a jar archive, i.e. Java needs to be installed in order for the file to run, and therefore jar files have a lower potential to do harm. Also, the jar file in question has been seen on 5 computers worldwide so far.

Nov 10 is the date I submitted it to the forum. The day I first spotted the file and right-click submitted it to ESET was around Nov 8. This implies the LiveGrid/background submission has very low priority or is even ignored sometimes.

For malware spread on a small scale, yes, one generally cannot count on antivirus software to deal with it, as vendors put more resources into "mainstream" threats. However, during my testing I also see some top-tier vendors blacklist so-called "rare" malware very rapidly upon first exposure via their cloud, or block the sample with a behavior shield. This is much rarer with ESET. I have to keep submitting the samples by email to get the virus definitions updated.

I don't want to blame anyone here, as I understand how hard it is to balance FPs and detection rate. However, I would still like to see ESET perform better.

Edited by 0xDEADBEEF

WD on Win 10 ver. 1709 does have a feature that I have commented on numerous times and would like to see added as an option to LiveGrid or real-time scanning:

Quote

 

Defender/CloudBlockLevel

Added in Windows 10, version 1709. This policy setting determines how aggressive Windows Defender Antivirus will be in blocking and scanning suspicious files. Value type is integer. 

Possible options are:

  • (0x0) Default windows defender blocking level
  • (0x2) High blocking level - aggressively block unknowns while optimizing client performance (greater chance of false positives)
  • (0x4) High+ blocking level – aggressively block unknowns and apply additional protection measures (may impact client performance)
  • (0x6) Zero tolerance blocking level – block all unknown executables

If this setting is on, Windows Defender Antivirus will be more aggressive when identifying suspicious files to block and scan; otherwise, it will be less aggressive and therefore block and scan with less frequency.

 

Obviously Eset would have to build in some limits on when blocking would be performed. Files created via Windows Update, for example, would be excluded; ditto for apps using trusted installers and the like. Obviously, any .exe or the like dropped in the User\Appdata\*, OS root, and program files and data directories would be scanned.
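A minimal sketch of the kind of gating logic just described. The level values come from the quoted Defender/CloudBlockLevel policy text; the exclusions and watched paths are my own illustrative assumptions, not how Eset or Defender actually implement anything:

```python
import os

# Blocking levels taken from the quoted Defender/CloudBlockLevel policy description.
DEFAULT, HIGH, HIGH_PLUS, ZERO_TOLERANCE = 0x0, 0x2, 0x4, 0x6

# Hypothetical locations where a dropped unknown executable would always be scanned/blocked.
WATCHED_PREFIXES = tuple(
    os.path.expandvars(p).lower()
    for p in (r"%LOCALAPPDATA%", r"%APPDATA%", r"%SystemRoot%",
              r"%ProgramFiles%", r"%ProgramData%")
)

def should_block(path, reputation_known, created_by_trusted_installer, level=HIGH):
    """Illustrative decision only: block unknown executables based on the aggressiveness level."""
    if created_by_trusted_installer or reputation_known:
        return False          # e.g. Windows Update / trusted installers are excluded
    if level == ZERO_TOLERANCE:
        return True           # block every unknown executable
    if level in (HIGH, HIGH_PLUS):
        return path.lower().startswith(WATCHED_PREFIXES)
    return False              # default level: defer to normal scanning

print(should_block(r"C:\Users\me\AppData\Local\Temp\dropper.exe",
                   reputation_known=False, created_by_trusted_installer=False))
```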

Also, this Eset feature would have to be more sophisticated than the current Application Modification Protection built into the firewall's outbound processing, which simply alerts that a process previously defined by an existing rule has changed.

Edited by itman

This topic is now closed to further replies.