Archived

This topic is now archived and is closed to further replies.

0xDEADBEEF

VB100 RAP Test

Recommended Posts

Just out of curiosity... Is there a reason that ESET didn't participate in the recent VB100 RAP tests? The latest result for ESET is from 2017-04.


Good question. I have been wondering the same. It's somewhat puzzling, since ESET has always had a respectable score on the test.


It might also be informative to post what the RAP test is about. Also of note is that Kaspersky no longer participates, perhaps because System Watcher is not enabled by default:

Quote

Reactive/Proactive (RAP) Test

This is an additional test that aims to give an indication of how well the product’s static detection keeps up with new threats after losing Internet access. Note that the results of this test do not count towards certification.

The RAP test is conducted in a similar manner to the certification test. However, this test only uses the ‘RAP/Response’ set and the RTTL set, and Internet access is disabled during the Proactive part of the test.

  • During the Reactive part of the test, the product is allowed full Internet access and is subjected to a snapshot of the ‘RAP/Response’ set and the RTTL set, as captured in the 10 days leading up to the test date.
     
  • During the Proactive part of the test, Internet access is disabled, effectively ‘freezing’ the product and preventing access to updates and the cloud. Samples used are from the ‘RAP/Response’ set and the RTTL set, as captured in the 10 days elapsed since the product was ‘frozen’.

No clean files are used in this test.

RAP Scoring

RAP figures are calculated as follows.

Basic figures:

  • Reactive A: Percentage of samples caught out of all 6-10-day-old malware samples.
  • Reactive B: Percentage of samples caught out of all 1-5-day-old malware samples.
  • Proactive A: Percentage of samples caught out of all samples collected 1-5 days after the product updates were frozen.
  • Proactive B: Percentage of samples caught out of all samples collected 6-10 days after the product updates were frozen.

Derived figures:

  • Reactive Average: (Reactive A + Reactive B) / 2. This figure signifies how effective a product is at catching the most recent threats when connected to the Internet. The closer to 100% the better.
     
  • Proactive Average: (Proactive A + Proactive B) / 2. This figure signifies how effective a product is at catching new threats when offline. The closer to 100% the better.
     
  • RAP Average: (Reactive Average * 2 + Proactive Average) / 3. This weighted average is the final score we calculate and assigns twice the weight to Reactive performance. The closer to 100% the better.
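The scoring arithmetic above is simple enough to sketch in a few lines. This is just an illustration of the formulas as quoted; the percentage figures in the example are made up, not results from any real test:

```python
def rap_scores(reactive_a, reactive_b, proactive_a, proactive_b):
    """Compute the derived RAP figures from the four basic percentages."""
    reactive_avg = (reactive_a + reactive_b) / 2
    proactive_avg = (proactive_a + proactive_b) / 2
    # Reactive performance carries twice the weight of Proactive.
    rap_avg = (reactive_avg * 2 + proactive_avg) / 3
    return reactive_avg, proactive_avg, rap_avg

# Hypothetical example figures (not from any published test):
reactive_avg, proactive_avg, rap_avg = rap_scores(98.0, 96.0, 90.0, 86.0)
print(reactive_avg, proactive_avg, rap_avg)  # prints: 97.0 88.0 94.0
```

Note how the weighting works: a product that detects well online but drops off sharply when frozen still keeps a relatively high RAP Average, since the Reactive half counts double.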

RAP charts published with the test visualize the four basic Reactive and Proactive components as bars in the background, the RAP Average as the highlighted number in the foreground and also display if there were any false positives in the Certification Test.

https://www.virusbulletin.com/testing/vb100/vb100-methodology/vb100-methodology-ver1/


English is my one and only language, and I fail to understand what this test actually does and why they do it. From what I can see, they just feed an A/V product some samples for 10 days, then pause any updates and see how it scores on the same set of samples with no active Internet connection.

Isn't this just a test of how long it takes definitions to be added/updated for samples? At least that's what I read/interpret from the description.

Plus, if it doesn't even count towards certification, then why bother :huh:

12 hours ago, cyberhash said:

English is my one and only language and i fail to understand what this test actually does and why they do it.

Malware will try to disable network connections if it can. This is especially true during its payload installation phase.

Depending on the A/V product used, the lack of Internet access could have a major impact on product effectiveness. Windows Defender, for example, is heavily dependent on cloud scanning for detection of recently deployed malware. Ditto for Panda, which is 100% cloud-based. Even products such as ESET can be impacted: LiveGrid would not be available for reputational evaluation purposes, and LiveGrid is an integral part of ESET's ransomware protection. ESET's internally used blacklists could not be updated, etc.

Products that employ behavioral analysis in addition to conventional A/V malware detection methods such as signature analysis do well on this test. Hence the high scores given to Emsisoft and Bitdefender. The exception is TrustPort, usually the highest-scoring product in this test; it deploys an aggressive HIPS. As such, TrustPort doesn't participate in other AV lab tests, since the high false positive (FP) rate resulting from the HIPS's aggressiveness would place it in the bottom tier of test scores. BTW - the highest score ever given on the RAP test went to PC Matic, which scored 99.9% in both the Proactive and Reactive tests. The reason? PC Matic is a whitelist, or anti-exec, solution. Finally, note that the RAP tests do not factor in FP count. PC Matic's FP rate in these tests has been "in the stratosphere", although I notice it has improved considerably in the latest AV-Test lab test.


@itman, I guess I worded things a bit wrong. It's not that I don't understand the test itself; I just don't understand the reasoning behind, or the need for, the test.

Yes, anything that employs whitelisting would always score high, irrespective of having a live Internet connection or not (PC Matic). PC Matic just took an old freeware idea and slapped a price on it. The bad point is that applications are updated so often, and the volume is so large, that getting round to whitelisting your application (e.g. MYEXE v2.297.254) could take weeks if not months. I would guess that applications with a larger user base would be whitelisted before an app that has 200 users worldwide.

The sole reason this method has never taken off is that people want to use an app immediately, not wait for it to be whitelisted in the cloud (as they are not very tech savvy and don't want to do it manually). And if you are savvy enough to do it manually, sitting for at least an hour after Windows updates, allowing changes by hand, would be soul-destroying and would also leave room for user errors.

As for the big players, I very much doubt that having a working Internet connection would make any difference when it comes to samples that have already been analysed. Once something is identified as bad, surely it will be sent out as a definition update and stored locally on the end user's hardware. I'm sure network outages are something every A/V vendor would factor in as a possibility; it's not just malware that can cause an outage or loss of connection. Getting caught by a virus/malware because your ISP goes down, for example, would not make any sense from a business perspective.

Every large vendor employs some form of cloud detection method, but as far as I know it is for new and evolving malware, not the stale, already-analysed malware that this RAP test uses.

Which takes me back to the point I made above: if it makes no difference to the actual certification, then why bother? The test is flawed before it even starts.

Anything 100% whitelist- and cloud-based would consistently give the best results, but that would also fail to factor in that the majority of the files it blocked were false positives.

Just my take on it anyway :mellow:

