0xDEADBEEF

Most Valued Members
  • Content Count: 361
  • Joined
  • Last visited
  • Days Won: 3

Everything posted by 0xDEADBEEF

  1. It is hard for testers to correlate the test cases with each individual user's usage pattern. They have to assume a virtual user who faces all possible threats equally (weighted by the prevalence of each threat). I see no problem with this simplification in practice. An antivirus that needs the user's frequent intervention is not an antivirus but more of a system control tool. Since detecting malware is itself an undecidable problem, it would be ironic to let this responsibility fall back on users, who already paid money precisely so that experts would do the job through their product. A good HIPS
  2. Thanks for the info. I understand that one always has to balance detection against FPs, and sometimes sacrificing a bit of detection rate is unavoidable for usability. However, I also see that products like Bitdefender and Kaspersky achieve good detection rates while maintaining decent FP rates in AVC (Kaspersky's FP rate is on par with ESET's, and Bitdefender's is slightly higher). Doesn't this imply that sacrificing less detection for a similar FP rate is doable? Especially if the FP test samples are cherry-picked toward the gray zone, it implies their ways of suppressing FPs also
  3. Personal experience can be helpful, but not all the time. When you do a short-term test drive, you never know whether there are hidden engine problems that might surface and lead to disasters later. For a layman, finding a dealer with a good reputation might be the safer option. Similarly, encountering cyber-threats might be rare, and the threats are usually stealthy. Trying is definitely necessary, but the losses induced by these attacks might be too much to afford. Relying on trustworthy reviews with a systematic testing approach is a natural way to help make decisions. We see
  4. Personal experience is never representative. As some articles have pointed out, no test can directly guide a particular user's choice of AV product, especially when correlating with his/her usage pattern. I have had several experiences of my computer getting infected by web trojans while using NOD32's full protection (back when it was at v3.0), but that doesn't mean anything to other people, nor does any other individual's personal experience. That's why we need third-party tests. Generally, such a test assumes a single person facing all possible (sampled) threats and obtains the probability of infection. I personally view it
  5. I don't either. Actually I care more about what kinds of samples ESET missed each time. Even though it is a very small portion of the whole sample set AVC uses each time, the consistent miss ratio makes me curious whether they are of the same type or not. I personally don't care too much about the AMTSO org results unless they look too bad. But I have indeed heard people posting negative comments about ESET, saying "it performs even worse than the free Windows Defender"... So I think some reasonable explanations are good to have
  6. The OS might be a factor. But if so, since a decent number of people still use Win7, it does not make sense to provide compromised protection on one system but not the other. Region (sampling bias) might also be a factor... but if so, it is not a good explanation for North American users. VB employs quite a different testing methodology (the number of samples is small and only static detection is tested, while AVC's real-world test seems to exercise all protection layers, which is why ESET gets dynamic detections there). I am not familiar with other EU tests; I will take a look.
  7. I have noticed that ESET has been ranked relatively low in AVC's real-world tests (from Feb. to Jun.). I am just wondering whether this is due to ESET's relatively conservative detection strategy. Of course, the number of samples they use in the real-world test is pretty low (~400), and many times the detection rates of different products are pretty close (so the number of missed samples is actually very small). I have read David Harley's article about AV tests and understand that sampling bias and many other factors might affect a product's detection results in a test. But several months of simi
  8. Cool, thanks. Looking forward to seeing the new detection feature in the future endpoint releases.
  9. Thanks. Does that mean it will be available again sometime in a future endpoint release? Will it also be available in the personal products?
  10. I once saw this option in ESET Remote Administrator (in an old version). I am currently using ERA 6.5.522 with EES 6.4 but cannot find the option anymore. I remember it was in the policy tab for the Windows product. Is it an abandoned feature?
  11. I've noticed that some people's Endpoint Security has an optional high-sensitivity heuristic among the ThreatSense parameters. However, I cannot find this option in my v6.5 Endpoint Security installation. Is this option only open to some companies, or is it controlled by the administrator?
  12. So Microsoft also has some in-memory detection mechanism? Is there a name for it?
  13. Somewhat expected... BTW, I enjoyed reading the last part of the machine learning discussion from ESET
  14. Really appreciate your response. I remember ESET has had rule-based HIPS since v4, but even now the HIPS's automatic mode still does little in malware's post-execution scenarios (although there is a smart mode and ransomware protection). Is it due to the concern about FPs that ESET leaves this function to advanced users only? I agree that it is a paradox to claim a product can know a threat before it appears. But some statements you gave seem to be based on the assumption that malware makers can always fool the AVs. Could it be the case that a product is so hard to fool that ma
  15. Seems to be so. I have shared a local folder on the LAN (like "\\server\Runtime") and added a rule to protect the local path of that folder (like "C:\Users\Username\Runtime"), applied to all applications. When explorer.exe tries to create a new folder through the local path, the HIPS prompts as expected; but accessing the folder over the network with explorer.exe doesn't trigger any prompt. Adding the network path to the protected paths doesn't help
  16. Another question related to the product: when malware bypasses the scanner and is detected by AMS, it is already at the execution stage. The executing malware will sometimes have side effects on the machine (registry, files, etc.). I have seen some vendors employ a rollback mechanism, and some use standard repair procedures. In some cases, in-the-wild ransomware might successfully encrypt some files before being detected through behavioral detection; the rollback mechanism of those products will recover the encrypted files (currently ESET's does not). Is there a reason why ESET doesn't introduce such a roll-back mecha
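The rollback idea above, reduced to a user-space toy (real products hook the filesystem at the driver level; the class and method names here are mine, purely for illustration): snapshot a file before a suspicious process modifies it, and restore the copies if the process is later judged malicious.

```python
import shutil
import tempfile
from pathlib import Path

class RollbackJournal:
    """Toy sketch: journal pre-modification copies of files so they can be
    restored if the modifying process turns out to be ransomware."""

    def __init__(self):
        self._store = Path(tempfile.mkdtemp(prefix="rollback_"))
        self._snapshots = {}  # original path -> backup path

    def snapshot(self, path: Path) -> None:
        """Call before a watched process writes to `path`."""
        if path not in self._snapshots:
            backup = self._store / f"{len(self._snapshots)}.bak"
            shutil.copy2(path, backup)
            self._snapshots[path] = backup

    def rollback(self) -> int:
        """Restore every journaled file; returns the number restored."""
        for original, backup in self._snapshots.items():
            shutil.copy2(backup, original)
        return len(self._snapshots)

# Demo: a "ransomware" overwrites a document, then the journal undoes it.
doc = Path(tempfile.mkdtemp()) / "report.txt"
doc.write_text("quarterly numbers")
journal = RollbackJournal()
journal.snapshot(doc)
doc.write_text("ENCRYPTED!")  # malicious modification
journal.rollback()
print(doc.read_text())        # quarterly numbers
```

Of course, the hard part a real product must solve is deciding *which* writes to journal and when the verdict is final, not the copying itself.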
  17. Really glad to see such a detailed response from ESET. I share ESET's view that static ML detection alone, which treats executables as data, is not that reliable against malware because it doesn't really look into what happens under the hood. These methods are somewhat similar to anomaly detection and, in my view, should not have been deployed to ordinary clients, due to the many potential FPs (ESET's low FP rate is indeed one of the primary reasons I stick with this product). But does it mean that in order to control FPs, products should mostly rely on the response speed of the
  18. Yes, I mean they are generally good at detecting known threats and their variations. For most Cerber or Spora samples, my experience is that AMS kicks in first if it detects de-cloaked code in memory, and if not, the rule-based HIPS kicks in, but at the cost of sacrificing some files. But I have never seen these two detect a new family of ransomware (e.g., when the author rewrites the core code or changes its behavior dramatically; a typical recent example is the Jaff ransomware, where AMS and HIPS generally kept silent until more signatures were added some days later). I don't expect
  19. Well, it is hard to find a vendor that does not use machine learning techniques these days. I was asking about deep learning, but it is fine if ESET does not want to disclose more details about it. Modifying malware to avoid detection by security products is common, but this cannot explain why those threats are not also tailored against products with similar or larger market share. The cost of these customizations will certainly rise if the protection layers are harder to bypass. I have seen many improvements in ESET products generation by generation, like the introduction of HIPS, AMS, and e
  20. It is the descriptions ESET made in whitepapers and other public materials that make me think this way. The blog on welivesecurity further implies that ESET is not interested in those deep learning techniques, which only began to be widely adopted in cybersecurity in the last 3 years or so. These are relatively "non-traditional" compared to the well-developed ones. I didn't mean these techniques are superior; I am just wondering whether ESET has ever adopted them in the detection process. One example is WannaCry. Although when it initially broke out the Exploit Blocker could already blo
  21. I've read the tech white paper, but it seems to me that what ESET discloses is still close to traditional approaches. The heuristic that pre-executes the malware and scores the collected behavior has been used by traditional vendors for decades (can't deny that ESET's is one of the best). Though I am not sure about Advanced Memory Scanner and other techniques, I feel they are still based on the same or similar techniques, just applied at different stages. On the other hand, some vendors use static engines to detect malware through statistical features (like average entropy or more comp
  22. In recent years I've seen many vendors start using new machine learning techniques to improve their detection rates, e.g. RNNs or other neural network variants, as can be seen from the patents they have filed. I am wondering whether ESET is keeping up with these techniques. From the articles posted on Welivesecurity about ESET's attitude toward machine learning, it seems to me that ESET is rather conservative in adopting these new techniques, and a large part of that is due to concern about FPs. I know ESET is one of the vendors with the lowest FP rates (while those aggressive/paranoid des