0xDEADBEEF

  1. Well, this is exactly the undecidability problem proved in Cohen's work. An attacker can manipulate the control flow of the program so that the malicious behavior triggers only under certain circumstances. If it is well crafted and has a specific set of targets, it will even evade automatic sandboxes (like Cuckoo) or the vendors' automated sample-processing pipelines. One solution is to constantly monitor the behavior of untrusted programs even while they are running; many security products have such a mechanism. ESET relies heavily on AMS (it is also "constantly monitoring", but it is still a pre-execution examination of the code). What it lacks is continuous monitoring in the HIPS automatic mode (or it is very weak, even taking the anti-ransomware module into account). The AVC test you quoted showed a similar result.
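     To see why a perfect detector is impossible in principle, here is a conceptual sketch of Cohen's diagonalization argument in Python. The is_malicious oracle, do_harm placeholder, and contrarian program are hypothetical names of my own, not anyone's real code:

         # Conceptual sketch: assume a perfect detector exists, then construct a
         # program that consults it about itself and does the opposite.
         import inspect

         def is_malicious(program_source: str) -> bool:
             """Hypothetical perfect detector; cannot actually be implemented."""
             raise NotImplementedError

         def do_harm():
             """Placeholder for some harmful action (never implemented here)."""
             pass

         def contrarian():
             my_source = inspect.getsource(contrarian)
             if is_malicious(my_source):
                 return        # declared malicious -> behave harmlessly
             else:
                 do_harm()     # declared benign -> misbehave

         # Whatever verdict is_malicious gives about contrarian, it is wrong,
         # so no perfect static detector can exist.

     This is also why post-execution monitoring matters: since static classification cannot be perfect, watching behavior at run time is the remaining safety net.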
  2. I have seen the anti-ransomware module pop up several times when I was testing a batch of locker/Kovter samples that had bypassed AMS. I found that it is tuned rather conservatively and only reacts to certain types of ransomware, perhaps out of concern about false positives. That may be a reasonable trade-off at the end of the day. But MSE still has a long way to go... at least its engine is not as strong as you currently think.
  3. It is hard for testers to correlate the test cases with each individual user's usage pattern. They have to assume a virtual user who faces all possible threats (weighted by the prevalence of each threat). I see no problem with this simplification in practice. An antivirus that needs the user's frequent intervention is not an antivirus but more of a system-control tool. Since detecting malware is itself an undecidable problem, it would be ironic to push that responsibility back onto users, who have already paid money precisely so that experts do the job through their product. A good HIPS should not rely on per-step popups, and plenty of vendors have implemented more intelligent ones and achieved good results. All I can say is that ESET could have done better.
     Finally, I find the statement "users should blame themselves if their computer gets infected" really strange. Ideally it is the antivirus's job to help users distinguish good from bad. If users have to force themselves to behave like an "ordinary" user who never visits suspicious websites, I don't think they would bother to pay for the protection. Antivirus should give users more freedom, not shackles. If users get infected, it is their right to question the service they have paid for.
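     Going back to the "virtual user" simplification: here is a minimal sketch of how I picture the aggregation, with made-up prevalence weights and a hypothetical expected_infection_risk helper (my own formalization, not any lab's published methodology):

         # Each sampled threat gets a prevalence weight; the "virtual user's"
         # infection risk is the prevalence-weighted miss rate of the product.
         def expected_infection_risk(threats):
             """threats: list of (prevalence_weight, detected_bool) pairs."""
             total_weight = sum(w for w, _ in threats)
             missed_weight = sum(w for w, detected in threats if not detected)
             return missed_weight / total_weight

         # Made-up example: three prevalent threats detected, one rarer one missed.
         sample = [(0.4, True), (0.3, True), (0.2, True), (0.1, False)]
         print(expected_infection_risk(sample))   # -> 0.1 (10% weighted risk)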
  4. Thanks for the info. I understand that one always has to balance detection against FPs, and that sacrificing a bit of detection rate is sometimes unavoidable for usability. However, I also see that products like Bitdefender and Kaspersky achieve a good detection rate while maintaining a decent FP rate in AVC (Kaspersky's FP count is on par with ESET's, and Bitdefender's is slightly higher). Doesn't this imply that giving up less detection for a similar FP rate is doable? Especially if the FP test samples are cherry-picked toward the gray zone, it implies their ways of suppressing FPs also work pretty well.
     They have something that ESET currently doesn't. Kaspersky has a cloud-assisted HIPS and a rollback mechanism. Bitdefender has a complex weight-based process and inter-process scoring algorithm for the post-execution scenario. Although AMS is a similar technique, it still struggles in some cases that other behavior blockers handle well. I know it is much easier said than done to add protection layers without introducing more FPs, but I just hope ESET can become better. It has decent HIPS modules and a good reputation system; maybe it is a good idea to get more out of that infrastructure. Oh, and I really love ESET's low performance impact and good compatibility with sandboxes.
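     To be clear about what I mean by "weight-based process scoring", here is a generic sketch of that kind of post-execution behavior blocker. The behavior names, weights, and threshold are invented for illustration; this is not Bitdefender's (or anyone's) actual algorithm:

         # Generic weight-based behavior scorer: suspicious post-execution events
         # add weighted points to the originating process; crossing the threshold
         # triggers blocking (and, ideally, rollback of the process's changes).
         from collections import defaultdict

         WEIGHTS = {                      # made-up weights
             "writes_autorun_key": 30,
             "injects_into_process": 40,
             "deletes_shadow_copies": 50,
             "mass_file_encryption": 60,
         }
         BLOCK_THRESHOLD = 80
         scores = defaultdict(int)

         def report_event(pid: int, behavior: str) -> bool:
             """Accumulate the score for a process; True means it should be blocked."""
             scores[pid] += WEIGHTS.get(behavior, 0)
             return scores[pid] >= BLOCK_THRESHOLD

         # Example: a ransomware-like process crosses the threshold on its second event.
         print(report_event(1234, "mass_file_encryption"))    # False (score 60)
         print(report_event(1234, "deletes_shadow_copies"))   # True  (score 110)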
  5. Personal experience can be helpful, but not all the time. When you do a short test drive, you never know whether there is some hidden engine problem that might surface later and lead to disaster. For a layman, finding a dealer with a good reputation might be the safer option. Similarly, encountering cyber threats may be rare and the threats are usually stealthy. Trying things yourself is definitely necessary, but the loss caused by such attacks might be too much to afford, so relying on trustworthy reviews with a systematic testing approach is a natural way to help make decisions. We see similar things in processor benchmarking: product A can outperform product B on one metric and be beaten on another. This is natural, because many things are simply too complex to be measured by a single number, and I don't think security products are an exception. I would rather gradually learn to interpret these results and metrics (which may contradict each other if you only look at the numbers) in an objective way that fits my needs, instead of simply dismissing them.
  6. Personal experience is never representative. As some articles point out, no test can directly guide a particular user's choice of AV product, especially when correlated with his or her usage pattern. My computer was infected by web trojans several times while I was running NOD32 with full protection (back in version 3.0), but that means nothing to other people, nor does any other individual's personal experience mean much to me. That is why we need third-party tests. Generally such a test assumes a single person facing all possible (sampled) threats and estimates the probability of infection; I personally view it as an evaluation of "the worst possible case".
     ESET does do exceptionally well at balancing FPs and detection rate. But when a certified third-party test consistently shows that it ranks relatively low in detection rate compared to other products, there should be some explanation. On the tester's side, this could be due to bad samples, biased samples, inappropriate methodology, etc. But it might also be due to genuinely missing pieces on the vendor's side. I personally tend to suspect the sample quality before I question the security product, but since I can get no more information out of their reports (especially for the system performance impact evaluation), I can only post here and hope someone can give a more convincing explanation. My view is: if a test raises an issue, there should be some explanation. It could be an issue with the test itself, it could be an issue with the product, or it could be both. P.S. I have personally played with Microsoft's heuristic engine, and I know how amusingly it performs.
  7. I don't either. Actually, I care more about what kinds of samples ESET misses each time. Even though it is a very small portion of the whole sample set AVC uses in each test, the consistent miss ratio makes me curious whether the misses are of the same type. I personally don't care too much about the AMTSO organizations' results unless they look too bad, but I have indeed seen people post negative comments about ESET saying "it performs even worse than the free Windows Defender"... so I think it would be good to have a reasonable explanation.
  8. The OS might be a factor. But if that is the case, then since a decent number of people are still using Win7, it does not make sense to provide weaker protection on one system but not the other. Region (sampling bias) might also be a factor, but if so, it is not a reassuring explanation for North American users. VB employs a quite different testing methodology (the sample count is small and only static detection is tested, while AVC's real-world test seems to exercise all protection layers, which is why ESET gets dynamic detections there). I am not familiar with the other EU tests; I will take a look. As for the performance impact score, ESET scores very badly in AV-TEST but very well in AVC... which is really odd. My experience is that ESET is very lightweight (borne out by its very low power consumption on the energy meter of a Windows tablet), but apparently some tests disagree.
  9. I have noticed that ESET has ranked relatively low in AVC's real-world tests (from February to June). I am wondering whether this is due to ESET's relatively conservative detection strategy. Of course, the number of samples they use in the real-world test is pretty low (~400), and the detection rates of different products are often very close (so the number of missed samples is actually very small). I have read David Harley's article about AV tests and understand that sampling bias and many other factors can affect a product's detection results in a test. But several months of similar rankings still make me wonder whether ESET misses more than other vendors (the FP rate, of course, is always very decent compared to others). Another thing I noticed is that ESET seems to be a bit conservative in detecting macro malware. Is that because ESET prefers to deal with it at a later defense layer (e.g., when the payload is actually downloaded)? Another mysterious thing is that the performance results from AV-TEST and AV-Comparatives are utterly opposite. Is that because of differences in their testing strategies?
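     To make the "~400 samples" point concrete, here is a back-of-the-envelope sketch with invented miss counts (not AVC's actual numbers): on 400 samples, missing 4 files versus 8 files is 99.0% versus 98.0%, and the 95% confidence intervals around those rates overlap, so a one-point gap is hard to distinguish from sampling noise.

         # Wilson score interval for a detection rate measured on n samples.
         import math

         def wilson_interval(hits, n, z=1.96):
             p = hits / n
             denom = 1 + z**2 / n
             center = (p + z**2 / (2 * n)) / denom
             half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
             return center - half, center + half

         # Hypothetical: two products tested on the same 400 samples.
         for missed in (4, 8):
             lo, hi = wilson_interval(400 - missed, 400)
             print(f"missed {missed}/400 -> rate {1 - missed/400:.1%}, "
                   f"95% CI ({lo:.1%}, {hi:.1%})")
         # missed 4/400 -> rate 99.0%, 95% CI (97.5%, 99.6%)
         # missed 8/400 -> rate 98.0%, 95% CI (96.1%, 99.0%)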
  10. Cool, thanks. Looking forward to seeing the new detection feature in future endpoint releases.
  11. Thanks. Does that mean it will be available again sometime in a future endpoint release? Will it also be available in the personal products?
  12. I once saw this option in ESET Remote Administrator (in an old version). I am currently using ERA 6.5.522 with EES 6.4, but I cannot find the option anymore. I remember it used to be in the policy tab for the Windows product. Has this feature been abandoned?
  13. I've noticed that some people's Endpoint Security has an optional high-sensitivity heuristic among the ThreatSense parameters. However, I cannot find this option in my v6.5 Endpoint Security installation. Is this option only available to certain companies, or is it controlled by the administrator?
  14. So Microsoft also has some in-memory detection mechanism? Is there a name for it?