0xDEADBEEF

Everything posted by 0xDEADBEEF

  1. Cool, thanks. Wondering if these are the settings for configuring the startup scan? BTW, am I correct that pausing protection via the tray icon's right-click menu also pauses the startup scan? There seems to be no standalone knob for turning it on or off in the settings menu.
  2. I noticed that even with realtime file scanning disabled, something called the startup scanner will detect some threats before the advanced memory scanner kicks in. Is the startup scanner a special monitoring layer? What's the difference between it and the realtime scanner? How do I disable or configure it?
  3. SHA-256: 13eb6bfa41a350b44a12e2f45419e409f7ff51acb82262ba6c3ec0bfc7dbea46. ESET didn't flag it as malicious. Seems to me like a false positive from other vendors on VT.
  4. Also, some free AV vendors have a nasty history of copying non-free vendors' detections (even the detection names!). They will blacklist whatever some trusted vendors (like ESET) deem malicious within hours, without even examining the samples themselves. They don't pay those trustworthy vendors, they don't get caught, and they get good scores on some third-party tests. This also creates the illusion that they always catch more than some non-free vendors.
  5. Yeah, this makes sense. Usually an AMS detection will leave some harmless side effects on the test machine, so if those side effects are counted as failures, ESET is sure to get a somewhat lower score...
  6. However, the result on page 5 seems to be inconsistent with the real-world case...
  7. I think it is doable, as long as the client engine can extract feature vectors locally and send only that information to the cloud for a large-model verdict... As they showed in the article, the detonation-based model (cloud sandbox) takes minutes, but a preliminary static examination takes much less, similar to Avira's strategy. However, I haven't used WD in a long time, so I don't know whether their claims match what the actual product does. I guess document protection is somewhat different from WD's cloud scan. And I agree with your comment about SmartScreen; from my user experience it is sort of an anti-exec tool, not that "smart".
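As a rough illustration of the client-side half of that idea (entirely hypothetical feature set, not what WD, ESET, or Avira actually compute), a client could summarize a file into a few compact statistics and ship only those to the cloud instead of the whole file:

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0 = constant, 8 = uniform)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(data).values())

def extract_features(data: bytes) -> dict:
    """Tiny static feature vector a client might upload for a cloud verdict."""
    return {
        "size": len(data),
        "entropy": round(byte_entropy(data), 3),
        "printable_ratio": round(
            sum(32 <= b < 127 for b in data) / max(len(data), 1), 3),
    }

print(extract_features(b"hello world"))
```

A few dozen such numbers are orders of magnitude smaller than the sample itself, which is why the round-trip can stay in the sub-second range rather than the minutes a detonation needs.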
  8. Perhaps they are targeting large enterprise customers; their sample report seems to be very detailed. Plus, some more discussion of countermeasures against the evasion techniques I mentioned in the previous post: https://www.first.org/resources/papers/conf2017/Countering-Innovative-Sandbox-Evasion-Techniques-Used-by-Malware.pdf I should have found these materials earlier... I guess I should try more search keywords next time. I believe ESET, and perhaps other large AV vendors, also have in-house sandboxes with such capabilities; otherwise a lot of samples would slip through. BTW, Cuckoo released a new version today, sadly not with the most anticipated feature.
  9. My best guess is that they have their own in-house kernel logging implemented in the cloud. Hah, I found this article: https://www.vmray.com/blog/analyzing-environment-sensitive-malware/ "To detect environment-sensitive malware and thus hidden functionality, we combine Intel's new Processor Tracing feature with powerful analysis techniques and sophisticated heuristics: we utilize Processor Tracing information to determine code coverage in memory dumps of the monitored processes, i.e. identify all code locations that have not been executed during analysis. From these untriggered code locations, we identify the subset of 'interesting' functionality, e.g. by discarding error-handling routines. Then we track execution flow back from these 'interesting' non-executed code locations to preceding conditional branches that depend on environment settings, e.g. functions that obtain the current time or keyboard layout. Symbolic execution is then applied to identify paths (conditions) that lead to the hidden functionality. On these paths we use a solver to generate concrete values to trigger their execution. Finally, we reanalyze the sample and make sure that all environment queries return the values needed to reach the hidden functionality." So it seems this is adopted in some cutting-edge sandbox services (or not so cutting-edge from a pro's view). And some related slides: https://www.slideshare.net/FabioRosato/symbolic-execution-of-malicious-software-countering-sandbox-evasion-techniques
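In miniature, the workflow in that quote (find an untaken branch that depends on the environment, solve for a value that takes it, re-run with that value forced) could look like this toy sketch, with brute-force search standing in for real symbolic execution and an SMT solver:

```python
def sample(env_month: int) -> str:
    # Toy environment-sensitive guard: the payload path stays hidden
    # unless the "environment" reports December.
    if env_month == 12:
        return "payload"
    return "benign"

def solve_guard(pred, domain):
    """Stand-in for an SMT solver: return a concrete witness value
    that satisfies the branch condition guarding the hidden path."""
    return next((v for v in domain if pred(v)), None)

# Analysis step: coverage shows the "payload" branch never ran; the
# guard is env_month == 12, so ask the "solver" for a witness and
# re-run the sample with the environment query forced to that value.
witness = solve_guard(lambda m: m == 12, range(1, 13))
print(witness, sample(witness))  # 12 payload
```

A real tool does the same thing over machine-code branch conditions and arbitrary environment queries, which is exactly where the solver earns its keep.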
  10. Yes, most modern sandboxes are now able to skip basic sleep functions. However, there are many ways to sleep, so there is no all-in-one solution to this issue (last year I found a simple method to bypass the MSE engine's countermeasure of this kind; not sure if they have fixed it). Yes, though I am not referring to a sandbox used for that purpose. I personally find sandboxes of that kind very confusing in terms of user experience; they are more for pros and don't help tell whether an untrusted file is malicious or not (unless the malicious behavior is too obvious). Yes, I can't count how many times Cuckoo failed to unroll the behavior of a malicious sample. Some need specific user interaction (like the one that lures the user to click a button and then decodes the malicious payload and destroys the MBR, or the ones that check for the existence of certain processes, a trick that falls outside traditional anti-VM detection but is very effective against users with certain language backgrounds). Some send network beacons and only activate in certain IP regions (some public sandbox services I am aware of don't allow such connections to go through, for security reasons). I feel an automated system is powerless against such tricks unless there is some secret sauce I am not aware of. I've heard of some using symbolic execution and fuzz testing to try to discover more execution paths. However, I believe malware authors can make that difficult by exponentially increasing the number of such paths. Hmm.
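On the sleep-skipping arms race: one well-known counter-trick from the malware side is simply measuring whether the sleep consumed real wall-clock time, since a sandbox that fast-forwards `Sleep()` returns almost instantly. A minimal, purely illustrative sketch:

```python
import time

def sleep_was_skipped(sleep_fn, seconds: float = 1.0,
                      tolerance: float = 0.5) -> bool:
    """Return True if sleep_fn returned much faster than requested,
    i.e. something (like a sandbox) likely fast-forwarded the sleep."""
    start = time.monotonic()
    sleep_fn(seconds)
    elapsed = time.monotonic() - start
    return elapsed < seconds - tolerance

# A "sandbox" that patches sleep to return immediately is noticed:
print(sleep_was_skipped(lambda s: None))   # True
# A genuine sleep passes the check:
print(sleep_was_skipped(time.sleep))       # False
```

This is why sandboxes can't just no-op the sleep API; they also have to keep every clock source the sample can query consistent with the skipped time, and there are many such sources.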
  11. There is a question that has long baffled me... Since a Turing machine cannot decide whether a certain portion of code will ever be executed, detecting malware is theoretically undecidable unless the malicious code is actually triggered under some conditions. One might argue that the problem can be partially solved by constantly monitoring program behavior in the background. However, I feel this is a particular bottleneck for detection methods that execute the sample, collect the behavioral trace, and give a verdict within a limited amount of time. For example, how does an automatic sandbox, which usually runs a sample for only several minutes, examine malicious apps that trigger only under certain conditions? (E.g. some samples only execute malicious code under certain username/language settings, some have a UI and require the user to perform certain actions before the program proceeds to the malicious code section, and theoretically some can wait a year and then detonate.) Is there a way to easily examine multiple execution paths in such dynamic analysis, and perhaps intelligently skip these barriers? Static analysis, on the other hand, doesn't seem to have this problem, but of course struggles with heavily obfuscated code.
  12. As additional info, today I got another such document sample which is not detected by an ESET scan with the latest virus DB. First, look at the VT result: note that the first submission of this sample to VT was at 12:12 UTC, and I tested at around 14:00 UTC, about a 2-hour difference. Seems like a tragic result for ESET, right? Open it... Well, it is a very typical mal-doc that asks you to enable macros. Enable them, and OK: first the internal URL blacklist blocked some connections, then realtime filesystem monitoring kicked in, and finally botnet protection blocked the trojan downloader behavior. The system stays clean with all of this successfully blocked. This has been a common outcome in nearly all document samples I've tested. As you can see, ESET has a layered approach against such threats, not just scanning.
  13. My experience is that ESET tends to block malware-carrying documents at a later stage rather than at the scanning stage, and VirusTotal only shows scan results. I partially agree that there are cases where ESET doesn't detect documents (or other archives commonly seen in malware-spreading spam) at the early exposure stage through scanning (i.e. other scanners on VT already detect a sample but ESET doesn't). But after opening/executing these files, I usually found the actual payload was blocked by the internal URL blacklists, AMS, or later defense layers. These experiences include "real-world" ones where the samples, at the time I got them, were not even on VT. So judging solely through VirusTotal doesn't fully reflect a product's detection ability. Many documents of this type, for example, are merely downloaders and don't contain the actual malicious code, so blocking the actual payload at a later stage should also count as a success. Also, blocking some types of threats at a later stage is, from my perspective, a way to decrease false positives, especially if you have seen some vendors on VT apply aggressive detection to downloaders and occasionally misclassify legitimate files as that family.
  14. I would appreciate it if ESET could disclose some detailed reasons behind this detection. It would help me evaluate whether to whitelist this software or not (and the truth is most Chinese users simply whitelist this detection... so knowing the reason serves as better justification for not whitelisting this PUA).
  15. Hmm, I was wondering what kind of signature is extracted from that exploit script. BTW, I am curious about the malicious behaviors of this Tencent.O. Since it is a very popular IM application in China, I don't think ESET would detect it without a good reason.
  16. I've noticed ESET detects Tencent IM's installer as the Tencent.O PUA. May I ask what ESET's reason is for categorizing it as a PUA? Link to the installer: https://dldir1.qq.com/qqfile/qq/TIM2.2.0/23808/TIM2.2.0.exe
  17. Yes... my test machine still runs old Office 2007. BTW, this sample is spread through spam, so it is a "real-world" one.
  18. Thanks! Originally I intended to ask whether ESET has generic exploit detection like the other vendors on VT shown on that webpage. From the updated detection name, I can see what's happening.
  19. I was wondering why an ESET scan usually doesn't detect documents with exploits. For example, this file (a scan shows clean with 17430): https://www.virustotal.com/#/file/84a7c1eac6e1a130cb66126fa48258e9c7c8b60a2a5fd0fcd564305775757641/detection Executing this sample in a virtual machine successfully downloads the payload and runs it through the Equation Editor exploit, and ESET detects the payload post-execution as FareIt using AMS. But I feel detecting it at an earlier stage would be better?
  20. Some more testing reveals that some vendors closely monitor and quickly blacklist VT samples; they can show very bad detection rates when samples fall outside VT collections. This creates a severely biased result: for people who test these products for fun, the samples are likely collected from VT, or at least have already been scanned on VT (note that a lot of online sandboxes also upload samples to VT for a static verdict). Vendors that closely monitor and blacklist VT samples can get a pretty good result because, due to this sampling bias, they always get a sample before the tester does, creating the illusion that these vendors always detect malware samples (ahead of time). In reality this is not the case, because widespread samples might not be on VT and rare samples might be. A recent non-VT sample collection I got produced pretty bad results for those high-scoring vendors, but ESET still performed well. Further tests reveal that some simple MD5-changing techniques can easily bypass those vendors' VT blacklist signatures (including detection names like GenericKD, UDS, Gen..., all common ones from vendors with good scores on AVC), while ESET's signatures and cloud signatures are robust against such basic techniques. So, great job, ESET.
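The hash-fragility point is easy to demonstrate: appending a single overlay byte (which for many file formats doesn't change behavior at all) yields a completely different MD5, so a blacklist keyed on full-file hashes stops matching. Placeholder bytes below, not a real sample:

```python
import hashlib

def md5(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

original = b"MZ\x90\x00...placeholder executable body..."  # not real malware
patched = original + b"\x00"  # a single appended overlay byte

# Behavior could be identical, but every full-file hash differs:
print(md5(original))
print(md5(patched))
print(md5(original) != md5(patched))  # True
```

Robust signatures therefore have to key on something the attacker can't cheaply mutate (code structure, behavior, unpacked payloads) rather than on the raw file digest.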
  21. Actually WD has an emulation engine, but a rather poor one; it can be easily fingerprinted and bypassed. I personally feel the impact is more because it lacks a caching capability, so its emulation engine has to keep emulating executables even when they are not new to the machine. Not sure why they didn't add caching. One interesting note is that both ESET and WD have a noticeable impact on program startup speed, but ESET's is mostly due to AMS, because the realtime scanner has caches. Thanks for the info on the cloud scanner (I remember seeing some detection names with "cl" as the suffix; perhaps that's it?). I am a bit confused about the two-phase scan you mentioned. Is phase one local sandboxing, and phase two cloud sandboxing?
  22. I was more curious about the yellow bar in the Windows Defender score. I am not sure what kind of alert is counted as a user-dependent event... SmartScreen?
  23. This reminds me of another interesting observation: a year ago I kept wondering why ESET scored very badly in some third-party performance-impact evaluations, because that was very counterintuitive given my own experience. After some leisure-time benchmarking and analysis across different products, I started to see some reasons behind the numbers. Some tests "successfully" avoided many scenarios where a caching mechanism could help: installing new apps, starting an application, etc., all fall outside that range, and they may make up a large portion of the performance score. Since performance optimizations always target common cases and just let uncommon cases finish gracefully, such a performance-benchmark breakdown is questionable. Some cloud-based products incur over a 35% slowdown in my app-startup tests across multiple platforms, yet perform really well in some third-party app-startup tests. This makes me believe some targeted optimization may have been applied to boost the test score.
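The caching effect in question (don't redo an expensive scan or emulation for content that hasn't changed since it was last judged clean) can be sketched minimally; `expensive_scan` here is a hypothetical stand-in for the engine's slow path, not any vendor's actual mechanism:

```python
import hashlib

class ScanCache:
    """Toy verdict cache keyed by content hash: the expensive scan
    (emulation, unpacking, ...) runs only the first time a given file
    content is seen; repeated app starts then hit the cache."""

    def __init__(self, expensive_scan):
        self.expensive_scan = expensive_scan
        self.verdicts = {}  # sha256 hex digest -> verdict string

    def scan(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.verdicts:           # slow path: first sight
            self.verdicts[digest] = self.expensive_scan(data)
        return self.verdicts[digest]              # fast path afterwards

calls = []

def expensive_scan(data: bytes) -> str:
    calls.append(data)  # count invocations for the demo
    return "clean"

cache = ScanCache(expensive_scan)
cache.scan(b"app.exe bytes")
cache.scan(b"app.exe bytes")  # cached: no second expensive scan
print(len(calls))             # 1
```

A benchmark that only ever presents each file once never exercises the fast path, which is exactly how a heavily cached product can look slow on paper and feel fast in daily use.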
  24. As I said, a bad FP score indicates a bad product, but a good FP score doesn't necessarily mean a product's actual FP rate is low. I can easily make an innocent hello-world program (without obfuscation techniques) that several products with low FP counts in the AV test (say, 1~4) will raise a false alarm on. It is not easy to make ESET do so. The real world is much more complex than this (note their FP test executed only ~50 samples to test behavior blockers, which are more prone to FPs). I have even seen one product with a good-looking FP score flag PCMark as malicious and auto-quarantine it, and another delete multiple benign applications during a disinfection process due to inappropriate OS tagging. These are things you might never learn if you don't actually try the products out. So far I haven't seen an FP test that reflects those products' real-world FP behavior well. Anyway, I am not here to defend why ESET didn't participate in recent tests; actually, I am wondering the same thing. I just feel it is necessary to state my observations for a fairer comparison. There is always a trade-off, and whether a higher detection rate or a lower FP rate is more important is left to customers to decide. I personally feel that a security product starts to become meaningless when its FPs exceed a certain threshold, because at that point users will be busy excluding files, including malicious ones.