
0xDEADBEEF

Most Valued Members
  • Posts: 361
  • Joined
  • Days Won: 3

Posts posted by 0xDEADBEEF

  1. 4 minutes ago, Marcos said:

    It's another protection layer. While AMS scans process memory upon execution, the startup scan (available as tasks in scheduler) scans files registered in startup locations and memory after each module update and user's logon.

    Cool, thanks. I'm wondering: is this the setting for configuring the startup scan?

    [screenshot: settings]

    BTW, am I correct that pausing protection using the right-click menu in the tray will also pause the startup scan? There seems to be no standalone option for turning it on or off in the settings menu.

  2. I noticed that even with real-time file scanning disabled, there is something called the startup scanner that detects some threats before the advanced memory scanner kicks in. Is the startup scanner a special monitoring layer? What's the difference between it and the real-time scanner? How can it be disabled or configured?

  3. Also, some free AV vendors have a nasty history of copying non-free vendors' detections (even the detection names!). They will blacklist whatever some trusted vendors (like ESET) flag as malicious within hours, without even examining the samples themselves. They don't pay those trusted vendors, don't get caught, and get good scores on some third-party tests. This also creates the illusion that they always catch more than some non-free vendors.

  4. 1 hour ago, itman said:

    The "reverse all the changes made by a particular malware sample" leads me to believe the primary reason for failure.

    Yeah, this makes sense. Usually an AMS detection leaves some harmless side effects on the test machine, so if those side effects are counted as a failure, ESET is bound to get a somewhat lower score...

  5. 18 hours ago, itman said:

    The most audacious claim is that its cloud scanning can detect malware and relay that status back to the requesting device in micro-seconds. The reality is any advanced malware cloud evaluation will take much longer. WD's "out-of-box" default scan duration is 30 secs. or less I believe. Once that duration has elapsed, the process is allowed to execute unimpeded. 

    I think it is doable, as long as the client engine can extract feature vectors locally and send only that information to the cloud for a large-model verdict... As they showed in the article, the detonation-based model (cloud sandbox) requires minutes, but a preliminary static examination takes much less time, similar to Avira's strategy.
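    To make the idea concrete, here is a minimal sketch of what I mean by extracting a compact feature vector locally. The features chosen here (size, byte-entropy, a coarse histogram) are my own illustration, not any vendor's actual pipeline:

```python
import hashlib
import math
from collections import Counter

def extract_features(data: bytes) -> dict:
    """Cheap static features a client could compute locally and send
    to a cloud model instead of uploading the whole file."""
    counts = Counter(data)
    total = len(data) or 1
    # Shannon entropy of the byte distribution (packed/encrypted
    # payloads tend to score close to 8 bits per byte)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return {
        "sha256": hashlib.sha256(data).hexdigest(),  # exact-match cloud lookup
        "size": len(data),
        "entropy": round(entropy, 3),
        # 16-bucket byte histogram, normalized: a tiny "feature vector"
        "histogram": [sum(counts[b] for b in range(i, 256, 16)) / total
                      for i in range(16)],
    }

features = extract_features(b"MZ" + bytes(range(256)) * 4)
print(features["size"], features["entropy"])
```

    A vector like this is only a few hundred bytes, so shipping it to a cloud model and getting a verdict back can be far faster than uploading or detonating the file itself.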

     

    However, I haven't used WD for a long time, so I don't know whether their claims match what the actual product does. I guess document protection is somewhat different from WD's cloud scan. And I agree with your comment about SmartScreen; in my experience it is a sort of anti-exec tool, not that "smart".

  6. 20 minutes ago, itman said:

    Wonder how many SMB's have one of those employed full-time

    Perhaps they are targeting large enterprise customers :rolleyes:. Their sample report seems to be very detailed.

    Plus, some more discussion of countermeasures against the evasion techniques I mentioned in the previous post: https://www.first.org/resources/papers/conf2017/Countering-Innovative-Sandbox-Evasion-Techniques-Used-by-Malware.pdf

    I should have found these materials earlier... I guess I should try more search keywords next time.

    I believe ESET, and perhaps other large AV vendors, also have in-house sandboxes with such capabilities; otherwise a lot of samples would slip through.

     

    BTW, Cuckoo just released a new version today, sadly without the most anticipated feature.

  7. 3 hours ago, itman said:

    I do have one question in regards to Eset's new Enterprise sandbox offering. Presently Eset doesn't detect global keylogger activity. Case in point.

    Attacker deploys a .Net based global keylogger running from PowerShell. For Eset's cloud scanning to detect this keylogger activity, it will have to be employing an engine that has behavioral detection capability. So will Eset be using a new engine in the cloud with behavioral detection capability?

    My best guess is they have their in-house kernel logging implemented in the cloud.

     

    Hah, I found this article: https://www.vmray.com/blog/analyzing-environment-sensitive-malware/

    To detect environment-sensitive malware and thus hidden functionality, we combine Intel’s new Processor Tracing Feature with powerful analysis techniques and sophisticated heuristics:

    • We utilize Processor Tracing information to determine code coverage in memory dumps of the monitored processes, i.e. identify all code locations that have not been executed during analysis.
    • From these untriggered code locations, we identify the subset of ‘interesting’ functionality, e.g., by discarding error handling routines.
    • Then we track back execution flow from these ‘interesting’ non-executed code locations to preceding conditional branches that depend on environment settings, e.g., functions that obtain the current time or keyboard layout.
    • Symbolic execution is then applied to identify paths (conditions) that lead to the hidden functionality.
    • On these paths we use a solver to generate concrete values to trigger their execution.
    • Finally, we reanalyze the sample and make sure that all environment queries result in the values needed to reach the hidden functionality.

    So it seems this is adopted in some cutting-edge sandbox services (or not-so-cutting-edge from a pro's view).

    And some related slides: https://www.slideshare.net/FabioRosato/symbolic-execution-of-malicious-software-countering-sandbox-evasion-techniques
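    The coverage-guided part of those steps can be sketched with nothing but Python's stdlib tracing. In this toy, a brute-force search stands in for the symbolic-execution/solver stage, and the sample function with its 0x0419 trigger is entirely made up for illustration:

```python
import sys

def sample(keyboard_layout: int) -> str:
    """Toy environment-sensitive sample: the hidden branch only
    executes for one specific layout value (made-up trigger)."""
    if keyboard_layout == 0x0419:
        return "hidden-path"          # code a sandbox run never reaches
    return "benign-path"

def lines_executed(func, *args):
    """Coverage step: record which lines of `func` actually ran."""
    hit, code = set(), func.__code__
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            hit.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return hit

# Run once with a "default" environment: the hidden line stays uncovered.
baseline = lines_executed(sample, 0x0409)

# Stand-in for the symbolic-execution + solver step: search for an input
# whose run covers lines the baseline run never reached.
trigger = next(v for v in range(0x10000)
               if lines_executed(sample, v) - baseline)
print(hex(trigger))  # -> 0x419
```

    A real engine would derive the trigger value from the branch condition with a solver instead of enumerating inputs; that is what lets the approach scale past toy examples.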

     

  8. 24 minutes ago, itman said:

    Employing "sleeper" malware has always been an effective sandbox bypass method

    Yes, most modern sandboxes are now able to skip basic sleep functions. However, there are many ways to sleep, so there is no all-in-one solution to this issue (last year I found a simple method to bypass the MSE engine's countermeasure; I'm not sure if they have fixed it).
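    As a harmless illustration of why API-level sleep skipping isn't an all-in-one solution, here is a sketch in Python, with a monkeypatched time.sleep standing in for the sandbox's hook:

```python
import time

def api_sleep(seconds):
    """Waits via the 'official' sleep API -- the call a sandbox
    typically hooks and fast-forwards."""
    time.sleep(seconds)

def stealth_sleep(seconds):
    """Waits by polling the wall clock instead; neutering time.sleep
    alone does not shorten this."""
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        sum(range(1000))  # burn a little CPU between clock checks

# Crude stand-in for a sandbox that neuters the sleep API:
_real_sleep = time.sleep
time.sleep = lambda seconds: None

t0 = time.monotonic(); api_sleep(0.2); api_elapsed = time.monotonic() - t0
t0 = time.monotonic(); stealth_sleep(0.2); stealth_elapsed = time.monotonic() - t0
time.sleep = _real_sleep

print(f"hooked api_sleep: {api_elapsed:.3f}s, stealth_sleep: {stealth_elapsed:.3f}s")
```

    The hooked API sleep returns instantly, while the clock-polling variant still waits the full duration; real evasions have many more variants along these lines.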

    27 minutes ago, itman said:

    In this regard, sandboxing "did its job" in that it indirectly prevented the malware from executing

    Yes, though I am not referring to sandboxes used for that purpose. I personally find sandboxes of that kind very confusing in terms of user experience: they are more for pros and don't help tell whether an untrusted file is malicious or not (unless the malicious behavior is too obvious).

    29 minutes ago, itman said:

    Cloud based sandboxes and their local based virtual equivalent ones such as the Cuckoo sandbox are used primarily for malware determination status

    Yes, I can't tell how many times Cuckoo has failed to unroll the behavior of a malicious sample. Some need specific user interaction (like the one that lures the user to click a button and then decodes the malicious payload and destroys the MBR, or the ones that detect the existence of certain processes, which falls outside traditional anti-VM tricks but is very effective against users with certain language backgrounds). Some send network beacons and only activate in certain IP regions (some public sandbox services I am aware of don't allow such connections to go through, for security reasons). I feel an automated system is powerless against such tricks unless there is some secret sauce I am not aware of. I've heard of some systems using symbolic execution and fuzzing to try to discover more execution paths; however, I believe malware authors can make this difficult by exponentially increasing the number of such paths, making them hard to track. Hmm.

  9. There is a question that has long baffled me...

    Since a Turing machine cannot decide whether a given portion of code will ever be executed, detecting malware is theoretically undecidable unless the malicious code is actually triggered under some condition. One might argue that the problem can be partially solved by constantly monitoring program behavior in the background. However, I feel this is a particular bottleneck for detection methods that execute the sample, collect a behavioral trace, and give a verdict within a limited amount of time. For example, how does an automatic sandbox, which usually runs a sample for only a few minutes, examine malicious apps whose payload triggers only under certain conditions (e.g., some samples only execute malicious code under a specific username or language setting, some have a UI and require the user to perform certain actions before the program proceeds to the malicious code section, and theoretically some can wait a year and then detonate)? Is there a way to easily examine multiple execution paths in such dynamic analysis, and perhaps intelligently skip these barriers? Static analysis, on the other hand, doesn't seem to have this problem, but of course struggles with heavily obfuscated code.
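    One partial answer I've seen is multi-environment replay: rerun the sample under a grid of fabricated environments and union the observed behaviors. A toy sketch (the gating conditions here are invented for illustration):

```python
from itertools import product

def sample(env: dict) -> list:
    """Toy sample whose payload is gated on environment checks."""
    actions = ["ping-c2"]
    if env["lang"] == "ko-KR" and env["user"] != "sandbox":
        actions.append("drop-payload")   # hidden behavior
    return actions

# Multi-environment replay: rerun the sample under a grid of plausible
# environments instead of a single sandbox configuration.
grid = {
    "lang": ["en-US", "ko-KR", "zh-CN"],
    "user": ["sandbox", "alice"],
}
observed = set()
for values in product(*grid.values()):
    env = dict(zip(grid.keys(), values))
    observed.update(sample(env))
print(sorted(observed))  # -> ['drop-payload', 'ping-c2']
```

    Of course, each additional environment check multiplies the grid size, which is exactly the exponential blow-up problem above.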

  10. As additional info, today I got another such document sample which an ESET scan does not detect using the latest virus DB.

    First, look at the VT result. Note the first submission of this sample to VT was at 12:12 UTC, and I am testing at around 14:00 UTC, about a 2-hour difference.

    [screenshot: VirusTotal detection results]

    Seems to be a tragic result for ESET, right?

    [screenshot: the document]

    Open it... well, it is a very typical mal-doc, asking the user to enable macros.

    [screenshot: macro prompt]

    Enable, then OK. First, the internal URL blacklist blocked some of the traffic:

    [screenshot: URL blacklist block]

    Then real-time filesystem monitoring kicked in:

    [screenshot: botnet protection alert]

    And finally, botnet protection blocked the trojan-downloader behavior. The system stayed clean, with all of this successfully blocked.

    This is a common pattern in nearly all the document samples I've tested.

    As you can see, ESET takes a layered approach against such threats, not just scanning.

  11. 3 hours ago, claudiu said:

    The OP summarized very well the situation :

    20 hours ago, Sp Ebil said:

    Unfortunately, here and for at least a year, for every email we receive with an attached file, we have to look it up on VirusTotal. We have never seen ESET quickly identify a file as a trojan, malware, etc.

    My experience is that ESET tends to block malware-carrying documents at a later stage rather than at the scanning stage, while VirusTotal only shows scan results.

    I partially agree that there are many cases where ESET's scanner did not detect, at the early exposure stage, the documents or other archives commonly seen in malware-spreading spam (i.e., other scanners on VT already detected them but ESET didn't). But after opening/executing these files, I usually found the actual payload was blocked by the internal URL blacklists, AMS, or later defense layers. These experiences include "real-world" ones where the samples, at the time I got them, were not even on VT yet. So judging solely through VirusTotal doesn't fully reflect a product's detection ability.

    Many documents of this type, for example, are merely downloaders and don't contain the actual malicious code; blocking the actual payload at a later stage should also count as a success. Also, blocking some types of threats at a later stage is, from my perspective, a way to decrease false positives, especially given my experience that some vendors on VT have aggressive detections for downloaders and occasionally misclassify legitimate files into such families.

  12. 12 minutes ago, Marcos said:

    Tencent has been detected as PUA since 2015. Since it was not me who analyzed it, I don't know what's exactly wrong with it. However, the detection was created by an experienced PUA engineer so there was definitely something that makes it PUA.

    I would appreciate it if ESET could disclose some of the detailed reasoning behind this detection. It would help me evaluate whether to whitelist this software or not (and the truth is most Chinese users simply whitelist this detection... so knowing the reason serves as a better justification for not whitelisting this PUA :))

  13. 5 hours ago, itman said:

    Eset also doesn't like Tencent's Spectre test. It flags it as JS/Exploit.Spectre; most likely due to its running of the Spectre POC code.

    Hmm, I was wondering what kind of signature is extracted from that exploit script.

    BTW, I am curious about the malicious behaviors behind this Tencent.O detection. Since it is a very popular IM application in China, I don't think ESET would detect it without a good reason.

  14. 2 hours ago, Peter Randziak said:

    Hello @0xDEADBEEF

    thank you for the report.

    The detection for this samples has been already added.

    It was not yet detected, when you posted this as it was not yet processed by the automated samples processing system.

    In case you have an undetected samples or suspicious application, please send it to our research lab as described at https://support.eset.com/kb141/#SubmitFile 

    Thank you, P.R.

    Thanks! Originally I intended to ask whether ESET has generic exploit detection like the other vendors on VT shown on that page. From the updated detection name, I can see what happened.

  15. I was wondering why ESET's scan usually doesn't detect documents carrying exploits. For example, this file (scan shows clean with virus DB 17430):

    https://www.virustotal.com/#/file/84a7c1eac6e1a130cb66126fa48258e9c7c8b60a2a5fd0fcd564305775757641/detection

    Executing this sample in a virtual machine successfully downloads the payload and runs it through the Equation Editor exploit, and ESET detects the payload post-execution as FareIt using AMS. But I feel detecting it at an earlier stage would be better?

  16. On ‎04‎/‎17‎/‎2018 at 4:37 PM, 0xDEADBEEF said:

    due to the fact that some vendors collect samples from VT but ESET (seems) doesn't

    Some more testing reveals that some vendors closely monitor and quickly blacklist VT samples. They can show a very bad detection rate when samples fall outside VT's collection.

    This creates a severely biased result: for people who test these products for fun, the samples are likely to be collected from VT, or at least to have been scanned on VT (note that many online sandboxes also upload samples to VT for a static verdict). Vendors which closely monitor and blacklist VT samples can get a pretty good result because, due to this sampling bias, they always get the sample before the tester does, creating the illusion that these vendors always detect malware samples ahead of time. In reality this is not the case, because widespread samples might not be on VT while rare samples might be. A recent non-VT sample collection I obtained produced pretty bad results for those high-scoring vendors, while ESET still performed well.

    Further tests reveal that simple MD5-modification techniques can easily bypass those vendors' VT-derived blacklist signatures (including detection names like GenericKD, UDS, Gen..., all common ones from vendors with good AV-Comparatives scores), while ESET's signatures and cloud signatures are robust against such a basic technique.
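    For reference, the hash fragility is trivial to demonstrate: appending a single overlay byte leaves a file's behavior intact in many formats but changes every full-file hash, so exact-hash blacklists miss it. The bytes below are a dummy stand-in, not a real sample:

```python
import hashlib

original = b"MZ\x90\x00 fake executable body"
# Appending one byte to the overlay: behavior unchanged for many
# formats, but every full-file hash changes completely.
modified = original + b"\x00"

md5_a = hashlib.md5(original).hexdigest()
md5_b = hashlib.md5(modified).hexdigest()
print(md5_a)
print(md5_b)
```

    Detections keyed on code structure or emulated behavior, rather than the full-file hash, are what survive this kind of trivial mutation.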

    So great job ESET:)

  17. 6 hours ago, itman said:

    For example, WD is basically a signature based solution. It lacks advanced local heuristic scanning capability that the major AV vendors like Eset employ

    Actually, WD has an emulation engine, but a rather poor one; it can be easily fingerprinted and bypassed.

    6 hours ago, itman said:

    This is the major reason that WD always is dead last in AV-C's Performance Comparative for example

    I personally feel it is more because WD lacks a caching capability, so its emulation engine has to busily emulate executables even when they are not new to the machine; I'm not sure why they didn't add caching. One interesting note: both ESET and WD have a noticeable impact on program startup speed, but ESET's is mostly due to AMS, since the real-time scanner has caches.
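    A toy sketch of the kind of caching I mean: remember verdicts by content hash so the expensive emulation only runs the first time a file is seen. The 50 ms "emulation" here is a stand-in, not any product's real cost:

```python
import hashlib
import time

class ScanCache:
    """Toy scan-result cache: rescan bytes only when their hash
    has not been seen before."""
    def __init__(self, scan_fn):
        self.scan_fn = scan_fn
        self.results = {}                     # sha256 -> verdict
    def scan(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in self.results:           # cold: pay the full scan cost
            self.results[key] = self.scan_fn(data)
        return self.results[key]              # warm: near-free lookup

def slow_emulation(data: bytes) -> str:
    time.sleep(0.05)                          # stand-in for emulating the binary
    return "clean"

cache = ScanCache(slow_emulation)
exe = b"\x4d\x5a" + b"A" * 1024

t0 = time.monotonic(); cache.scan(exe); cold = time.monotonic() - t0
t0 = time.monotonic(); cache.scan(exe); warm = time.monotonic() - t0
print(f"cold {cold*1000:.1f} ms, warm {warm*1000:.3f} ms")
```

    Without the cache, every program start pays the "cold" cost again, which is consistent with the startup-impact pattern described above.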

    Thanks for the info about the cloud scanner (I remember seeing some detection names with "cl" as a suffix; perhaps that's it?). I am a bit confused about the two-phase scan you mentioned: is phase one local sandboxing, and phase two cloud sandboxing?

  18. 1 hour ago, itman said:

    Eset on the other hand allows advanced users much of the above capability while at the same time giving like protections to the average user in the "out-of-the-box" default configuration

    I was more curious about the yellow bar in the Windows Defender score. I am not sure what kind of alert counts as a user-dependent event... SmartScreen?

  19. 47 minutes ago, itman said:

    This Eset security blog article might explain some of what is happening on the AV lab test scene

    This reminds me of another interesting observation: a year ago I kept wondering why ESET scored so badly in some third-party performance-impact evaluations, because that is very counterintuitive given my own experience. After some leisure-time benchmarking and analysis across different products, I started to see some of the reasons behind the numbers.

    Some tests "successfully" avoid many scenarios where a caching mechanism would help. Installing new apps, starting an application, etc. all fall outside this range, yet they may make up a large portion of the performance score. Since performance optimizations always target common cases and let uncommon cases finish gracefully, the breakdown of such performance benchmarks is questionable.

    Some cloud-based products incur over a 35% slowdown in my app-startup tests across multiple platforms, yet perform really well in some third-party app-startup tests. This makes me believe targeted optimizations may have been applied to boost the test scores.

  20. 5 hours ago, MSE said:

    ESET had 0FP in Aug2017 , while MSE had 2FP in Jan2018 and 1FP in Feb2018, from 1,500,000 samples, which is 0.00006%.

    Is beyond reasonable.

     

    As I said, having a bad FP score indicates a bad product, but having a good FP score doesn't necessarily mean the actual FP rate of a product is low. I can easily write an innocent hello-world program (without obfuscation techniques) that makes several products with a low number of FPs in AV tests (say, 1~4) raise a false alarm; it is not easy to make ESET do so. The real world is much more complex than this (note their FP test executed only ~50 samples to test behavior blockers, which are more prone to FPs). I have even seen one product with a good-looking FP score flag PCMark as malicious and auto-quarantine it, and another delete multiple benign applications during a disinfection process due to inappropriate OS tagging. These are things you would not know if you didn't really try the product out. So far I have not seen an FP test that can properly reflect these products' real-world FP rates.

    Anyway, I am not here to defend why ESET doesn't participate in recent tests. Actually, I am wondering the same thing.

    I just feel it is necessary to state my observations for a fairer comparison. There is always a trade-off, and whether a higher detection rate or a lower FP rate is more important is left to customers to decide. I personally feel a security product starts to become meaningless once its FP rate exceeds a certain threshold, because users will then be busy excluding files, including malicious ones :rolleyes:
