
What Does Your Machine Actually Learn?


itman


Hilarious and definitely worth a read: 

Quote

Machine Learning, or Artificial Intelligence as it is sadly erroneously being marketed as, is all the rage right now. We are being promised a brand new emerging world where digital minions jump at our every whim to fulfil our dreams and wishes. It even promises to do away with pesky employees and their meat body demands and expectations.

Sadly, this is of course just the latest in a stream of hyped up marketing that overpromises and underdelivers. The soberer truth right now is that Artificial Intelligence is like the dancing bear at a circus. We are not fascinated because the bear dances well – because it doesn’t. We are fascinated that the bear dances at all.

To underline this point, Gartner recently compared the craze for every company to pretend to be an AI company to the “greenwashing” that led every company to brand themselves as environmentally conscious and friendly. If you look at Gartner’s Hype Cycle for Emerging Technologies though, Machine Learning is at the peak of inflated expectations. As anyone who is familiar with Gartner’s Hype Cycles knows, after that comes the trough of disillusionment. And as anyone who is familiar with the history of AI knows, we’ve been here before - several times. Each time, AI was unable to deliver what it promised and suffered through several lost decades with a lack of funding. We are now on the 5th generation of AI.

Ref.: http://www.securityweek.com/what-does-your-machine-actually-learn

Edited by itman

cyberhash

The fact of the matter is that both machine learning and artificial intelligence exist right now. It just depends on what people's expectations are of it, plus what it's marketed as.

ML in its current form can be seen by anyone online who goes to Google and types in "how do i"...
Google then gives you suggested completions for your sentence, where it can either fail or succeed.

An example of something that probably worked for a few people was during the elections in the USA. If you went to Google at that point in time and typed in "should i vote for", it would most likely have suggested Donald Trump or Hillary Clinton.

But Google would have been wrong, as I was actually looking for "should i vote for the new library to be built", for example.
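To make that concrete, here is a minimal sketch of the kind of frequency-based suggestion logic described above. The query strings and counts are invented for illustration; it is not how Google actually ranks completions, only the general idea of guessing from what most past users typed.

from collections import Counter

# Hypothetical query log with invented frequencies.
query_log = Counter({
    "should i vote for donald trump": 9000,
    "should i vote for hillary clinton": 8500,
    "should i vote for the new library to be built": 3,
})

def suggest(prefix, log, top_n=3):
    # Rank past queries that start with the typed prefix by how often they were seen.
    matches = sorted(((count, q) for q, count in log.items() if q.startswith(prefix)),
                     reverse=True)
    return [q for count, q in matches[:top_n]]

print(suggest("should i vote for", query_log))
# The rare "library" query ranks last, so the statistically sensible guess
# is still wrong for this particular user.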

If you then apply the same expectations and outcomes to a security product, it's easy to see how "guesswork" can get it so wrong most of the time, but on occasion get it right.




 


Adding this to the original posting:

Quote

AI Training Algorithms Susceptible to Backdoors, Manipulation

Three researchers from New York University (NYU) have published a paper this week describing a method that an attacker could use to poison deep learning-based artificial intelligence (AI) algorithms.

Researchers based their attack on a common practice in the AI community where research teams and companies alike outsource AI training operations using on-demand Machine-Learning-as-a-Service (MLaaS) platforms.

For example, Google allows researchers access to the Google Cloud Machine Learning Engine, which research teams can use to train AI systems using a simple API, using their own data sets, or one provided by Google (images, videos, scanned text, etc.). Microsoft provides similar services through Azure Batch AI Training, and Amazon, through its EC2 service.

Ref.: https://www.bleepingcomputer.com/news/security/ai-training-algorithms-susceptible-to-backdoors-manipulation/
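As a rough illustration of the backdoor idea described in that article (this is not the NYU researchers' code; the data, trigger, and model are all invented), the sketch below poisons a small training set by stamping a fixed trigger value onto a few deliberately mislabeled samples, so the trained model tends to obey the trigger at prediction time.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy clean training data: class 0 clusters near -1, class 1 clusters near +1.
X_clean = np.vstack([rng.normal(-1, 0.3, (200, 4)), rng.normal(1, 0.3, (200, 4))])
y_clean = np.array([0] * 200 + [1] * 200)

def add_trigger(x):
    # The attacker's backdoor trigger: force feature 3 to an out-of-range value.
    x = x.copy()
    x[:, 3] = 5.0
    return x

# A handful of class-0-looking samples, stamped with the trigger and
# deliberately mislabeled as class 1.
X_poison = add_trigger(rng.normal(-1, 0.3, (20, 4)))
y_poison = np.ones(20, dtype=int)

# The outsourced trainer sees the clean and poisoned samples mixed together.
model = LogisticRegression().fit(np.vstack([X_clean, X_poison]),
                                 np.concatenate([y_clean, y_poison]))

victim = rng.normal(-1, 0.3, (5, 4))       # inputs that belong to class 0
print(model.predict(victim))               # expected: mostly class 0
print(model.predict(add_trigger(victim)))  # the trigger tends to flip these to class 1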

Edited by itman

peteyt
On 25/08/2017 at 0:19 AM, cyberhash said:

The fact of the matter is that both machine learning and artificial intelligence exist right now. It just depends on what people's expectations are of it, plus what it's marketed as. [...]

Would Eset's firewall learning mode not be classed as machine learning then? I haven't used it, but I presume you teach it and it works out what to do from your previous actions.

And I think that's the problem. Companies are trying to sell this as a new technology, but it's been around in some form or another for years. It just seems to be a buzzword really.


3 hours ago, peteyt said:

Would Eset's firewall learning mode not be classed as machine learning then?

In the context of the posted articles, the answer is no.

Eset's firewall and HIPS learning modes amount to "allow all activities." In other words, they are doing nothing more than recording firewall or HIPS-monitored process activity and then creating the appropriate allow rules for that activity.
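A rough sketch of what such a learning mode amounts to (hypothetical code, not ESET's actual implementation): every observed connection is simply recorded and turned into an allow rule, with no statistical model and no generalization beyond what was seen.

# Hypothetical firewall "learning mode": pure recording of observed traffic.
allow_rules = set()

def learning_mode(connection):
    rule = (connection["process"], connection["remote_port"], connection["protocol"])
    allow_rules.add(rule)          # remember it: identical future traffic is allowed
    return "allow"

def enforcement_mode(connection):
    rule = (connection["process"], connection["remote_port"], connection["protocol"])
    return "allow" if rule in allow_rules else "ask_user"

learning_mode({"process": "firefox.exe", "remote_port": 443, "protocol": "TCP"})
print(enforcement_mode({"process": "firefox.exe", "remote_port": 443, "protocol": "TCP"}))  # allow
print(enforcement_mode({"process": "evil.exe", "remote_port": 443, "protocol": "TCP"}))     # ask_user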

Machine learning in the context of the posted articles has been around for a long time, and all the major AV products use it in some form. For example, Eset uses it to create its "DNA" or generic real-time malware signature database, which allows it to identify variants of a malware family using a single signature. What the "Next Gen" products are doing is using machine learning based on process behavior to differentiate between normal and abnormal process behavior. Once abnormal behavior is detected, these solutions then apply AI algorithms, which for the most part are probability-based, to determine the likelihood, i.e. the confidence level, that the behavior is malicious.
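By contrast, the behavior-based approach described above can be caricatured along these lines; the behaviors, weights, and threshold below are invented for illustration and are not taken from any real product.

import math

# Toy probabilistic scoring of process behavior. Each observed behavior
# contributes a weight (invented numbers); a logistic function turns the
# total into a confidence that the process is malicious.
WEIGHTS = {
    "writes_to_startup_key": 1.2,
    "injects_into_other_process": 2.5,
    "encrypts_many_user_files": 3.0,
    "opens_browser_window": -0.5,
}
BIAS = -3.0
THRESHOLD = 0.8  # confidence needed to call it malicious

def malicious_confidence(behaviors):
    score = BIAS + sum(WEIGHTS.get(b, 0.0) for b in behaviors)
    return 1.0 / (1.0 + math.exp(-score))

observed = ["injects_into_other_process", "encrypts_many_user_files"]
conf = malicious_confidence(observed)
print(f"confidence={conf:.2f}", "malicious" if conf >= THRESHOLD else "benign")
# A legitimate backup tool that only "encrypts_many_user_files" would score 0.50,
# showing how the threshold choice trades missed detections against false positives.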

For the record, none of this technology per se is new. A security product named Dynamic Security Agent attempted to do likewise in the early 2000s; it was never widely used and was quickly abandoned. Granted, the methods it employed were quite basic in comparison to the current Next Gen solutions, but many of its issues still remain, namely the incidence of high false positives.

Edited by itman

This topic is now closed to further replies.