Takeaway: AI and ML are powerful forces in disrupting cybercrime, protecting enterprises, and safeguarding against data breaches.
Cybercriminals are constantly finding new ways to wreak havoc, steal private information, and commit all kinds of mischief. New technologies such as artificial intelligence (AI) and machine learning (ML) have already been leveraged by hackers for malicious ends.
As Andy Grove, co-founder and former CEO of Intel, once said:
“At the heart of the Internet culture is a force that wants to find out everything about you. And once it has found out everything about you and two hundred million others, that’s a very valuable asset, and people will be tempted to trade and do commerce with that asset.”
However, AI and ML are also powerful forces in disrupting cybercrime, protecting enterprises, and safeguarding data against breaches and exploits. What are some recent developments in adopting ML for cybercrime defence?
Protecting Credit Card Security and Privacy
Today, protecting the privacy and security of shoppers’ credit cards is mandatory: vast numbers of people shop online, and cards are used routinely in brick-and-mortar stores as well. Every single transaction must be examined in real time for signs of fraud.
That is a truly titanic feat when you consider that a global payment processor such as Mastercard can handle nearly 165 million transactions per hour.
Only ML algorithms powered by high-performance computing (HPC) are able to establish this much-needed layer of protection by applying 1.9 million rules to each transaction in less than one second.
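Mastercard’s actual rules and models are not public, but a minimal sketch of real-time transaction scoring, using hypothetical features (amount, time since last purchase, distance from the cardholder’s usual location, merchant risk) and combining one hand-written rule with a trained classifier, might look like this in Python:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical transaction features: amount, seconds since last purchase,
# distance (km) from the cardholder's usual location, merchant risk score.
X_train = np.array([
    [25.0,   3600, 2,    0.10],
    [900.0,    40, 4500, 0.80],   # labelled fraud
    [12.5,   7200, 1,    0.05],
    [1500.0,   15, 8000, 0.90],   # labelled fraud
])
y_train = np.array([0, 1, 0, 1])

model = GradientBoostingClassifier().fit(X_train, y_train)

def score_transaction(features, threshold=0.5):
    """Return True if the transaction should be flagged for review."""
    amount, seconds_since_last, distance_km, merchant_risk = features
    # A simple hand-written rule, standing in for the large rule sets
    # that payment processors apply to every transaction.
    if amount > 1000 and seconds_since_last < 60:
        return True
    # Otherwise fall back to the model's estimated fraud probability.
    prob_fraud = model.predict_proba([features])[0][1]
    return prob_fraud >= threshold

print(score_transaction([1200.0, 30, 6000, 0.7]))  # likely flagged
print(score_transaction([20.0, 5400, 3, 0.1]))     # likely allowed
```

In production, a pipeline like this would run on streaming infrastructure so that each transaction is scored within the sub-second window described above.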
Teaching Users How to Protect Themselves
The most effective way to prevent some forms of cybercrime, such as phishing, is simply to teach people how to avoid falling for them. Many phishing attempts are quite transparent, but others are sneakier and harder to detect. For this reason, many companies train their employees to protect themselves against phishing with simulation campaigns.
Some workers in an organization are inherently more vulnerable than others, yet those who aren’t still keep getting the same annoying training messages over and over, no matter what.
Hoxhunt is a company that employs ML to take the effectiveness of phishing simulations to the next level. Instead of teaching the same lessons to everyone regardless of their abilities, roles and email use patterns, the system is able to draw information from the individual responses of each employee.
The AI then “personalizes” the learning experience accordingly, sending fake phishing emails of increasing sophistication over time to test people’s vigilance. The more frequently a worker falls for the phishing simulation, the more training he or she will receive. Similarly, if a user demonstrates a higher level of awareness, the platform will reduce the frequency of the simulations.
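Hoxhunt has not published its algorithm, but the adaptive loop described above can be sketched roughly as follows, with hypothetical difficulty levels and cadence rules:

```python
from dataclasses import dataclass

@dataclass
class EmployeeProfile:
    name: str
    difficulty: int = 1          # 1 = obvious lures, 5 = highly targeted
    days_between_sims: int = 30  # how often simulations are sent

def update_profile(profile: EmployeeProfile, clicked_lure: bool) -> EmployeeProfile:
    """Adjust difficulty and cadence after each simulated phishing email.

    Falling for the lure triggers more frequent, simpler training;
    spotting it earns harder, less frequent simulations.
    """
    if clicked_lure:
        profile.difficulty = max(1, profile.difficulty - 1)
        profile.days_between_sims = max(7, profile.days_between_sims // 2)
    else:
        profile.difficulty = min(5, profile.difficulty + 1)
        profile.days_between_sims = min(90, profile.days_between_sims + 15)
    return profile

alice = EmployeeProfile("alice")
update_profile(alice, clicked_lure=True)   # more training, simpler lures
update_profile(alice, clicked_lure=False)  # harder lures, longer gap
print(alice)
```

A real platform would of course learn from richer signals (role, email habits, reporting behaviour) rather than a single click/no-click flag.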
As Hoxhunt CEO Mika Aalto explained:
“One of the multiple challenges faced by organizations today is the severe shortage of talented security professionals in the market. With the support of ML, it’s possible to tailor individual training to each employee based on their role and progression, without adding extra constraint to the current team.”
Fighting Fire with Fire
Although the use of new ML algorithms is helping cybercriminals automate their massive attacks and exploits, AI can be used to automate and streamline data analysis for cybercrime defence as well. AI programs can examine incoming and outgoing business traffic at remarkable speed to detect anomalies in data patterns.
They can be used to spot a breach as it occurs, effectively preventing it, or at least mitigating it. Supervised learning can help the AI become more efficient in detecting advanced malware over time.
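As a rough illustration of the anomaly-spotting side, an unsupervised detector such as scikit-learn’s IsolationForest, fed with hypothetical per-host traffic summaries, can flag hosts whose behaviour departs sharply from the norm:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-host traffic summaries: bytes out, packets, distinct ports.
rng = np.random.default_rng(0)
normal_traffic = rng.normal(
    loc=[50_000, 400, 5], scale=[10_000, 80, 2], size=(500, 3)
)

# The unsupervised model learns the shape of "normal" traffic.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A host suddenly exfiltrating data and touching hundreds of ports stands out.
suspicious = np.array([[5_000_000, 20_000, 300]])
print(detector.predict(suspicious))          # -1 means anomaly
print(detector.predict(normal_traffic[:3]))  # mostly 1 (normal)
```

The supervised approach mentioned above works the other way around: known malware samples are labelled, and the model gradually improves at recognising similar threats.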
For example, DeepArmor is an ML-based tool that leverages Google Cloud Machine Learning Engine to prevent endpoint attacks by detecting threats early with 99.5% accuracy.
The scalability of AI is also critical to reduce the overwhelming workload of IT security departments that are in dire need of more streamlined processes to parse through all the data and root out threats.
This is especially true for smaller businesses: nearly one-quarter of enterprises lack the resources to achieve effective in-house cybersecurity, such as a fully dedicated team to monitor systems and spot signs of a threat.
AI can classify risk autonomously, suggest a course of action, and, when coupled with human efforts, enable more effective threat-based decision-making that goes beyond merely relying on pre-defined risk management strategies.
Skipping Dangerous Hijack Networks
An increasingly popular cybercrime is to hijack IP addresses for malicious purposes such as stealing cryptocurrencies or sending malware and spam. The Border Gateway Protocol (BGP) is a routing mechanism used to send data packets to their correct destination and exchange data between networks.
Back in the late 1990s, a team of hackers uncovered a critical shortcoming in BGP that could lead to serious exploits. More than 20 years later, the protocol still has no security procedures to validate routing messages, and IP hijackers can easily redirect data packets to specific “bad” networks.
Even companies like Google and Amazon have been hit by IP hijacking attempts, which have also been used for global espionage. To counter them, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a new machine learning system.
They identified some of the tell-tale characteristics of IP hijackers, such as high volatility and the presence of foreign IP addresses, and flagged over 800 suspicious networks, some of which had been used for malicious purposes for years. This system could be used to block fraudulent routing incidents and complement existing solutions for preventing these crimes.
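The CSAIL system’s exact features and model are not reproduced here; the following is a simplified, hypothetical classifier in the same spirit, scoring networks on illustrative behavioural features such as announcement volatility:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative per-network features: how often announced prefixes appear and
# disappear (volatility), how long prefixes stay online (days), and how
# fragmented the announced address space is.
X_train = np.array([
    [0.02, 300, 0.10],   # stable, long-lived announcements -> legitimate
    [0.85,   4, 0.90],   # volatile, short-lived, fragmented -> hijacker
    [0.05, 250, 0.20],
    [0.90,   2, 0.80],
])
y_train = np.array([0, 1, 0, 1])  # 1 = known serial hijacker

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score previously unseen networks and flag likely hijackers for review.
candidates = np.array([[0.75, 6, 0.70], [0.03, 280, 0.15]])
print(clf.predict(candidates))  # e.g. [1 0]
```

In practice, flagged networks would be reviewed by operators before any routes are blocked, since legitimate networks can occasionally show similar behaviour.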
Conclusion
AI and ML are among the key drivers of the Fourth Industrial Revolution. As the risk and threat landscape continues to evolve, these technologies are the fundamental instruments we need to prepare an adequate response.