Cybersecurity incidents are expected to rise by nearly 70% and cost $5 trillion annually by 2024.
These days, the use of artificial intelligence (AI) is becoming increasingly commonplace. Companies and governments use facial recognition technology to verify our identities; virtually every smartphone on the market has mapping and translation apps; and machine learning is an indispensable tool in diverse fields including conservation, healthcare, and agriculture.
As the power, influence, and reach of AI spreads, many international observers are scrutinizing the dual nature of AI technology. They’re considering not only AI’s positive transformative effects on human society and development — think of medical AI applications that help diagnose cancer early — but also its downsides, particularly in terms of the global security threats to which it can expose us all.
AI as a Weapon
As AI grows more powerful and sophisticated, it also enables cybercriminals to use deep learning to breach security systems, just as cybersecurity experts use the same tools to detect suspicious online behavior. Deepfakes (using AI to superimpose one person's face or voice onto another in a video, for example) and other advanced AI-based methods will likely play a larger role in social media cybercrime and social engineering. It sounds scary, and it isn't science fiction.
In one noteworthy recent example of a deepfake that generated headlines in The Wall Street Journal, criminals employed AI-based software to replicate a CEO’s voice to command a cash transfer of €220,000 (approximately $243,000). Cybercrime experts called it a rare case of hacking that leveraged artificial intelligence.
In that scam, the head of a UK-based energy company thought he was on the phone with his boss, the chief executive of the firm's German parent company, who directed him to send the money to a Hungarian supplier. The German "caller" claimed the request was urgent and ordered the unwitting UK executive to initiate the transfer within the hour.
The IoT is a Bonanza for Cybercriminals
That's just one instance of AI's huge potential to transform how crime, and cybercrime in particular, is conducted. Using AI, bad actors will be able to refine their attacks and discover new targets, for example by altering the signaling systems in driverless cars. The growing ubiquity of the Internet of Things (IoT) is a particular gold mine for cybercriminals. There's also increasing convergence between operational technology and corporate IT, which means that the production lines, warehouses, conveyor belts, and cooling systems of tomorrow will be exposed to an unprecedented volume of cyber threats. Even pumps at gas stations could be controlled or taken offline remotely by hackers.
Like any connected device that's improperly secured (or not secured at all), Internet-connected gas pumps and other smart devices could be co-opted into botnets and used in distributed denial-of-service (DDoS) attacks, with attackers recruiting them to overload online services.
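One common way defenders spot botnet recruitment is by watching for devices whose request rates suddenly dwarf their normal baseline. The following is a minimal illustrative sketch, not a production detector; the device names and the threshold are hypothetical:

```python
from collections import Counter

# Hypothetical one-minute request log as (device_id, timestamp) pairs.
# "pump-07" simulates a compromised IoT device flooding a target.
requests = [("pump-01", t) for t in range(5)] + \
           [("pump-07", t) for t in range(500)]

# Flag any device whose per-minute request count exceeds a simple threshold.
# Real systems would learn per-device baselines instead of a fixed cutoff.
THRESHOLD = 100
counts = Counter(device for device, _ in requests)
suspects = [device for device, n in counts.items() if n > THRESHOLD]
print(suspects)  # ['pump-07']
```

A fixed threshold is the crudest possible rule; it serves here only to show the shape of the problem, namely separating a handful of wildly over-active devices from the quiet majority.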
But it’s not only companies that are vulnerable. Cyberattacks on critical infrastructure can lead to widespread blackouts that can cripple a major city, an entire region, or a country for days or weeks, which makes such attacks a massively destructive weapon for malicious nation-states. North Korea is infamous for cyber warfare capabilities including sabotage, exploitation, and data theft. According to the United Nations, the country has racked up roughly $2 billion via “widespread and increasingly sophisticated” cyberattacks to bankroll its weapons of mass destruction programs.
Damages to Exceed $5 Trillion by 2024
Because of the broad trend toward corporate digitization and the growing share of everyday activities that depend on online services, society is becoming ever more vulnerable to cyberattacks. Juniper Research recently projected that the annual cost of security breaches will rise from $3 trillion to over $5 trillion by 2024, an average annual growth rate of 11%. As government regulation tightens, this growth will be driven mainly by steeper fines for data breaches, as well as by business lost by enterprises that rely on digital services.
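The two figures are consistent with simple compound growth. Assuming a $3 trillion baseline and a roughly five-year horizon (both assumptions for illustration; the report itself defines the exact window):

```python
# Compound growth check: does 11% annual growth take ~$3T past $5T by 2024?
baseline = 3.0      # trillions of dollars per year (starting figure)
growth_rate = 0.11  # 11% average annual growth, as reported
years = 5           # assumed horizon, e.g. 2019 -> 2024

projected = baseline * (1 + growth_rate) ** years
print(round(projected, 2))  # ~5.06, i.e. "over $5 trillion"
```

So the headline numbers hang together: at 11% a year, costs a little more than quadruple every 14 years, and $3 trillion clears the $5 trillion mark within five.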
According to Juniper's report, the cost per breach will rise steadily. The volumes of data disclosed will certainly make headlines, but they won't directly drive breach costs, since most fines and lost business are not tied to the size of a breach.
AI-Based Attacks Require AI-Based Defenses
As cyberattacks become increasingly devious and hard to detect, companies need to seriously rethink their defense strategies. AI can constantly improve itself, automatically changing its parameters and signatures in response to whatever defense it's up against. Given the global shortage of IT and cybersecurity talent, simply putting more brilliant noses to the grindstone won't solve the problem. The only way to battle a machine is with another machine.
On the plus side, AI has the potential to expand defenders' reach in spotting and countering cyberattacks, some of which have had worldwide impact. AI shines at detecting anomalies in traffic patterns and modeling user behavior: it can eliminate human error and dramatically reduce complexity. Google, for example, stopped 99% of incoming spam using its machine learning technology. Some observers say AI may also become a useful tool for linking attacks to their perpetrators, whether a criminal act by a lone actor or a security breach by a rogue state.
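The simplest form of traffic anomaly detection is statistical: learn a baseline from recent history and flag values that deviate too far from it. This is a toy sketch of that idea; the traffic values and the three-standard-deviation cutoff are illustrative choices, and real AI-based systems use learned models far richer than this:

```python
import statistics

# Hypothetical per-minute traffic volumes; the last value simulates an attack.
traffic = [120, 115, 130, 118, 125, 122, 119, 950]

# Baseline statistics from the historical window (everything but the newest value).
mean = statistics.mean(traffic[:-1])
stdev = statistics.stdev(traffic[:-1])

def is_anomalous(value, mean, stdev, z=3.0):
    """Flag values more than z standard deviations from the baseline mean."""
    return abs(value - mean) > z * stdev

print(is_anomalous(traffic[-1], mean, stdev))  # True: 950 is far outside baseline
```

The appeal of handing this to a machine is exactly what the paragraph above describes: a model applies the same rule tirelessly to millions of data points per second, with no fatigue-driven human error.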
In the cybersecurity world, the bad guys are picking up the pace. As a result, the corporate sector must pay attention to AI’s potential as a first line of defense. Doing so is the only way to understand the threats and respond to the consequences of cybercrime.