AI offers enormous potential, but companies must address its growing pains too.
2020 further cemented AI’s role in our everyday lives, as Covid-19 fueled a wholesale shift to digital life. A couple of months into 2021, it is clear this trend is here to stay. Technologies underpinned by algorithms have become critical, whether a customer is asking a chatbot about an online order or a bank is verifying the identity of someone opening a new account digitally.
But while AI is becoming more commonplace, we are still seeing teething problems and misuse that could lead to bigger issues if not addressed this year. We need only look at the Ofqual exam-grading fiasco to understand the need to tackle embedded bias, and analyst reports are already beginning to predict a rise in AI-fueled cyberattacks.
So as we look ahead to the rest of the year, what are the key considerations for AI, and what fine-tuning is required to keep its expanding range of use cases on track?
Addressing bias in AI algorithms
Enterprises are becoming increasingly concerned about demographic bias in AI algorithms (across race, age and gender), both for the damage it can do to their brand and for the legal exposure it creates when biased decisions harm end users. Evaluating how vendors address demographic bias will become a top priority when selecting identity proofing solutions. According to Gartner, by 2022 more than 95 percent of RFPs for document-centric identity proofing (comparing a government-issued ID to a selfie) will contain clear requirements on minimizing demographic bias, up from fewer than 15 percent today.
Companies will increasingly need clear answers for those who want to know how a vendor’s AI “black box” was built, where the data originated and how representative the training data is of the broader population being served.
As organizations continue to adopt biometric-based facial recognition technology for identity verification, the industry must address the bias inherent in these systems. The topic of AI, data and ethnicity is not new, but it must come to a head this year. MIT researchers who analyzed the imagery datasets used to develop facial recognition technologies found that 77 percent of images were male and 83 percent were white. This points to one of the main reasons systematic bias exists in facial recognition: the training data simply does not represent the populations these systems serve. In 2021, guidelines will be introduced to offset this systematic bias. Until that happens, organizations using facial recognition technology should ask their providers how their algorithms were developed and trained, and confirm that vendors are not training on purchased or undersized datasets that fail to reflect the larger population they serve.
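One practical starting point is to audit the demographic balance of the training data itself. The sketch below is a minimal illustration, not any vendor’s actual tooling: it assumes a hypothetical metadata file (face_dataset_metadata.csv) with gender and skin_type columns describing each image, and flags any group that falls far below parity.

```python
# A minimal sketch of a demographic audit of a face dataset, assuming a
# hypothetical metadata CSV with 'gender' and 'skin_type' columns.
import pandas as pd

metadata = pd.read_csv("face_dataset_metadata.csv")  # hypothetical file

for attribute in ("gender", "skin_type"):
    shares = metadata[attribute].value_counts(normalize=True)
    print(f"\n{attribute} distribution:")
    print((shares * 100).round(1).astype(str) + "%")
    # Flag any group that is badly under-represented relative to parity.
    parity = 1.0 / metadata[attribute].nunique()
    for group, share in shares.items():
        if share < 0.5 * parity:
            print(f"  WARNING: '{group}' is under-represented ({share:.1%})")
```

An audit like this is only a first step: balanced counts do not guarantee balanced error rates, which must be measured on the trained model itself.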
Criminals will weaponize AI in new ways for fraud
The past decade has given rise to an entire cybercrime ecosystem on the dark web. Increasingly, cybercriminals have gained access to new and emerging technologies to automate their attacks on a massive scale. The dark web has also become a virtual watercooler where criminals share tips and tricks for scanning for vulnerabilities and perpetrating fraud. In fact, fraud on online transactions in the UK rose by 28 percent over 2020. We can expect the evolution and sophistication of cybercrime to continue in 2021 as criminals leverage artificial intelligence and bots more than ever before.
Just as organizations have adopted artificial intelligence to shore up their defenses and thwart fraud, fraudsters are using AI to carry out attacks at scale. For this reason, we can expect an AI arms race over the coming year, as companies attempt to stay ahead of the attack curve while criminals aim to overtake it. We anticipate this at unprecedented levels across several key areas:
Machine Learning: Bad actors will leverage machine learning (ML) to accelerate attacks on networks and systems, using AI to pinpoint vulnerabilities. As companies continue to digitally transform, spurred by the Covid-19 pandemic, we will see more fraudsters using ML to rapidly identify and exploit gaps in business security.
Attacks on AI: Yes, AI systems can be hacked. Attacks on AI systems differ from traditional attacks and exploit inherent limitations in the underlying algorithms that cannot simply be patched. The end goal is to manipulate an AI system into altering its behavior, which could have widespread and damaging repercussions, given that AI is now a core component of critical systems across all industries. Imagine, for example, if someone changed how data is classified and where it is stored, at scale (a minimal sketch of one such evasion technique follows this list).
AI Spear-Phishing Attacks: AI will be used to increase the precision of phishing attacks. AI-powered spear-phishing campaigns are hyper-targeted, built with a specific audience in mind. Scouting information from social media and tailoring attacks to an individual victim can increase click-through rates by as much as 40 times, and all of this can be automated with sophisticated AI. Cybercriminals will continue to model phishing attacks on human behavior, replicating a specific person’s language or tone, to drive a higher return on their attack investments.
Deepfake Videos: Deepfake technology uses AI combined with existing imagery to replace someone’s likeness, closely replicating both their face and voice. In 2020, deepfake technology was increasingly leveraged for fraud. As more companies adopt biometric verification solutions in 2021, deepfakes will become a highly coveted tool for fraudsters seeking access to consumer accounts. Perhaps unsurprisingly, technology capable of identifying and stopping deepfakes will be of equal importance to organizations that rely on digital identity verification. Organizations must be sure any solution they implement is sophisticated enough to stop these growing attacks, which fraudsters will lean on heavily in 2021 (see the detection sketch after this list).
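To make the “attacks on AI” threat above concrete, here is a minimal sketch of one widely studied evasion technique, the Fast Gradient Sign Method (FGSM). The model and inputs are toy placeholders, not any vendor’s system; the point is that a small perturbation derived from a model’s own gradients, often imperceptible to a human, can flip its prediction.

```python
# A minimal FGSM sketch: perturb an input image in the direction that
# increases the classifier's loss. Model and data are toy placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel slightly in the direction that raises the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Toy demonstration; against a trained model, the perturbed image looks
# unchanged to a person but can be classified differently.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)   # stand-in for a real input image
y = torch.tensor([3])          # stand-in ground-truth label
x_adv = fgsm_perturb(model, x, y)
print(model(x).argmax().item(), model(x_adv).argmax().item())
```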
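On the defensive side of the deepfake problem, one common pattern is frame-level screening: sample frames from a video and average a binary classifier’s “fake” probability. The sketch below uses an untrained ResNet as a stand-in; a real detector would load weights trained on labeled genuine and forged footage, and production systems layer in liveness checks and other signals.

```python
# A frame-sampling deepfake screening sketch. The ResNet is an untrained
# placeholder standing in for a model trained on real/forged footage.
import cv2
import torch
from torchvision import models, transforms

detector = models.resnet18(num_classes=2)  # placeholder, needs training
detector.eval()
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def fake_probability(video_path, every_n=30):
    """Average the classifier's 'fake' score over sampled frames."""
    cap = cv2.VideoCapture(video_path)
    scores, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:  # sample roughly one frame per second
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            with torch.no_grad():
                logits = detector(preprocess(rgb).unsqueeze(0))
            scores.append(torch.softmax(logits, dim=1)[0, 1].item())
        index += 1
    cap.release()
    return sum(scores) / len(scores) if scores else None
```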
As we look ahead to the rest of the year, it’s clear that AI introduces new opportunities and new threats. Whether it’s ensuring the data that underpins algorithms accurately represents our societies or staying acutely aware of how the technology could be weaponized in malicious exploits, organizations have a lot to consider as we move into the next, more mature phase of this technology.