Bottom Line: In 2021, cybersecurity vendors will accelerate AI and machine learning app development to combine human and machine insights so they can out-innovate attackers intent on escalating an AI-based arms race.
Attackers and cybercriminals capitalized on the chaotic year by attempting to breach a record number of enterprise systems in e-commerce, financial services, healthcare and many other industries. AI and machine learning-based cybersecurity apps and platforms, combined with human expertise and insights, make it more challenging for attackers to succeed. Exploiting endpoint security systems that rely on passwords alone and admin accounts that lack fundamental protections, including Multi-Factor Authentication (MFA), attackers created a digital pandemic this year.
What 20 Leading Cybersecurity Experts Are Predicting For 2021
To learn what leading cybersecurity experts think will happen in 2021, I contacted twenty of them who are actively researching how AI can improve cybersecurity next year. They include Nicko van Someren, Ph.D., Chief Technology Officer at Absolute Software; BJ Jenkins, President and CEO of Barracuda Networks; Ali Siddiqui, Chief Product Officer, and Ram Chakravarti, Chief Technology Officer, both from BMC; Dr. Torsten George, Cybersecurity Evangelist at Centrify; Tej Redkar, Chief Product Officer at LogicMonitor; Bill Harrod, Vice President of Public Sector at Ivanti; Dr. Mike Lloyd, CTO at RedSeal; and many others. Each of them brings a knowledgeable, insightful and unique perspective on how AI will improve cybersecurity in 2021.
The following are their twenty predictions:
Employers’ and employees’ virtual IT and security needs are quickly changing. AI, machine learning and BIOS-level technologies enable more resilient, persistent endpoint connections that can keep up with this rapid rate of change. According to Nicko van Someren, Chief Technology Officer at Absolute Software, nearly all employees now work and connect outside of a traditional office building and off the corporate network. As a result, there needs to be a way to perform fully remote lifecycle management of PCs – without any hands-on intervention from IT and while still giving IT all of the insight and control it needs. The capabilities that Absolute provides to support remote management are the first step in giving employees the full set of tools they need to work virtually on a protected endpoint device. Using these tools, businesses can handle the whole “deployment to disposal” lifecycle without needing physical access to a machine.
“Delivering a resilient, persistent connection to every device, no matter where it is, needs to start by assuming every endpoint is in a potentially hostile physical environment,” he said. “At Absolute, we are already seeing greater heterogeneity in the way people connect to the network, especially for cloud services. Where the challenges are today, and will be in the future, is ensuring the resiliency and persistency of any remote device’s security while accessing files and resources from any service, in the cloud or at the office. In those scenarios, whether the device is on-premises, in a branch office, or at somebody’s home, endpoints need to be persistent and resilient enough to be completely recreated if necessary. Attackers are seeking to capitalize on the chaotic situation created by the ongoing pandemic and a lot of organizations are having to accelerate the roll-out of new technologies to cope with these rapid changes. But often we are seeing that the changes they are making are changes that will be for the better… even after the pandemic has been beaten. It reminds me of a line from a favorite Red Hot Chili Peppers song, Californication: ‘Destruction leads to a very rough road, but it also breeds creation.’ That’s exactly what fuels us at Absolute to double-down on our efforts to innovate faster and secure our customers’ systems, starting at the endpoint.”
AI Will Aid the Cybersecurity Skills Shortage. BJ Jenkins, President and CEO of Barracuda Networks, says that the prominent cybersecurity skills shortage, paired with the increase in employees working from home, has opened up more opportunities for cybercriminals to carry out nefarious activity. And as cybercriminals are known to prey on areas of weakness, organizations will need all the help they can get to stay protected. In recent years, AI has become a common defense against cyberattacks, recognizing patterns of attacks, suspicious email activity and more. And although this technology has fueled an arms race between threat innovation and threat protection, AI will prove itself a champion in 2021 by freeing up bandwidth for the security professionals who are working tirelessly to keep their companies secure. With the use of AI, companies can automate their protection processes against phishing, ransomware, account takeover and more. As the cybersecurity industry grapples with attracting new talent to close the skills gap, AI can free up bandwidth for existing professionals to carry out employee training and other, more hands-on, security tasks.
In the coming year, we will see more use of AI as many people have shifted to remote work and online services, key areas where attackers are looking for vulnerabilities, predicts Hatem Naguib, COO of Barracuda Networks, who says AI is a key tool in the arsenal against cyber attackers. The ability to leverage algorithms against massive data sources to determine aberrant patterns is one of the most important ways to detect the new types of phishing and spear-phishing attacks that are based on social engineering. This is especially useful across two key attack vectors: email and applications. For email, AI and ML (machine learning) were originally used to stop attacks that masquerade as inquiries and updates asking you to click or share credential information. More recently, AI/ML has been used to learn patterns of email communications and determine when an email account has been hijacked and is being used to send attacks to other victims. Applications with internet-facing interfaces, meanwhile, are constantly responding to bots trying to get up-to-date information on the application. “Many attackers use bots to search for unauthorized access to applications. There are millions of these bots running at all times on the internet and AI is used to determine which are malicious and which are benign.”
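The account-takeover detection Naguib describes comes down to learning a mailbox's normal sending behavior and flagging sharp deviations from it. The sketch below is a minimal illustration of that idea using invented data and thresholds; it is not Barracuda's implementation.

```python
# Minimal sketch: flag a possible account takeover when a mailbox's outbound
# send volume deviates sharply from its learned baseline.
# Hypothetical account data and threshold, for illustration only.
import statistics

def is_sending_anomalous(hourly_send_history, current_hour_count, z_threshold=3.0):
    """Return True if the current hour's send count is a statistical outlier."""
    mean = statistics.mean(hourly_send_history)
    stdev = statistics.stdev(hourly_send_history) or 1.0   # avoid division by zero
    z_score = (current_hour_count - mean) / stdev
    return z_score > z_threshold

# A mailbox that normally sends a handful of emails per hour suddenly sends 120.
baseline = [4, 6, 5, 3, 7, 5, 4, 6, 5, 4]    # learned pattern of communication
print(is_sending_anomalous(baseline, 120))    # True -> investigate possible hijack
```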
AIOps Will Heat Up to Enhance the Customer Experience and Deliver on Application Assurance and Optimization, predicts Ali Siddiqui, Chief Product Officer, BMC. “With a year of unpredictability behind us, enterprises will have to expect the unexpected when it comes to making technology stacks infallible and proactive. We’ll see demand for AIOps continue to grow, as it can address and anticipate these unexpected scenarios using AI, ML and predictive analytics,” he says. “The increasing complexity of digital enterprise applications spanning hybrid on-premise and cloud infrastructures, coupled with the adoption of modern application architectures such as containerization, will result in an unprecedented increase in both the volume and complexity of data. While data overload from modern digital environments can delay repair and overwhelm IT Ops teams, noisy datasets will be a barrier of the past as smarter strategies and centralized AIOps systems help organizations improve the customer experience, deliver on modern application assurance and optimization, tie it to intelligent automation and thrive as autonomous digital enterprises. In fact, conventional IT Operations approaches may no longer be feasible – making the adoption of AIOps inevitable to be able to scale resources and effectively manage modern environments.”
Pervasive intelligence and enterprise automation will have significant impacts on business growth and strategy in 2021 according to Ram Chakravarti, Chief Technology Officer, BMC. “Both experienced key developments this year in light of COVID-19. Additionally, implementation increased exponentially because of the pandemic, with more AI-powered and driven smart devices being deployed to adapt to changing environments, particularly abrupt changes, to better predict outcomes. While the technology was always destined to have long-lasting implications for digital transformation, in 2021 the effects and advancements of pervasive intelligence and enterprise automation will be felt much quicker and more globally because capabilities are not only increasing but becoming more significant and measurable.”
AI Will Prove Essential To Solving Entitlement Challenges Related to Cloud Adoption. Dr. Torsten George, Cybersecurity Evangelist at Centrify, predicts that cloud adoption will continue to grow rapidly, having been accelerated by the COVID-19 pandemic. As resources are often created and spun down in a matter of hours or even minutes, it has become challenging for IT security teams to manage those cloud entitlements, meaning who is allowed to access cloud workloads, when and for how long. Traditional tools are often not applicable to these new environments. However, AI technology can help detect access-related risks across Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) environments by discovering both human and machine identities across cloud environments and then assessing their entitlements, roles and policies. Establishing this granular visibility allows organizations not only to fulfill their compliance obligations but also to enforce least-privilege access at scale, even in highly distributed cloud environments. AI technology can also be leveraged to establish cloud configuration baselines and report changes or irregularities to raise alerts and/or self-heal the identified misconfiguration. Capital One’s data breach is a good example where AI could have detected configuration changes (in that case, misconfiguration of a firewall) and led to an automated response to mitigate the risk.
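George's point about configuration baselines lends itself to a concrete example. The sketch below compares current security-group rules against a recorded baseline, alerts on drift and reverts it; the data structures and rule values are hypothetical, and this is not Centrify's product.

```python
# Minimal sketch of cloud-configuration drift detection: compare current
# security-group rules against a recorded baseline, alert on deviations and
# "self-heal" by reverting them. Hypothetical rules and values.

baseline = {"ssh":   {"port": 22,  "source": "10.0.0.0/8"},
            "https": {"port": 443, "source": "0.0.0.0/0"}}

current = {"ssh":   {"port": 22,  "source": "0.0.0.0/0"},   # drift: SSH opened to the world
           "https": {"port": 443, "source": "0.0.0.0/0"}}

def detect_drift(baseline, current):
    """Yield (rule, expected, actual) for every rule that deviates from the baseline."""
    for rule, expected in baseline.items():
        actual = current.get(rule)
        if actual != expected:
            yield rule, expected, actual

for rule, expected, actual in detect_drift(baseline, current):
    print(f"ALERT: {rule} drifted from {expected} to {actual}")
    current[rule] = expected          # self-heal by restoring the baseline value
```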
AI Will Become More Embedded in Authentication Frameworks. Dr. Torsten George, Cybersecurity Evangelist at Centrify predicts that when AI is utilized in authentication, it provides the ability to be far more dynamic, create less friction and guarantee real-time decisions. In the context of privileged access management (PAM), we know that adaptive multi-factor authentication (MFA) is one example where a multitude of authentication factors combined with taking dynamic user behavior into account can dramatically reduce risk when making authentication decisions. In 2021, this could lead to AI being used more frequently to establish real-time risk scores and stop threats at the authentication stage before they can get in to do real damage.
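A toy version of the real-time risk scoring George describes might combine a few behavioral signals into a score that drives the authentication decision. The weights, signal names and thresholds below are invented for illustration; they are not Centrify's scoring model.

```python
# Toy sketch of adaptive MFA: combine behavioral signals into a risk score and
# decide whether to allow access, step up authentication, or block.
# Weights, signals and thresholds are invented for illustration.

RISK_WEIGHTS = {
    "new_device": 0.35,
    "unusual_location": 0.30,
    "off_hours_access": 0.15,
    "privileged_account": 0.20,
}

def risk_score(signals):
    """signals: dict of boolean risk indicators -> score between 0 and 1."""
    return sum(weight for name, weight in RISK_WEIGHTS.items() if signals.get(name))

def authentication_decision(signals):
    score = risk_score(signals)
    if score < 0.3:
        return "allow"            # low risk: existing credentials are enough
    if score < 0.6:
        return "require_mfa"      # medium risk: step up to multi-factor authentication
    return "block_and_alert"      # high risk: deny access and notify the security team

print(authentication_decision({"new_device": True, "privileged_account": True}))  # require_mfa
```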
Gaining more visibility into open source contributors will be critical in 2021 and the use of AI and ML will be a catalyst for weeding out those with malicious intentions. Maty Siman, Checkmarx’s CTO, sees 2021 being a year when malicious actors increasingly find open source to be an easy way into organizations. Maty says that “rarely does a week go by without the discovery of malicious open source packages. Yes, developers and organizations understand they need to secure the open source components they’re using and existing solutions help them remove packages that are mistakenly vulnerable (where a developer accidentally embeds a vulnerability into the package). But they are still blind to instances where adversaries maliciously push tainted code into packages. This is where AI and ML come into play – making it possible to detect malicious open source contributors and packages with greater accuracy and efficiency and at scale. For example, AI and ML algorithms can identify and flag scenarios where it’s the first open source project a user has contributed to, whether or not the user is active in any public-facing networks, such as social channels, to verify their credibility and whether the user alters code in sensitive areas of the system. This approach can essentially give open source contributors a reputation score, making it easier for developers to vet both who they’re trusting and the packages they’re leveraging.”
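The reputation-score idea Siman outlines can be sketched from the signals he names: whether it is a contributor's first project, whether they have a verifiable public presence and whether they touch sensitive areas of the code. The weights below are invented, and this is not Checkmarx's algorithm.

```python
# Sketch of an open source contributor reputation score built from the signals
# described above. Feature weights are invented for illustration only.

def contributor_reputation(first_time_contributor, has_public_profile,
                           touches_sensitive_code, prior_merged_prs):
    score = 50                                    # neutral starting point
    score -= 25 if first_time_contributor else 0
    score += 15 if has_public_profile else -10    # verifiable identity adds trust
    score -= 20 if touches_sensitive_code else 0  # e.g., auth, crypto, build scripts
    score += min(prior_merged_prs, 20)            # cap the credit for contribution history
    return max(0, min(100, score))

# A brand-new account with no public footprint editing authentication code:
print(contributor_reputation(True, False, True, 0))     # low score -> manual review
# A long-time contributor with a public profile changing documentation:
print(contributor_reputation(False, True, False, 30))   # high score -> lower risk
```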
“Whenever there are discussions about the relationship that exists between AI and data privacy, the two words that immediately come to mind are ‘it’s complicated.’ Privacy regulations like the GDPR explicitly limit the use of automated technologies in processing and profiling using personal data as part of their set obligations and have set the bar for AI accountability principles being adopted into other privacy regulations globally,” says Cassandra Cooper, Senior Research Analyst, Security, Risk & Compliance at Info-Tech Research Group. She says the potential for exploitation of personal data through AI employed in smart devices, as well as facial recognition technologies, due to the sheer volume of data that is amassed, processed and analyzed by AI technology, is immense and takes centre stage on the radar of privacy advocates. “But opportunity also exists for AI to help better promote privacy and enhance privacy technologies. Federated learning is one such example that has been gathering steam recently, as it helps to satisfy the stringent requirements of the many global privacy regulations while still ensuring that data can be used as a strategic enabler of the business. When properly applied, federated learning adds a layer of effective data protection to AI technologies by decentralizing the machine learning model and enabling algorithmic learning to be distributed across multiple devices. One of the primary issues that arises from ML techniques today is that you’re aggregating huge volumes of (often) very sensitive data to train the model and it’s all going to one place – a big red flag for malicious actors and a huge privacy risk. Federated learning models promote principles of data protection and privacy by creating a framework in which the devices do not exchange or share any data, nor is a centralized location relied upon to send information to and from – the only person that has access to the information is the individual themselves. While growing in popularity, it is not entirely privacy-proof – opportunities do exist for breach of sensitive personal data even in this decentralized model. However, bearing in mind that federated learning has really only been a part of the AI landscape since circa 2017 and with the growing prevalence of data protection regulations globally, there exists significant promise in its application in assisting with maintaining a high degree of privacy protection.”
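The decentralized training Cooper describes is typically implemented as federated averaging: each device fits a model on its own data and only the model parameters are aggregated centrally. The numpy sketch below illustrates that flow with synthetic data; it is not a production federated-learning framework.

```python
# Minimal federated averaging sketch: each "device" trains a local logistic
# regression on its own private data, and only model weights are averaged
# centrally, so raw records never leave the device. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=20):
    """Run a few gradient steps of logistic regression on one device's data."""
    w = weights.copy()
    for _ in range(steps):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid
        grad = X.T @ (preds - y) / len(y)      # gradient of the logistic loss
        w -= lr * grad
    return w

# Three devices, each holding its own private dataset (never shared).
clients = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50).astype(float))
           for _ in range(3)]

global_weights = np.zeros(3)
for _ in range(5):                                         # federated rounds
    local_weights = [local_update(global_weights, X, y) for X, y in clients]
    global_weights = np.mean(local_weights, axis=0)        # aggregate updates only

print(global_weights)
```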
Unsupervised machine learning approaches will continue to advance, particularly in cybersecurity, predicts Josh Johnston, Senior Director of Engineering at Kount. “These techniques find patterns and structure in data, rather than training classifiers using past outcomes as a supervisory signal. Anomaly detection and network analysis are two major areas of unsupervised learning that are particularly well-suited to cybersecurity. Besides, relying on historical data for training is a worse idea than usual given the giant asterisk that was 2020.”
Kount’s AI combines both supervised and unsupervised machine learning for fraud detection across the entire customer journey. The company takes the unique approach of giving its customers access to their eCommerce data, meaning these decisions aren’t made in a black box. That’s key to the future of AI, Johnston says: “As a field, we won’t be able to keep putting off explainable outcomes and model governance. The regulators are catching up and in-house cybersecurity teams need to stay a step ahead. Cybersecurity professionals that can’t satisfy legal and governance requirements will find themselves stripping out AI and ML solutions regardless of their performance or ROI.”
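As a concrete, if simplified, example of the unsupervised approach Johnston mentions, an isolation forest can flag events that do not fit the structure of past traffic without ever seeing labeled outcomes. The scikit-learn sketch below uses synthetic features and is illustrative only; it is not Kount's production system.

```python
# Sketch of unsupervised anomaly detection on transaction features with
# scikit-learn's IsolationForest. Synthetic data, illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per event: [order amount, items in cart, account age in days]
normal_traffic = rng.normal(loc=[60, 3, 400], scale=[20, 1, 100], size=(500, 3))
suspicious = np.array([[2500, 40, 1]])        # huge order from a day-old account

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)
print(model.predict(suspicious))              # [-1] -> flagged as an anomaly
print(model.decision_function(suspicious))    # lower scores are more anomalous
```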
Driven by AI, Security and IT Operations Will Be Better Integrated. Tej Redkar, Chief Product Officer at LogicMonitor, says that when it comes to securing your business infrastructure and applications, the fundamental data is almost the same as IT operations data sets: the machine and user data flowing through your digital infrastructure. Security algorithms model historical behavioral patterns and detect anomalies and deviations from those patterns in near real-time. Using AI, this process could be further automated to block bad actors in near real-time.
For example, suppose a hacker is trying to access or penetrate a firewall. That is detected by either a change in the volume of data or a change in the location of the user that is trying to access it. Multiple features could be used to classify that particular access as regular access, hacker access or insecure access. Once that is detected, it could be handed over to the automation/AI system to block the IP addresses of that particular region or range.
If you observe carefully, the underlying data required to gather this intelligence is still the transactions, logs and metrics, but the users are security teams and the problem that they are trying to solve is securing the business from bad actors. The business problems and algorithms are different but the underlying data is the same. Next year, the IT Operations and Security teams will collaborate closely to not only detect problems in the infrastructure performance but also prevent cybersecurity threats in near real-time.
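Following Redkar's firewall example, a simplified version of the volume- and location-based classification, wired to an automated block step, might look like the sketch below. The thresholds, feature names and block action are hypothetical.

```python
# Simplified sketch of the firewall-access example above: classify an access
# attempt from volume and location signals, then hand suspicious traffic to an
# automated blocking step. Thresholds and the block action are hypothetical.

KNOWN_LOCATIONS = {"us-east", "us-west"}
NORMAL_MAX_MB = 500

def classify_access(source_location, transferred_mb, used_mfa):
    if source_location not in KNOWN_LOCATIONS and transferred_mb > NORMAL_MAX_MB:
        return "hacker_access"
    if not used_mfa:
        return "insecure_access"
    return "regular_access"

def respond(event):
    verdict = classify_access(event["location"], event["mb"], event["mfa"])
    if verdict == "hacker_access":
        print(f"Blocking IP range {event['ip_range']}")    # hand-off to automation
    return verdict

print(respond({"location": "unknown-region", "mb": 4200, "mfa": False,
               "ip_range": "203.0.113.0/24"}))             # hacker_access
```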
Threat actors will continue to use machine learning to improve their cheap phishing attacks. Ernesto Broersma, Partner Technical Specialist at Mimecast, predicts that what is considered a targeted threat today will be considered spam tomorrow. “Pattern of Life analysis will be further automated and many sophisticated attacks will be generated without human intervention,” Ernesto predicts.
We’ll use AI as a new form of authentication in 2021. Bill Harrod, Vice President of Public Sector at Ivanti, says that password-related cyberattacks continue to dominate every industry, with more than 88 billion credential stuffing attacks reported in a 24-month period. To overcome this issue and kill the password for good, organizations need to take a mobile-centric zero trust security approach. He predicts that, using AI and machine learning, this approach will go beyond identity management and gateway approaches by utilizing a more comprehensive set of attributes to determine compliance before granting access. It validates devices, establishes user context, checks app authorization, verifies the network and detects and remediates threats before granting secure access to a device or user.
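The attribute checks Harrod lists (device posture, user context, app authorization, network state and active threats) can be pictured as a simple policy evaluation run before any access is granted. The checks and data below are invented for illustration; this is not Ivanti's implementation.

```python
# Sketch of a zero-trust access decision that evaluates several attributes
# before granting access, rather than trusting a password alone.
# The checks and request data are invented for illustration.

def grant_access(request):
    checks = {
        "device_compliant": request["device"]["os_patched"] and not request["device"]["jailbroken"],
        "user_context_ok":  request["user"]["location"] in request["user"]["usual_locations"],
        "app_authorized":   request["app"] in request["user"]["entitled_apps"],
        "network_trusted":  not request["network"]["on_known_bad_list"],
        "no_active_threat": not request["device"]["malware_detected"],
    }
    failed = [name for name, passed in checks.items() if not passed]
    return ("granted", []) if not failed else ("denied", failed)

request = {
    "user": {"location": "home-office", "usual_locations": {"home-office", "hq"},
             "entitled_apps": {"crm", "mail"}},
    "device": {"os_patched": True, "jailbroken": False, "malware_detected": False},
    "app": "crm",
    "network": {"on_known_bad_list": False},
}
print(grant_access(request))    # ('granted', [])
```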
AI will be key to bolstering security in a remote world. Security is top-of-mind for any organization’s C-suite that has embarked on a digital transformation journey, but its importance has only been heightened by the pandemic. Scott Boettcher, VP, Enterprise Information Management at NTT DATA Services, notes that with so many endpoints scattered across the world as employees work remotely from wherever they choose, vulnerabilities multiply. Scott predicts, “a major trend we will see in 2021 and beyond is the application of AI to security measures, because humans alone cannot monitor, control and check each endpoint to adequately or efficiently protect a modern enterprise. If security leaders (especially those at Fortune 500 companies) don’t make the time and financial investment to enhance security with AI now, they can expect to be targeted by hackers in the future and scramble to protect their data.”
In 2021, organizations will zero in on privacy and security as critical elements of their data protection strategies, predicts Steve Totman, Chief Product Officer at Privitar. He says that “our digital dependence, which accelerated throughout 2020, has heightened the need for embracing data privacy as a core element of business dataops, especially where AI and ML are being embraced. Even self-driving cars need guardrails to protect you from running off the road, collision avoidance systems to avoid a crash and, in the worst case, air bags to prevent harm in an accident. Similarly, privacy technologies must provide the same multi-level controls automatically to ensure data is protected, usability is preserved and, in the event of a breach, remediation is a given. The usage of AI and ML necessitates the automatic integration of data privacy ops to ensure the controls are in place to responsibly and ethically use data within, across and outside the organization.”
In 2021, the interplay between AI and cybersecurity will be increasingly apparent – security vendors are spending more time and money than ever on specialists in artificial intelligence and data science to mine their data and enhance their products using AI and machine learning, predicts Erick Galinkin, Principal Artificial Intelligence Researcher at Rapid7. He further predicts that “development of artificial intelligence for aggregating and correlating security data is rapidly improving. A variety of security companies and researchers are deeply invested not only in using data science generally to build use cases within their products, but also in using natural language processing and other machine learning technologies to improve the ability of their existing products to ingest and integrate information from additional sources.”
Security teams will get better at understanding which jobs are best handled by machines, which by humans and how to build combined teams predicts Dr. Mike Lloyd, CTO at RedSeal. Dr. Lloyd also says the skills shortage is still a main driver of the need to rely on machines, but we cannot overlook the point that current and near-term AI tech is still short-sighted, easily fooled and unable to grasp the human motivations of bad actors. Dr. Lloyd says that “this is why the focus in 2021 is not on which AI/ML engine has the most features or the lowest error rate – it’s moving over to which AI approaches integrate humans into the process in the best way. The focus will increasingly shift away from black boxes – inscrutable engines that compute correlations that nobody can understand and which are often biased in significant ways – and towards more transparent reasoning approaches, where AI doesn’t just present answers, but can present reasoning that humans can follow, to understand why a given conclusion is important. Machine learning has peaked, but the next wave is machine reasoning. AI will continue its journey along the classic Hype Cycle stages (defined by Gartner), proceeding from the recent peak of inflated expectations, through the current trough of disillusionment and out towards the plateau of productivity.”
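One way to picture the transparent reasoning Dr. Lloyd favors over black boxes: rather than returning only a score, a model can report how much each signal contributed to it, giving the analyst a reasoning trail to follow. The linear model, feature names and weights below are invented for illustration.

```python
# Minimal illustration of transparent reasoning: a linear alert model returns
# not just a risk score but each signal's contribution, so an analyst can see
# why the alert fired. Weights and features are invented.

WEIGHTS = {"failed_logins": 0.08, "new_country": 0.40,
           "privilege_escalation": 0.45, "off_hours": 0.10}

def explain_alert(signals):
    contributions = {name: WEIGHTS[name] * value for name, value in signals.items()}
    score = sum(contributions.values())
    reasons = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return score, reasons

score, reasons = explain_alert({"failed_logins": 3, "new_country": 1,
                                "privilege_escalation": 1, "off_hours": 0})
print(f"risk score {score:.2f}")
for name, contribution in reasons:
    print(f"  {name}: +{contribution:.2f}")    # human-readable reasoning trail
```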
The remote workforce appears to be putting organizations at a greater risk of data breaches, IP theft and illegal access through company and personal devices. In the first six months of the pandemic, 48% of U.S. knowledge workers said they had experienced targeted phishing emails, calls or texts in a personal or professional capacity – and this number will only continue to grow. “If these risks are not addressed, 2021 will be yet another year where we say, ‘the threat landscape continues to become more complex’—a phrase that I feel we’ve been (justifiably) repeating for the last decade,” predicts Grady Summers, EVP, Solutions and Technology at SailPoint. “Throughout a few decades in security, I’ve seen that identity and access management plays a major role in securing enterprise identities and limiting the blast radius from a compromise. But IAM processes are complex and a well-managed identity governance program can thus be costly and out of reach for many organizations. Yet AI is already starting to change this and the trend will accelerate in 2021. Identity management will become more streamlined as we analyze patterns and anomalies to automate access requests, spot risky users and eliminate manual and cumbersome re-certification processes. Organizations will become more comfortable embracing automated governance around the real crown jewels in any org—their identities—and this automation will make IAM programs more accessible to a broader range of organizations. I believe regulators will start to become comfortable with AI-driven decisions as they realize that machines will deliver smarter and faster results vs. overwhelmed humans trying to determine who can access what and when.”
AI will play a much larger role in cybersecurity in 2021, including addressing the talent shortage, thwarting adversarial AI-based attacks and securing enterprises down to the algorithm level, predicts Michael Borohovski, Director of Software Engineering, Synopsys Software Integrity Group. “First, there is the talent shortage of cybersecurity professionals. Companies are turning to MSP partners to use as external security teams due to the shortage, or they’re focusing on automating tooling, driven by AI, to defend their networks and the software they’re developing. Second, adversaries are starting to utilize more AI to target their messaging (for social engineering attacks) and to find bugs they may be able to turn into exploits for software and hardware. Organizations will need to respond with new ideas and infrastructures that aim to identify such attack strategies. The third reason AI will grow and mature in the year ahead, at least in terms of cybersecurity, requires us to take a look back. 2020 has significantly increased AI technology adoption across the enterprise, driven by an improved customer experience, greater employee efficiency and accelerated innovation. As new technologies are built around (and using) AI, organizations need to understand an entirely new layer of attack surface. They no longer need to protect only their infrastructure and the software they’ve written – they must also protect their AI algorithms from attack. As new attacks begin to emerge in the training stage for AI (e.g., poisoning, trojans, backdoor attacks) and the production stage (e.g., adversarial reprogramming, evasion of false positives/negatives), organizations must adjust their algorithms (or build new systems) to be able to detect and, more importantly, react to such attacks — not to mention any new attack strategies that attackers will certainly develop in the future. Defending an algorithm whose primary function is learning, as opposed to an algorithm with consistently predictable results, is a venture I find rather exciting, albeit a challenging one.”
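Of the production-stage attacks Borohovski lists, evasion is the easiest to show in miniature: a small perturbation of the input, chosen using the model's own gradient, flips the classifier's decision. The numpy sketch below uses a synthetic logistic-regression model and invented numbers; it is only meant to illustrate why the algorithms themselves become attack surface.

```python
# Minimal sketch of an evasion attack on a trained classifier: nudge the input
# along the sign of the loss gradient (a fast-gradient-sign-style perturbation)
# until the decision flips. Synthetic weights and input, illustrative only.
import numpy as np

w = np.array([1.5, -2.0, 0.7])    # weights of a "trained" logistic regression (synthetic)
b = -0.2
x = np.array([0.4, 0.6, 0.3])     # a sample correctly classified as class 0 (benign)
y_true = 0.0

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

print(predict_proba(x))           # ~0.36 -> class 0

# Gradient of the logistic loss with respect to the input, for the true label.
grad_wrt_x = (predict_proba(x) - y_true) * w
x_adv = x + 0.5 * np.sign(grad_wrt_x)   # small, deliberate perturbation

print(predict_proba(x_adv))       # ~0.82 -> the same model now predicts class 1
```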
Advances in artificial intelligence will continue to improve identification, triage, response and remediation performance, but innovation in newer specialty areas like Explainable AI (XAI) and Adversarial Machine Learning is exciting to watch, says Sam Small, Chief Security Officer at ZeroFOX. He says that “while some businesses had robust cybersecurity processes in place to secure remote work and remote access ahead of the pandemic, many found themselves ill-equipped to achieve the same levels of visibility and protection they had developed within traditional office environments. Ad hoc solutions and processes emerged to support business continuity in the short term; however, with the distribution of office and remote work now likely transformed forever, CSOs face the challenge of rebalancing their programs and budgets to support more complex, distributed and heterogeneous environments for the long term.”