This blog post is the first in our series, Perspectives in Responsible AI. We present a framework that lays out a series of steps to operationalize Responsible AI, and we discuss the challenges at each stage and how to progress to the next. In this first issue, we’ll present an overview of the framework and discuss its foundational stages.
AI continues to be adopted by ever more organizations. However, adherence to the FATE principles of Responsible AI (Fairness, Accountability, Transparency, and Ethics) continues to be more a matter of hopes and dreams than of ways and means. A recent study by Juniper Networks found that while 87% of organizations think they “have a responsibility to implement policies that minimize the negative impact of AI”, only 7% have established a company-wide program for strategy and governance – a gap of 80 percentage points.
While each organization and AI program is unique, the challenge fundamentally boils down to motivating people and teams to take tangible actions without tangible rewards. This is the essence of the 80% problem: building Responsible AI is more than a technical challenge; it requires adherence to a set of fundamentals and best practices. But compared with mission-critical business initiatives, fundamentals and best practices do not have a clear ROI. As a result, in 80% of companies, no action is taken until a high-risk event (e.g., a PR scandal from biased AI) triggers massive financial consequences and a sudden sense of urgency to adopt Responsible AI.
To counteract these challenges, we introduce this series and this framework on Operationalizing Responsible AI. Before an organization can implement an AI program responsibly, the systems and processes involving its people and culture need to be managed and evolved within the context of the business and in lockstep with technical maturity. Our framework covers all of these elements as follows:
Responsible AI Maturity Framework Stages
In this first issue, we address the foundational elements of the framework: Data Literacy and Contextual and Cultural Perspective. In later articles, we’ll explore how companies that have cleared these fundamental stages can apply more advanced concepts such as the FATE principles, governance and system operation, and ultimately, business value.
Data Literacy
Low data literacy is a widespread issue that prevents companies from identifying actionable insights and ultimately producing value from data, according to research by Accenture and Qlik. It is an issue that prevents Chief Data Officers from infusing a culture of analytics into cross-functional teams across operations, products, and service delivery. For Responsible AI, this is a problem because it breeds a lack of confidence in any kind of automated decision-making, let alone decisions powered by AI. Employees who are skeptical of the data quality, reliability, and effectiveness of automated systems are a fundamental blocker to implementing AI systems at all, let alone doing so with responsibility at the core.
A more concrete perspective comes from Caroline Buck, a data scientist at Wunderman Thompson, a marketing technology consultancy. Caroline points out that in the early days of the pandemic, in March 2020, “lack of data literacy in the general population led to a lot of confusion. Some people were more worried than they needed to be while others weren’t taking things seriously.” That confusion was fueled by charts like the one below, which display overall COVID case counts without normalizing for population. The chart makes Seattle look like a hotspot of the pandemic when, on a population-normalized basis, it had far fewer cases than New York City.
Example of a data visualization from March 2020 showing total COVID-19 cases rather than cases as a percentage of population
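To make the fix concrete, here is a minimal pandas sketch of the normalization step. The case counts and populations are hypothetical placeholders, not the actual March 2020 figures; the point is only the arithmetic of converting raw totals into per-capita rates:

```python
import pandas as pd

# Hypothetical counts for illustration only -- not actual March 2020 data.
cases = pd.DataFrame({
    "metro_area": ["New York City", "Seattle"],
    "total_cases": [15_000, 1_200],
    "population": [8_400_000, 750_000],
})

# Raw totals invite misleading comparisons between regions of very
# different sizes; normalizing by population puts them on one scale.
cases["cases_per_100k"] = cases["total_cases"] / cases["population"] * 100_000

print(cases.sort_values("cases_per_100k", ascending=False))
```

One line of arithmetic separates an alarming chart from an informative one, which is exactly the kind of habit broad data literacy programs instill.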
Organizations that are lacking in data literacy should consider changing their internal narrative around data. Rather than viewing specialized teams and niche SMEs as the “owners” of data, companies should scale fundamental knowledge of data and analytics across all roles. And instead of treating Excel as ‘good enough’ for most people to get by, internal training programs in data-driven decision making can overcome the perception of data tools as complicated, scary, and unforgiving. Increased data literacy makes organizations more competitive by increasing their digital dexterity, and it paves the way for more advanced systems, which require contextual sensitivity in order to succeed.
Contextual and Cultural Perspective
Context refers to the need for organizations and creators of AI to think about their intended audience, the role that AI will play in their lives, and the existing routines and rituals that AI will co-exist with. Developing products based on technical considerations alone will produce unintended consequences in real-world settings. One of the best-known examples is Google’s image recognition algorithm, which was trained on a dataset of primarily Caucasian and light-skinned faces and, as a result, failed spectacularly by labeling dark-skinned faces as gorillas.
The problem extends beyond training data. A specific example comes from Jody-Ann Jones, Adjunct Professor of Data Science at the University of the Commonwealth in Kingston, Jamaica. Jody-Ann likens the risk of building AI systems without contextual awareness to the mixed track record of the IMF in international development: “IMF models are textbook perfect, yet they have a poor track record of success when it comes to developing countries being able to pay back IMF loans. It’s because they prescribe policies that make economic sense, but lack cultural context. AI systems are the same way: they need to be built with the audience in mind and be representative of all stakeholders to be sustainable and successful.”
As teams within your organization begin to build AI/ML systems, they should start with the audience’s needs and behaviors in mind. Cultural immersion, ethnographic research, and traditional research methods such as focus groups and quantitative surveys can provide a vital understanding of user behavior before development of an AI system begins. Once the problem becomes technical in nature, model architectures such as knowledge graphs, shown below, can facilitate contextual understanding, nuance, and subtlety. This sets the stage for a more holistic approach, which we will tackle in our next issue on the FATE principles.
Using Graph Systems to Embed Contextual Awareness in AI Systems
Source: Neo4j, “Artificial Intelligence & Graph Technology Enhancing AI with Context & Connections”, by Amy E. Hodler
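As a minimal sketch of the idea, the snippet below builds a toy knowledge graph with the networkx library. The entities and relations (“Ada”, “Acme Bank”, and so on) are hypothetical, and a production system would more likely sit on a dedicated graph database such as Neo4j, the source of the figure above:

```python
import networkx as nx

# A toy knowledge graph: nodes are entities, directed edges carry
# typed relationships. All entities here are hypothetical.
kg = nx.DiGraph()
kg.add_edge("Ada", "Acme Bank", relation="customer_of")
kg.add_edge("Ada", "Kingston", relation="lives_in")
kg.add_edge("Kingston", "Jamaica", relation="located_in")
kg.add_edge("Acme Bank", "Jamaica", relation="operates_in")

def context_for(graph: nx.DiGraph, entity: str) -> list[tuple[str, str, str]]:
    """Gather the typed relationships around an entity -- the kind of
    connected context a graph-aware model can reason over, instead of
    treating the entity as an isolated row of features."""
    return [(entity, attrs["relation"], neighbor)
            for neighbor, attrs in graph[entity].items()]

print(context_for(kg, "Ada"))
# [('Ada', 'customer_of', 'Acme Bank'), ('Ada', 'lives_in', 'Kingston')]
```

Even this tiny example shifts the question from “what are this entity’s attributes?” to “how does this entity relate to everything around it?”, which is the contextual framing the figure above describes.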
Conclusion: Bridging the Gap
The 80% problem represents the sizable gap between organizations with good intentions and those that have found a way to take action. Those that have taken action have found that operationalizing Responsible AI requires a commitment to data literacy and cultural understanding, in addition to technical considerations. And remember that by taking a step back to think about the role your AI system plays in the world, you can set your organization on the path from good intentions to action.
In our next edition, we’ll explore how organizations that have achieved Data Literacy and Contextual and Cultural Perspective can apply Responsible AI within technical and product teams through the FATE principles.