Business leaders face a delicate balancing act when it comes to AI. On one hand, according to O’Reilly, 85% of executives across 25 industries are tasked with either evaluating or deploying AI. On the other hand, risks and unintended consequences continue to grow, from Google search results showing offensively skewed results for “black girls” to questions about the insurance startup Lemonade’s use of data to make predictions against specific religious groups. AI is hard enough along the dimensions of business value, technical feasibility, and cybersecurity. Navigating all of those challenges while still building responsible AI systems can seem like an impossible balancing act!
Luckily, there’s a way forward: focusing on the human angle. In this article we give business leaders a primer on Responsible AI through a combination of individual stories, technical risks, and best practices for moving forward. It is by no means exhaustive, but it should be enough to show business managers where to get started.
Looking at Responsible AI through one person’s journey
An easy place to start is the film “Coded Bias”, directed by Shalini Kantayya. The documentary had a limited original release at the Sundance Film Festival in 2020 but has since reached a much wider audience on Netflix. It opens with the story of Joy Buolamwini, then a grad student at MIT, who discovered that facial recognition apps struggled to recognize faces of color. From there, it follows Joy’s work and introduces several leading voices calling attention to the risks of the misuse of AI, including Cathy O’Neil, Meredith Broussard, Deb Raji, Timnit Gebru, Silkie Carlo, and many others. It’s a 90-minute introduction to the key issue at the heart of Responsible AI: innovation without regulation can reinforce existing racial, economic, and social disparities.
For a more technical view, extensive papers have been published on the potential for language models to reinforce harmful associations and stereotypes. A good place to start is the recent paper “On the Dangers of Stochastic Parrots”, which highlights not only the risks in the language models themselves but also the phenomenon of people putting undue faith in machine-produced output, known as automation bias. This creates the risk that people using language models in areas such as healthcare and cognitive therapy will give the programs a level of trust they have not earned.
Technical risks: Prediction vs Predetermination
At a more practical level, business managers need to understand a few structural limitations that AI faces at this stage of its development. When used to make decisions in a business context, AI will only be as good as its training data, which creates the risk that AI predicts a future that looks like the past. The best-known example is Amazon’s resume-screening model, which down-weighted female candidates: because the company’s engineers had historically been mostly male, the model treated gender as a predictive factor in what makes a ‘good’ engineering candidate. The model was eventually scrapped. The same dynamic appears in facial recognition models trained on predominantly white faces that fail to recognize faces of color, as documented in “Coded Bias”. The sketch below shows how easily this mechanism arises.
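To make that concrete, here is a minimal sketch using entirely synthetic data; none of it comes from Amazon’s system or any real hiring process. It trains a simple logistic regression on a historically skewed “hiring” dataset and shows the model assigning positive weight to gender, even though gender says nothing about candidate quality.

```python
# Synthetic illustration only: a model trained on historically skewed hiring
# data latches onto gender as a "predictive" feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# A genuine skill score, distributed the same way for all candidates.
skill = rng.normal(0, 1, n)
# Gender (1 = male, 0 = female), with a male-heavy applicant history.
gender = rng.binomial(1, 0.8, n)

# Historical hiring decisions: driven by skill, plus a bias term that
# favored male candidates in the past.
hired = (skill + 0.8 * gender + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)
print("coefficient on skill:  %.2f" % model.coef_[0][0])
print("coefficient on gender: %.2f" % model.coef_[0][1])
# The gender coefficient comes out clearly positive: the model has learned to
# reproduce the historical bias, not to measure candidate quality.
```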
Another facet of this issue is that there can be patterns in the data that predetermine the outcome. A well-known example is the controversy around the COMPAS model, used in several US states to assess prison inmates who are up for parole by predicting the risk that they will commit another crime and return to prison. While the model did not include race as a variable, it did include several socio-economic factors that were highly correlated with race and that pushed risk scores higher. An investigation by ProPublica found that, as a result, “the model was nearly twice as likely to categorize black inmates who did not commit another crime as being high risk, while it was much more likely to categorize white inmates as low risk, even those who went on to commit additional crimes”. The finding kicked off a widespread debate about the methodology behind COMPAS and about how the concept of “fairness” should be applied to such models; the sketch below shows the kind of error-rate comparison at the center of that debate.
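Here is a hedged sketch of the type of check ProPublica ran: comparing false positive rates (people labelled “high risk” who did not re-offend) across groups. The numbers below are made up for illustration; they are not the COMPAS data.

```python
# Toy data: two groups, a binary "high risk" label, and the actual outcome.
import pandas as pd

scores = pd.DataFrame({
    "group":      ["A"] * 6 + ["B"] * 6,
    "high_risk":  [1, 1, 1, 0, 1, 0,   0, 0, 1, 0, 0, 1],
    "reoffended": [0, 1, 0, 0, 1, 0,   0, 1, 0, 0, 1, 1],
})

for group, g in scores.groupby("group"):
    did_not_reoffend = g[g["reoffended"] == 0]
    # Share flagged as high risk despite not re-offending.
    fpr = did_not_reoffend["high_risk"].mean()
    print(f"group {group}: false positive rate = {fpr:.2f}")

# A large gap between the two rates is the disparity at the heart of the
# debate: a model can look accurate overall and still make very different
# kinds of errors for different groups.
```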
Best practices for the Responsible AI future
Today, the industry stands at a crossroads, with a growing set of tools, processes, and datasets emerging to help AI practitioners build systems with responsibility as a core component:
- Frameworks: Numerous frameworks have been created around the key themes of Fairness, Accountability, Transparency, and Ethics (FATE). Industry leaders, from Microsoft to Google to Salesforce to NVIDIA, are publishing their own frameworks for applying these principles. On the business side, firms such as PwC and BCG have created frameworks to help decision-makers navigate this complex landscape.
- Explainability and Reproducibility: On a technical level, a range of tools has emerged to ensure AI systems are built transparently. These run from Python libraries that use pixel attribution to explain the predictions of deep learning computer vision models (see the first sketch after this list) to ML platforms such as Weights & Biases that help practitioners see where the ‘signal’ in an experiment is coming from across datasets, models, and hyperparameters.
- Data Sets: There are also labelled datasets to help train models toward equitable outcomes, from the Bolukbasi et al. method for identifying gender bias in word embeddings (see the second sketch below) to Facebook’s recent release of a dataset that aims to correct skin-tone bias in facial recognition by including a racially diverse set of participants.
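First, a minimal sketch of gradient-based pixel attribution (a “saliency map”), one of the simplest techniques behind the explainability libraries mentioned above. The model and image below are random stand-ins invented for illustration; in practice you would plug in your own trained network and input.

```python
# Gradient saliency: which input pixels most influenced the top prediction?
import torch
import torch.nn as nn

# Stand-in classifier; in practice this would be your trained vision model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
).eval()

# Stand-in image; saliency needs gradients with respect to the input pixels.
image = torch.rand(1, 3, 64, 64, requires_grad=True)

logits = model(image)
logits[0, logits.argmax()].backward()  # gradient of the top class score

# Large gradient magnitudes mark the pixels that most influenced the output.
saliency = image.grad.abs().max(dim=1).values.squeeze()
print(saliency.shape)  # torch.Size([64, 64]): a heat map of pixel importance
```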
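Second, a toy illustration of the idea behind the Bolukbasi et al. work: project word vectors onto a “he minus she” direction and see which supposedly neutral words lean toward one gender. The three-dimensional vectors here are invented for illustration; real analyses use pretrained embeddings such as word2vec or GloVe.

```python
# Measuring how strongly a word vector aligns with a learned gender direction.
import numpy as np

embeddings = {
    "he":        np.array([ 0.9, 0.1, 0.2]),
    "she":       np.array([-0.9, 0.1, 0.2]),
    "engineer":  np.array([ 0.4, 0.8, 0.1]),
    "homemaker": np.array([-0.5, 0.7, 0.2]),
}

gender_direction = embeddings["he"] - embeddings["she"]
gender_direction /= np.linalg.norm(gender_direction)

for word in ("engineer", "homemaker"):
    v = embeddings[word] / np.linalg.norm(embeddings[word])
    print(f"{word}: projection onto gender direction = {v @ gender_direction:+.2f}")

# Occupation words should be gender-neutral; a strongly positive or negative
# projection is the kind of learned stereotype the debiasing work targets.
```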
In Conclusion
We should all remember that Responsible AI is more than a set of technical best practices; it is a commitment to the human principles of empathy, responsibility, and equitable treatment. AI is still in its adolescence. But just as we would with a human child, we have the opportunity to teach it the golden rule: do unto others as you would have them do unto you.