Using Responsible AI To Teach The Golden Rule

Business leaders face a delicate balancing act when it comes to AI. On one hand, according to O’Reilly, 85% of executives across 25 industries are tasked with either evaluating or deploying AI. On the other hand, risks and unintended consequences continue to grow, from Google search results returning offensively skewed results for “black girls” to questions about whether the insurance startup Lemonade’s use of data could drive predictions against specific religious groups. AI is hard enough along the dimensions of business value, technical feasibility, and cybersecurity; navigating all of those challenges while still building responsible AI systems can seem impossible.

Luckily, there is a way forward: focusing on the human angle. In this article we’ll give business leaders a helpful primer on Responsible AI through a combination of individual stories, technical risks, and emerging best practices. It is by no means exhaustive, but it should give business managers a clear place to start.

Looking at Responsible AI through one person’s journey

An easy place to start is the film “Coded Bias”, directed by Shalini Kantayya. The documentary had a limited original release at the Sundance Film Festival in 2020 but has since reached a much wider audience on Netflix. The film opens with the story of Joy Buolamwini, then a grad student at MIT, who discovered that facial recognition apps struggled to recognize faces of color. From there, it follows Joy’s work and introduces several leading voices calling attention to the risks of misusing AI, including Cathy O’Neil, Meredith Broussard, Deb Raji, Timnit Gebru, Silkie Carlo, and many others. It’s a 90-minute introduction to the issue at the heart of Responsible AI: innovation without regulation can reinforce existing racial, economic, and social disparities.

For a more technical view, extensive research has been published on the potential of language models to reinforce harmful associations and stereotypes. A good place to start is the recent paper “On the Dangers of Stochastic Parrots”, which highlights not only the risks in the language models themselves but also the tendency of people to put undue faith in machine-produced output, a phenomenon known as automation bias. The risk is that people using language models in areas such as healthcare and cognitive therapy may grant the programs a level of trust they have not earned.

Technical risks: Prediction vs Predetermination

At a more practical level, business managers need to understand a few structural limitations that AI faces at this stage of its development. When used to make decisions in a business context, an AI system will only be as good as its training data, which creates the risk that it predicts a future that looks like the past. The best-publicized example is Amazon’s resume-screening model, which down-weighted female candidates because the company’s engineers had historically been mostly male: the model misidentified gender as a predictive factor in what makes a ‘good’ engineering candidate, and it was eventually scrapped. The same dynamic applies to facial recognition models trained on predominantly white faces that fail to recognize faces of color, as documented in “Coded Bias”. The sketch below makes the mechanics concrete.
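Here is a minimal sketch, using entirely synthetic data, of how a model can absorb historical bias through a proxy feature even when the sensitive attribute itself is excluded. The feature names, numbers, and scikit-learn setup are illustrative assumptions, not a reconstruction of Amazon’s actual system.

```python
# A minimal synthetic sketch of how a model absorbs historical bias through
# a proxy feature. All feature names and numbers are invented; this is not
# a reconstruction of any real screening system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)               # true qualification (what we'd want to measure)
is_female = rng.integers(0, 2, size=n)   # sensitive attribute, NOT given to the model

# Historical hiring decisions: driven by skill, but biased toward one group.
hired = (skill + 1.5 * (1 - is_female) + rng.normal(size=n) > 1.0).astype(int)

# The model only sees a proxy correlated with the sensitive attribute,
# e.g. a hypothetical "women's chess club" line on a resume.
proxy = ((is_female == 1) & (rng.random(n) < 0.8)).astype(int)

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

print("coefficient on proxy feature:", round(model.coef_[0][1], 2))
# The coefficient comes out strongly negative: trained to reproduce past
# decisions, the model has learned the bias, not candidate quality.
```

The point is structural: a model rewarded for reproducing past decisions will treat any bias baked into those decisions as signal.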

Another facet of this issue is that patterns in the data can predetermine the outcome. A well-known example is the controversy around the COMPAS model, used in several US states to predict the risk that an inmate up for parole will commit another crime and return to prison. While the model did not include race as a variable, it included several socio-economic factors that were highly correlated with race and weighted them toward higher risk scores. An investigation by ProPublica found that as a result, “the model was nearly twice as likely to categorize black inmates who did not commit another crime as being high risk, while it was much more likely to categorize white inmates as low risk, even those who went on to commit additional crimes”. The findings kicked off a widespread debate about the methodology behind COMPAS and how the concept of “fairness” can even be defined for such models, as the sketch below illustrates.
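The disagreement becomes easier to see with numbers. Below is a hedged sketch using invented counts shaped roughly like ProPublica’s reported figures, not the real COMPAS data, showing how a risk score can have similar overall accuracy for two groups while distributing its errors very differently.

```python
# A sketch of the error-rate disparity at the center of the COMPAS debate,
# using invented counts (not real data) shaped like ProPublica's findings.
import numpy as np

def rates(y_true, y_pred):
    """False positive rate and false negative rate for one group."""
    fpr = np.mean(y_pred[y_true == 0] == 1)  # non-reoffenders flagged high risk
    fnr = np.mean(y_pred[y_true == 1] == 0)  # reoffenders labeled low risk
    return fpr, fnr

def make_group(n_neg, fp, n_pos, fn):
    """Build label/prediction arrays from confusion-matrix counts."""
    y_true = np.array([0] * n_neg + [1] * n_pos)
    y_pred = np.array([1] * fp + [0] * (n_neg - fp)
                      + [0] * fn + [1] * (n_pos - fn))
    return y_true, y_pred

# Hypothetical counts per group: (non-reoffenders, false positives,
#                                 reoffenders, false negatives)
group_a = make_group(1000, 450, 1000, 280)  # flagged high risk too often
group_b = make_group(1000, 230, 1000, 480)  # given the benefit of the doubt

for name, (y_true, y_pred) in [("A", group_a), ("B", group_b)]:
    fpr, fnr = rates(y_true, y_pred)
    print(f"group {name}: FPR={fpr:.0%}  FNR={fnr:.0%}")
# group A: FPR=45%  FNR=28%
# group B: FPR=23%  FNR=48%
```

Both groups see roughly the same overall accuracy, yet the mistakes fall very unevenly, which is why “fairness” needs an explicit definition before it can be measured.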

Best practices for the Responsible AI future

Today, the industry stands at a crossroads, with a growing ecosystem of tools, processes, and datasets to help practitioners build systems with responsibility as a core component: documentation practices such as model cards and datasheets for datasets, open-source fairness toolkits such as Fairlearn and IBM’s AIF360, and independent bias audits of deployed systems.
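As one concrete flavor of what such a process can look like, here is a hedged sketch of a pre-deployment disparate-impact check, loosely modeled on the “four-fifths rule” from US employment guidelines. The threshold, group labels, and predictions are all placeholder assumptions for illustration, not legal advice.

```python
# A hedged sketch of a pre-deployment disparate-impact check, loosely
# modeled on the "four-fifths rule". Threshold and data are placeholders.
import numpy as np

def selection_rate(y_pred, group_mask):
    """Fraction of a group receiving a positive decision."""
    return np.mean(y_pred[group_mask])

def disparate_impact_check(y_pred, groups, threshold=0.8):
    """Flag the model if any group's selection rate falls below
    `threshold` times the most-selected group's rate."""
    rates = {g: selection_rate(y_pred, groups == g) for g in np.unique(groups)}
    best = max(rates.values())
    report = {g: r / best for g, r in rates.items()}
    passed = all(ratio >= threshold for ratio in report.values())
    return passed, report

# Hypothetical predictions from a hiring model, with group membership.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
groups = np.array(["a", "a", "a", "a", "a", "a",
                   "b", "b", "b", "b", "b", "b"])

ok, report = disparate_impact_check(y_pred, groups)
print("passes four-fifths check:", ok)   # False
print("selection-rate ratios:", report)  # {'a': 1.0, 'b': 0.25}
```

A check like this is deliberately simple; its value is less in the arithmetic than in forcing the question to be asked before a system ships.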

In Conclusion

We should all remember that Responsible AI is more than a set of technical best practices; it’s a commitment to the human principles of empathy, responsibility, and equitable treatment. AI is still in its adolescence. But, just as we would with a human child, we have the opportunity to teach it the golden rule: do unto others as you would have them do unto you.
