The Chronicles Of AI Ethics: The Man, The Machine, And The Black Box

Aparna Dhinakaran
March 17, 2021 AI & Machine Learning

Today, machine learning and artificial intelligence systems, trained on data, have become so effective that many of the world's largest and most well-respected companies rely on them almost exclusively for mission-critical business decisions. The outcome of a loan, insurance, or job application, or the detection of fraudulent activity, is now often determined by processes with no human involvement whatsoever.

In a past life, I worked on machine learning infrastructure at Uber. From estimating ETAs to dynamic pricing to matching riders with drivers, Uber relies on machine learning and artificial intelligence to enhance customer happiness and increase driver satisfaction. Frankly, without machine learning, I question whether Uber would exist as we know it today.

For data-driven businesses, there is no doubt that machine learning and artificial intelligence are enduring technologies that are now table stakes in business operations, not differentiating factors.

While machine learning models aim to mirror and predict real life as closely as possible, they are not without their challenges. Household-name brands like Amazon, Apple, Facebook, and Google have all been accused of algorithmic bias, with consequences that reach society at large.

For instance, Apple famously ran into an AI bias storm when it introduced the Apple Card and users noticed that it was offering smaller lines of credit to women than to men.

In more extreme and troubling cases, judicial systems in the U.S. are using AI systems to inform prison sentencing and parole terms despite the fact that these systems are built on historically biased crime data, amplifying and perpetuating embedded systemic biases and calling into question algorithmic fairness in the criminal justice system.

In the wake of the Apple Card controversy, Apple’s issuing partner, Goldman Sachs, defended its credit limit decisions by noting that its algorithm had been vetted by a third-party and that gender was not used as an input or determining factor.

While applicants were not asked for gender when applying for the Apple Card, women nonetheless received smaller credit limits, underscoring a troubling truth: machine learning systems can develop biases even when a protected-class variable is absent.

Data science and AI/ML teams today often avoid matching protected-class information back to model data, preserving plausible deniability: if I didn't use the data, the machine can't be making decisions based on it, right? In reality, many variables are correlated with gender, race, or other aspects of identity and can, in turn, lead to decision-making that does not offer equal opportunity to all people.
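This proxy problem can be sketched in a few lines. The simulation below is entirely hypothetical (the feature names and numbers are invented for illustration): a "gender-blind" credit model never sees gender as an input, yet its average limits differ by gender because one of its inputs happens to correlate with it.

```python
import random

random.seed(0)

# Hypothetical data generator: gender is never shown to the model,
# but a proxy feature (a made-up spending-pattern score) correlates
# with it due to historical patterns in the data.
def make_applicant():
    gender = random.choice(["F", "M"])
    proxy = random.gauss(0.3 if gender == "F" else 0.7, 0.15)
    income = random.gauss(60_000, 15_000)
    return gender, proxy, income

def credit_limit(proxy, income):
    # "Gender-blind" model: it only sees the proxy score and income...
    return max(0.0, income * (0.05 + 0.1 * proxy))

applicants = [make_applicant() for _ in range(10_000)]
by_gender = {"F": [], "M": []}
for gender, proxy, income in applicants:
    by_gender[gender].append(credit_limit(proxy, income))

avg = {g: sum(limits) / len(limits) for g, limits in by_gender.items()}
# ...yet the average limit still differs by gender, because the proxy
# feature leaks the protected attribute into the decision.
print(f"avg limit F: {avg['F']:.0f}, avg limit M: {avg['M']:.0f}")
```

Dropping the protected column from the training data did nothing here; the disparity only becomes visible once outcomes are compared against the protected attribute after the fact.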

The Imbalance of Responsibility

We are living in an era where major technological advances are imperfectly regulated and effectively shielded from social responsibility, while their users face major repercussions.

We come face to face with what M.C. Elish coined "The Moral Crumple Zone." This zone represents the diffusion of responsibility onto the user instead of the system as a whole. Just as a car's hood takes the brunt of the impact in a head-on collision, the user of the technology absorbs the impact of the ML system's mistakes. For example, as it stands, if a car with self-driving capabilities fails to recognize a stop sign, the driver is responsible for any mistakes and subsequent damages the car causes, not those who trained the models and produced the car.

To make matters worse, the users of most technology very rarely have a full understanding of how the technology works and its broader impact on society. It is unfair to expect users to make the right risk management decisions with minimal understanding of how these systems even work.

These effects are magnified when talking about users in underrepresented and disadvantaged communities. People from these groups have a much harder time managing unforeseen risk and defending themselves from potentially damaging outcomes. This is especially damaging if an AI system makes decisions with limited data on these populations – which is why topics like facial recognition technology for law enforcement are particularly contentious. Turning a blind eye is no longer an option given the social stakes.

Those who intentionally build these complex models must consider their ethical responsibilities in doing so, because their work has lasting structural consequences for our world that do not resolve themselves.

Rise Up Or Shut Up: Taking Accountability

We live in a society that manages its own risks by establishing ethical frameworks, creating acceptable codes of conduct, and, in the end, codifying these beliefs in legislation. When it comes to ML systems, we are far behind. We are only beginning to talk about the ethical foundations of ML, and as a result our society will pay the price for our slow action.

We must work harder to understand how machine learning models are making their decisions and how we can improve this decision making to avoid societal catastrophe.

So, what steps do we need to take now to start tackling the problem?

STEP 1: Admit that proper ethical validation is mission-critical to the success of our rapidly growing technology.

The first step in exposing and improving how AI/ML affects us as a society is to better understand complex models and validate ethical practices. It is no longer okay to avoid the problem and claim ignorance.

STEP 2: Make protected class data available to modelers

Contrary to current practices, which exclude protected-class data from models to allow for plausible deniability in the case of biased outcomes, protected-class data should in fact be available to modelers and included in the data sets that inform ML/AI models. The ability to test against this data puts the onus on modelers to make certain their outputs aren't biased.
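As a sketch of what "testing against this data" might look like, the snippet below computes a simple demographic-parity gap. The helper name and the toy decisions are illustrative assumptions, not any particular team's method; real audits would also examine metrics like equalized odds.

```python
# Hypothetical fairness check: protected-class labels are used only
# for validating outcomes, never as a model input.
def demographic_parity_gap(outcomes, groups):
    """Difference between the highest and lowest positive-outcome
    rates across protected groups (0.0 means perfect parity)."""
    tallies = {}
    for outcome, group in zip(outcomes, groups):
        positives, total = tallies.get(group, (0, 0))
        tallies[group] = (positives + outcome, total + 1)
    rate_by_group = {g: p / t for g, (p, t) in tallies.items()}
    return max(rate_by_group.values()) - min(rate_by_group.values())

# Toy model decisions (1 = approved) paired with protected-class labels.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["F", "F", "M", "M", "F", "M", "M", "F"]

gap = demographic_parity_gap(decisions, groups)
print(f"demographic parity gap: {gap:.2f}")
```

A gap near zero suggests similar approval rates across groups; a large gap (here 0.50 in the toy data) is exactly the kind of signal that is invisible when protected-class data is walled off from modelers.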

STEP 3: Break down barriers between model builders and the protected class data

Bias problems and analysis are not solely the purview of model validation teams. Putting a wall between teams and data only diffuses responsibility. The teams building models need to own this responsibility and need the data to make those decisions.

STEP 4: Employ emerging technologies such as ML observability that enable accountability

You can’t change what you don’t measure. Businesses and organizations need to proactively seek tools and solutions that help them better monitor, troubleshoot, and explain what their technology is doing, and subsequently uncover ways to improve the systems they’ve built.
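One concrete observability primitive is distribution-drift measurement: comparing a production feature's distribution against its training baseline. The sketch below implements the population stability index (PSI) from scratch; the data and the 0.2 threshold are illustrative assumptions, not the output of any particular tool.

```python
import math

def psi(baseline, production, bins=10):
    """Population stability index between two samples of one feature.
    Larger values mean the production distribution has drifted further
    from the training baseline."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0

    def bin_probs(values):
        counts = [0] * bins
        for x in values:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Floor empty bins at one count to avoid log(0).
        return [(c or 1) / len(values) for c in counts]

    b, p = bin_probs(baseline), bin_probs(production)
    return sum((pi - bi) * math.log(pi / bi) for bi, pi in zip(b, p))

baseline   = [i / 100 for i in range(100)]        # training-time feature values
production = [0.3 + i / 200 for i in range(100)]  # shifted live traffic

score = psi(baseline, production)
print(f"PSI = {score:.3f}")  # a common rule of thumb: > 0.2 warrants investigation
```

Running this kind of check per feature, per model, on a schedule is a small first step toward the accountability the post argues for: drift gets flagged by a measurement rather than discovered by harmed users.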

Ultimately, the problem of the black box is growing as AI/ML technologies are becoming more advanced, yet we have little idea of how most of these systems truly work. As we give our technology more and more responsibility, the importance of making ethically charged decisions in our model building is amplified exponentially. It all boils down to really understanding our creations. If we don’t know what is happening in the black box, we can’t fix its mistakes to make a better model and a better world.

© 2021, Experfy Inc. All rights reserved.