Can Artificial Intelligence Increase Our Morality?

Matthew Hutson
December 17, 2019

Just as we define our technologies, they define us.

In discussions of AI ethics, there’s a lot of talk of designing “ethical” algorithms, those that produce behaviors we like. People have called for software that treats people fairly, that avoids violating privacy, that cedes to humanity decisions about who should live and die. But what about AI that benefits humans’ morality, our own capacity to behave virtuously? 

That’s the subject of a talk on “AI and Moral Self-Cultivation” given last week by Shannon Vallor, a philosopher at Santa Clara University who studies technology and ethics. The talk was part of a meeting on “Character, Social Connections and Flourishing in the 21st Century,” hosted by Templeton World Charity Foundation, in Nassau, The Bahamas. (Full disclosure: I was the invited respondent for Vallor’s talk, providing commentary and facilitating discussion, and TWCF paid for my travel.)

Vallor painted a troubling picture of technology as it stands, noting several ways in which algorithms degrade our morality. Russian bots disrupt civil discourse online. YouTube recommendations feed our compulsion to click on extremist content. Video games goad us into playing longer, encouraging addictive behavior. Even well-meaning AI applications have potential dark sides, she said. Algorithms aimed at putting at-risk students back on track could conceivably increase conformity. Therapy apps that give points for good behavior might make personal growth feel like a badge-harvesting grind. Social credit systems like that in China, or even more subtle systems of nudging, could make virtue feel inauthentic.

Vallor noted a few successful efforts to temper our worse impulses. Some platforms filter harmful content, and some phones lock people out after extended screen time. But she labeled these “remedial efforts,” meant to limit harms rather than generate new benefits. Moreover, she pointed to three reasons morality-enhancing tech hasn’t been a priority for Silicon Valley: there’s no clear profit motive, modifying our behavior can seem paternalistic, and even deciding which behavior to encourage can stifle pluralism.

But Vallor held out some hope. “Are we really stuck between the Scylla of a digital Wild West, and the Charybdis of surrender to Orwellian digital overlords?” she asked. “I don’t see why we must be.” Here she cited the humanizing force of Fred Rogers, imagining the companionship of a virtual Mr. Rogers, or at least the types of apps he would have designed. “AI systems could invite us to reflect privately upon the sort of person we think we are or want to be,” she said, “and then offer ways in which we might steer our actual choices more effectively in that desired direction.” Of course, she cautioned, even a virtual Mr. Rogers would not be immune to the issues of fairness, accountability, and transparency that attend almost every other AI system. 

As I said at the meeting, I found Vallor’s talk wise, insightful, and beautifully written. I went on to mention a few near-term AI systems that enhance human cooperation, or at least coordination. Nicholas Christakis (who presented later at the meeting) has shown that interspersing bots in social networks can help people solve puzzles that require coordination. In recent research, autonomous vehicles learned to reduce surrounding traffic congestion in a simulation—and could perhaps reduce road rage in reality. When Twitter bots call out racists, the racists use fewer slurs; Intel is similarly teaching AI to call out hate speech in Reddit forums. And researchers have developed a reinforcement learning algorithm that’s better than people at eliciting cooperation from human partners in the iterated prisoner’s dilemma (as long as those partners think it’s a person).

And while there aren’t many apps that target morality specifically, moral development can result from broader interventions. Therapy apps improve users’ mental health, and when we’re well we can focus on being good. AI could also help people dedicate more face time to each other by automating paperwork and other rote tasks. And social robots have been known to improve the ability of autistic children and trauma survivors to open up to other people.

For sure, designing technologies to encourage ethical behavior raises the question of which behaviors count as ethical. Vallor noted that paternalism can preclude pluralism, but just to play devil’s advocate I pushed the argument for pluralism up a level and noted that some people support paternalism. Most in the room were from WEIRD cultures—Western, educated, industrialized, rich, and democratic—so to us China’s social credit system feels Orwellian, but many people in China don’t mind it.

The biggest question in my mind after Vallor’s talk was about the balance between self-cultivation and situation-shaping. Good behavior results from both character and context. To what degree should we focus on helping people develop a moral compass and fortitude, and to what degree should we focus on nudges and social platforms that make morality easy? 

The two approaches can also interact in interesting ways. Occasionally extrinsic rewards crowd out intrinsic drives: If you earn points for good deeds, you come to expect them and don’t value goodness for its own sake. Sometimes, however, good deeds perform a self-signaling function, in which you see them as a sign of character. You then perform more good deeds to remain consistent. Induced cooperation might also act as a social scaffolding for bridges of trust that can later stand on their own. It could lead to new setpoints of collective behavior, self-sustaining habits of interaction.

There’s a lot to speculate about. What’s clear is that just as we define our technologies, they define us. All the more reason to think hard about where technology is going—and to involve psychologists and philosophers in the discussion.

This article originally appeared in Psychology Today.
