We take it for granted that we’re supposed to act ethically and, usually, that seems pretty simple. Don’t lie, cheat or steal, don’t hurt anybody on purpose and act with good intentions. In some professions, like law or medicine, the issues are somewhat more complex and practitioners are trained to make good decisions.
Yet ethics in the more classical sense isn’t so much about doing what you know is right as about thinking seriously about what the right thing is. Unlike the classic “ten commandments” type of morality, with its clear rules, many situations arise in which determining the right action to take is far from obvious.
Today, as our technology becomes vastly more powerful and complex, ethical issues are increasingly coming to the fore. Over the next decade we will have to build some consensus on issues like how much accountability a machine should bear and to what extent we should alter the nature of life. The answers are far from clear-cut, but we desperately need to find them.
The Responsibility of Agency
For decades intellectuals have pondered an ethical dilemma known as the trolley problem. Imagine you see a trolley barreling down the tracks that is about to run over five people. The only way to save them is to pull a lever to switch the trolley to a different set of tracks, but if you do that, one person standing there will be killed. What should you do?
For the most part, the trolley problem has been a subject for freshman philosophy classes and avant-garde cocktail parties, without any real bearing on actual decisions. However, with the rise of technologies like self-driving cars, decisions such as whether to protect the life of a passenger or a pedestrian will need to be explicitly encoded into the systems we create.
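To see what “explicitly encoded” might mean in practice, consider the deliberately toy sketch below. Nothing here reflects how any real autonomous-vehicle system works, and the names (Outcome, choose_maneuver, pedestrian_weight) are invented for illustration. The point is simply that the value judgment does not stay abstract; it becomes a concrete, auditable parameter in code.

```python
# A toy illustration, not a real autonomous-driving system: whom the
# vehicle protects reduces to an explicit, reviewable rule in code.

from dataclasses import dataclass


@dataclass
class Outcome:
    maneuver: str            # e.g., "stay_course" or "swerve"
    passengers_at_risk: int  # people inside the vehicle put at risk
    pedestrians_at_risk: int # people outside the vehicle put at risk


def choose_maneuver(options: list[Outcome],
                    pedestrian_weight: float = 1.0) -> Outcome:
    """Pick the maneuver with the lowest weighted harm.

    pedestrian_weight is the ethical judgment call: set it above 1.0
    and the system prioritizes pedestrians over its own passengers.
    Someone has to choose that number and answer for it.
    """
    def harm(o: Outcome) -> float:
        return o.passengers_at_risk + pedestrian_weight * o.pedestrians_at_risk

    return min(options, key=harm)


if __name__ == "__main__":
    options = [
        Outcome("stay_course", passengers_at_risk=0, pedestrians_at_risk=1),
        Outcome("swerve", passengers_at_risk=1, pedestrians_at_risk=0),
    ]
    # At pedestrian_weight=1.5 the car swerves, sacrificing passenger
    # safety; at 0.5 it stays the course. The trolley problem is now,
    # in effect, a configuration value.
    print(choose_maneuver(options, pedestrian_weight=1.5))
```

Whether that weight is 0.5 or 1.5 is exactly the kind of decision that, today, no one is formally accountable for making.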
That’s just the start. It’s become increasingly clear that data bias can vastly distort decisions about everything from whether we are admitted to a school to whether we get a job or even go to jail. Still, we’ve yet to achieve any real clarity about who should be held accountable for the decisions an algorithm makes.
As we move forward, we need to give serious thought to the responsibility of agency. Who’s responsible for the decisions a machine makes? What should guide those decisions? What recourse should those affected by a machine’s decision have? These are no longer theoretical debates, but practical problems that need to be solved.
Evaluating Tradeoffs
“Now I am become Death, the destroyer of worlds,” said J. Robert Oppenheimer, quoting the Bhagavad Gita, upon witnessing the world’s first nuclear explosion as it shook the plains of New Mexico. It was clear that we had crossed a Rubicon. There was no turning back, and Oppenheimer, as the leader of the project, felt an enormous sense of responsibility.
Yet the specter of nuclear Armageddon was only part of the story. In the decades that followed, nuclear medicine saved thousands, if not millions, of lives. Mildly radioactive isotopes, which allow us to track molecules as they travel through a biological system, have also been a boon for medical research.
The truth is that every significant advancement has the potential for both harm and good. Consider CRISPR, the gene editing technology that vastly accelerates our ability to alter DNA. It has the potential to cure terrible diseases such as cancer and multiple sclerosis, but it also raises troubling issues such as biohacking and designer babies.
In the case of nuclear technology, many scientists, including Oppenheimer, became activists. They engaged with the wider public, including politicians, intellectuals and the media, to raise awareness about the very real dangers of nuclear technology and to work towards practical solutions.
Today, we need similar engagement between the people who create technology and the broader public to explore the implications of technologies like AI and CRISPR, but that engagement has scarcely begun. That’s a real problem.
Building a Consensus Based on Transparency
It’s easy to paint pictures of technology going haywire. However, when you take a closer look, the problem isn’t so much with technological advancement as with ourselves. For example, the recent scandals involving Facebook were not about issues inherent to social media, but had more to do with an appalling breach of trust and a lack of transparency. The company has paid dearly for it, and those costs will most likely continue to pile up.
It doesn’t have to be that way. Consider the case of Paul Berg, a pioneer in the creation of recombinant DNA, for which he won the Nobel Prize. Unlike Zuckerberg, he recognized the gravity of the Pandora’s box he had opened. He joined other leading scientists in the Berg Letter, which called for a moratorium on the riskiest experiments until the implications were better understood, and then convened the Asilomar Conference to work out how the research could proceed safely.
In her book, A Crack in Creation, Jennifer Doudna, who made the pivotal discovery behind CRISPR gene editing, points out that a key aspect of the Asilomar Conference was that it included not only scientists, but also lawyers, government officials and the media. It was the dialogue between a diverse set of stakeholders, and the sense of transparency it produced, that helped the field advance.
The philosopher Martin Heidegger argued that technological advancement is a process of revealing and building. We can’t control what we reveal through exploration and discovery, but we can—and should—be wise about what we build. If you just “move fast and break things,” don’t be surprised if you break something important.
Meeting New Standards
In Homo Deus, Yuval Noah Harari writes that the best reason to learn history is “not in order to predict, but to free yourself of the past and imagine alternative destinies.” As we have already seen, when we rush into technologies like nuclear power, we create disasters like Chernobyl and Fukushima and undermine the technology’s potential.
The issues we will have to grapple with over the next few decades will be far more complex and consequential than anything we have faced before. Nuclear technology, while horrifying in its potential for destruction, requires a tremendous amount of scientific expertise to produce. Even today, it remains confined to governments and large institutions.
New technologies, such as artificial intelligence and gene editing, are far more accessible. Anybody with a modicum of expertise can go online and download powerful algorithms for free. High school kids can order CRISPR kits for a few hundred dollars and modify genes. We need to employ far better judgment than organizations like Facebook and Google have shown in the recent past.
Some seem to grasp this. Most of the major tech companies have joined with the ACLU, UNICEF and other stakeholders to form the Partnership on AI, a forum for developing sensible standards for artificial intelligence. Salesforce recently hired a Chief Ethical and Humane Use Officer. Jennifer Doudna has begun a similar process for CRISPR at the Innovative Genomics Institute.
These are important developments, but they are little more than first steps. We need a broader public dialogue about the technologies we are building to reach some kind of consensus about what the risks are and what we as a society are willing to accept. If not, the consequences, financial and otherwise, may be catastrophic.