Global demand for AI is growing faster than almost anyone predicted: the market for AI software is expected to be worth $126 billion by 2025. While we have become aware of many of the benefits of this new technology, we have also been forced to come to terms with some of its risks.
Few other developments have spawned such a legitimate debate around ethics as AI. We have been debating the ethics of artificial minds since the first science fiction stories about robots were written. Isaac Asimov and other writers created fictitious accounts of robots taking over the world, although concepts like Asimov's "Three Laws of Robotics" turned out to be a bit off base as predictions.
These ideas were interesting enough in the context of a novel, years before AI as we know it was a thing. However, modern AI can have both beautiful and frightening capabilities. This has forced us to have a serious discussion about the ethics of AI.
Complexity of AI Makes Ethical Discussions Murkier
Ethical discussions about AI have become far murkier than we expected when AI emerged as a major disruptive technology. The complexity comes from the fact that we are not merely projecting human intentions and plans; we are speculating about an intelligence we don't fully understand. Of course, human desires do get projected into AI, because humans build it – that's how it works, at least for now.
But AI has an ability nothing before it has ever had: it can find its own way forward, and beyond a certain point it may not need us at all. This is the singularity – the posited point at which a global AI becomes truly self-sufficient.
That alone makes it as potentially dangerous as an alien race – and for those scoffing, wait a decade or two. Once machines can not only optimise themselves but also build the better, faster, smarter machines of tomorrow, we will have crossed that line.
Perhaps more importantly, however – and on a more optimistic note – we all anticipate human morality will suffuse AI because it’s being built by moral humans.
AI hasn't touched everyone's lives yet, but to glimpse how close it is to a quiet suburban (and less quiet commercial and industrial) takeover, you only have to look at IT support. Agencies that help SMEs navigate mainstream, tech-enabled business life are both encountering AI and leaning on it more and more. The tech support landscape was always fast and fluid, but this is something else entirely.
Making AI fair to humans
AI doesn't have human intelligence, but it already shapes much of the ethical construct around its own existence and our relationship with it. Another reason AI can seem alien and scary is that we don't know how it thinks.
We build it, we prompt it, we set it loose within a paradigm full of guides, and yet the results are sometimes weird – there's no other word for it (that's our own imitation of ourselves shining through!).
One of the most common ethical concerns around AI is unemployment.
It might seem a soft issue, but when the late-capitalist experiment is what sustains billions of people around the world, how are their families to eat once they become obsolete? The theory is that jobs won't be lost, just redefined, and history would seem to back that up. But history might prove a lousy prophet this time, because this is something different. We're about to hand over huge slices of our daily industry to robots, and white-collar workers are just as likely to get the chop.
On the other hand, we will perhaps one day look back and acknowledge human labour for the suffering it was, and agree that humans selling their time in exchange for the money that bought shelter, food, and clothing is repugnant to decent people. Such may well be the joys of the looming push-button paradise, but two things can be taken for granted: first, if it represents a cost-saving "improvement", business will adopt it without caring about the human fallout. Second, AI won't care about it, either.
Rolling right on from that, our societies currently suffer under tremendous inequality.
The planet already shows wildly different versions of life, and when social mobility has been a direct result of a person's contribution to the economy, how will people survive once that economy finds them superfluous? How will we distribute the wealth that AI generates? If a company becomes AI-enabled and sheds its workforce, profits go up (no staff "cost centres"), but they go up for only a handful of people – while billions might be struggling to find meaning (and income) after robots or chatbots have taken their place.
Decency might become the first casualty of AI – it’s already on life support as it is.
Our fundamental humanity might demand redefinition once AI impacts us in all of its nuanced possibilities. Our interaction as a species seems to be degenerating already, and it’s becoming less intelligent each year. Social media platforms are falling over themselves to automate more and more, gather more and more, and sell more and more, while users keep staring at their screens.
We already interact with AI whether we acknowledge it or not, with perhaps the clearest downside being our ability to render mobile games addictive. Coupled with learning AI, that kind of isolating, negative trend is only going to get worse. Tech addiction is real, and now we're going to ask tech to learn, figure out its own way forward, and police itself?
That doesn’t sound good.
Can an AI unencumbered by compassion, empathy, or concern for others' well-being really be smarter than us? That may sound silly, since the AI apps we see so far are wholly geared towards our service or pleasure, but it could manifest as a long-term behaviour. Subtly, almost imperceptibly, AI might normalise less and less humane decisions – capably aided, again, by the modern humans it learns from. In a nutshell, AI has the potential to become the sanction, the justification, the keen student, and ultimately the master of our uncaring.
That’s a real possibility. AI might be the catalyst towards a dramatic drop in neighbourliness and mutual caring, something it will then perpetuate either as the visible norm or, potentially, as a precursor to its own agenda, whatever that might be – we wouldn’t know.
AI is a mirror of humanity right now, and that’s not all good or bad
People can be hateful and biased, but there’s a moment here where our best attributes can be infused into AI, not our darkest behaviour.
Only time will tell, but it’s worrying. And what ethics should we manifest towards robots, for example, that have come to emulate humans in almost every way?
Robot rights might sound laughable now, but if we succeed or AI succeeds – it’s sometimes hard to identify the protagonist in this – what or who will we be staring in the face 100 years from now?
Is it a machine, or is the fact that it so exquisitely and accurately resembles humans sufficient to call a robot a “who”?
In a classic line from I, Robot, Will Smith's character is thanked by the robot Sonny for saying "someone" rather than "something" when addressing him. Perhaps the singularity will also be the defining moment when AI becomes more human than people.
Ethical questions around AI typically centre on how AI, or its early masters, will deal with the suffering AI generates as it replaces human effort on an enormous scale. AI will also face the same dilemmas people do: having to choose "the best you can do under the circumstances". Self-driving cars, for example, might put millions out of work in the transport industry, yet save millions of lives by achieving a far higher safety record than human drivers ever could.
The best we can hope for is that, as AI encroaches upon every life on the planet, we keep demanding that if it eases our lives, it does the same for our neighbours. In other words, artificial intelligence has massive potential for the future, but we need to define its morality now, in the present, if we want to prevent a future peppered with strife, human suffering, and – who knows – perhaps extinction itself.