As a society, we are obsessed with the idea of humanising artificial intelligence (AI).
Every day, our conversations with chatbots are becoming more natural, and consumers are increasingly expecting machines to replicate real-life human interactions. We expect the service we receive from virtual assistants on our banking apps to mimic the experience we would have in a high street branch.
Natural language processing (NLP), for instance, has accelerated over the last few years and transformed the way we communicate with machines. AI is becoming increasingly adept at understanding and replicating nuanced human speech – to the point that it can even conduct sentiment analysis to gauge what people think about a product or service. By identifying and extracting opinions from sources like social media, companies have been using the technology to understand how customers feel about their offerings.
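To make the idea concrete, here is a minimal sketch of the kind of social-media sentiment analysis described above, using the open-source NLTK library's rule-based VADER analyser. The example posts are invented for illustration, and a real pipeline would add data collection, cleaning and aggregation on top of this.

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# One-off download of the VADER lexicon (a rule-based model tuned
# for short, informal text such as social-media posts)
nltk.download("vader_lexicon", quiet=True)

analyser = SentimentIntensityAnalyzer()

# Hypothetical posts about a banking app, invented for this example
posts = [
    "Love the new banking app - transfers are instant!",
    "The virtual assistant keeps misunderstanding me. Frustrating.",
    "App update is fine, nothing special.",
]

for post in posts:
    scores = analyser.polarity_scores(post)
    # 'compound' is a normalised score in [-1, 1]; the usual convention
    # treats >= 0.05 as positive and <= -0.05 as negative.
    if scores["compound"] >= 0.05:
        label = "positive"
    elif scores["compound"] <= -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(f"{label:8} {scores['compound']:+.2f}  {post}")

Aggregating these per-post labels over thousands of mentions is what lets a company track how sentiment towards its product shifts over time.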
And yet, AI is still a long way from displaying human qualities like emotion. One question we must consider, then: will this obstacle impede our progress towards large-scale AI adoption?
Here’s what the research tells us…
At Fountech.ai, we recently conducted some research to better understand people’s concerns about AI. What we found was illuminating, but not altogether surprising.
According to our research, the lack of a distinctly ‘human’ element in AI puts people off fully trusting machines with routine tasks or decision-making. A significant 61% of people said that the idea of AI systems being able to function without human assistance concerns them. Those aged over 55 were the most likely to harbour such concerns, at a staggering 70%.
Meanwhile, more than half (57%) of respondents across all demographics think AI is fundamentally flawed because it cannot apply the same emotional intelligence or intuition that humans can when making decisions.
So, where does this leave us?
The research suggests that there is a strong desire to “humanise” AI, so to speak. We have already embarked on this journey by giving AI tools human-like names (Hello, Alexa!), mannerisms and even physical features in the case of robots.
But will people be more willing to use AI when it more closely resembles the human form – and what challenges stand in our way?
What’s the problem?
Artificial general intelligence (AGI) has long been hailed as the gold standard for AI. If and when AI advances to this point, it will express inherently human qualities like consciousness and self-awareness.
It is a mammoth task, and it remains to be seen whether the scientists and engineers currently attempting to ingrain more complex, human-centric ideas into machines will succeed in our lifetimes. It is difficult even to define cognitive features as complicated as consciousness, let alone create them in the digital domain through the medium of algorithms. In reality, we still have very little understanding of how the human brain works and how these elusive concepts operate in humans. Recreating an understanding of emotions from a series of specified parameters presents a significant challenge for developers – one that we have yet to overcome.
In theory, adding these characteristics to AI would be revolutionary. Being able to empathise with machines would likely accelerate the adoption of this technology and help it integrate better with human culture and society. We may be more willing to trust AI to drive decision-making – offering a medical diagnosis, for example – if it can display empathy towards the patient.
It is difficult to gauge how long it will take to reach this point. In the meantime, we must focus on bettering our understanding of human consciousness and emotions, as well as preparing for how humans and AI can learn and develop in partnership.