The artificial intelligence (AI) revolution is upon us: from Siri to facial recognition to self-driving cars, there’s no ignoring AI’s involvement in our everyday lives. Automation, which began as a way to make mundane tasks easier, has advanced rapidly, bringing fundamental and beneficial changes to human life. In fact, PricewaterhouseCoopers recently projected that AI could contribute as much as $15.7 trillion to the global economy by 2030.
Despite its widespread advantages, some have cast the discussion around AI in a negative light. Doomsday scenarios in movies such as The Terminator have fueled two main fears: that AI could be used for malicious purposes, and that robots and computers could reshape the world at humankind’s expense.
But one key point is critical to keep in mind: technology cannot be “good” or “bad” in and of itself. AI works only toward the goals it is programmed or “trained” to accomplish, so the defining factor in the merit of its outcomes is human motivation. The same reasoning applies across many scenarios; for example, although the main use for cars is to move people from one place to another, negative incidents involving cars still occur, such as drunk-driving accidents and acts of terrorism. Yet it is widely understood that in none of these cases is the car itself to blame.
In the case of AI, is the technology at fault if someone builds a destructive killer robot? Decidedly not; the responsibility falls squarely on the human wielding AI for nefarious purposes. But because the possibility exists for AI to be used in harmful ways, now is the time to place controls on AI development to proactively mitigate that malicious intent.
This is not an unusual necessity; rules already govern many technologies and tools in order to check human motive. Nuclear technology is a prime example: it offers a number of potentially positive uses, but it was always evident that it could also be turned to destructive purposes in the form of weapons. Treaties and regulations have therefore been put in place to curb the production and proliferation of these devices.
Global controls over AI development can work in the same manner, allowing companies to use the technology to streamline processes and protect assets while limiting its use for malevolent purposes. Despite the doom and gloom proclaimed by some, AI is an exciting and innovative technology, and embracing it will require us to remember that it is neither good nor bad.
A recent report from MIT Sloan Management Review found that more than 80 percent of global enterprises see AI as a strategic opportunity rather than a risk; for this mindset to continue, it is up to humans to ensure AI is deployed responsibly. Rather than heralding AI as the biggest danger to human existence, thought leaders must put their energy into producing the proper controls that enable the technology to provide the greatest benefit to the greatest number of people.