It’s time we started talking about AI regulation.
As the technology progresses at a rapid pace, it is critical that governments and policymakers think about how we can safeguard against the effects of Artificial Intelligence on a social, economic and political scale. Artificial Intelligence is not inherently good or bad, but the way we use it could well be one or the other.
Unfortunately, governing bodies have so far paid little attention to the impact of this technology. We’re going to see huge changes to employment, privacy and arms, to name a few, that, if managed incorrectly or not at all, could spell disaster. Handled correctly, with forward planning and proper regulation, the technology has the potential to better the future of our societies.
Elon Musk’s warnings have made headlines in recent months, as he urges that the regulation of AI be proactive rather than reactive, for fear that the latter would come much too late. Whether you’re in the Musk or Zuckerberg camp, it’s undeniable that we need to consider all outcomes for society.
It’s been a year since giants in the field of deep learning (Amazon, Facebook, Google, IBM and Microsoft) announced the launch of the non-profit Partnership on AI. Since then, other companies and industry leaders have followed suit, coming together to highlight the need for a governing body on AI and ethics.
Oren Etzioni, CEO of the Allen Institute for AI, has put forward that we should adopt an approach to AI similar to Isaac Asimov’s three laws of robotics; if you’ve not read Asimov’s work already, I highly recommend it! Whilst these laws are ambiguous at best for an artificially intelligent being to interpret in a world so much further advanced than 1942, when they were written, we could use them as a foundation to shape three laws of AI that adhere much more specifically to modern law. Rather than the policing of artificial ‘general’ intelligence, or some Skynet scenario featuring conscious, superintelligent beings, what needs regulating is the way in which AI technology is used.
And while there are so many companies out there working on applying AI for good, what happens if things go wrong? There is real demand for a governing body or ethics board to oversee AI practice and development as the field advances at its current pace.
One example is a rather dystopian study whose authors recently claimed to have developed software that can distinguish a person’s sexual orientation from their face alone, using algorithms trained on data from dating websites. The study received a huge backlash, even though the researchers clearly did not approach it with poor intentions. In the wrong hands, this sort of facial recognition system is a breach of personal privacy and could be used to target vulnerable individuals.
Facial recognition is going even further in China, where researchers have posited that they can tell from a person’s face whether they are likely to commit a crime. This use of AI technology brings the discredited physiognomy practices of the 19th century (and earlier) into the modern world, not to mention exacerbating racial bias and stereotyping, with little to no accountability.
Whilst many experts say it is premature to put AI regulations in place, since we are still not entirely sure of the exact impacts and implications AI will bring to society, it’s important to remember that governments and administrations notoriously move at a glacial pace compared to technological progress. It is better to be premature and lay the groundwork for later policies to take shape than to be late to the party and risk unmoderated AI.