So, it is important to tell an algorithm as much as possible when developing it: the more you tell, i.e. train, the algorithm, the more it can take into account. In addition, when designing the algorithm, you must be crystal clear about what you want it to do and not to do. Algorithms focus on the data they have access to, and that data often has a short-term focus. As a result, algorithms tend to focus on the short term. Humans, most of them anyway, understand the importance of a long-term approach; algorithms do not, unless they are told to focus on the long term.
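One way of 'telling' an algorithm to value the long term is through a discount factor, as used in reinforcement learning. The sketch below is a hypothetical illustration, not taken from the text: it only shows how a single parameter (gamma) shifts the weight an agent gives to rewards that arrive later.

```python
# Hypothetical illustration: a discount factor (gamma) determines how much
# an agent values future rewards relative to immediate ones.

def discounted_return(rewards, gamma):
    """Sum of rewards, each future reward weighted by gamma**t."""
    return sum(r * gamma ** t for t, r in enumerate(rewards))

# Option A pays off immediately; option B pays off later, but more.
short_term_option = [10, 0, 0, 0, 0]
long_term_option = [0, 0, 0, 0, 25]

for gamma in (0.5, 0.99):
    a = discounted_return(short_term_option, gamma)
    b = discounted_return(long_term_option, gamma)
    preferred = "long-term" if b > a else "short-term"
    print(f"gamma={gamma}: short={a:.2f}, long={b:.2f} -> prefers {preferred}")
```

With a low gamma the agent prefers the immediate payoff; with a gamma close to one, the larger but delayed payoff wins, which is the kind of long-term focus the algorithm otherwise lacks.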
Apart from setting the right goals, the most critical aspect of developing the right AI is to use unbiased data to train the AI agent and to minimise the influence of biased developers. An approach in which the AI learns by playing against itself, given only the rules of the game, can help in that respect.
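The sketch below is a minimal, hypothetical illustration of that self-play idea, not taken from the text: a single agent learns a simple game of Nim purely by playing against itself, given only the rules (the legal moves and the winning condition) and the outcome of each game. It uses plain tabular Monte Carlo value estimates rather than deep learning.

```python
# Hypothetical self-play sketch: the agent is given only the rules of Nim
# (take 1-3 stones, taking the last stone wins) and learns from outcomes.
import random
from collections import defaultdict

PILE, ACTIONS = 10, (1, 2, 3)
Q = defaultdict(float)   # value estimate for each (stones_left, action)
N = defaultdict(int)     # visit counts for incremental averaging
EPSILON = 0.1            # exploration rate during self-play

def legal(stones):
    return [a for a in ACTIONS if a <= stones]

def choose(stones):
    moves = legal(stones)
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda a: Q[(stones, a)])

for episode in range(50000):
    stones, player, history = PILE, 0, {0: [], 1: []}
    while stones > 0:
        action = choose(stones)
        history[player].append((stones, action))
        stones -= action
        winner = player if stones == 0 else None
        player = 1 - player
    # credit every move of the winner with +1 and of the loser with -1
    for p in (0, 1):
        outcome = 1.0 if p == winner else -1.0
        for state_action in history[p]:
            N[state_action] += 1
            Q[state_action] += (outcome - Q[state_action]) / N[state_action]

# After enough self-play the agent tends to discover the winning strategy:
# leave a multiple of four stones for the opponent.
print({s: max(legal(s), key=lambda a: Q[(s, a)]) for s in range(1, PILE + 1)})
```

Because both sides of every game are played by the same learning agent, no human-labelled examples are needed; the only human input is the rule set itself.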
This enables the AI to learn from its environment and improve over time through deep learning and machine learning. AI is not limited by information overload, complex and dynamic situations, an incomplete understanding of the environment (due to unknown unknowns), or overconfidence in its own knowledge or influence. It can take into account all available data, information and knowledge, and it is not influenced by emotions.
In addition, how much are these predictions worth if we do not understand the reasoning behind them? Automated decision-making is great until it has a negative outcome for you or your organisation and you cannot change that decision or, at least, understand the rationale behind it.
What happens inside an algorithm is sometimes known only to the organisation that uses it, and quite often it goes beyond that organisation's understanding as well. It is therefore important to build explanatory capabilities into the algorithm, so that we can understand why a certain decision was made.
The term Explainable AI (XAI) was first coined in 2004 as a way to offer users of AI an easily understood chain of reasoning for the decisions made by the AI, in that case especially for simulation games [2]. XAI refers to explanatory capabilities within an algorithm that help us understand why certain decisions were made. As machines are given more responsibility, they should also be held accountable for their actions. XAI should present the user with an easy-to-understand chain of reasoning for each decision. When AI is capable of asking itself the right questions at the right moment to explain a certain action or situation, essentially debugging its own code, it can create trust and improve the overall system.
Explainable AI should be an important aspect of any algorithm. When the algorithm can explain why certain decisions have been, or will be, made and what the strengths and weaknesses of those decisions are, the algorithm becomes accountable for its actions, just as humans are. It can then be altered and improved if it becomes (too) biased or too literal, resulting in better AI for everyone.
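As a hypothetical illustration of what such an explanatory capability could look like in practice, the sketch below trains a small decision tree (assuming scikit-learn is available) and, for a single prediction, prints the chain of rules that led to the decision. The loan-approval data, feature names and helper function are invented for the example; they are not part of any particular XAI system.

```python
# Hypothetical sketch of an explainable prediction: a small decision tree
# whose decision path is turned into a human-readable chain of reasoning.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

feature_names = ["income", "debt_ratio", "years_employed"]
X = np.array([
    [55000, 0.20, 6], [32000, 0.55, 1], [76000, 0.10, 9],
    [28000, 0.60, 2], [61000, 0.35, 4], [39000, 0.50, 3],
])
y = np.array([1, 0, 1, 0, 1, 0])        # 1 = approve, 0 = reject (invented data)

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def explain(sample):
    """Return the decision and the rules on the path the tree followed."""
    tree = model.tree_
    node_ids = model.decision_path(sample.reshape(1, -1)).indices
    rules = []
    for node in node_ids:
        if tree.children_left[node] == tree.children_right[node]:
            continue                     # leaf node, no rule to report
        name = feature_names[tree.feature[node]]
        threshold = tree.threshold[node]
        op = "<=" if sample[tree.feature[node]] <= threshold else ">"
        rules.append(f"{name} {op} {threshold:.2f}")
    decision = "approve" if model.predict(sample.reshape(1, -1))[0] == 1 else "reject"
    return decision, rules

decision, rules = explain(np.array([42000, 0.45, 2]))
print(f"Decision: {decision}, because " + " and ".join(rules))
```

A simple, inherently interpretable model like this is only one route to explainability, but it shows the principle: the decision comes with the reasoning attached, so it can be questioned, audited and corrected.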
Responsible AI can be achieved by using unbiased data, minimising the influence of biased developers, taking a mixed data approach that includes the context, and developing AI that can explain itself. The final step in developing responsible AI is incorporating ethics into AI.
As early as 1677, Benedictus de Spinoza, one of the great rationalists of seventeenth-century philosophy, defined moral agency as 'emotionally motivated rational action to preserve one's own physical and mental existence within a community of other rational actors'. But how would that apply to artificial agents, and how would AI ethics change if one sees AI as moral beings that are sentient and sapient? When we think about applying ethics in an artificial context, we have to be careful 'not to mistake mid-level ethical principles for foundational normative truths' [6].
High-quality, unbiased data, combined with the right processes to ensure ethical behaviour within a digital environment, could contribute significantly to AI that behaves ethically. Of course, from a technical standpoint, ethics involves more than using high-quality, unbiased data and having the right governance processes in place. It also includes instilling AI with the right ethical values, values that are flexible enough to change over time.
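To make the data-quality point slightly more concrete, the sketch below shows one simple, hypothetical check that such a governance process could include: comparing outcome rates across groups in a training set before the data is used, a demographic-parity style check. The records and the review threshold are invented for the example.

```python
# Hypothetical governance check: compare outcome rates per group in the
# training data and flag large gaps for human review before training.
records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 1},
]

def approval_rates(rows):
    totals, approved = {}, {}
    for row in rows:
        g = row["group"]
        totals[g] = totals.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + row["approved"]
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(records)
gap = max(rates.values()) - min(rates.values())
print(f"Approval rates per group: {rates}, gap: {gap:.2f}")
if gap > 0.2:  # arbitrary review threshold for this sketch
    print("Warning: large gap between groups - review the data before training.")
```

A check like this does not make the data unbiased by itself, but it turns 'use unbiased data' from an aspiration into a step that can be enforced and audited.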
To achieve this, we need to consider morals and values that have not yet been developed and remove those that might be wrong. To understand how difficult this is, let's see how Nick Bostrom – Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute – and Eliezer Yudkowsky – an artificial intelligence theorist concerned with self-improving AIs – describe achieving ethical AI [7]:
As may be evident from this notion of coherent extrapolated volition (CEV), achieving ethical AI is a highly challenging task that requires special attention if we wish to build Responsible AI. The stakeholders involved in developing advanced AI should play a key role in achieving AI ethics.
Machine learning carries substantial risks, and although extensive testing and governance processes are required, not all organisations will put them in place, for various reasons. The organisations that implement the right stakeholder management – to determine whether an AI is on track and to pull or tighten the parameters around it if it is not – will stand the best chance of benefiting from AI. As a society, however, we should ensure that all organisations – and governments – adhere to using unbiased data, to minimising the influence of biased developers, to taking a mixed data approach that includes the context, to developing AI that can explain itself and to instilling ethics into AI.
In the end, AI can bring a lot of advantages to organisations, but it requires the right regulation and control methods to prevent bad actors from creating bad AI and to prevent well-intentioned AI from going rogue. A daunting task, but one we cannot ignore.
[1] Yudkowsky, E., Artificial intelligence as a positive and negative factor in global risk. Global catastrophic risks, 2008. 1: p. 303.
[2] Van Lent, M., W. Fisher, and M. Mancuso. An explainable artificial intelligence system for small-unit tactical behavior. in The 19th National Conference on Artificial Intelligence. 2004. San Jose: AAAI.
[3] Bostrom, N. and E. Yudkowsky, The ethics of artificial intelligence. The Cambridge Handbook of Artificial Intelligence, 2014: p. 316-334.
[4] Hurtado, M., The Ethics of Super Intelligence. International Journal of Swarm Intelligence and Evolutionary Computation, 2016.
[5] Anderson, M. and S.L. Anderson, Machine ethics. 2011: Cambridge University Press.
[6] Bostrom, N. and E. Yudkowsky, The ethics of artificial intelligence. The Cambridge Handbook of Artificial Intelligence, 2014: p. 316-334.
[7] Bostrom, N., Superintelligence: Paths, dangers, strategies. 2014: OUP Oxford.