Is it smart AI or dumb AI that should concern us?

Posted 1 Mar 2018

Back in 1993, George Luger and William Stubblefield defined Artificial Intelligence (AI) as “the branch of Computer Science that is concerned with the automation of intelligent behaviour”. Nowadays, self-driving cars, automated medical diagnosis and surgery procedures, electronic trading and robot control are just a few examples of applications that use AI to improve transportation, healthcare, finance, industry and, more broadly, our lives. Alexa, Siri and Google Assistant are further examples of AI assisting us in our daily lives. But to what extent do we, as end users, understand AI? Do we understand the rationale behind every decision or suggestion AI makes? And how can we trust an algorithm and, more importantly, the dataset used to train it?

These kinds of questions have motivated very influential figures such as Stephen Hawking, Elon Musk and Bill Gates to voice their concerns about AI becoming a potential threat to humanity. Stephen Hawking called AI the greatest, but possibly also the last, breakthrough in human history[1]. Elon Musk, CEO of Tesla Motors and SpaceX, was quoted as saying “AI is the greatest risk we face as a civilization”[2] and subsequently engaged in a debate with Mark Zuckerberg, CEO of Facebook, over their disagreement on the subject[3]. Bill Gates agreed with Elon Musk and expressed his surprise that more people are not concerned[4]. Overall, these concerns revolve around the idea that AI could become dangerous if it evolves past the point of human comprehension, and that unregulated AI could be used to start conflicts by manipulating information.

Always look on the bright side

At Digital Catapult we like to focus on the positives of technology. We always consider the potential implications that biased development and improper use of a particular technology can have; however, we believe that AI, like every other technology, has to be inclusive, transparent and trustworthy to really benefit our society. That is why, on the 19th of February, we attended the workshop “Building Trust in AI – Designing for Consent”, hosted by Loughborough University in London. Together with the University of Hertfordshire, University of York, Newcastle University and Open Lab, we explored the latest research on eXplainable AI (XAI).

We started by analysing today’s machine learning techniques and how their different natures and inner workings affect both prediction accuracy and explainability (i.e., how easy it is to identify which specific components of the input led to a particular decision).

Figure 1 provides a glimpse of this reasoning.

Figure 1: Prediction accuracy vs explainability of today’s machine learning techniques (Source: DARPA[5])

When it comes to AI models, it’s all there in black and white

During his presentation, Dr. Freddy Lecue, principal scientist at Accenture Technology Labs, explained that when choosing a machine learning model we usually think of it in one of two ways: accurate but black box, or white box but weak. The best classification accuracy is typically achieved by black box models. Take, for instance, neural network-based techniques such as Deep Learning. These techniques seek to draw a link between an input (e.g., a picture from the internet) and an output (e.g., “this is a picture of a cat”). They learn from all the examples you give them, forming their own network of inference that can then be applied to pictures they have never seen before.

Deep neural networks have produced some of the most exciting accomplishments of the last decade, from learning how to translate with better-than-human accuracy to learning how to drive. However, like other black box models such as Gaussian processes or random forests, neural networks are often criticised because their inner workings are very hard to understand. They do not, in general, provide a clear explanation of the reasons behind a particular prediction; they just spit out a probability.

On the other end of the spectrum, white box models such as linear regression or decision trees, whose predictions are easy to understand and communicate, are usually much weaker in their predictive capacity, or can be inflexible and computationally heavy (e.g., graphical models).
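To make the trade-off concrete, here is a minimal sketch of our own (not from the workshop), assuming scikit-learn is installed: a shallow decision tree exposes its learned rules as readable text, while a small neural network offers no comparable rule set and only returns class probabilities.

```python
# White box vs black box on a standard dataset (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# White box: the tree's decision rules are human-readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(data.feature_names)))
print("Tree accuracy:", tree.score(X_test, y_test))

# Black box: the network provides no comparable rule set;
# it just outputs a probability for each class.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000,
                    random_state=0).fit(X_train, y_train)
print("Network accuracy:", net.score(X_test, y_test))
print("Class probabilities for one sample:", net.predict_proba(X_test[:1]))
```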

As humans, we tend to understand stories better than data. Prof. Barry Smyth, Digital Chair of Computer Science at University College Dublin and Director of the Insight Centre for Data Analytics, presented a way to incorporate filtered and specialised explanations into recommendations by relying on reviews and opinions. The idea is to rank recommendations based on the strength of their corresponding explanations: a recommendation is preferable if it can be explained in a compelling way to the user. As a practical implementation of the work, Prof. Smyth tested his approach against the TripAdvisor ranking system. By using user opinions and reviews as a rich source of knowledge, his approach generated novel, more personalised explanations to drive a recommendation ranking, which outperformed TripAdvisor’s ranking mechanism in terms of compellingness[6].
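As a rough, hypothetical illustration of the idea (a simplified sketch, not the actual algorithm from the paper), the snippet below scores each hotel by the strength of an opinion-based explanation, counting review-mined pros the user cares about and subtracting matching cons, and orders the list by that score rather than by the raw rating. All names and data are invented.

```python
# Explanation-driven ranking, heavily simplified and with made-up data.
from dataclasses import dataclass

@dataclass
class Hotel:
    name: str
    avg_rating: float
    pros: set   # features praised in the hotel's reviews
    cons: set   # features criticised in the hotel's reviews

# Features this particular user cares about (hypothetical).
user_cares_about = {"location", "cleanliness", "wifi", "breakfast"}

hotels = [
    Hotel("Alpha", 4.6, {"location", "wifi"}, {"breakfast"}),
    Hotel("Beta",  4.4, {"location", "cleanliness", "breakfast"}, set()),
    Hotel("Gamma", 4.8, {"wifi"}, {"cleanliness", "location"}),
]

def explanation_strength(h: Hotel) -> int:
    # A compelling explanation cites pros the user cares about
    # and few matching cons.
    return len(h.pros & user_cares_about) - len(h.cons & user_cares_about)

# Rank by explanation strength rather than by average rating.
for h in sorted(hotels, key=explanation_strength, reverse=True):
    print(h.name, "strength:", explanation_strength(h), "rating:", h.avg_rating)
```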

Will new regulations address AI?

Explaining AI decisions is also important from a legal perspective. The incoming EU General Data Protection Regulation (GDPR) addresses AI in several ways. Recital 71 specifies the right to obtain an explanation of a decision reached after automated processing, including profiling. Moreover, Article 22 establishes that whenever such a decision produces legal effects concerning, or significantly affecting, a particular data subject, the data controller is required to ask for explicit consent and to inform the data subject about exactly what is happening with their data.

GDPR also addresses AI from a fairness perspective. It states that the data controller should use appropriate mathematical or statistical procedures for the profiling, and implement appropriate technical and organisational measures that prevent discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion, religion or beliefs, trade union membership, genetic or health status or sexual orientation, or that result in measures having such an effect[7].

As a result, machine learning systems should not illegally discriminate; rather, there should be accountability and transparency for the actions they take, and data subjects should have some control over how they are perceived. However, it is well known that machine learning systems are open to potentially undesirable, and illegal, bias. A skewed representation of the world in training data often results in machines learning unwanted patterns. For example, the voice recognition software embedded in Siri or Alexa struggles to understand dialects (will Siri ever learn Scottish?[8]). Even worse, crime-prediction tools targeting particular neighbourhoods might reinforce discriminatory policing[9].

Michael Veale, doctoral researcher at UCL, presented some of the literature on discrimination awareness and its removal in machine learning systems. He discussed why fairness is hard to formalise, the trade-offs involved, and innovative ways to implement constraints on learning and on dataset mappings, including demographic parity, equality of opportunity and counterfactual fairness. Best practice points to massaging the training set so that any classifier trained on it will be “fair”; introducing a “fairness constraint” into the classifier, so that any dataset fed into it will give a “fair” result; and, finally, auditing the system after it is built.
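As a concrete example of one of these notions (our illustration, not a method from the talk), the sketch below audits a classifier's predictions for demographic parity: the rate of positive predictions should be roughly equal across the groups defined by a protected attribute. The data here is invented.

```python
# Demographic parity audit on toy data (illustrative only).
import numpy as np

def demographic_parity_gap(y_pred, protected):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    rate_group_0 = y_pred[protected == 0].mean()
    rate_group_1 = y_pred[protected == 1].mean()
    return abs(rate_group_0 - rate_group_1)

# Hypothetical binary predictions and a binary protected attribute.
y_pred    = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
protected = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_gap(y_pred, protected)
print(f"Demographic parity gap: {gap:.2f}")  # audit step: flag a large gap
```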

Ultimately, it is all about trust

It’s not smart AI but dumb AI that should concern us. It shouldn’t only be about technology; above all else it should be about the people who have to trust machines, if we expect a fruitful coexistence and the kind of future we have read about in Asimov’s books. And this is what GDPR is asking for: building trust through transparency, and making explanations of AI available and accessible to everyday users. This will require us to consider what can be explained and how it should be explained, and that needs to come about by asking and testing assumptions with the real end users of the technology. At Digital Catapult, we have already done something similar through our Personal Data Receipts, which humanise privacy policies, and we would like to explore this further in the new context of XAI.

If you are a business interested in improving or testing your AI solution, have a look at our Machine Intelligence Garage programme. It is designed to help organisations access the computation power and expertise they need to develop and build Machine Learning and Artificial Intelligence solutions. Within this context, we can also learn about the challenges of XAI and recommendations for making it work better for organisations looking to implement it.

References

  1. “Stephen Hawking: ‘Transcendence looks at the … – The Independent.” 1 May. 2014, http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html.
  2. “Elon Musk: Tesla, SpaceX CEO on Artificial Intelligence Risk | Fortune.” 15 Jul. 2017, http://fortune.com/2017/07/15/elon-musk-artificial-intelligence-2/.
  3. “Elon Musk Just Dissed Mark Zuckerberg’s Understanding of … – Fortune.” 25 Jul. 2017, http://fortune.com/2017/07/25/elon-musk-just-dissed-mark-zuckerbergs-understanding-of-artificial-intelligence/.
  4. “Bill Gates joins Elon Musk and Stephen Hawking in saying artificial ….” 29 Jan. 2015, https://qz.com/335768/bill-gates-joins-elon-musk-and-stephen-hawking-in-saying-artificial-intelligence-is-scary/.
  5. “Explainable Artificial Intelligence (XAI).” https://www.darpa.mil/attachments/XAIIndustryDay_Final.pptx.
  6. “User-Based Opinion-based Recommendation – IJCAI.” http://static.ijcai.org/proceedings-2017/0674.pdf.
  7. “Recital 71 EU General Data Protection Regulation (EU-GDPR).” http://www.privacy-regulation.eu/en/recital-71-GDPR.htm.
  8. “Will Siri Ever Learn Scottish? | Gizmodo UK.” 5 Dec. 2016, http://www.gizmodo.co.uk/2016/12/will-siri-ever-learn-scottish/.
  9. “Crime-prediction tool may be reinforcing discriminatory policing ….” 11 Oct. 2016, http://uk.businessinsider.com/predictive-policing-discriminatory-police-crime-2016-10.