Written Evidence For The Robotics And AI Select Committee Inquiry

Future Advocacy is pleased that the UK Science and Technology Select Committee has launched an inquiry into Artificial Intelligence and Robotics. You can see our full written submission to the inquiry below. We look forward to reading the Committee’s report on its findings in September.

 

1. Introduction

1.1. Future Advocacy is a new advocacy organisation focusing on humanity’s biggest challenges in the 21st Century. Along with consultancy projects for Jamie Oliver’s Food Foundation and the charity Theirworld, we are developing an advocacy strategy to ensure that the risks associated with the development of artificial intelligence (AI) are minimised, while the opportunities are maximised. We hope to achieve smart policy change through effective advocacy. As part of this, we recently organised a workshop to encourage cross-industry collaboration between academics, private businesses, AI developers, and government. One output of this meeting was a mindmap of the actors who might have an impact on the development of AI, which can be found here. Our AI work is currently self-funded.

 

2. Executive Summary

2.1. This inquiry is timely and significant, both because of the unique position of the UK as a hub for artificial intelligence, and because of the credibility the Committee’s report will have internationally.

2.2. Narrow/specialist AI already permeates our lives. It trades shares on the stock market, helps filter spam, and recommends things for us to buy. Over the coming years the number of specific tasks at which machines outperform humans will increase. Machines are currently less good at switching between tasks, but that too will change as they become more flexible and better at learning. At some point, Artificial General Intelligence (AGI) will be achieved: a machine that can perform more or less any task a human can. This point is probably decades away, but probably not centuries away. Depending on the path by which AGI is developed, it may be qualitatively similar to human intelligence or qualitatively very different. The arrival of AGI will not be the end point. Machine intelligence will continue to grow. It may grow very rapidly if it has access to all the knowledge on the Internet, or if additional computing power is easily available and allows it to be parallelised or sped up. If a machine reaches human-level intelligence through a process of self-improvement, then a feedback loop could allow its intelligence to grow rapidly.

2.3. This submission explores the challenges and opportunities associated with AI development in the shorter and long term. We use ‘shorter-term’ to refer to the period before the development of AGI and ‘long-term’ for the period afterwards. This distinction is a somewhat arbitrary simplification, but a useful way of breaking down the subject matter. Of course, this ‘shorter term’ is rather long when compared to political timeframes and decision-making horizons. We see this as a major challenge for the Select Committee: how to ensure political focus now on extraordinary transformations in society that will come to full fruition in the 2020s, let alone on the even greater transformations that will probably occur several decades from now as the world moves beyond AGI. The shorter-term issues we focus on relate to the workforce; autonomous weapons; and privacy, data-mining, cyber-attack, and transparency issues. The long term poses a challenge of imagination as much as anything else. It is very hard to imagine a world in which humans live alongside machine intelligence that is superior to them. The arts and science fiction can perhaps help us grapple with the nature of the existence that humans can look forward to. To the extent that long-term challenges will be amplifications of shorter-term ones, fixing shorter-term problems should help prepare for the long-term ones. However, it is likely that some long-term challenges will be qualitatively different from the shorter-term ones.

2.4. Of course, the development of AI also presents enormous shorter-term and long-term opportunities. Autonomous vehicles could reduce the number of deaths on the road by an estimated factor of 100;[1] universal elite medical advice could be accessible through an intelligent personal assistant; medical devices could be more efficient and safer. New developments are happening all the time, from Google DeepMind’s gaming triumph with AlphaGo to its newly announced research on how AI could improve our health. The launch of OpenAI is another important development.

2.5. To mitigate the risks and maximise the opportunities, we present these four priority recommendations for the UK government:

  • Commission UK-specific research to assess which jobs are most at risk by sector, geography, age group, and gender.  Implement a smart strategy to address future job losses through retraining, job creation, and psychological well-being support
  • Support a ban on Lethal Autonomous Weapons Systems (LAWS) in the context of the 2016 Convention on Conventional Weapons
  • Agree a ‘new deal on data’ between government, companies, and citizens. This could be developed by a commission led by a notable, respected, and objective public figure (similar to the Warnock Commission on IVF)
  • Support the development of an international code of conduct for computer scientists (the IEEE[2] is working on this)

 

3. Shorter-Term Challenges And Opportunities: AI And The Labour Market

3.1. There is some disagreement about the precise effects of artificial intelligence on the labour market. Some claim that it will create more jobs, but an Oxford Martin School study predicts that 35% of existing UK jobs are at risk of automation in the next 20 years.[3] At some point the UK’s 1.2 million call centre workers may quite swiftly lose their jobs to machines. The advent of autonomous cars – which are being tested now in London – could similarly bring redundancy to an entire industry of professional drivers. Inequality, concentration of wealth, and concomitant social and psychological problems could ensue.

3.2. In order to manage the risks associated with increased automation, we make the following recommendations. The government should:

  • Commission UK-specific research to assess which jobs are most at risk by sector, geography, age group, and gender.  Implement a smart strategy to address future job losses through retraining, job creation, and psychological well-being support
  • Incentivise workers to join high-demand professions such as care work by re-valuing the work and giving it higher social status
  • Encourage entrepreneurialism, making it as easy as possible for start-ups (the biggest job creators) to thrive, reducing their tax burdens, and putting safeguards in place so that founders can bounce back when one idea fails
  • Adapt the education system to focus on things that machines will be less good at for longer (creativity, ideation, judgement, inter-personal skills); prepare students for working with and alongside artificial intelligence
  • Experiment with incentives for longer maternity/paternity leave, shorter working hours/weeks
  • Consider policies to tackle income and wealth distribution (e.g. a Basic Income Guarantee; broader distribution of share ownership, perhaps through a national mutual fund)
  • Shift the tax burden away from income tax (which makes human workers more expensive than machines)

 

4. Shorter-Term Challenges And Opportunities: Lethal Autonomous Weapons Systems (LAWS)

4.1. Some nations with high-tech militaries, including China, Israel, South Korea, Russia, the US, and the UK, are moving towards weapons that deploy AI to allow combat autonomy.[4] Commonly referred to as Lethal Autonomous Weapons Systems (LAWS), these weapons would have the power to kill without any human intervention in the identification and prosecution of a target. This raises technical, legal, and ethical issues: LAWS could make the decision to go to war easier and create problems of accountability when a robot kills without a human in the decision-making process. Furthermore, if one nation deploys LAWS, other nations may feel that they need to defend themselves in kind, resulting in a LAWS arms race.

4.2. It is necessary to act now, before national investment makes it difficult to change course. 2016 is a critical year for action on this issue, with the five-yearly review conference of the Convention on Conventional Weapons taking place on 12-16 December. NGOs, including the Campaign to Stop Killer Robots, are trying to secure action on LAWS in this context.

4.3. The following recommendations would ensure that the UK government plays its part in mitigating the risks posed by LAWS:

  • Support a ban on Lethal Autonomous Weapons Systems (LAWS)
  • Enforce the ban against UK-based companies involved in the development of LAWS
  • Encourage and lead co-operation between member states so international legislation is inclusive and impactful; collaborate with the High Level Panel on Lethal Autonomous Robotics

 

5. Shorter-Term Challenges And Opportunities: Privacy, Data-Mining, Cyber-Attacks, And Transparency

5.1. The integration of information and the extraordinary data-mining capacity of AI provide great opportunities. In the healthcare industry, for example, data-mining could identify inefficiencies and best practices, cutting costs and improving care. At the same time, the prospect of data-mining by ever-more powerful AI raises serious concerns about privacy and surveillance. Given the profound issues at stake, there has been a striking paucity of public discussion and political debate in this area. Urgent steps need to be taken to determine the extent to which privacy can and should be safeguarded as AI gets better at interpreting the data it is given access to.

5.2. Cyber-attacks pose further threats to privacy, at the very least. Cyber-attacks are increasingly automated and carried out with the help of AI. ‘MonsterMind’, the US National Security Agency’s automated response to foreign cyber-attacks brought to light by Edward Snowden, can apparently launch its own autonomous cyber-attacks as well as mount defences.[5] Such cyber-attacks could have real-world consequences: shutting down electricity in part of a city would threaten the work of hospitals, for example.

5.3. Another key issue, commonly referred to as the ‘black box’ problem, arises when humans are unable to determine how a neural network reached a particular decision. Deep learning networks can be deeply inscrutable: the process behind the AI’s output may be unclear and unavailable, even to its creators.

5.4. In order to address the issues around privacy, data-mining, cyber-attacks, and transparency the government should:

  • Agree a ‘new deal on data’ between government, companies, and citizens. This could be developed by a commission led by a notable, respected, and objective public figure (similar to the Warnock Commission on IVF)
  • Revise the Draft Communications Data Bill in line with the commission’s findings
  • Adopt clear and consistent regulations for companies that produce and export surveillance technology
  • Ensure that surveillance tech companies disclose client lists
  • Increase investment in encryption technologies.  Ensure businesses do more to prevent data hacks (e.g. the TalkTalk breach)
  • Legislate to require an explanation of the process behind any system that makes a decision which has a substantive impact on human life

 

6. Long-Term Challenges And Opportunities

6.1.  Longer-term developments are harder to predict. AGI (the equivalent of human intelligence across the board rather than in one specialism) and superintelligence (intelligence greatly superior to humans across the board) could deliver unprecedented scientific and economic opportunities.

6.2.  AGI and superintelligence could also pose severe risks. Powerful AI controlled by a single person or organisation could potentially be used for self-interested purposes that do not contribute to the greater good.  On the other hand, without adequate human control, an advanced AI may generate sub-goals such as self-preservation, cognitive enhancement, and resource acquisition, so it can best achieve the primary goal it has been set. The pursuit of these sub-goals could potentially have very serious unintended consequences as described in detail by Bostrom.[6]

6.3. Short-term exigencies make it hard for governments to implement policies whose impact may not be felt for 50 or more years. However, the drastic impact that the arrival of superintelligence would have on society demands that we prepare for it. The government should have the freedom to invest in the long term without needing immediate returns to show to voters. Decisive steps are already being taken in other countries that the UK government could learn from. The South Korean Government published its Robots Ethics Charter in 2012, which sets ethical guidelines concerning robot functions in anticipation of a time when service robots are part of daily life.[7] The Japanese Government, in its Robot Strategy, has emphasised the importance of establishing internationally compatible regulations and standardisation.[8] The UK government should:

  • Support the development of an international code of conduct for computer scientists (the IEEE is working on this)
  • Engage and keep up to date with the international regulations around AI put forward by the UN and UNICRI
  • Support initiatives to keep members of government and the civil service up to speed on the latest scientific developments – for example, Cambridge’s CSaP model[9]
  • Incorporate teaching about the coming opportunities and challenges of AI into school and university curricula
  • Ensure that the teaching of ethics is incorporated into computer science courses
  • Build on the results of the Sciencewise study ‘Robotics and Autonomous Systems: What the public thinks’ to help foster a public debate around AI that avoids a simplistic dystopia/utopia binary

 

7. Appendix

 

Upcoming Key Dates

31 May (Washington, DC):  IEEE Values By Design Summit (concludes 1 June)

9-15 Jul (New York): 25th International Joint Conference on Artificial Intelligence (IJCAI-16)

31 Aug – 2 Sep (Geneva): Meeting of preparatory committee for Fifth CCW Review Conference

12-16 Dec (Geneva): Convention on Conventional Weapons Fifth Review Conference

 

[1] ‘The State of AI’ at the World Economic Forum, January 2016 https://www.weforum.org/events/world-economic-forum-annual-meeting-2016/sessions/the-state-of-artificial-intelligence/

[2] Institute of Electrical and Electronics Engineers

[3] ‘Technology at Work v2.0’, Citi and Oxford Martin School, January 2016 http://www.oxfordmartin.ox.ac.uk/downloads/reports/Citi_GPS_Technology_Work_2.pdf

[4] http://www.stopkillerrobots.org/the-problem/

[5] http://duckofminerva.com/2014/08/monstermind-or-the-doomsday-machine-autonomous-cyberwarfare.html

[6] ‘Superintelligence: Paths, Dangers, Strategies’, Nick Bostrom, Oxford University Press, 2014

[7] https://akikok012um1.wordpress.com/south-korean-robot-ethics-charter-2012/

[8] ‘Japan’s Robot Strategy: Vision, Strategy, Action Plan’, 2015

[9] http://www.csap.cam.ac.uk/programmes/policy-fellowships/