Robotics, AI And The Power Of The Public: A Review Of Evidence To The UK Science And Technology Select Committee Inquiry

The UK Science and Technology Select Committee is currently undertaking an inquiry into robotics and artificial intelligence, aiming to publish a report on its findings in September. In the meantime, Cath Elliston looks at the evidence so far…


The range of individuals and organisations that have submitted evidence for this inquiry – from NGOs, to government departments, to academics, to industry – is testament to the wide-ranging impact that robotics and AI are set to have on all our lives. But despite the diversity of opinion in these submissions, consensus emerges on a crucial point: the public has a vital role to play in ensuring that the opportunities presented by these technologies can be fully explored and the risks can be mitigated.


Public Trust Is Essential To Exploit Opportunities Across Different Sectors

Robotics and Autonomous Systems (RAS) present numerous opportunities for addressing social problems. As Geoff Pegman, Managing Director of R U Robots Ltd, outlines in his submission, AI could be used to solve issues relating to ‘healthcare, care of the elderly, education and an aging workforce’. However, a 2015 Sciencewise study showed that although the British public tended to be ‘broadly optimistic’ towards RAS, innovation in some of these areas proved controversial: 61% of respondents felt that RAS should be banned from the care of children, the elderly, and disabled people, and 30% would ban them from education and healthcare. Such opposition would be an obstacle to innovation in these sectors, and beyond them. Professor David Lane observed in his submission that ‘building public consensus and support’ will be ‘key’ to any adoption, noting that ‘currently the public perceives threats to employment, loss of privacy, loss of control’.

Public confidence is also essential to stimulate economic growth through RAS. TechUK, which represents more than 900 technology companies (accounting for about half of all tech jobs in the UK), emphasises in its submission that innovation can only be achieved with the right ‘conditions and regulatory environment’, in which ‘the UK population trusts and understands the opportunities afforded by … UK global leadership in robotics and AI.’


How Can We Build Public Trust In Robotics And AI?

So how can we build trust in robotics and AI? At the most recent oral evidence session for the inquiry, this question was probed repeatedly. Professor Philip Nelson, Chair of Research Councils UK, suggested that government could establish a regulatory body to secure public trust, similar to the Civil Aviation Authority. ‘People are generally very comfortable about flying in aircraft, because they know there is a civil aviation authority that regulates the whole business’, he observed. ‘It is a very low-risk activity because, as soon as there is an accident, there is a proper investigation into it.’ Dr Rob Buckingham, Director of RACE, highlighted the driverless car tests taking place around the UK as an opportunity for people across the country to become more informed. Another popular recommendation for gaining the public’s trust was incorporating AI education (‘AIEd’) into the curriculum. Professor Rose Luckin argued in her submission that this has the ‘unique potential’ to mitigate the impact of automation on jobs, ‘enabling people to better meet the needs of the changing workplace’. Building on this suggestion, the thinktank Transpolitica highlighted the need for education at all levels of society, ‘from those in early learning, attending college or university to those performing jobs soon to be automated or retired’.


The Public Should Have Their Say

Another, less expensive, way to gain public trust is of course transparency in decision making, and allowing the public to influence developments. This should happen early on – not just because it will unlock economic growth and opportunities in fields like healthcare, but also because the public should be able to have their say on a technology that is likely to affect them profoundly. For example, in Future Advocacy’s own submission to the select committee, one of our key recommendations was to agree a ‘new deal on data’ between government, companies, and citizens. This could be developed by a commission led by a notable, respected, and objective public figure, as in the case of the Warnock Commission on IVF.

So despite the number and range of submissions to this inquiry, the conversation needs to be wider still. Whether in the short, medium, or long term, these technological advancements will affect all of us, and so we should all be well informed enough to take part in the decision making that will determine the future.

By Cath Elliston