The handover problem
“In the moments before the collision, which occurred at 9:27 a.m. on Friday, March 23rd, Autopilot was engaged with the adaptive cruise control follow-distance set to minimum. The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision. The driver had about five seconds and 150 meters [sic] of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.”
This was the statement put out by Tesla after one of a handful of high-profile crashes involving autonomous and semi-autonomous vehicles this year. The driver died, becoming the fourth known fatality in a driverless car crash (Wikipedia has a gruesome list here). The crash highlights the ‘handover problem’ that semi-autonomous car manufacturers, insurers, and others are grappling with. For the foreseeable future there will be few fully autonomous vehicles on our roads (‘Level 5’ automation, according to the Society of Automotive Engineers’ classification). Instead there will be semi-autonomous vehicles that hand control over to a human driver when they meet a situation they don’t know how to handle. This poses obvious problems if the human is distracted by other tasks. There is also the risk that, with less driving practice, humans will gradually become deskilled and less able to deal with what is in front of them, particularly since the situations handed over are precisely the complicated ones the machine cannot handle.
The ‘handover problem’ is not new. When Air France Flight 447 crashed in the Atlantic Ocean on June 1, 2009, a key factor in the disaster was the human pilots’ failure to take over safely when the autopilot disconnected, as it was programmed to do, after the plane’s pitot tubes (external airspeed probes) iced over and the system could no longer tell how fast the plane was going.
And it’s not just in transport that these ‘handover problems’ arise. Many of the doctors, patients, and ethicists Future Advocacy interviewed for our recently published report with the Wellcome Trust on the Ethical, Social, and Political Challenges of AI in Health raised similar questions about how autonomously operating algorithms hand decision-making control back to doctors and nurses. If autonomous algorithms only hand over to human operators in complex situations that they are not designed to handle, how will human practitioners maintain their skills sufficiently to address these situations? And should this transition from algorithmic control to human control be clearly flagged to patients?
Understanding edges
To some extent these are all technical questions with technical solutions. They are about improving user experience and user interfaces. Perhaps these problems will simply go away if Elon Musk’s Neuralink venture, seeking to merge machine and human intelligence, is successful. But there are also important philosophical and sociological issues at play here, where technologists could benefit greatly from engaging with thinkers in other fields as they seek to find good answers. These questions about what happens at the edges between humans and machines are also questions about the kind of world we want to live in.
Social anthropologists often focus on edges. A lot of important human action takes place at the real boundaries between communities, cultures, and individuals, and a lot more is devoted to building up and maintaining the symbolic boundaries that separate people into groups and generate feelings of membership, of insiders and outsiders. The legendary sociologist Émile Durkheim saw the symbolic boundary between the sacred and the profane as the most important of all social facts, the one from which lesser symbolic boundaries were derived, and he believed that rituals were how groups maintained these boundaries.
Children’s play and much of our humour can be seen as testing and probing where the boundaries lie. Our calendars celebrate edge points, where one season hands over to the next or a key religious figure moves from one state to another. And in our own lives we celebrate (and mourn) at the edges between states: ‘liminal’ moments when we turn our world upside down to mark a key transition with a hen party or a stag do. Grayson Perry’s beautiful recent TV series about rites of passage highlights the importance of rituals and symbolic boundaries in our lives.
Gaining an edge
The best ideas and the most dynamic thinking also happen at the edges, where different cultures, beliefs, professions, and ideas collide. Teams of people with different backgrounds tend to be more creative and effective. The same may apply to humans and machines. IBM’s Deep Blue beat Kasparov at chess in 1997, and since then the best machine has always beaten the best human. But for many years afterwards, teams combining machines and humans (known as ‘centaurs’) were stronger than machine-only teams. Human creativity and strategy partnered with machine brute computing force can be a powerful combination.
Human-machine edge issues will become increasingly important across a range of areas (note the recent debate about whether children should say ‘please’ to Alexa). Technologists, sociologists, philosophers, designers, psychologists, anthropologists, and people from all walks of life need to talk together about these issues and what they mean for every aspect of our society. This space is already being explored by great organisations, including the Leverhulme Centre for the Future of Intelligence (which champions the multi-disciplinary approaches we need), and by pioneering individuals like Margaret Boden (Britain’s ‘national conscience’ when it comes to all things Artificial Intelligence). Designing the edges between humans and machines is too important a task to be left to the big tech companies alone.
Olly Buston