Introduction: from the ethics to the politics of technology

This comment, the papers that accompany it in this journal’s special collection, and the meeting at which the papers were discussed and honed, were born of a frustration with how the emerging technology of self-driving, ‘autonomous’ vehicles (AVs) was being discussed within the social sciences and humanities (SSH).

The simple, seductive idea of a car driven by a computer has a long history, dating back at least to the 1950s (Norton, 2021). However, the last decade has seen governments and start-ups join a chorus of hype claiming that a truly self-driving car, powered by advances in artificial intelligence, is imminent. The promise is that the ‘autonomous vehicle’ will be able to mimic and then improve upon human driving. The technology would, the story goes, be able to solve traffic safety issues, improve the efficiency of the mobility system, save energy, cut pollution and give drivers back all of the time currently lost to driving. The scale of excitement and investment, taken together with the historical trajectory of previous mobility innovations, suggests that some form of transformation to people’s mobility, their lifestyles, their livelihoods and their surroundings is likely. The need for reflective critique informed by SSH scholarship therefore seems clear.

However, rather than questioning this narrative or exploring its assumptions and contingencies, some SSH researchers have chosen to swallow it whole, conducting studies that project ethical and social implications from what they regard as an inevitable technological revolution. Researchers in and around transport studies have often taken the technology for granted (Fagnant and Kockelman, 2015) and extrapolated scenarios for future mobility (for a review, see Stead and Vaddadi, 2019), or they have asked whether and how the public might be persuaded to accept such radical novelty (Othman, 2021; Jing et al., 2020). Researchers looking at AVs as a test case for artificial intelligence have tended to frame the key questions as ones of ethics (Hansson et al., 2021). Philosophers and experimental psychologists have seized upon the superficial similarities between AV decision-making and the ‘trolley problem’ thought experiment to develop a cottage industry in AV applied ethics. In what its authors call “the largest experiment in moral psychology ever”, more than 2 million people responded to the ‘moral machine’ study of “how people want driverless cars to decide matters of life and death” (Bonnefon, 2021).

This applied ethics work has been much criticised (e.g., Etienne, 2022; Rodríguez-Alcázar et al., 2020; Lundgren, 2021; Roff, 2018; Bogost, 2018), including by us (Stilgoe and Cohen, 2021; Cohen et al., 2020; Mladenović et al., 2019). The framing fails to reflect the reality of the technology, and for policymaking it offers little more than a distraction. The presumption is that the relevant rules are those that are programmed into a vehicle rather than those that regulate the technology from the outside. However, this narrow view of ethics remains convincing and convenient to those involved in developing the technology, who have few incentives to ask deeper questions. At a conference in Silicon Valley in 2018, one senior executive at a tech company with interests in self-driving vehicles told one of us that, when it comes to regulation, “the question is: do we kill the nun or the baby?” Similarly, at the beginning of the discussions within the Horizon 2020 Commission Expert Group that drafted recommendations on road safety, privacy, fairness, explainability and responsibility, a senior official from the EU directorate explained that the focus should not be on discussing whether automation is desirable or feasible, and that increasing public trust and acceptance was the main objective. We think there are more important and more urgent objectives and questions.

Our aim here is to set aside technological determinisms (Wyatt, 2008), and shift attention from ethics to politics. Our starting point is in Science and Technology Studies (STS), with the insight that “artifacts can contain political properties” (Winner, 1980). Langdon Winner identifies two ways of looking at the politics of technologies—first, by seeing how technological systems settle particular issues, granting rights to some and disadvantaging others; second, by analysing the political arrangements technologies require in order to function. His argument is that we often fail to account for these politics until it is too late to do anything about them. We sleepwalk through technological revolutions (Winner, 1980). While technologies have profound political effects, we lack the means to hold them to account through conventional politics. Updating this line of thinking for the digital age, Larry Lessig (2000) concluded that ‘Code is Law’, and it is a form of law that is rarely settled democratically. Some “inherently political technologies” such as nuclear power are authoritarian (Winner, 1980). Vaccines that eradicate serious diseases can be seen as more emancipatory, particularly if intellectual property arrangements allow for distributed manufacture and control. The early Internet, characterised as inherently open and emancipatory by many of its pioneers, has seen a transformation that has resulted in an unprecedented concentration of corporate power.

Politics, according to Harold Lasswell (1936), is about “who gets what, when and how”. For Chantal Mouffe (2005), politics is “the set of practices and institutions through which an order is created, organising human coexistence in the context of conflictuality provided by the political”. When considering technology, working out who wins, who loses, who has power over whom, who decides, and how is hard enough in hindsight. We might be able to trace patterns of power through an existing sociotechnical system like the US road network and see how its emergence benefitted some over others. It is far harder to envisage the social constitution of a technology that is still emerging. There remains an important debate about whether the politics of technology should be seen as intrinsic and essential or as emergent properties of systems that are indeterminate (compare Winner (1980) and Woolgar and Cooper (1999) and, for a case study on geoengineering, compare Szerszynski et al. (2013) and Horton et al. (2018)). With digital technologies (Ruppert et al., 2017), there is no deterministic relationship between a new technology and its ‘data politics’. However, we can anticipate the ways that a technology will play a role in future power struggles.

The lesson is that any analysis of politics must pay attention to both the content and the contexts of technology, and recognise the uncertainties throughout. We should remain open to the possibility that emerging technologies may, through their development or regulation, be repurposed to alternative ends. Crucially, we should not accept innovators’ characterisation of the affordances of a technology (Davis, 2020), nor of the imagined problems to which it is offered as a solution (Morozov, 2013), as either correct or inevitable. Neil Postman’s (1999) instructions for the interrogation of technology remain relevant (“What is the problem to which this technology is the solution?”; “Whose problem is it?”; “What new problems might be created because we have solved this problem?”; “What sort of people and institutions might acquire special economic and political power?”).

The task of anticipating the politics of emerging technologies before they are a meaningful part of most people’s lives is hard, but we should not cede the future to current innovators and risk reinforcing existing socio-political and socio-technical systems. We believe that the task of locating technological politics should be a constructive one, in order to help shape innovation in ways that are more democratically accountable. The dilemma first articulated by David Collingridge (1980)—that the more we discover about the effects of technologies, the harder these technologies become to steer—remains a challenge. We should not seek to predict, but we can anticipate, and our capacity to do so has improved since Collingridge’s time—especially in mobility systems. As Safiya Noble (2018) puts it, with respect to online algorithms, “the more we can make transparent the political dimensions of technology, the more we might be able to intervene.” The task is therefore one of constructive or ‘real-time’ technology assessment (Schot and Rip, 1997; Guston and Sarewitz, 2002), critically engaging with the imagined purposes of a technology and the processes by which it is being developed, tested and governed, as well as seeking to anticipate its consequences.

Where are the politics of AVs?

Most of the papers in this special collection began life at a workshop that took place in London in December 2019, just before we heard about a virus that was soon going to radically disrupt our mobility. Other authors in the growing network of AV SSH researchers added their perspectives later. Peter Norton attended our workshop to present a paper that built on his history of the 20th-century automobile (Norton, 2011). Norton’s paper outgrew our collection to become a book—Autonorama (Norton, 2021). From a range of disciplinary standpoints, including planning, science and technology studies, economics and engineering, the papers take aim at different parts of the emerging system. The implications of AVs will not flow from their supposed ‘autonomy’, but from the relationships that new devices will have with other parts of the world (Tennant and Stilgoe, 2021). Rather than focussing just on the vehicles, which leads to trolley problem-type thinking, we should look at the systems in which vehicles will need to be embedded, or which will be reconfigured around them. Here, we organise our analysis of the politics of AVs in terms of safety, road rules, infrastructure, labour, imagined futures, and transitions.

The politics of safety

Even if AVs reduce the overall number of crashes and casualties on the road, the type and distribution of incidents will change. Road safety is currently riven with injustices (Culver, 2018), some of which are exacerbated by recent innovation. Drivers have benefited from improvements to their vehicles that have added mass, protective equipment and automated safety systems. Pedestrians have seen few improvements in technology and have been victims of the moral hazard created by others’ perceived safety. For example, since 2000, roads in the US have become safer for drivers and more dangerous for pedestrians (Tyndall, 2021).

Sarah Lochlann Jain (2004) has argued that the car has come to be regarded as a neutral instrument rather than a dangerous object, meaning that landscapes get redesigned thoughtlessly around driving. When crashes happen on the road, the patterns of blame often follow patterns of power that privilege what John Urry (2004) has called the ‘system of automobility’. Laws have evolved and been enforced to reflect driving as an unfortunate necessity, often exculpating drivers from the consequences of their own actions. Pedestrians may be blamed for being in a space that is not designated as theirs, and catastrophes may be reported by media and police forces as ‘accidents’. In our collection, Braun and Randell (2020) highlight how road violence has historically been ascribed to driver error and not recognised as an intrinsic property of automobility, creating a situation in which AVs can be presented as a solution to the “driver problem”.

AV crashes, some of which have already proven catastrophic, reveal something of the politics of risk and blame. Elaine Herzberg was hit and killed in 2018 by an Uber test vehicle operating in ‘autonomous’ mode, as a result of choices made by Uber managers and engineers about acceptable safety and the appropriate balance between false positives and false negatives in the car’s algorithms. Questions of ‘how safe is safe enough?’ and ‘how do we know?’ are unavoidably political, demanding value judgements about the appropriateness of experimental technologies in public spaces (Stilgoe, 2021).
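To illustrate why this balance is a value judgement rather than a purely technical parameter, the following toy sketch (in Python) shows how a single detection-confidence threshold trades false positives against false negatives. The function, field names and threshold values are our own invention for illustration; they do not describe Uber’s software or any real AV system.

```python
# Illustrative sketch only: a hypothetical decision rule showing how a
# confidence threshold trades false positives (phantom braking) against
# false negatives (missed pedestrians). Nothing here describes a real system.

def should_brake(detections, confidence_threshold=0.8):
    """Return True if any detected person warrants emergency braking.

    A higher threshold suppresses spurious detections (fewer false positives,
    smoother rides) but also discards genuine, low-confidence detections
    (more false negatives). Where to set it is a judgement about acceptable risk.
    """
    for obj in detections:
        if obj["class"] in {"pedestrian", "cyclist"} and obj["confidence"] >= confidence_threshold:
            return True
    return False

# A low-confidence pedestrian detection is ignored at the stricter threshold.
detections = [{"class": "pedestrian", "confidence": 0.55}]
print(should_brake(detections))                             # False: treated as noise
print(should_brake(detections, confidence_threshold=0.5))   # True: vehicle brakes
```

Whoever picks the threshold is, in effect, deciding how the costs of error are distributed between passengers and pedestrians, which is precisely why such choices are political.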

AVs will also introduce other risks onto the road. Human drivers are often bemoaned by AI researchers for the autonomy of their learning: each individual must learn to drive anew and will learn little from the mistakes of others. AV proponents make much of their systems’ ability to learn in connected fleets and to communicate with each other and with infrastructure (Stilgoe, 2018). However, this connectivity brings new systemic cybersecurity and privacy risks, the assessment of which might either be downplayed by those whose interests are in vehicles rather than infrastructures, or exaggerated by those wishing to emphasise their systems’ autonomy. Vehicles without drivers or other staff raise questions of personal safety, which are likely to be particularly acute for women. A contribution from this collection shows how analysis of AV politics will require attention to the connections between different types of public concern about safety or security and different demographic variables (Lee and Hess, 2022).

The politics of road rules and infrastructure

The politics of driving is paradoxical. We are sold a dream of freedom and limitless opportunity, but when we are on the road, we are more regulated, surveilled and policed than in almost any other part of our lives. Back in 1947, Max Horkheimer wrote, “It is as if the innumerable laws, regulations and directions with which we must comply were driving the car, not we.” Cars might feel like a libertarian technology when compared to public transport, but the dangers inherent in such powerful and potentially destructive machines have forced governments to agree rules, norms and institutions to govern our behaviours. There are early signs (e.g., Crawford, 2020) of a conservative critique of AVs that centres the desires of autonomous humans in the face of increased automation.

Encounters on the road are a form of everyday power struggle between users, whose interactions are shaped by rules and infrastructures that govern their behaviour. The arrival of AVs could affect these interactions in profound ways beyond immediate questions of safety (Tennant et al., 2021). Interactions are often ambiguous, and rules and infrastructures may, as in the case of ‘shared space’ (Hamilton-Baillie, 2008), be designed to maintain ambiguity in the name of safety. AVs will benefit from greater certainty, and we are likely to see, in Winner’s terms, technology settling disputes in favour of particular road users. Rules in different places could be rewritten and infrastructures rebuilt to prioritise some transport modes over others, enabling or proscribing AVs. Rule changes to protect vulnerable road users, such as those recently introduced in the UK, may make life harder for AVs by introducing uncertainties.

This collection, although predominantly from the social sciences and humanities, includes one paper that considers “the question of what constitutes proper driving behaviour in a complex driving scenario” from the engineering perspective of a company developing AVs (Bin-Nun et al., 2022). We see in this paper that the design of a vehicle necessarily involves proposals for the governance of such vehicles. AV developers often call their self-written rules their ‘driving policy’. In the coming years, we will see alternative ‘policies’ negotiated, in more or less democratic ways, by governments and standards-setting bodies. New risks need new rules. As Pattinson et al. (2020) argue in their contribution to our collection, partial automation already demands careful consideration of rules for human-machine interaction and consent, which must be embedded in interactive digital interfaces, within or outside of AVs.

The infrastructure that surrounds the car confers benefits and risks unevenly. Car-based architectures make other ways of moving around harder and contribute to what Sheller (2018) calls ‘mobility injustices’. Having to rely on a car to meet one’s mobility needs can be classified as a form of transport poverty. Instead of liberating humanity from the system of automobility, AVs risk individualising and intensifying the existing automobility regime (Currie, 2018; Grindsted et al., 2022) if they follow the same strong path dependencies observed in relation to smart city technology (Sadowski and Bendor, 2019).

If AVs are not as autonomous as we are led to believe by their developers, questions of infrastructure become an unavoidable part of any political analysis. Infrastructures, when they work, are often invisible to those who take them for granted, but upon investigation they only make sense as sets of relationships (Star and Ruhleder, 1994). Innovators are apt to talk about what their technologies are able to do. They are less likely to mention the conditions that constrain their technologies’ safe operation, such as types of road, network connectivity and the behaviour of other road users. They may talk about their achievements in moving up the so-called ‘levels of automation’ set by the Society of Automotive Engineers, towards a Level 4 self-driving vehicle. These automation levels narrow our conceptualisations (Hopkins and Schwanen, 2021), neglecting aspects such as the ‘operational design domain’ within which the technology can be shown to work. A contribution to this collection elaborates on this irony, in which autonomy is not about separation or isolation but about consistent connection and relations of mutual influence (Ganesh, 2020).
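To make concrete how conditional this ‘autonomy’ is, here is a minimal, hypothetical sketch (in Python) of what an operational design domain amounts to in practice: a bundle of external conditions, any one of which can void the claim that the vehicle drives itself. The fields, values and the permits() check are invented for illustration and do not reproduce SAE definitions or any real vehicle’s specification.

```python
# Hypothetical illustration of an 'operational design domain' (ODD): the set of
# external conditions under which a system is claimed to work. All fields and
# thresholds below are invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class OperationalDesignDomain:
    road_types: set = field(default_factory=lambda: {"divided_highway"})
    max_speed_kmh: int = 60
    weather: set = field(default_factory=lambda: {"clear", "light_rain"})
    requires_hd_map: bool = True
    requires_network_connection: bool = True

    def permits(self, road_type, speed_kmh, weather, hd_map_available, connected):
        """The vehicle only 'drives itself' when every condition is satisfied."""
        return (road_type in self.road_types
                and speed_kmh <= self.max_speed_kmh
                and weather in self.weather
                and (hd_map_available or not self.requires_hd_map)
                and (connected or not self.requires_network_connection))

odd = OperationalDesignDomain()
# Off the mapped, connected highway, the 'autonomous' vehicle is no longer autonomous.
print(odd.permits("urban_street", 30, "clear", hd_map_available=False, connected=True))  # False
```

Read this way, every ODD is a claim on infrastructure and on other people’s behaviour, which is why the term belongs in a political analysis as much as in an engineering standard.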

The politics of labour

On the face of it, AVs would appear to directly threaten the livelihoods of human drivers. However, we know from similar technologies that automation displaces rather than directly replaces human labour (Suchman, 2007; Acemoglu and Restrepo, 2019). Even if people no longer appear in a robotaxi’s driving seat, they are required for tasks ranging from data-labelling, mapping, safety assurance and remote operation to customer support. AV developers claim that a role such as that of the safety driver, ready to take over if the system fails, is a temporary feature of a system while it is learning to drive. Their business case may rest upon cost-cutting through labour-saving. However, there are incentives to automate even if the robots do not prove cheaper than human labour.

The political implications of such shifts depend on the scale and scope of deployment. However, it seems clear that this would be another area of automation in which capital accrues power at the expense of labour. Protecting high-quality, well-paid work will be a key challenge. As one of our collection’s contributions underlines, policymakers should consider the impact of automation on specific segments of the trucking workforce (Mohan and Vaishnav, 2022). Automated trucking is likely to see reconfigurations of drivers’ rights and responsibilities, some of which are already happening through surveillance technologies (Levy, 2022).

The politics of imagined futures and transitions

Claims about the future made by innovators should not be read as mere predictions, but as discursive world-making. The attempt to assert a particular future and crowd out other possibilities should be seen as explicitly political (Borup et al., 2006). Imagined futures are a way of organising resources—money, attention and work—in the present. And some of the more ‘forceful futures’ (Van Lente, 2000) will shape present-day decisions about alternative mobility modes. Imagined futures are not just about technology, although they often accentuate the technical; they also carry implied business models, attitudes to regulation and imagined problems to which their technologies offer solutions (Graf and Sonnberger, 2020; Mladenović et al., 2020). In this collection, Martin (2021) combines a multi-level perspective on transitions with an analysis of imaginaries to show how AV visualisations by automobile manufacturers carry latent yet powerful meanings. Haugland (2020) uses the concept of sociotechnical imaginaries to consider how AVs are mobilised as part of a national narrative but still fail to address problems that are particular to the Norwegian context of his case study (see also Olin and Mladenović (2022) for a Finnish analogue). Michalec and colleagues (2021) explore how diverse disciplines are brought together in the service of robotics research that might enable more robust automated futures.

The futures being built by AV developers and the policymakers who support them often imagine the public in ways that seem expedient in the short term, but risk long-term public alienation. In this collection, alongside Lee and Hess’s (2022) survey of public concerns, Tennant and colleagues (2021) analyse how a UK parliamentary enquiry framed the public in terms of their faulty driving, their ignorance of the technology or their undue anxiety. Similarly, Van Wynsberghe and Pereira (2022), also in this collection, consider how novel methods of public engagement can contribute to reframing both the imagined social problem and its technological AV ‘solutions’. While we should pay close attention to current speculation about the future, therefore, we should certainly not take such speculation for granted. The futures that technology developers are imagining offer little value as predictions, but they are an important source of qualitative data.

It has been notable that, for a technology that promises to disrupt future mobility systems, the economics of AVs are often imagined speculatively or deliberately postponed until after the technology has been shown to ‘work’ in a narrow technical sense (see Nunes and Hernandez (2020) for an analysis showing the fragility of the economic assumptions behind AV claims). The dominant narrative surrounding AV start-ups suggests that companies are looking to follow other Silicon Valley innovators in developing platforms that are rapidly scalable at low cost across existing infrastructures, much as Uber and Airbnb have done. This would have profound ramifications for mobility systems, for workers, for the finances of incumbents and for wealth inequalities if it were realised. However, the contingencies we have identified in this paper and the other papers in the collection suggest that building AVs as a universally applicable platform will be all but impossible. Local authorities have in some cases sought active involvement in pilot projects in order to learn from and adapt the technology to their needs, but as McAslan and colleagues (2021) show in this collection, despite the numerous pilot projects there has still been little policy learning or leveraging of the technology for public benefit.

Where should we look for the politics of AVs?

The papers in this collection begin to give some structure to an investigation of the politics of AVs that allows for some ‘real-time technology assessment’ (Guston and Sarewitz, 2002) in the years ahead. We hope to have made the case that the technology should be seen in inextricably relational terms. SSH research is therefore a rich and important part of any attempt to make sense of automotive automation. A methodology for studying the construction of AV politics should focus on a number of research sites. Many of the most important political battles will be contested on the road itself, in interactions between new and old types of road users, and in disputes about the re-allocation of road space and the upgrading of infrastructures. For now, tests of AV technologies on public roads offer an opportunity to anticipate some possible changes (Mladenović, 2019). Trials of technology in public may be unreliable research sites, however, as some of the most important contingencies may be hidden by innovators’ desire to publicise their technology’s potential rather than its limits (Marres, 2020).

We should also look for the politics of AVs in the laboratories that are birthing their prototypes. Going behind the scenes of innovation offers a view of additional contingencies. Some examples of ethnographic work with AV designers (e.g., Pink et al., 2020; Stayton, 2020) reveal the potential not just to study but also to contribute to emerging innovations. However, the dominant frame for such engagements remains one of user interaction, which means that non-users, bystanders and other citizens may remain excluded.

Given the prospective nature of the technology, we can also read likely politics in public visions of the future made by innovators, other stakeholders and policymakers. Again, such discourses must be read critically (Mladenović et al., 2020). They typically bring forward and amplify some aspects—such as the role of artificial intelligence—while downplaying other aspects—such as the role of infrastructure or the compliance of other road users. Whether on the road, in the lab or in wider discourses, we should regard the politics of AVs as something that in many cases the technology’s developers would like to keep hidden. SSH research should therefore also pay attention to what is being ignored, forgotten (Rayner, 2012) or made invisible (Star and Ruhleder, 1994) in the quest to remake mobility. While AV developers would like their technology to be broadly (if not universally) applicable and scalable, AVs will in reality be attached to, enabled by and constrained by particular places and contexts (Porter et al., 2018). The geography of automation will therefore remain an important area of study.

The investment in AV technologies has been vast, which means that, even if the technology fails to realise the ambitions its early enthusiasts set for it, the various innovations being supported will have some impact. We are likely to see pressure exerted on other parts of the mobility system to compensate for AVs’ limits. For example, AV innovators are likely to lobby for infrastructure or rule changes to make roads more machine-readable and more easily navigable. The reconstruction of worlds around AV imaginaries may be, as it was with the car, the most profound way in which the technology’s power is expressed. We are also likely to see spin-offs repurposing AV technologies, either for incremental improvements to conventional automobiles or for uses in new domains, including the military (Verdiesen et al., 2021).

Ultimately, we hope that this collection of articles opens up and encourages a wide-ranging body of SSH and interdisciplinary research that does not just explain a set of technologies promising radical social transformations, but also informs their ongoing development.