Autonomous vehicles, or self-driving cars, are likely to be seen more widely on roads in 2015.
Already, legislation authorising the use of autonomous vehicles has been introduced in the US states of Nevada, Florida, California and Michigan, with similar legislation being planned for the UK. To date, these laws have focused on legalising the use of autonomous vehicles and dealing, to an extent, with some of the complex issues relating to liability for accidents.
But as with other emerging disruptive technologies, such as drones and wearables, it is essential that issues relating to user privacy and data security are properly addressed prior to the technologies being generally deployed.
Understanding autonomous vehicles
There is no single, uniform design for autonomous vehicles. Rather, it is best to understand an autonomous vehicle as a particular configuration of a combination of applications, some of which – such as adaptive cruise control, lane departure warnings, collision avoidance and parking assistance – are already part of current car design.
The best-known prototype, Google’s self-driving car, uses a variety of technologies, including a laser range finder (LIDAR) that generates a detailed 3D map of the environment, radars, cameras for detecting traffic lights, and GPS. Other projects, including prototypes being developed by Mercedes-Benz, Volkswagen, Toyota and Oxford University, use different combinations of technologies.
This means that the privacy and data security problems arising from autonomous vehicles depend upon the precise technologies applied in any particular design. Some generalisations are, however, possible.
The relationship between the virtual and the real
The rules (or “code”) governing the online world have been different to those that apply offline. For example, online activities invariably generate digital traces, including metadata, which can be used to build profiles of users.
With emerging technologies, such as drones, wearables and autonomous vehicles, we are increasingly seeing the transposition of virtual models onto the real. One consequence of the range of sensors and data collection devices being deployed (and interconnected) is that our offline activities can leave traces at least as extensive as those generated online.
One way to classify autonomous vehicles is by the kind of data they collect and the ways in which that data is processed. For instance, autonomous vehicles often incorporate event recorders, or “black boxes”, to provide essential information in the event of an accident. This raises questions about who holds rights to this data and who can access it.
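To make the privacy stakes concrete, consider what a single black-box record might contain. The sketch below is purely illustrative, assuming hypothetical field names rather than any manufacturer’s actual format:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class EventRecord:
    # Hypothetical black-box record; field names are illustrative
    # assumptions, not any manufacturer's actual format.
    timestamp: datetime        # when the event was captured
    speed_kmh: float           # vehicle speed at the time
    steering_angle_deg: float  # steering input
    brake_applied: bool        # whether the brakes were engaged
    latitude: float            # GPS position at the moment of the event
    longitude: float

record = EventRecord(
    timestamp=datetime.now(timezone.utc),
    speed_kmh=52.0,
    steering_angle_deg=-3.5,
    brake_applied=True,
    latitude=-37.8136,
    longitude=144.9631,
)
```

Even this small record combines time, location and driving behaviour: exactly the ingredients needed to reconstruct a person’s movements.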
Anonymising data
There is an overlap here with questions of liability, as insurance companies have clear incentives to collect as much data about user behaviour as possible. The potential for intrusive surveillance of personal activities is particularly jarring, as the car has been an archetypal space of personal privacy and freedom.
A fundamental distinction must be drawn between self-contained autonomous vehicles, in which the data collected from sensor devices installed in the car is stored and processed in the vehicle itself, and interconnected vehicles, in which data is shared with a centralised server and, potentially, with other vehicles.
Regardless of whether a vehicle is self-contained or interconnected, design decisions must be made about whether the data collected is anonymised or linked to individual users. If the data is not anonymised, especially in interconnected vehicles, it poses serious surveillance threats. After all, once the data exists, and especially if it is connected to a server, it is vulnerable to access by third parties.
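What might anonymisation at the design stage look like? One common first step is pseudonymisation: stripping or obscuring direct identifiers before anything leaves the car. The following is a minimal sketch under assumed field names, not a complete scheme; researchers have repeatedly shown that location traces can often be re-identified even after steps like these:

```python
import hashlib
import os

# Hypothetical sketch: pseudonymise a telemetry record before it leaves
# the vehicle. Field names and the salting scheme are assumptions.
SALT = os.urandom(16)  # in practice the salt must be generated and stored securely

def pseudonymise(record: dict) -> dict:
    """Replace the direct identifier with a salted one-way hash
    and coarsen the location to limit precise tracking."""
    out = dict(record)
    out["vehicle_id"] = hashlib.sha256(
        SALT + record["vehicle_id"].encode()
    ).hexdigest()
    # Round coordinates to roughly 1 km resolution.
    out["latitude"] = round(record["latitude"], 2)
    out["longitude"] = round(record["longitude"], 2)
    return out

raw = {"vehicle_id": "VIN12345", "latitude": -37.8136, "longitude": 144.9631}
print(pseudonymise(raw))
```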
It is possible to envisage implementations of autonomous vehicles where data about a particular user is linked to other data sources, such as an online profile, for purposes such as tracking or marketing. This might take the form of personalised advertising displayed in the car, or even adjusting a vehicle’s route so that it passes retail outlets which match a user’s imputed preferences.
What else is at stake: human autonomy and hacking
We are now familiar with technologies, such as predictive search, which, in the online context, attempt to predict what we want to do and make more or less persuasive suggestions.
It is likely that some versions of autonomous vehicles will implement predictive technologies. In any case, the progressive delegation of human decisions to machines raises system-wide questions about the cumulative impact on human autonomy: the more people are habituated to decisions being made for them, the less likely they may be to make their own decisions.
We are also now depressingly familiar with the vulnerability of computer systems to malicious third parties. Just as effective data security is essential to online safety, autonomous vehicles must be designed with a high level of data security, especially given the potentially calamitous consequences of a hacked vehicle. As interconnected data processing systems are progressively rolled out in applications such as wearables and autonomous vehicles, we seem likely to see an offline version of the same sort of perpetual guerrilla warfare played out online between information security professionals and hackers.
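As a simple illustration of what designed-in security means, even authenticating every message exchanged between a vehicle and its server raises the bar against tampering. The sketch below uses a keyed hash (HMAC); the key and message format are assumptions, and key management, the hard part in practice, is omitted:

```python
import hashlib
import hmac

# Hypothetical sketch: authenticate telemetry between vehicle and server
# so forged or altered messages can be rejected. The shared key and
# message format are assumptions; key management is omitted.
SECRET_KEY = b"key-provisioned-at-manufacture"

def sign(message: bytes) -> bytes:
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(sign(message), tag)

msg = b'{"vehicle": "abc123", "speed_kmh": 52.0}'
tag = sign(msg)
assert verify(msg, tag)  # genuine message passes
assert not verify(b'{"vehicle": "abc123", "speed_kmh": 120.0}', tag)  # tampered message fails
```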
Protecting privacy at the design stage
Autonomous vehicles promise significant social and economic benefits, especially in potential improvements to road safety. There are, nevertheless, considerable legal and regulatory challenges. As with other emerging disruptive technologies, it is vital that privacy and anonymity be properly protected at the design stage.
To date, in the face of significant challenges relating to the legality of autonomous vehicles and liability issues, the privacy rights of users have been relatively neglected. But unless the era of artificial intelligence is to be accompanied by our sleepwalking into ubiquitous surveillance, we must recognise that safety and security need to be balanced against the legitimate rights of people to control their own data and to retain their fundamental rights to privacy.
David Lindsay is a board member of the Australian Privacy Foundation.
This article was originally published on The Conversation. Read the original article.