and the importance of looking back before looking forward
Have we gone through the introduction of autonomous vehicles before? In other words, have we gone through the introduction of a new, potentially hazardous but wonderfully promising technology?
Of course we have. Many times. And we make many of the same mistakes each time.
When the first automobiles were introduced in the 1800s, mild legislative hysteria ensued. A flurry of ‘red flag’ traffic acts was passed in both the United Kingdom and the United States. Many of these acts required self-propelled locomotives to have at least three people operating them, to travel no faster than four miles per hour, and to have someone on foot carry a red flag around 60 yards in front.
The red flag is a historical symbol of impending danger. Perhaps the most famous red flag traffic act was one in Pennsylvania that would have required the entire vehicle to be disassembled, with all parts hidden in the bushes, should it encounter livestock (the bill passed both houses but was ultimately vetoed by the governor).
These acts seem humorous now, but to miss their key lessons, and those from other technological revolutions, would be ill-advised.
The first red flag lesson is that society instinctively hoists red flags in the absence of information.
We are seeing this now with autonomous vehicles. Why? Perhaps it is because, without information, people tend to focus on specific narratives and not on net benefit. Steven Overly’s article in the Washington Post talks about the reaction people will likely have to autonomous systems, noting that humans are not ‘rational when it comes to fear based decision making.’ Overly quotes Harvard University’s Calestous Juma, who writes in his book Innovation and Its Enemies: Why People Resist New Technologies about how consumers were wary of refrigerators when they were first introduced in the 1920s. The prospect of refrigerators catching fire weighed heavily on people’s minds, regardless of the obvious health benefits of storing food safely.
So what happened? Three things. First, the Agriculture Department publicly promoted the health benefits of refrigeration. Second, once refrigerators became ubiquitous as a result of those efforts, they became safer as manufacturers learnt from their mistakes. And the third deals with experience (which is the next lesson to be learnt).
The second red flag lesson is that consumers don’t trust experts.
Take the current issue of drunk driving. Autonomous vehicle proponents argue that autonomous vehicles will effectively eliminate crashes (and deaths) caused by drunk driving. And this makes theoretical sense – with 94 per cent of current vehicular crashes caused by human error, autonomous vehicles (which remove the ‘human’) should effectively eliminate these crashes. But the broader population is not so sure.
A 2015 Harris Poll survey found that 53 per cent of United States drivers believe autonomous vehicles will reduce the prevalence of drunk driving; the same figure applies to distracted driving. This means that 47 per cent of people cannot yet see the link between autonomous vehicles and fewer crashes caused by inebriated or distracted drivers. To be clear, 47 per cent of the population ‘is not stupid’, so the experts simply have not sold, or cannot sell, the safety message – yet.
The third red flag lesson is that governments (and the regulators they appoint) will control the deployment of new safety-critical technology.
Politicians are not scientists. They are a special subset of society, inherently conservative in their thinking and inherently inclined to demand red flags. They have their collective strengths and weaknesses, but what most of the voting public cannot appreciate is the responsibility they carry for virtually everything. Perhaps today’s governments are more open-minded about autonomous vehicle technology, no doubt because they are hoping for commensurate economic benefits. Some are waiting for other governments to take the plunge and set a precedent they can follow. But some lawmakers are clearly eyeing the tangible economic benefits their city will hopefully reap if it is amongst the first to deploy the technology.
The fourth red flag lesson is that we tend to incorrectly gauge the performance of new technology through the lens of the old.
In the 1800s, the main safety concern with self-propelled locomotives focused on those outside the vehicle. Safety was measured by the extent to which the technology would not induce panic in man, woman and beast alike. But we quickly learnt that instead of looking outwards, automobile safety needed to look inwards. That is, we needed to focus on passenger safety in the event of a crash. As it turns out, livestock and pedestrians could live quite easily in a self-propelled locomotive world. Irish author and scientist Mary Ward became the first automobile fatality when she was ejected from the steam-powered vehicle her cousin built. And as vehicles became more popular, it became clear that drivers and passengers were more likely to be killed or injured than anyone else.
In the early 1900s, vehicles were built with hydraulic brakes and safety glass. Crash tests started in the 1920s, and General Motors performed the first barrier test in 1934. So today, vehicle safety is largely about the people inside the vehicle – not those outside it.
Why is there limited focus on those outside vehicles? Because we have human drivers. Drivers who are assumed to be trained, licensed and able to avoid hazards and people. But this is about to change.
There are many more red flag lessons to be learnt, but for now we will stop at four.
So where to from here? Perhaps the most relevant red flag lesson is the last. The first two lessons are largely societal, and can be resolved by better communication with the driving population. And because autonomous vehicles are yet to hit the marketplace, we can assume that the car makers now investing in autonomous vehicles (virtually every one of them) have yet to unleash their full marketing arsenal. Which they will. We are also seeing some governments, at all levels, leaning further forward than others, probably because, as mentioned above, they believe it makes financial sense.
But we need to (much) better understand how to create safe autonomous vehicles in a way that can be certified. Take the Tesla Model S that crashed into the side of a tractor trailer in Florida while in “Autopilot” mode, killing its driver. This was an event that many in the industry feared – the first public autonomous-vehicle-related fatality. The National Highway Traffic Safety Administration (NHTSA) report into the crash determined that the driver had seven seconds to react to the tractor trailer in the vehicle’s path, but was clearly distracted. The driver is required to remain attentive when Autopilot is enabled.
But isn’t Autopilot going to make drivers less attentive and cause more crashes? The answer appears to be no: the NHTSA report found that crash rates for the Tesla Model S fell by 40 per cent with Autopilot enabled (noting that some of the statistics autonomous vehicle makers have used in the past to demonstrate safety have attracted widespread criticism). So if we look past the unfortunate narrative of the crash above, we can see a clear safety improvement from even a basic level of autonomy (assuming the figure in the report is right).
But it is what Tesla did next that is telling. Tesla updated its vehicles’ on-board software to better identify tractor trailers cutting across the driving path. So a safe autonomous vehicle will be more like an iOS or Windows operating system – one that is constantly maintained from afar, in the same way Apple and Microsoft maintain theirs. We won’t be able to slap a sticker of certification on an autonomous vehicle as it rolls out the factory door. The manufacturer’s ongoing support system will be as much a part of safety as the braking system. Moving from one-time certification to ongoing safety demonstration will likely be the most challenging aspect of assuring autonomous vehicle reliability. And while the National Transportation Safety Board (NTSB) is still investigating the specifics of the Tesla Model S crash, there is a chance that any issue it identifies has already been resolved without an expensive recall (both of which are good things).
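To make the operating-system analogy concrete, here is a minimal sketch, in Python, of the kind of verified, audited over-the-air update flow that ongoing safety demonstration would depend on. Every name here is hypothetical, and the simple shared-key signature stands in for the public-key schemes a real system would use; this does not describe Tesla’s actual update mechanism. The point is simply that the vehicle applies an update only when its manifest verifies, and records every attempt so the software version active at any moment can be reconstructed later.

```python
import hashlib
import hmac
import json
import time

# Hypothetical symmetric key provisioned at manufacture. A production
# system would use public-key signatures and a hardware security module;
# this key exists only to make the verification idea concrete.
VEHICLE_KEY = b"illustrative-key-not-for-production"


def manifest_is_authentic(manifest_bytes: bytes, signature: str) -> bool:
    """Check that the update manifest really came from the manufacturer."""
    expected = hmac.new(VEHICLE_KEY, manifest_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)


def apply_update(manifest_bytes: bytes, signature: str, audit_log: list) -> bool:
    """Apply an over-the-air update only if its manifest verifies.

    Every attempt is appended to an audit log, so the software version
    running at any moment can be reconstructed later. That traceability
    is the kind of evidence ongoing certification would rely on.
    """
    verified = manifest_is_authentic(manifest_bytes, signature)
    manifest = json.loads(manifest_bytes) if verified else None
    audit_log.append({
        "timestamp": time.time(),
        "verified": verified,
        "version": manifest["version"] if verified else None,
    })
    return verified


# Usage: a manufacturer-signed manifest for an imaginary perception fix.
audit_log: list = []
manifest = json.dumps(
    {"version": "8.1", "change": "improved crossing-trailer detection"}
).encode()
signature = hmac.new(VEHICLE_KEY, manifest, hashlib.sha256).hexdigest()
assert apply_update(manifest, signature, audit_log)
print(audit_log[-1])
```

A regulator auditing such a log could tie any on-road behaviour to the exact software release that produced it, which is what separates ongoing safety demonstration from a one-time sticker at the factory door.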
And as we continue to experience autonomy, we must brace ourselves. The name of the driver killed in the Tesla crash was Joshua Brown. He has a family who mourn his loss. We cannot list the people who are alive today because of Tesla’s Autopilot mode. And we won’t be able to list those who will be alive in the future because of what Tesla learnt from the crash. But we know they exist, even if they don’t know it themselves. We need to be thinking of them when we decide which red flags to raise in the future.