Technology advances before safety. It always has, and it always will. Safety rules and regulations, from stop signs and stop lights, to federal trucking safety rules, to product safety rules for medical devices, have all been written in blood. What do I mean by this statement? Technology advances or changes, people are injured or killed as a result, and eventually safety rules and regulations are written and adopted to improve public safety. The price of safety is often innocent blood.
Over the years, tort law has developed a sound framework for product liability, balancing the manufacturer's or designer's liability for injury against the dangerousness of the activity and the ability to make the product safer. This careful balance has worked well through the development of many items we now take for granted: elevators and escalators, cars and trucks, airplanes, medical devices of all types, and even prescription drugs. But product liability laws are under attack in every statehouse and in Congress, and safety regulations are routinely attacked as bad for business or bad for jobs.
Self-driving vehicle technology has been advancing rapidly over the past four years, and it has left policymakers, manufacturers, and safety advocates all scrambling. These vehicles are being tested on our roads right now, and they are expected to be in regular use on our roads in as little as ten years. As the President of the Texas Trial Lawyers Association, I witnessed the upheaval firsthand during the 2017 Texas Legislative Session. Bills were filed in the House and Senate to encourage development of self-driving cars. Provisions were offered and debated to try to strike a balance between developing the technology and financial responsibility for the injuries and deaths that are certain to occur. Many proponents of the technology were attempting to win state-mandated immunity (meaning no legal responsibility or accountability for injuries or deaths); others attempted to shift the responsibility away from the programmer or manufacturer to the owner of the vehicle; still others recognized that the time-tested product liability laws do not need to be changed, and that changing them would make us less safe as these technologies evolve. One thing was abundantly clear: there is no commitment to comprehensive safety standards and real, money-changing-hands accountability that would allow safety to keep pace with technological development.
The recent death of a pedestrian struck by a self-driving Uber vehicle highlights the problem. Safety rules will need to be written from the blood of this pedestrian and of other drivers and pedestrians. Allowing experiments on our roads without these rules is like driving cars without rules of the road in the early 1900s: accidents waiting to happen. These devastating wrecks are now happening in our midst. The sooner we act, the better.
Early indications are that this technology requires not only smart cars, but smart roads as well. Dirt and grime on the vehicles or on road signs, weather, unplanned obstructions, and construction can all affect the LIDAR's ability to perceive what is occurring and how the car needs to react. I understand there have been occasions when a car misread a stop or yield sign, for instance. Roads with substandard lane markings, or with debris in the roadway, can put the vehicle in a situation it has not encountered before, creating danger in fairly routine circumstances.
In addition, several things make the situation potentially more dangerous and harder to control. AI, or artificial intelligence, is being developed for these vehicles. That means the vehicles themselves learn from situations they encounter after being manufactured and programmed. What choice is the car or truck going to make when faced with running over a child or hitting a tree and potentially harming the vehicle? What choice is it going to make if the choice is between injuring one person or twenty? Because AI allows the vehicle to learn as it goes, there is very little control over those decisions unless that control is designed and built into the vehicle. Right now there are no standards being written into the software and no way to test the reasonableness of the choices made by the software developer. Injury lawsuits of the future may very well be about the 1s and 0s that program a machine to protect itself over a person.
Now for the really scary one: hacking of a vehicle's systems. Who controls the systems? Who is responsible if a person using a cell phone disrupts the electronics of a vehicle in motion? Who, if anyone, is responsible for these vehicles if they are hacked and turned into weapons? What if a system flaw is found that allows an enemy to take over multiple vehicles in motion for a coordinated terror attack? Who is responsible if a simple glitch in the software causes the vehicle to shut down and reboot? It is one thing when my laptop has to shut down and reboot, but what about a car or an 18-wheeler traveling 75 mph down the freeway? I don't want to wait for a catastrophe, or a series of attacks carried out by our own vehicles, before safety concerns are addressed. The potential severity and scale of the harm are simply too great.
As a parting shot, please realize that the devastation that will follow if these issues are not addressed now is not a videogame. When a glitch occurs, the controller cannot just be turned off and reset. When a person is killed, that person cannot be respawned the way my boys' characters are when they are killed in a videogame. Blood has already been spilled. Your legislators and representatives need to hear from you. They need to require that safety be designed into these systems and that real accountability is built in when folks cut corners and expose all of us to harm.