The Uber Autonomous Car Accident… Some additional thoughts

The fatal accident that occurred in March 2018, in which an Uber self-driven Volvo struck and killed a person walking across the street, raised many issues about the technology and law surrounding the development of driverless cars.

Now a detailed report in wired.com on the aftermath of the accident analyses the technology faults as well as the human issues behind the tragedy.

As per the report, it appears that Uber has been cleared of criminal charges of negligence, while the human driver behind the wheel, Rafaela Vasquez, is blamed for not preventing the accident through timely intervention. The trial will continue and the final verdict may take some more time.

From the evidence discussed in the article, it appears that the Uber software failed to recognize the obstacle and apply the brakes. It is also said that the car (a Volvo) had its own emergency braking mechanism which was overridden by the Uber system, and Volvo claims that its system would perhaps have either stopped the car or at least prevented the fatality. This could mean that the Uber system was less capable than the technical solution offered by Volvo, which should make Uber vicariously liable for the accident.

However, whether the headlight system of the car was good enough for night driving could be a point of debate, since it did not illuminate the victim earlier. Whether this was a fault of the Volvo or of the driver in setting the beam is not clear. This does not seem to have been discussed in the legal proceedings.

The video from the dashcam indicates that the victim suddenly appeared in the path of the speeding car, and it would perhaps have been impossible for any ordinary driver to spot her in the surrounding darkness. Hence a similar accident could perhaps have happened in many other instances of human driving under similar circumstances.

However, it must be recognized that Uber was negligent for several reasons.

Firstly, though the testing was not complete, the safety measure of having two persons in the car, one to monitor the driving and the other to assist the driver, was withdrawn. This left the driver alone, and the “Automation Complacency” factor kicked in.

Secondly, real-time monitoring of the driver was not undertaken for fear of it being considered “Spying”.

Thirdly, monitoring of the driver's behaviour through logs was not good enough.

It is interesting to note that the driver refers to herself as the “Operator”. She was not driving her own car and hence was on duty when she was “Operating” the automated machine. Hence there was no privacy issue and no “Spying”. It was the duty of Uber to monitor the automated machine and its operator as a single unit of work, which Uber failed to do.

It is unfortunate that Uber, instead of taking the blame upon itself, made the “Operator” a scapegoat. The fact that the victim herself was grossly negligent, and that her jaywalking across the road on a dark night was a contributory factor to the accident, should protect the “Operator” from the charge of negligence.

Hopefully the jury trial will find the “Operator” not guilty and accept the death of the victim as an essential sacrifice for the development of technology. However, technology companies need to set the bar for declaring software “Safe” at a much higher level than they may be doing now, and their liability should continue even after releasing the software. In this case the software was still under testing, and hence the liability of Uber should have been recognized without much argument.

Though Uber has made a monetary settlement with the victim’s family, it is unfortunate that it has not protected the “Operator”, who became the second victim of the accident, both legally and financially. She ought to have been provided with a lifetime financial settlement and legal support, even with her own lawyers, to defend herself against the charge of negligence.

This case should establish that any software developer who produces an AI-led system should inherently be made vicariously liable, both to the victims of malfunctioning and to the operators who had minimal control over preventing accidents.

The Cyber Insurance industry would perhaps come to the assistance of the companies to ensure that the cost of technology development ultimately gets distributed.

In the light of this development, the provisions of the Data Protection Act in India requiring “Algorithmic Transparency”, “Security Certification” and the filing of a “Privacy by Design Policy” when personal data processing is handled by automated systems are a welcome step. This will bring better accountability for companies, at least in absorbing the liabilities and preventing unfair liabilities from falling on the user-operators, including the employees assigned for testing.

Naavi

About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India, presently working from Bangalore as an Information Assurance Consultant. Having pioneered concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law education institution. He has now been focusing on projects such as Secure Digital India and Cyber Insurance.