Liability For Accidents Caused By Self-Driving Cars
Introduction
Driverless vehicles might be imagined as the end of road accidents, but they, too, can be involved in collisions. When the components of these vehicles fail to operate as expected, the autonomous system may fail and cause an accident. A self-driving vehicle may encounter any weather condition as it navigates, so the technology that enables it to sense its environment must work reliably in every situation. A driverless vehicle could also cause an accident for which neither the driver nor the manufacturer would be liable. Manufacturers of driverless vehicles should therefore design their products to be free of defects. Even though certain parties should be held liable when a self-driving car causes an accident, customers need to be protected without stifling self-driving innovation under the threat of endless lawsuits.
Several companies have already started manufacturing driverless cars that require no human intervention. To operate, self-driving vehicles rely on a set of sensors, lasers, and cameras which, together with global positioning systems, detect obstacles and follow a preset route (Themsche, 2015). With this technology, autonomous cars can navigate efficiently without human input, although a person may still have to engage the system before the vehicle can operate. The components that help the vehicle navigate can sometimes be faulty, and when this occurs, an accident may happen (Themsche, 2015).
In light of the above, several factors contribute to liability when a driverless car is involved in an accident. Defects in the vehicle's system, in particular, may cause a crash. Conventional vehicles rely on the driver's ability to sense the surroundings and make practical judgments in order to navigate. Unmanned vehicles, by contrast, must acquire data to sense the environment themselves (Themsche, 2015). There are various conditions that an autonomous vehicle needs to sense, and failure to do so can result in an accident.
Driverless cars should be equipped with the appropriate technology to facilitate navigation. For instance, a vehicle may measure the distance to an obstacle using a LIDAR system, in which laser beams detect objects ahead (Themsche, 2015). When a fault occurs and the system fails to detect what is in front of the vehicle, an accident can happen. Some states, such as California and Nevada, have already permitted the use of driverless vehicles on public roads (Anderson et al., 2014). Such legislation, however, has to be accompanied by conditions that may expand the scope of the parties held responsible when an accident occurs.
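The time-of-flight principle behind laser ranging can be illustrated with a short sketch. This is a simplified illustration rather than any manufacturer's actual implementation; the function names and the 30-metre threshold are assumptions made for the example.

```python
# Hypothetical sketch of LIDAR range estimation by time of flight.
# A laser pulse travels to the obstacle and back, so the one-way
# distance is half the round-trip time multiplied by the speed of light.

SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def lidar_distance_m(round_trip_time_s: float) -> float:
    """Estimate obstacle distance from a laser pulse's round-trip time."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2

def obstacle_detected(round_trip_time_s: float, threshold_m: float = 30.0) -> bool:
    """Flag an obstacle when it falls inside an assumed braking threshold."""
    return lidar_distance_m(round_trip_time_s) < threshold_m

# A pulse returning after 100 nanoseconds implies a distance of about 15 m.
print(round(lidar_distance_m(100e-9), 1))  # prints 15.0
```

A fault anywhere in this chain, such as a sensor that never registers the returning pulse, would leave the vehicle blind to the obstacle, which is the failure mode discussed above.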
Most rules of the road are written for vehicles with a driver and make no provision for self-driving cars. When an autonomous vehicle causes an accident, one possible explanation is that it lacks a mechanism for interpreting traffic signs. Manufacturers of driverless vehicles should therefore program them to adhere to all traffic rules. Although the use of self-driving cars may increase in the future, most vehicles are currently operated by humans, so driverless cars should be designed with the existing traffic rules in mind. Without such a mechanism, the car could easily malfunction while operating autonomously, and the manufacturer would be liable.
Manufacturers of driverless vehicles should design their components to be free of defects, since a driverless car might not be crashworthy if its design is flawed (Anderson et al., 2014). Such a vehicle may encounter any weather condition as it navigates, and the technology that enables it to sense the environment should work efficiently in every situation. A company that manufactures driverless vehicles should therefore test them in every possible weather condition; failure to do so would make the manufacturer liable when an accident occurs in bad weather.
Since driverless vehicles rely on information from their computer systems when navigating, they could encounter problems on unscanned roads. It is the manufacturers' responsibility to design the vehicle's computer system so that it updates itself with any unexpected changes to traffic routes. Because most driverless vehicles are designed in a similar manner, any accident that results from such a design defect should be blamed on that shortcoming.
The model used by Google has a structure mounted on the vehicle that uses radar sensors to evaluate the surroundings, detect traffic, and navigate (Gurney, 2013). The driving environment is mapped out by laser rangefinders, while potential obstacles are detected by the vehicle's radars and cameras (Gurney, 2013). A consumer who purchases an autonomous vehicle would probably assume that this technology is sufficient for its operation and that no intervention is needed. A driverless car might nevertheless cause an accident if it fails to stop when the operator cannot take control safely during an emergency (Marchant and Lindor, 2012). The lack of such a mechanism makes the manufacturer liable when the vehicle causes an accident.
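The value of carrying several overlapping sensors, as in the design described above, can be sketched as a simple voting rule: an obstacle is acted on only when independent sensors agree, so a single faulty reading does not decide the outcome. This is a hypothetical illustration of sensor redundancy, not Google's actual fusion logic; all names are invented for the example.

```python
# Minimal, hypothetical illustration of cross-checking sensor readings:
# treat an obstacle as confirmed when at least two of the three
# independent detectors (radar, camera, laser rangefinder) agree.

def obstacle_confirmed(radar: bool, camera: bool, rangefinder: bool) -> bool:
    """Require agreement from at least two sensors before reacting."""
    return sum([radar, camera, rangefinder]) >= 2

print(obstacle_confirmed(True, False, True))   # two sensors agree: True
print(obstacle_confirmed(True, False, False))  # one sensor alone: False
```

The trade-off in such a rule is that it tolerates one failed sensor but can also suppress a genuine detection that only one sensor makes, which is one reason defective components remain a liability question.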
A driverless vehicle might be designed in a way that is unreasonably dangerous to the user. A product's design is defective when the risks associated with its use exceed what a user would expect when using it in a reasonably foreseeable manner (Anderson et al., 2014). Different components in the vehicle work together to make navigation possible. The only input the operator provides is the intended route, after which the autonomous system takes over the driving (Marchant and Lindor, 2012). When such a vehicle causes an accident even though the operator has followed all the instructions for its operation, the manufacturer should be held responsible.
Even though the design used by Google in its driverless vehicles requires an individual to be behind the wheel, the vehicles drive themselves and can thus assist inexperienced drivers (Gurney, 2013). Moreover, autonomous vehicles aim to increase an individual's productivity during the ride (Gurney, 2013). Consumers who purchase driverless vehicles would therefore assume that they are safe in them even if they are not competent drivers. They may also assume that once the vehicle's autonomous system is engaged, no intervention on their part is needed. When such vehicles cause an accident and the fault is not the driver's, the manufacturer should be the one to blame.
People whose capabilities are diminished to the extent that they could not drive well on their own are one of the target groups for driverless vehicles (Gurney, 2013). An example would be an individual who knows how to drive but whose body fails to react quickly to a dangerous situation. For such a person, an autonomous vehicle is convenient because it does not require concentration on the road. When such a vehicle malfunctions and the driver fails to react swiftly, an accident might occur. In this case, the operator would be responsible only if the manufacturer had warned against use of the vehicle by a person with diminished capabilities; otherwise, the manufacturer should be responsible for the accident.
A user might not be familiar with the operation of a driverless vehicle, and it would be sensible not to interfere with it unless necessary (Gurney, 2013). When an accident occurs in this case, the autonomous system is probably at fault, and the manufacturer should be liable. Autonomous cars are not designed to be entirely controlled by humans, which is why they are perceived as safer than other vehicles; manufacturers may also assume that making the vehicles fully controllable by humans would reduce their safety. Design choices in autonomous vehicles, such as the lack of rear-view mirrors or standard indicators, make them difficult for an individual to operate, and any error could lead to an accident.
Even though the manufacturers of driverless vehicles bear the greatest responsibility when an accident occurs, other parties might also be responsible. Some self-driving vehicles are only partially autonomous, in the sense that the operator is required to take control when a collision is about to occur (Marchant and Lindor, 2012). For such vehicles, the operator is responsible for shifting from the autonomous system to manual control. The manufacturer might not be responsible when the instructions require the operator to control the vehicle in specific traffic patterns or particular weather conditions (Marchant and Lindor, 2012). In this case, when an accident occurs, the driver would be partially liable.
A self-driving car could cause an accident for which neither the driver nor the manufacturer is liable. This situation might occur when the user has unrealistic expectations of the technology used by the driverless vehicle (Anderson et al., 2014). Most of these vehicles use a variety of sensors to detect the obstacles ahead, but the sensors may fail to function effectively in certain weather conditions. Most driverless cars, however, come with strict instructions regarding the vehicle's control in such situations (Marchant and Lindor, 2012). A consumer who ignores these instructions on the assumption that the vehicle will avoid obstacles should be blamed when an accident occurs.
It is necessary to identify which party is liable for an accident involving a self-driving vehicle, whether the manufacturer or the user. One approach to protecting users would be a system that alerts the operator when the autonomous technology fails (Weaver, 2013). Driverless vehicles should also be manufactured so that they come to a halt when the operator is unable to intervene after such a failure. A requirement of this kind would minimize accidents when the operator fails to respond promptly to the car's failure.
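The alert-then-halt behaviour described above can be sketched as a simple decision rule: on a fault, alert the operator; if the operator takes over, hand off control; if not, bring the vehicle to a controlled stop. The state names are hypothetical and the logic is purely illustrative of the proposed requirement, not any vendor's implementation.

```python
# Hypothetical sketch of a fail-safe policy for an autonomy fault:
# alert the operator, then fall back to a controlled stop if no one
# responds. State names are invented for illustration.

def failure_response(fault_detected: bool, operator_responded: bool) -> str:
    """Decide the vehicle's action after an autonomy-system fault."""
    if not fault_detected:
        return "continue_autonomous"
    if operator_responded:
        return "manual_control"   # operator took over after the alert
    return "controlled_stop"      # no response: halt the vehicle safely

print(failure_response(True, False))  # prints controlled_stop
```

Under such a rule, an accident that occurs because the vehicle neither alerted the operator nor stopped would point clearly to the manufacturer, which is the allocation of liability the paragraph above argues for.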
A driverless vehicle might be involved in an accident when its system lacks a mechanism that makes it easy for an individual to take control (Weaver, 2013). Even though the operator could assume that the autonomous system will always function properly, any error could be fatal. In this case, the components responsible for switching control from the autonomous system to the operator would be at fault. Companies that manufacture driverless cars should therefore ensure that the controls for engaging and disengaging the self-driving technology are easily accessible.
Defects in a vehicle's autonomous system are among the errors that can result in an accident. The components that facilitate the car's unmanned navigation may occasionally be defective, and they should carry the appropriate warnings (Anderson et al., 2014). Manufacturers of such vehicles should therefore include warnings about components that may fail. Once users are aware that the autonomous system could fail, they will be more cautious when operating the vehicles.
Operating an autonomous vehicle without an individual behind the wheel might be dangerous, especially when the system becomes defective and an accident takes place. Manufacturers should therefore require the operator to be seated in the driver's position at all times (Weaver, 2013). This provision would eliminate lawsuits arising from accidents that could have been avoided through a person's intervention.
Conclusion
Driverless vehicles should be equipped with the appropriate technology to facilitate unmanned navigation, since defective components in self-driving vehicles may result in accidents. When it is established that a fault within the vehicle's system caused an accident, it is reasonable to hold the manufacturer liable for the error. Although the manufacturers of driverless vehicles bear the greatest responsibility when an accident occurs, users who disregard the instructions on how to operate the vehicles could be partially responsible. Ideally, the components of a self-driving vehicle should function properly; where they do not, the manufacturer would be liable when an accident occurs.