If an act brings happiness to a large number of people, then that act is moral (Floridi & Sanders, 2002). If the AI consultant agrees to release the vehicles without the testing process, the cars may cause accidents on the roads, leading to injury and death among members of the public. The consequences of such a measure would therefore be adverse, and consumers' lives could be endangered by commissioning the vehicles for market use. John should therefore refuse to commission the vehicles, since doing so will prevent road accidents that would harm the public.
However, such a measure will also have a devastating effect on John. The consequences therefore depend on the action that the artificial intelligence consultant undertakes. John, however, should choose an action that brings happiness to most people rather than one that pleases only a few. For instance, if the consultant goes ahead and releases the vehicles without proper modeling, the managers of the car company will be happy, since they will be the first to introduce self-driven cars into the market.
The company will therefore penetrate the market and acquire a larger market share before its competitors can release their products. Such a scenario would result in high demand and the subsequent generation of substantial profits. However, the cars are likely to cause accidents that harm the public, who are the majority. The application of ethical principles can also have a great impact on finding a solution to the dilemma. According to deontological ethics, it is the duty of every person to do what is morally upright (Himma, 2003).
Hence, moral behavior is dictated not by its effects or causes but by the obligations and rules that an individual is bound to follow. The artificial intelligence consultant therefore has a duty to do what is right in the company. By refusing to release the vehicles for public use before the remaining issues are modeled, the consultant will be doing his duty. Duty-based ethical theory expects every individual to undertake actions that ensure equality among the people in society (Ramadhan et al., 2011). If the consultant heeds the advice of the company's managers, the vehicles could cause injuries and even loss of life on the roads.
Therefore, John has an obligation to do what is right in the company by refusing to commission the vehicles for public use in the market. Although such a decision will be opposed by the company's managers, it is John's duty to undertake actions that society expects, or that adhere to societal rules. Besides, the rights-based principle can also be used to evaluate the dilemma facing John and the electric car company. According to Anderson (1992), contractarian ethical theory posits that moral rights are based on a contractual agreement between two or more parties.
The agreement between John and the company was that John would undertake an artificial intelligence examination of the self-driven cars before their release into the market. Hence, it is not logical for the company's managers to force John to sign off on the vehicles as fit for consumer use when they know that responsibility for anything that happens will fall on John. The AI consultant can therefore invoke the contracts and rights theory to challenge the decisions of the company's managers and leadership.
John should therefore present the agreement to the CEO and the IT managers and outline the terms and conditions of the contract. By following the provisions of the contract, the two parties can prevent possible litigation in the event of road accidents caused by a premature introduction of the vehicles into the market. The agreement should remind the company that it must not force the AI consultant to release the vehicles before proper modeling is undertaken to ensure safety.