Artificial Intelligence and the Ethics Driving It
Artificial intelligence has many benefits, but each new benefit brings a new concern. The following discussion covers some of the ethical concerns surrounding artificial intelligence and its subsets. Like any contentious issue, AI has heated arguments on both sides. The challenge is finding the balance between allowing AI to help humans and preventing it from controlling them.
Policy Governing Artificial Intelligence
Artificial intelligence has grown in popularity over the years, but little regulation has been put in place. There have been many calls to implement regulations, yet little action has followed those calls. The EU recognizes the need for global rules for AI [21]. Creating global rules would help establish a level playing field on which every country can develop responsible AI. The European Economic and Social Committee (EESC) argued that AI should be developed with a human-centered focus [20]. If artificial intelligence continues to grow unchecked, there will be no limits on how it can be used. The EESC’s statement on AI establishes that humans should always have final control over AI, meaning that humans could always intervene and halt the “projected” action of an AI system.
Jobs and AI
AI has become one of the hardest-working employees: it does not require breaks and does not draw a salary. This raises the question, “What happens when AI takes a human’s job?” Edvard Duka discusses three possibilities, framed as a chess metaphor, for the future of AI and jobs.
Stalemate: AI does not continue to grow as expected, so only minimal adjustments need to be made regarding AI and jobs.
Check: AI does grow larger, but the economy and job markets are able to adjust. There may be a short period of insecurity, but eventually the markets return to normal.
Checkmate: AI grows exponentially without the ability to control it. Jobs are lost, and the economy falls into disarray as governments are unable to adjust adequately [22].
It is hard to predict what role AI will play in job displacement, but there are a couple of indicators to look at. There have been many technological innovations in the past; take, for instance, the internet. The internet may have caused some people to lose jobs, but the internet job industry is now booming, replacing some of the jobs it may have taken. Many proponents of rapid AI development conclude that AI is no different from other technological innovations. Research has shown that companies that have implemented AI have also added three new job categories: trainers, sustainers, and explainers [23]. Trainers are responsible for training machine learning models. Explainers focus on relaying technological terms to the business-oriented individuals within a company. Sustainers ensure that AI is operating ethically and properly [23]; they help ensure that AI stays in check. Each of these jobs will have its own set of requirements and training.
Which state of the chess metaphor (stalemate, check, or checkmate) society reaches will not be known until it arrives. For that reason, there should be a failsafe in place for the checkmate scenario. All three scenarios have their own issues, but checkmate presents the most catastrophic outcome. The check scenario is the most widely accepted and offers the most beneficial view of human-AI interaction in the workforce.
Biased AI
Can a computer be biased, or can it be trained to be biased? Will human bias be injected into the AI? If individuals take a second to look across their community, it is evident that most people hold biases, some unintentional and others deliberate. This includes, but is not limited to, racism and discrimination. As discussed earlier, AI consists of many different branches; when studying bias and AI, machine learning often raises the largest concern. Machine learning works by taking large sets of data, feeding the data into a model, and then having the model predict or act upon that data. The data is collected by humans, which can result in uneven, underrepresented sampling [24]. Consider AI used to spot cancer: the techniques used by such an AI could work better for people with white skin than for people with darker skin. Another example of where biased AI would be detrimental is the hiring process. “Historical bias was discovered in an automated recruitment screening system at Amazon (discontinued early 2017) that discriminated against women—presumably because the company had a history of discriminating against women in the hiring process” [10]. This historical discrimination played into the automated recruitment system’s selection. One of the largest deterrents to using AI is the question of whether a biased human race can design an unbiased AI platform. Even when individuals try their best not to be biased, unintentional biases may remain.
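To make the data-driven nature of this concern concrete, the following sketch (a hypothetical illustration, not taken from the cited studies) trains a simple classifier on data in which one group is heavily underrepresented and follows a different underlying pattern. The group sizes, features, and model are invented for demonstration only; the point is that the model’s accuracy ends up noticeably worse for the minority group.

```python
# Hypothetical illustration of sampling bias in machine learning.
# Group A dominates the training data; group B is underrepresented and
# follows a different feature/label pattern, so the trained model
# serves group B noticeably worse.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n samples whose true decision boundary is offset by `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Heavily skewed training set: 5,000 samples from group A, 200 from group B.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(200, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=1.5)
print("accuracy for group A:", accuracy_score(ya_test, model.predict(Xa_test)))
print("accuracy for group B:", accuracy_score(yb_test, model.predict(Xb_test)))
```

Nothing in the model itself is “prejudiced”; the skew comes entirely from the data it was given, which is exactly the mechanism described above.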
Remote AI/Drones
Although drones are not strictly a form of artificial intelligence, they rely on similar technologies and raise similar ethical issues. The main focus of this section is drones used for military purposes. The military has used weaponized drones for many years now. Military personnel can sit at a secure base while safely flying a drone over enemy territory. From the remote command center, missiles can be deployed to attack foreign enemies. This has given rise to many discussions about what modern warfare should look like. Although purely fictional, the movie Eye In The Sky helps illustrate the tension between the benefits of using drones and the danger of stripping human compassion out of modern warfare. There are two levels at which to discuss military drones: controlled and artificially led.
Controlled
Currently, the military uses remote-controlled drones. These drones respond to a pilot’s input, but the pilot is not inside the drone; rather, the pilot may be thousands of miles away in a secure bunker. Most of these drones are used for surveillance, but there have been multiple drone strikes. The Bureau of Investigative Journalism reported that between 2010 and 2020 there were approximately:
- 14,040 confirmed strikes
- 8,858 - 16,901 total deaths
- 910 - 2,200 civilian deaths
- 283 - 454 children killed
The preceding numbers represent drone strikes in Pakistan, Afghanistan, Yemen, and Somalia [25]. For a controlled drone to act, the chain of command must give authorization; the ethical decision of whether to strike still rests with humans [26].
Artificially Led
As advances are made in artificial intelligence, drones are becoming more and more valuable. They allow countries to keep their troops safe while performing air strikes abroad, and more and more responsibility is handed to the computer systems directing them. In the future, these drones may be able to determine on their own whether to strike a certain area. That possibility runs up against the conclusion that “[a] democracy cannot delegate the right to kill to a machine” [26].
Drones present a promising future for modern militaries: they are able to attack the enemy while keeping military personnel safe. In the coming years, a code of ethics concerning drone usage will need to be established. Unless that ethical code is established and followed, every country that has drones has the potential to remotely attack its enemies. As AI is implemented into military drones, human decision making is removed from the process of killing other humans. Are drone cameras, AI decision making, and safety plans enough to safely and ethically continue developing military drones?
Autonomous Cars
Autonomous cars would use some of the same technology utilized by drones, minus the weaponry. Earlier, the benefits of autonomous/self-driving cars were discussed. Although there are many benefits, each benefit has a lingering ethical concern. One common scenario concerning autonomous cars asks: whom should the car choose to kill? The question itself is morbid, and the decision processing is no easier. Take the following scenario. There are three occupants inside an autonomous car and one pedestrian walking on a sidewalk. An oncoming vehicle hydroplanes and is about to crash into the autonomous car. The autonomous car can either continue on its projected path or swerve off the road. By staying on the projected path, the autonomous car will be hit, and its occupants will most likely be injured or killed. If the car veers off the road to avoid being struck, the pedestrian on the sidewalk will be killed. Neither outcome is favorable, but one of them must be chosen. Patrick Lin presents the following situation.
Imagine in some distant future, your autonomous car encounters this terrible choice: it must either swerve left and strike an eight-year old girl, or swerve right and strike an 80-year old grandmother. Given the car’s velocity, either victim would surely be killed on impact. If you do not swerve, both victims will be struck and killed; so there is good reason to think that you ought to swerve one way or another [27].
These scenarios must be pondered and debated to decide whether it is appropriate to put the fate of who survives in the hands of a machine. One online article ran under the title “Your Self-Driving Car Will Be Programmed to Kill You - Deal With It” [28]. Research shows that people want cars that will save the most people (for example, crashing into a car with one occupant rather than hitting a bus); however, the same people are not willing to be the occupants of the car that is sacrificed [28]. Will car manufacturers offer higher accident-prevention ratings to individuals who are willing to pay for them? Because of the large unknowns surrounding AI, there is little regulation of autonomous vehicles in this specific area.
To ensure that individuals are treated equally, regulations need to be established for scenarios like those previously mentioned. The IEEE is a professional association of technologists engaged in research and development. Its members commit themselves “to treat fairly all persons and to not engage in acts of discrimination based on race, religion, gender, disability, age, national origin, sexual orientation, gender identity, or gender expression” [29, 30]. Programming a vehicle to decide whom to kill in scenarios like those above would go against the fundamentals of the United States. A possible solution is to require human intervention in situations where an occupant or nearby individuals could be killed. If an algorithm is given the choice of whom to kill, its response will be consistent and predictable. Is predictable the correct option? Either way, the future of autonomous cars requires more discussion to produce a community of developers who value life regardless of who a person is.
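To see why an algorithmic choice is “consistent and predictable,” consider the following toy sketch. It is purely illustrative and not a real vehicle system; the Maneuver type, the casualty estimates, and the rule of minimizing expected casualties are all invented for demonstration.

```python
# Purely illustrative toy, not a real vehicle decision system.
# A hard-coded rule (here, "minimize expected casualties") always yields
# the same, predictable choice for the same inputs.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_casualties: float  # hypothetical estimate from perception/planning

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Deterministically pick the maneuver with the fewest expected casualties.
    Ties are broken by name, so the result never varies between runs."""
    return min(options, key=lambda m: (m.expected_casualties, m.name))

if __name__ == "__main__":
    options = [
        Maneuver("stay_in_lane", expected_casualties=3.0),        # occupants at risk
        Maneuver("swerve_to_sidewalk", expected_casualties=1.0),  # pedestrian at risk
    ]
    print("chosen maneuver:", choose_maneuver(options).name)  # same output every run
```

The point of the sketch is not that this rule is the right one, but that whatever rule is encoded will be applied identically every time; that predictability is exactly the property the preceding paragraph asks society to judge.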