The Ethics of AI
By Mariana Ranzahuer
September 16, 2020
The ethics of artificial intelligence (AI) are part of the broader ethics of technology, specific to robots and artificially intelligent entities. They’re usually divided into two main categories: 1) the concern with the moral behavior of humans as they design, use, and treat artificially intelligent beings; and 2) the concern with the moral behavior of machines that use AI, also called artificially intelligent agents.
As more and more countries enter the race to develop AI and release national strategies for research and development (R&D) and education, the ethics of AI must be considered. It’s imperative that developers and innovators be aware of the risks and responsibilities of embracing this technology.
Below, we consider four pressing issues in the ethics of AI, as well as organizations that have pledged to address these concerns.
Job displacement and wealth inequality
Automation, or labor-saving technology, performs processes and procedures with minimal human assistance or intervention, and it is widely regarded as the future of work. Automating jobs creates opportunities for people to take on more complex functions, enabling the transition from physical labor to the more cognitive and analytical work of the strategic and administrative jobs that characterize our globalized society. Although that sounds beneficial, job displacement and wealth inequality are among the primary arguments against AI.
For example, the trucking industry employs millions of people in the United States alone. Tesla CEO Elon Musk has famously predicted that self-driving trucks could become widely available within a decade, which would mean job losses for many of these workers. At the same time, the lower risk of accidents makes automation seem like the ethical choice. The same tension arises when we consider administrative tasks, food preparation, packaging, and many other jobs in developed countries.
According to a McKinsey Global Institute report, as many as 800 million workers worldwide could be displaced by automation by 2030. So the question becomes, should we move full steam ahead with developing and integrating AI into society, even if it means that people will lose their jobs, and possibly their livelihoods? Some would argue that AI will create more jobs than it eliminates, since people will be needed to build and manage these systems.
This issue is tied to another consideration in the ethics of AI: wealth inequality. Most modern economic systems depend on workers producing a product or service in exchange for an hourly wage. But machines aren’t paid by the hour; once built, they can work continuously for relatively low ongoing costs. By using AI, companies can shrink their human workforce, meaning salaries go to fewer people, and the individuals who own AI-driven companies capture the majority of the profits.
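To make that reasoning concrete, here is a back-of-the-envelope sketch. Every figure in it (the wage, the headcount, the machine costs) is a hypothetical assumption chosen only to show how recurring wages compare with a one-time purchase plus low upkeep, with the difference accruing to the machine’s owner:

```python
# Back-of-the-envelope comparison of human labor vs. an automated system.
# All figures are hypothetical and for illustration only.

HOURLY_WAGE = 25.0           # hypothetical hourly wage in dollars
HOURS_PER_YEAR = 2_000       # roughly full time (40 h/week * 50 weeks)
WORKERS_REPLACED = 10        # hypothetical number of roles automated

MACHINE_UPFRONT_COST = 200_000.0  # hypothetical purchase and integration cost
MACHINE_ANNUAL_UPKEEP = 30_000.0  # hypothetical power, maintenance, licenses

annual_labor_cost = HOURLY_WAGE * HOURS_PER_YEAR * WORKERS_REPLACED

def cumulative_machine_cost(years: int) -> float:
    """Total machine cost after `years` of operation."""
    return MACHINE_UPFRONT_COST + MACHINE_ANNUAL_UPKEEP * years

for year in range(1, 6):
    labor = annual_labor_cost * year
    machine = cumulative_machine_cost(year)
    print(f"Year {year}: labor ${labor:,.0f} vs. machine ${machine:,.0f}")
```

Under these made-up numbers the machine undercuts payroll within its first year; the point is not the specific figures but that wages recur while machine costs largely don’t.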
Mistakes
Intelligence, whether human or artificial, comes from learning. Machine learning, the application of AI that gives systems the ability to learn and improve from experience without being explicitly programmed, is based in part on this concept. But machine learning takes time to become useful. If trained well and fed good data, AI can perform well. Conversely, if fed bad data or built with internal programming errors, machines can be harmful. IBM’s Watson for Oncology was canceled after making unsafe and incorrect treatment recommendations to cancer patients; the engineers had trained the software on hypothetical cancer patients instead of real patient data.
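A minimal sketch of that failure mode, using entirely synthetic data: a model that looks accurate on the distribution it was trained on can fall apart when the real-world distribution differs. The toy features, labels, and shift below are illustrative assumptions, not a model of any real clinical system.

```python
# Sketch of the Watson lesson: a model trained and validated on synthetic,
# unrepresentative data can look accurate yet fail on real cases.
# All data here is randomly generated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cases(n, shift):
    """Generate toy 'patient' features; `shift` moves the distribution."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) > 5 * shift).astype(int)  # toy outcome rule
    return X, y

# Train and validate on 'hypothetical' cases from one distribution...
X_train, y_train = make_cases(1_000, shift=0.0)
model = LogisticRegression().fit(X_train, y_train)
print("accuracy on hypothetical cases:", model.score(*make_cases(500, shift=0.0)))

# ...then deploy on 'real' cases drawn from a shifted distribution.
print("accuracy on shifted 'real' cases:", model.score(*make_cases(500, shift=1.5)))
```

The first score is high and the second hovers near chance: the model learned the synthetic world, not the real one.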
If we’re to rely on AI to usher in a new era of labor, security, and efficiency, we need to ensure that machines perform as planned. AI is imperfect and makes mistakes. Ethical questions arise when we weigh whether it makes more or fewer mistakes than humans, how much harm human error already causes, and whether it’s better or worse when AI makes the same mistakes.
Bias
It’s a common misconception that AI is infallible, precise, and neutral. Although capable of exceptional speed and processing capacity, machines are still designed by humans, who can be biased and judgmental. As AI becomes increasingly embedded in facial and voice recognition systems, these systems have real implications for businesses and people, and they’re vulnerable to the biases and errors of their human makers.
The data used to train these AI systems can carry biases too. An MIT Media Lab study measuring how facial recognition technology performs across races and genders found that algorithms developed by Microsoft, IBM, and Face++ were far more likely to misidentify the gender of Black women than that of white men. Similarly, algorithmic tools used to identify suspects and predict crime before it happens can be marred by bias, as when software used to predict future criminals showed bias against Black people.
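The kind of audit behind such findings is straightforward to sketch: instead of reporting a single overall accuracy number, break errors out by demographic group. The records below are toy data invented for illustration, not results from any real system or study.

```python
# A minimal sketch of a disaggregated accuracy audit: overall accuracy can
# hide large per-group gaps. All records below are invented toy data.
from collections import defaultdict

# (group, true_label, predicted_label) for a toy evaluation set
records = [
    ("group_a", "female", "female"), ("group_a", "female", "male"),
    ("group_a", "female", "male"),   ("group_b", "male", "male"),
    ("group_b", "male", "male"),     ("group_b", "male", "male"),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    errors[group] += truth != pred

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} over {totals[group]} samples")
```

Here the aggregate error rate is 33%, which sounds tolerable until the breakdown shows one group at 67% and the other at 0%.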
Biases can creep into algorithms in several ways. Machine bias is a growing concern in the ethics of AI and will likely become more significant as critical fields, like medicine and law, adopt the technology. Companies like IBM and Google have started researching and addressing bias, driving solutions—like creating documentation for the data used to train AI systems—to identify and eliminate bias as early as possible.
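One such documentation approach can be sketched as a simple structured record attached to every training dataset, in the spirit of research proposals like datasheets for datasets. The field names, values, and coverage thresholds below are illustrative assumptions, not any company’s actual format:

```python
# A minimal sketch of dataset documentation: record provenance, intended use,
# and known gaps up front so bias can be flagged before training begins.
# All field names and values here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class DatasetDatasheet:
    name: str
    collected_by: str
    collection_method: str                  # e.g. "scraped", "opt-in survey"
    intended_use: str
    demographic_coverage: dict[str, float]  # group -> share of samples
    known_gaps: list[str] = field(default_factory=list)

sheet = DatasetDatasheet(
    name="face-images-v1",
    collected_by="example research team",
    collection_method="public photos, manually labeled",
    intended_use="gender classification research only",
    demographic_coverage={  # illustrative shares, not real statistics
        "lighter-skinned men": 0.53,
        "darker-skinned women": 0.04,
    },
    known_gaps=["darker-skinned women badly underrepresented"],
)

# A simple automated check a training pipeline could run before fitting.
for group, share in sheet.demographic_coverage.items():
    if share < 0.10:
        print(f"warning: {group!r} is only {share:.0%} of {sheet.name}")
```

The value of such a record is less the code than the habit: underrepresentation becomes a visible, checkable property of the data rather than a surprise discovered after deployment.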
Singularity
Singularity is one of the most interesting issues in the ethics of AI. It refers to a hypothetical point at which technological growth becomes uncontrollable and irreversible, resulting in unintended consequences, most notably human beings no longer being the most intelligent beings on Earth. Although it sounds like something out of a sci-fi movie, some fear that the pace of technological innovation makes it a real possibility. If AI can make mistakes, it can also go rogue, creating unintended consequences in the pursuit of seemingly harmless goals. This Pandora’s box scenario, in which the technology evolves to become self-aware and slips out of our control, brings the ethics of AI into play: legitimate concerns need to be addressed now to prevent such outcomes.
AI ethics organizations
Several organizations and corporations have recognized the significance of the ethics of AI and have established initiatives to address ethical issues. Major tech companies Amazon, Google, Facebook, IBM, Microsoft, and Apple established a nonprofit partnership, the Partnership on AI, to develop best practices for AI technologies, advance the public’s understanding of AI, and serve as an open platform for discussion about AI.
Other organizations, like the Future of Humanity Institute, the Center for Human-Compatible AI, and the Future of Life Institute, focus on aligning AI goal systems with human values.
Furthermore, there are now multiple efforts by national and transnational governments, as well as nongovernmental organizations (NGOs), to help ensure that AI is ethically applied:
- The European Commission has a High-Level Expert Group on Artificial Intelligence that makes recommendations on future policy development and on ethical, legal, and societal issues related to AI.
- The European Commission Digital Policy Development and Coordination unit published a white paper on excellence and trust in AI innovation.
- In the United States, the Obama administration created a Roadmap for AI Policy and released two prominent white papers on the future and impact of AI.
- The Center for Security and Emerging Technology advises US policymakers on the security implications of emerging technologies like AI.
In conclusion
AI innovation seems boundless at times, and its increasingly profound impact on business and society compounds the ethical issues that come with widespread adoption. The key will be to keep these issues in mind and to analyze the broader implications for public safety, security, and quality of life.