Technology ethics (tech ethics) is a field of study that seeks to understand and resolve moral issues surrounding the development and practical application of mechanical and electronic technology. Tech ethics focuses on subjects such as the relationship between technology and human values and well-being, the conditions under which technological advances occur, and the social repercussions of those advances.
The John J. Reilly Center for Science, Technology and Values at the University of Notre Dame compiles an annual list of what it determines to be that year's top 10 ethical dilemmas in science and technology. In 2019, for example, the list included DNA testing, pet cloning, the collection of data generated by children and 3D-printed schematics for weapons. In the United Kingdom, the House of Lords Select Committee recommended that ethics be put at the center of artificial intelligence (AI) adoption to ensure cognitive technology is developed and used for the common good and the benefit of humanity.
Although developers are still in the early stages of AI adoption, it's important for enterprises to take ethical and responsible approaches when creating AI systems. To that end, the Future of Life Institute, a non-profit founded by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn and DeepMind research scientist Viktoriya Krakovna, helped establish a set of guiding principles for AI ethics, now referred to as the Asilomar AI Principles. Among other things, the Asilomar AI code of ethics mandates that:
- Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
- AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
- The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
Technology ethics problems and challenges
A common subject of technology ethics in the enterprise is the use of customer data on social media platforms. A company can collect information users put on a social media website to build a profile of each user, then use that profile to target advertising to them. The same information can also be sold and used to sway political opinions. This occurred in 2018, when it was revealed that the consulting firm Cambridge Analytica had collected information on around 50 million Facebook users. The revelation sparked a public uproar, with many arguing that it is wrong for third-party companies to collect and sell user data, and the scandal became a major news story that year.
In 2018, the European Union began enforcing a new regulation, the General Data Protection Regulation (GDPR). GDPR aims to keep businesses transparent and to expand the privacy rights of EU data subjects. Organizations outside the EU must comply with GDPR if they wish to be active in the EU market. Data rights included in the GDPR include:

- the right to have data erased upon request;
- the right to review the data an organization stores about them;
- the right to object to an organization's use of their data;
- the right to transfer their data to another provider; and
- the right to correct inaccurate information.
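To make these rights concrete, the sketch below (illustrative only, not legal guidance) shows how an application might map each GDPR data-subject right onto an operation over a user-data store. All names here, including `UserDataStore`, are hypothetical; a real system would also need audit logging, identity verification and retention policies.

```python
import json


class UserDataStore:
    """In-memory stand-in for a user-data backend (hypothetical example)."""

    def __init__(self):
        self._records = {}             # user_id -> dict of personal data
        self._processing_opt_out = set()

    def save(self, user_id, data):
        self._records[user_id] = dict(data)

    # Right of access: let users review the data stored about them.
    def access(self, user_id):
        return dict(self._records.get(user_id, {}))

    # Right to rectification: let users correct inaccurate information.
    def rectify(self, user_id, field, value):
        self._records[user_id][field] = value

    # Right to erasure: delete a user's data upon request.
    def erase(self, user_id):
        self._records.pop(user_id, None)
        self._processing_opt_out.discard(user_id)

    # Right to object: record that a user's data must not be processed.
    def object_to_processing(self, user_id):
        self._processing_opt_out.add(user_id)

    # Right to data portability: export data in a machine-readable format.
    def export(self, user_id):
        return json.dumps(self._records.get(user_id, {}), sort_keys=True)


store = UserDataStore()
store.save("u1", {"email": "old@example.com", "country": "DE"})
store.rectify("u1", "email", "new@example.com")
print(store.access("u1")["email"])   # new@example.com
store.erase("u1")
print(store.access("u1"))            # {}
```

The point of the sketch is that each legal right corresponds to a distinct, auditable operation: a request to erase or export data should map to a single well-defined code path rather than being handled ad hoc.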
The GDPR can be viewed in part as a pushback against the kind of data misuse exemplified by Cambridge Analytica. It also requires that organizations such as Facebook report data breaches to the relevant supervisory authority within 72 hours of becoming aware of them.