12 Overview of Ethics in Artificial Intelligence

Delores James, Ph.D.

Overview

The debate over the ethics of artificial intelligence (AI) has been raging since the 1960s.  The discussion at that time focused primarily on the moral and technical consequences of automation (Samuel, 1960).  Over the past ten years, several research institutions, tech companies, and professional organizations have issued guidelines for ethical AI.

However, there is no universal agreement on what constitutes ethical AI, especially since AI is not limited to any single discipline or industry (Jobin, Ienca, & Vayena, 2019). Furthermore, these guidelines need to be developed and implemented through a sociotechnical approach, one that considers the social, legal, and ethical impact of AI on humanity and the social good (Jobin, Ienca, & Vayena, 2019; UNESCO, 2021).

Definitions

Ethics—a system of moral standards that aims to discern right from wrong.

Sociotechnical—the study of the intersection of society and technology.

 

