The Future of AI Can Be Kind: Strategies for Embedded Ethics in AI Education
Abstract
The field of Data Science has seen rapid growth over the past two decades, with high demand for people skilled in data analytics, programming, statistics, and the ability to visualize, predict from, and otherwise make sense of data. Alongside the rise of various artificial intelligence (AI) and machine learning (ML) applications, we have also witnessed egregious algorithmic biases and harms, from discriminatory model outputs to the reinforcement of normative ideals about beauty, gender, race, and class. These harms range from high-profile cases, such as the racial bias embedded in the COMPAS recidivism algorithm, to more insidious cases of algorithmic harm that compound over time with re-traumatizing effects (such as the mental health impacts of recommender systems, social media content organization and the struggle for visibility, and discriminatory content moderation of marginalized individuals [400, 401]). There are various strategies to combat and repair algorithmic harms, ranging from algorithmic audits and fairness metrics to AI ethics standards put forth by major institutions and tech companies. However, there is evidence that current Data Science curricula do not adequately prepare future practitioners to respond effectively to issues of algorithmic harm, especially the day-to-day issues practitioners are likely to face. Through a review of AI ethics standards and the literature, I devise a set of nine characterizations of effective AI ethics education: specific, prescriptivist, action-centered, relatable, empathetic, contextual, expansive, preventative, and integrated. The empirical work of this dissertation reveals the value of embedding ethical critique into technical machine learning instruction, demonstrating how teaching AI concepts through cases of algorithmic harm can boost both technical comprehension and ethical consideration [397, 398]. I demonstrate the value of drawing on real-world cases and experiences that students already have (such as hiring/admissions decisions, social media algorithms, or generative AI tools) to strengthen their learning of both technical and social-impact topics. I explore the relationship between personal relatability and experiential learning, demonstrating how to harness students' lived experiences to connect with cases of algorithmic harm and opportunities for repair. My preliminary work also reveals significant in-group favoritism, suggesting that students find AI errors more urgent when they personally relate to them. While this may prove beneficial for engaging underrepresented students in the classroom, it must be paired with empathy-building techniques for students who relate less to cases of algorithmic harm, as well as with trauma-informed pedagogical practice. My results also reveal an over-reliance on "life-or-death reasoning" in ethical decision-making, along with organizational and financial pressures that may prevent AI professionals from delaying the release of harmful software. This dissertation contributes several strategies to effectively prepare Data Scientists to consider both the technical and social aspects of their work, along with empirical results suggesting the benefits of embedded ethics throughout all areas of AI education.
Description
Thesis (Ph.D.)--University of Washington, 2024
