Zhao, Bo; Peng, Lizhi
2024-10-16
Peng_washington_0250O_27074.pdf
https://hdl.handle.net/1773/52527
Thesis (Master's)--University of Washington, 2024

In the wake of ChatGPT's release in late 2022, we have witnessed the launch of an "arms race" in generative AI technology, as large language models (LLMs) entered a phase of rapid development and advancement, with major tech companies promising revolutionary transformations of workplaces and everyday life. As contestants and major power players like OpenAI, Google, Anthropic, and Meta enter the game, many ethical concerns have been raised about whether this technology will truly mark the beginning of the next technological revolution, and if so, whether it will benefit human society as a whole or serve only the interests of a few. At the same time, many AI scientists, researchers, and industry leaders have claimed that how we handle this new technology will be critical to the wellbeing, or even survival, of humanity (citation). As the industry chases the promised sparks of artificial general intelligence (AGI), the hope that truly autonomous super AIs capable of outsmarting the human brain at generalized tasks can one day be achieved, discussions continue around AI alignment, the checks and balances that will keep AI acting and behaving according to human values and principles. Yet even as we enthuse over the transformations this technology may bring to our society, it is important to point out that true alignment requires input from people of all backgrounds and walks of life. It would be concerning to leave the definition of "human values" in the hands of a few leaders behind closed doors.
This thesis consists of papers exploring the potential ethical risks and concerns that AI technology raises regarding accessibility and equity, both within the AI industry and in broader society, centering on two major questions: "Who has access to AIs?" and "Who builds the AIs?" In the first paper, I examine current tangible and intangible barriers to accessing AI subscriptions, and how an AI's performance differs across linguistic and geographical contexts, in order to paint a bigger picture of the network of unfair representations behind the training, deployment, and access of commercial AIs. In the second paper, I build on these theoretical foundations to connect current discussions of AI training and alignment ethics with interdisciplinary views and critiques of technology and society. I propose a framework that conceptualizes large language models as models of our society, interpreting AI as the reproduction and embodiment of the intricate power dynamics and inequalities fueled by social discourses and media representation. In this model, which I refer to as "layers of realities", I explore the complex relationship between the physical world, the internet and digital media, and the world of large language models, in order to highlight the urgency of treating AI bias not as a standalone issue within the industry, but as a sign alerting us to address problems in the physical and digital worlds, such as unequal technology access and the unfair mis- and under-representation of marginalized identities in an increasingly digitized world.

application/pdf
en-US
CC BY
Keywords: AI Ethics; ChatGPT; Digital Geography; Digital Geopolitics; Global Inequality; Technology and Society
Subject: Geography
Artificial Divides: Global AI Access Disparities and Constructions of New Digital Realities
Thesis