Article

Unmaking AI: A Framework for Critical Investigation

Details

Citation

Munn L, Magee L, Arora V & Khan AH (2025) Unmaking AI: A Framework for Critical Investigation. Critical AI, 3 (2). https://doi.org/10.1215/2834703x-12095973

Abstract
While generative AI image models are both powerful and problematic, public understanding of them is limited. In this essay, we provide a framework we call Unmaking AI for investigating and evaluating text-to-image models. The framework consists of three lenses: unmaking the ecosystem, which analyzes the values, structures, and incentives surrounding the model's production; unmaking the data, which analyzes the images and text the model draws on, with their attendant particularities and biases; and unmaking the output, which analyzes the model's generative results, revealing its logics through prompting, reflection, and iteration. We apply this framework to the AI image generator Stable Diffusion, providing a case study of the framework in practice. By supporting the work of critically investigating generative AI image models, “Unmaking AI” paves the way for more socially and politically attuned analyses of their impacts in the world.

Keywords
generative model; stable diffusion; digital methods; critical AI studies

Journal
Critical AI: Volume 3, Issue 2

Status: Published
Publication date: 31/10/2025
Publication date online: 31/10/2025
Date accepted by journal: 15/07/2025
Publisher: Duke University Press
ISSN: 2834-703X
eISSN: 2834-703X

People (1)

Dr Vanicka Arora

Lecturer in Heritage, History