“I Just Don’t Want My Work Being Fed Into The AI Blender”: Queer Artists on Refusing and Resisting Generative AI
"I Just Don't Want My Work Being Fed Into The AI Blender": Queer Artists on Refusing and Resisting Generative AI
HCI Today summarized the key points
- This article is based on research examining how generative AI has affected queer artists' creative practices and their communities.
- The research team interviewed 15 queer artists in the United States and found that they view art as relational work: caring for one another and leaving lasting traces in memory.
- Participants regarded generative AI that trains on works without consent as art theft and labor exploitation, and resisted by refusing to use it and by reducing their online exposure.
- They strongly opposed AI replacing artists, though they acknowledged, in limited ways, some experimental possibilities, such as surreal or strange-looking imagery.
- The article argues that protecting queer art requires designing different creative environments grounded in consent, slowness, and mutual care, not simply building better AI.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article reframes the HCI question from ‘what GenAI can produce well’ to ‘why people refuse it, and how they manage or shape it.’ In particular, behaviors in art communities, such as non-use, refusal, source verification, and labeling, are shown not as mere preferences but as matters of interface and institutional design. For UX practitioners and researchers, it offers concrete clues for designing trust, transparency, and pathways for meaningful intervention.
CIT's Commentary
The most important point in this piece is that what’s needed isn’t a way to make people use AI better, but designs that, in some contexts, allow and support not using it at all. The anxiety artists feel isn’t about a lack of performance; it stems from opacity: the inability to know where and how their work is being consumed. That’s why intervention paths that actually work, such as training-data tracking, refusal settings applied by default, and context-dependent labeling, matter far more than a simple opt-out button. However, as these mechanisms multiply, so does the burden of proving that a real human made the work; that cost, too, must be accounted for in the design. Domestic platforms and creative ecosystems likewise need an approach that considers creators’ relational contexts and revenue structures together, rather than simply importing the rules of global services.
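To make “refusal settings applied by default” concrete, here is a minimal sketch, assuming a hypothetical TypeScript/Express platform server. It emits the de-facto `noai`/`noimageai` opt-out directives (signals some crawlers and dataset tools honor voluntarily) and serves a robots.txt that blocks known AI training crawlers such as GPTBot and CCBot. The `creatorAllowsTraining` lookup is an invented placeholder for a real per-creator setting; nothing here enforces compliance on the crawler’s side.

```typescript
// Sketch: a platform where every page opts content out of AI training by
// default, unless the creator has explicitly opted in.
import express, { Request, Response, NextFunction } from "express";

const app = express();

// Hypothetical per-creator setting; on a real platform this would be read
// from the creator's profile. The default answer is "no" (refusal first).
const creatorAllowsTraining = (_req: Request): boolean => false;

app.use((req: Request, res: Response, next: NextFunction) => {
  if (!creatorAllowsTraining(req)) {
    // De-facto opt-out directives for text and image training.
    // Honoring them is voluntary for crawlers; this only states the refusal.
    res.setHeader("X-Robots-Tag", "noai, noimageai");
  }
  next();
});

// robots.txt that blocks known AI training crawlers by default.
app.get("/robots.txt", (_req: Request, res: Response) => {
  res.type("text/plain").send(
    [
      "User-agent: GPTBot", // OpenAI's training crawler
      "Disallow: /",
      "",
      "User-agent: CCBot", // Common Crawl's crawler
      "Disallow: /",
    ].join("\n")
  );
});

app.listen(3000);
```

The design choice worth noting is the direction of the default: opting out requires no action from the creator, which is one way a platform can treat non-use as a first-class setting rather than an ‘inconvenient exception.’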
Questions to Consider While Reading
- Q. What might an interface look like that lets authors and creators easily confirm whether their work was used for training?
- Q. For an AI-use labeling approach to be actually helpful, what information should be shown, to whom, and at what point?
- Q. What would platforms need to change so that non-use or refusal reads as a legitimate choice rather than an ‘inconvenient exception’?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.