ICML 2026 Workshop

Culture × AI

Evaluating AI as a Cultural Technology

July 10–11, 2026  ·  Seoul, South Korea

A positive vision for culture in AI

Generative AI is increasingly recognised as a social and cultural technology. These systems process enormous amounts of social data to produce novel cultural artefacts, such as text, images, and videos. While much progress has been made in evaluating cultural aspects of AI, this work has tended to focus on harm mitigation: identifying and preventing moral violations, the spread of bias and misinformation, and deviation from human values. A more positive or constructive notion of culture in AI remains underdeveloped. How can we evaluate cultural aspects of AI technology in a way that not only seeks to avoid failure, but offers a more robust definition of success?

This workshop covers current approaches to evaluating cultural aspects of generative AI. Our primary focus is on work that aims to bring ideas and techniques from the humanities, arts, and qualitative social sciences upstream in AI development. We'll bring together a range of work at the intersection of culture and AI, with the goal of not just studying the effects of AI after deployment but also actively shaping the design of the technology itself. The workshop will give special focus to research that seeks to articulate a positive vision for cultural AI.

A key theme in this workshop will be what we call Interpretive Technologies: approaches that take seriously the role of interpretive methods in understanding and improving AI systems. In many ways, the outputs of today's AI models resemble the kinds of cultural artefacts traditionally studied by humanists. This means AI is not just a tool that produces culture; it is itself engaged in acts of interpretation. We invite submissions that take this seriously: work that asks how humanistic traditions of meaning-making, contextual sensitivity, and aesthetic judgment can be embedded in AI design, not just applied after the fact.

Crucially, we welcome work that grapples directly with the tensions this research raises. Building AI systems that can "do" culture more effectively is not an unambiguous good: it poses real risks for artists, creative practitioners, and the broader cultural ecosystem. Ultimately, a positive vision for cultural AI will have to reckon with this. We encourage submissions that address these tensions in any form.

We invite submissions across disciplines

We welcome submissions from all researchers working at the intersection of AI and culture. While a wide range of Culture × AI research will be featured, priority will be given to work that takes a position on how cultural AI can achieve a "positive" outcome beyond avoiding negative ones. For example: rather than developing the capacity to pursue pluralistic outcomes, a project could show how a specific cultural value could be supported by or reflected in an AI system — particularly where this extends rather than replaces human agency. We encourage submissions that do not fit neatly into mainstream AI discourse, as well as those that have potential for concrete, tangible societal impacts in application areas like sustainability and healthcare.

Arts & Humanities

  • Frameworks for cultural AI
  • Definitions of "successful" performance in cultural tasks
  • Interpretive and hermeneutic approaches to AI evaluation

Machine Learning

  • Methods for culturally-situated ML
  • Examples of "successful" performance in cultural tasks
  • Evaluation metrics and datasets for cultural AI

Human-Computer Interaction

  • Beyond pluralism: picking a cultural value and designing for it
  • Human-AI ensembles to support meaningful human experience
  • Augmenting/preserving human creativity, agency, etc.

Social & Cognitive Sciences

  • Qualitative assessment of cultural impact of AI
  • Democratic processes for culturally inclusive AI
  • Case studies across domains and communities

Format: 3–6 pages of content (no priority given to longer papers), with unlimited pages for references and appendices. Please submit using the ICML 2026 template.

Anonymity: Papers may be submitted with author names or anonymously.

Types welcome: Submissions of preliminary results and works-in-progress are encouraged. Position papers are welcome, as long as they feature substantive engagement with the humanities or social sciences.

Submission deadline: 1 May 2026

Acceptance notification: 14 May 2026

Submission link available soon.

The Team

Cody Kommers

Lead Organizer

Postdoc in Interpretive Technologies
The Alan Turing Institute

culturexai.workshop@gmail.com

Drew Hemment

Co-lead Organizer

Theme Lead, Interpretive Technologies, The Alan Turing Institute
Professor of Data Arts & Society, University of Edinburgh

Canfer Akbulut

Organizer

Senior Research Scientist
Google DeepMind

Meredith Martin

Organizer

Professor, Institute Director for Digital Humanities
Princeton University

Adam Sobey

Organizer

Mission Director for Sustainability, The Alan Turing Institute
Professor in AI and Engineering, University of Southampton

Matt Wilkens

Organizer

Associate Professor, Information Science & Digital Humanities
Cornell University