UC CHaT Conference on "Science and Technology in the Anthropocene"
April 2–3, 2026 · Annie Laws Room, 407 Teachers College, 2600 Clifton Ave, Cincinnati, OH 45221
Artificial intelligence (AI) is increasingly shaping not only scientific and technological practices but also the environmental conditions under which human life unfolds. This conference brings together philosophers, cognitive scientists, social scientists, artists, and other humanists to examine how human experience should be understood in a world populated by non-biological intelligent systems whose development and operation intensify existing environmental pressures. The aim is to foster broadly accessible, non-technical discussion across the humanities on the ethical, social, and epistemological challenges posed by AI in the context of the climate crisis. Topics include algorithmic bias and justice, the environmental costs of AI, embodied and non-biological forms of intelligence, AI in art and perception, governance and responsibility in environmentally constrained contexts, participatory approaches to technological futures, and the implications of AI for the sciences.
Schedule
Thursday, April 2
| Time | Session |
|---|---|
| 9:00 | Registration + Coffee |
| 9:30 | Jelle Bruineberg — “Embodied cognition meets the attention economy” |
| 10:30 | Coffee Break |
| 11:00 | Jack Holdier — “The Actual Lovelace Objection: On Epistemic Trash and AI Surrogates” |
| 11:30 | Anna Pederneschi — “Peer Reviewed by AI” |
| 12:00 | Lunch + Poster Session |
| 1:30 | Jay McKinney — “AI and Monumental Ignorance” |
| 2:00 | Tim Elmo Feiten + Collin Lucken — “Selective Participation on the Mesoscale” |
| 2:30 | Shannon Proksch — “Organic Oranges and Velvet Sunsets: Human Music in the Age of AI” |
| 3:00 | Shannon McMullen — “In the Air and in the Weeds: Gardens at the Intersection of New Media Art, Nature and AI” |
| 4:00 | Afternoon Break |
| 4:30 | Reception — Tabula Rasa Gallery, College of Design, Architecture, Art, and Planning |
Friday, April 3
| Time | Session |
|---|---|
| 9:00 | Registration + Coffee |
| 9:30 | Shen-Yi Liao — “Critical 4E Cognitive Science” |
| 10:30 | Coffee Break |
| 11:00 | Ella Zhang — “AI as the New ‘Other’” |
| 11:30 | Renkai Ma — “Aligning Technology Governance with Lived Experience” |
| 12:00 | Lunch + Poster Session |
| 1:30 | Rebecca Korf — “Can feminist epistemology help manage value influence in AI?” |
| 2:00 | Sina Fazelpour — “Aspirational Affordances of AI” |
| 3:00 | Break |
| 3:15 | Zachary Peck — “Will AI kill us all?: A reflection on the implications of integrating AI into higher education” |
| 3:45 | Robin Zebrowski — “From Mad Science to Furious Hype: Doing AI in a Generative (R)Age” |
| 4:45 | Closing Remarks |
Keynote Abstracts
Jelle Bruineberg — "Embodied cognition meets the attention economy"
Herbert Simon's credo that "a wealth of information creates a poverty of attention" is generally taken to be foundational for the attention economy, the economic system in which human attention is the scarce commodity. My aims in this talk are twofold: first, I want to show that the attention economy rests on shaky conceptual foundations that are untenable in light of the contemporary cognitive science of attention. Second, I draw on principles from embodied cognition to provide an alternative diagnosis of our current situation. I will argue that the problem is not so much an abundance of information as an abundance of always-available action possibilities, which makes selecting the relevant course of action more difficult.
Shannon McMullen — "In the Air and in the Weeds: Gardens at the Intersection of New Media Art, Nature and AI"
In this artist talk, McMullen will discuss her creative engagements with machine learning algorithms through her collaborative art practice with Fabian Winkler. From looking at flowers as a machine, to sun-seeking plant-machine hybrids; from a delicate weed-pulling robot to birds that call you from their gardens on a telephone, Studio McMullen_Winkler have been creating unconventional new media gardens since 2007. In these gardens, the machine is the plant, the gardener, the observer, or, as in their most recent project Ecophones, the listener. Ecophones allows visitors to connect with birds across the globe through modified rotary telephones linked to web-based sound streams. The artwork focuses on the acoustic presence/absence of bird song as an indication of environmental health, cultural imagination, and avian listening in the age of AI. By enabling visitors to listen remotely to other species and to the world of living beings, it asks questions about the epistemology and technologies of listening: from tape recorders and record players to AI and cell phones, how has the evolution of acoustic technologies shaped our sense of listening to birdsong, and with it an understanding of ecological rhythms?
Shen-Yi Liao — "Critical 4E Cognitive Science"
According to 4E cognitive science, our cognitive capacities depend on, and have been transformed by, the environments we have made. Most early work in 4E cognitive science tends to focus on the upside of agent-environment interactions: they have made us smarter and allowed us to do more, despite the same, limited brain power. However, the recent critical turn focuses on the downsides of agent-environment interactions. Critical 4E cognitive science, as we call this emerging research program, adopts 4E frameworks and concepts but with a critical lens: questioning their hidden assumptions and leveraging their theoretical resources for social criticism. In particular, we distinguish two kinds of normative critique that bring out the downsides of agent-environment interactions. Prudential critiques focus on how agent-environment interactions can shape an agent's cognition to undermine their own interests. Political critiques focus on how agent-environment interactions can shape an agent's cognition to reinforce and exacerbate unjust social forces.
Robin Zebrowski — "From Mad Science to Furious Hype: Doing AI in a Generative (R)Age"
AI was once a (fairly) respectable academic discipline and project, concerned with ontological questions about the nature of minds and practical questions about how to make machines do things that we consider smart when humans do them. But the dawn of Generative AI (GenAI) has totally upended the academic discipline and enabled multiple ongoing ethical crises. GenAI should probably be regulated in the same way as weapons of mass destruction and harmful drugs. This talk asks how we should think about the contemporary AI movement, largely dominated by GenAI. I offer a brief history of AI (and AI hype) in four methodological waves to situate this moment historically. I also identify ten related-but-distinct dimensions of ethical failure in relation to GenAI specifically, and I demonstrate how these systems work in a technical sense, which guarantees that most of these problems cannot be solved with faster tech or better training sets. Indeed, many of the troubling ethical failures are central effects of how these systems work. Ultimately, I argue that educational institutions have the biggest role to play in AI's future, but not through uncritical adoption of these technologies and problematic partnerships with the tech companies invested in profiting from such systems. People must learn how these systems work, understand how we are complicit across multiple ethical failures, and work together to reclaim AI from the harmful ways it is being deployed.
Sina Fazelpour — "Aspirational Affordances of AI"
As generative artificial intelligence systems permeate processes of cultural and epistemic production, there are growing concerns about how their outputs may confine individuals and groups to static or restricted narratives about who or what they could be. How should we understand the psychological and societal stakes of AI's role in shaping our aspirations? In this talk, I hope to advance the discourse surrounding these concerns by making three contributions. First, I introduce the concept of aspirational affordance to describe how culturally shared interpretive resources can shape individual cognition, and in particular exercises of practical imagination. I show how this concept can ground productive evaluations of the risks of AI-enabled representations and narratives. Second, I provide three reasons for scrutinizing AI's influence on aspirational affordances: AI's influence is potentially more potent, but less public, than traditional sources; AI's influence is not simply incremental but ecological, transforming the entire landscape of cultural and epistemic practices that traditionally shaped aspirational affordances; and AI's influence is highly concentrated, with a few corporate-controlled systems mediating a growing portion of aspirational possibilities. Third, to support such scrutiny of AI's influence, I introduce the concept of aspirational harm, which, in the context of AI systems, arises when AI-enabled aspirational affordances distort or diminish available interpretive resources in ways that undermine individuals' ability to imagine relevant practical possibilities and alternative futures. Through case studies, I illustrate how aspirational harms extend the existing discourse on AI-inflicted harms beyond representational and allocative harms, warranting separate attention.