1. Introduction
In late 2022, the university became newly animated by fear. A familiar refrain emerged: AI will end writing; AI will destroy thought; AI will topple the university itself. The release of generative text technologies (particularly OpenAI’s ChatGPT in November 2022) produced a wave of concern and moral drama that exceeded the matter of pedagogy: “More than 60 percent of instructors feared that generative AI would undermine critical thinking and originality in student work.”[1] Faculty meetings quickly turned into spaces where educators debated how to “preserve learning” and “protect students” from what many perceived as the encroachment of a terrible new technology. The specter of AI became a mirror through which academia confronted itself, a confrontation that scholars have described as a “moral crisis disguised as a technological one.”[2]
This article approaches that scene ethnographically. My aim is not to determine whether AI is dangerous or liberatory (these questions have already been examined in other circles) but to ask why the question itself became a site of moral panic. Recent discourse analyses show that academic discussions of AI often shift rapidly from pragmatic issues of assessment and authorship to broader expressions of “ethical anxiety.”[3] My concern has thus shifted from the technology itself to the rituals of response that surround it: gestures of refusal, care, and loss, all of which proliferated across the university. In this sense, I read “AI outrage” as a social formation, or better, an emergent moral economy through which intellectuals negotiate identity and legitimacy under conditions of obvious uncertainty.
This uncertainty is not incidental. It has become a “constitutive feature of contemporary academic life, where the authority of knowledge is increasingly bound up with affect and emotion.”[4] Drawing on Mary Douglas’s theory of pollution and purity,[5] we might say that AI’s perceived contamination of authorship and its violation of humanist boundaries provoked a set of purification rites, including declarations of moral integrity, institutional bans, and elegiac essays on “the end of learning as we know it.” Beneath these surface reactions lies a deeper drama of legitimacy that this article investigates.
It makes sense that for scholars, whose authority depends on the claim to humanistic discernment, AI’s uncanny ability to mimic thought destabilizes the logic of intellectual labor itself. Outrage, I suggest, operates as a system of repair—a performative means of re-stabilizing the boundaries of professional identity amid broader structural uncertainty about the university’s role in public (especially American) life.
This crisis belongs to a longer genealogy of what Raymond Williams (1977) termed structures of feeling in liberal modernity: recurrent moments when technological acceleration forces a reckoning with the educated subject’s self-image. Historically, the academic has spoken for reason in society; since 2022, the tools used to do so appear increasingly replicable by code. And yet this historical tension between the human and technology reappears with each new medium that threatens to dissolve this same demarcation. As media historian Ted Underwood observes, “The shock of machine learning in the humanities lies not in its novelty but in the mirror it holds up to the reproducibility of our own methods.”[6] In this context, outrage becomes both affect and argument: a mode of asserting ethical difference from the machine and, by implication, from the techno-capitalist systems that underwrite it. I position that outrage ethnographically, examining how liberal academia sustains its moral identity through ritualized expressions of vulnerability and how fear and care become instruments for reasserting the value of the human when it appears most unstable.
If moral panics surrounding technology are not new, what distinguishes the present moment is its tone of virtuous despair. In higher education, responses to AI have frequently been articulated as acts of moral care—care for students, for human creativity, for “the integrity of learning.” Policy analyses and surveys consistently report that educators frame their resistance to AI in ethical rather than instrumental terms, describing their stance as “protecting the human relationship at the core of teaching and learning.”[7] Yet this language of care often conceals a more complicated affective ritual. Academic opposition to automation tends to express status anxiety as much as pedagogical principle, which could be read as reflecting apprehension about the erosion of expertise.[8] The scholar’s opposition to AI, I argue, functions symbolically as a defense of the very distinction—between intellectual labor and algorithmic production—upon which the academy’s prestige and moral purpose have always depended. I begin by examining the moral register of AI resistance in higher education, tracing how outrage operates as an ethical performance. I then consider the paradox of resistance and inevitability that defines much of our profession’s engagement with technology. Finally, I conclude by exploring how AI discourse becomes a stage on which the contradictions of humanism are re-enacted ad infinitum. These contradictions foreground the desire to preserve the sanctity of “the human” even as that category’s fragilities are increasingly exposed.
Throughout, I approach outrage as practice. Research on affect in institutional settings shows that emotional expression often serves an organizing function, enabling communities to sustain a sense of purpose during periods of uncertainty by reaffirming shared norms.[9] In this sense, outrage can be understood as a technique of moral repair that serves to restore coherence when institutions seemingly falter. Within the contemporary university, it also constitutes a form of labor through which faculty reassert what remains of a seemingly unstable moral hierarchy between human and machine. The alignment of emotions within communities serves to establish and demarcate licit from illicit modes of engagement, and by extension, discourses that can be heard from those that must be summarily rejected. As Sara Ahmed reminds us, “Emotions do things—they align individuals with communities.” Outrage thus becomes a social practice that binds academics together in a shared narrative of ethical distinction and loss, and should not be dismissed as irrational fear.
2. Resistance
The moral life of the university reveals itself most clearly in moments of disruption. When something new enters the pedagogical field (especially a technology that unsettles familiar hierarchies) the first reaction is rarely calm. Instead, what arises is a collective drama to negotiate the boundaries of accepted discourse. In the context of artificial intelligence, this boundary work alternates between purity and danger. Scholars of higher education have long noted that academic communities respond to technological change through “rituals of reaffirmation”, in which the defense of intellectual integrity becomes a way of reaffirming our professional identity.[10] These moments, as educational anthropologist Chris Shore argues, expose “the moral underpinnings of institutional life—the affective and normative structures through which universities make sense of themselves.”[11]
The conversation may begin with policy or pedagogy, but it quickly turns toward older moral sentiments: the impulse to restore order against perceived contamination, to redraw the line between the human and the technology, and ultimately, to demarcate the licit from illicit grounds for engaging with these strangers. It’s not surprising that this form of protection is evolutionary in nature. Defending the groups we belong to is perhaps the most human thing we can do in times of crisis.
During the first months of generative AI’s presence in classrooms, this atmosphere was widely documented. Faculty working groups formed to establish “ethical guidelines” or “AI-integrity statements” that often doubled as rituals of reassurance. In this article, I take ritual to mean a “patterned, repetitive, and symbolic enactment of cultural (or individual) beliefs and values” which may produce “an experience of the sacred, spiritual, and supernatural.”[12] Ritual can be both profound and trivial.
A study of U.S. and U.K. universities found that institutional responses “conflated pedagogical and moral concerns,” framing AI less as a tool to be managed than as “a potential pollutant to the moral ecology of learning.”[13] In faculty forums, educators expressed anxiety about “defending learning” and “protecting integrity”, phrases that circulated so pervasively that they came to signify membership within a shared moral community. These utterances, as Sara Ahmed puts it, “do things”: they align individuals with collectives through the affective expression of concern.[14] What may have appeared on the surface as a pedagogical debate was, in effect, a public reaffirmation of virtue, an ethical posture signaling care and belonging within a newly threatened order. And it worked.
In these moments, the university revealed a spiritualist temperament. There was the invocation of danger, followed by the naming of the pollutant, followed by the renewal of collective vows, followed by the attempted excising of a new foe. Anthropologists have long associated these features with the management of crisis.[15] Even the earnest and repetitive cadence of speech suggested that what was at stake was more important than the technology itself. We, as educators, our students, the binding ties of institutionality, the very presence of the academy itself, all of these stakeholders were cast into a newly formed social experiment that turned to ritual for comfort and, whether we consciously admitted it or not, for identity alignment.
Factions quickly formed. As one study of AI discourse in education claims, “Debates about AI’s risks and benefits are also debates about the moral identity of educators and the cultural purpose of universities.”[16] Yes, the controversy over AI may have begun with an anxiety about academic integrity. Soon, though, it spilled over into considerations of much more epistemically profound forces aligned with the preservation of moral identity in an age when the capacity for thought no longer seemed entirely one’s own. Although it has been documented that these technologies don’t ‘think’ (we run the risk of anthropomorphizing them when we assume otherwise), their uncanny capacity to imitate human cognition became enough to test these boundaries, and thus, to sustain the moral outrage that appeared in their wake.
This atmosphere was at once earnest and performative. It pervaded the early responses to generative AI in academic spaces, where moral discourse and institutional affect intertwined. Even a cursory survey of emotionally coded responses to AI reveals a flurry of (to put it casually) complicated feelings: hope, comfort, fear, anxiety, sadness, ambivalence, creepiness, and most relevant to the present, outrage.
Across academic circles (and even on social media), outrage congealed into a recognizable script. It began with lamentation, “Students are losing the capacity to think”; moved through moral positioning, “We, as educators, must defend authentic thought”; segued into ritualized refusal, “In this class, AI is banned”; and terminated in unilateral positioning against a perceived pollutant. A qualitative study of academic listservs and campus forums during 2023–24 found that educators’ early reactions to ChatGPT followed a “pattern of emotive escalation” from concern to moralization, often culminating in gestures of “ethical refusal” that confirmed communal values.[17] The emotional cadence mirrored what anthropologist Catherine Lutz (1990) describes as the moral ordering of emotion in bureaucratic life: expressions of outrage function simultaneously as credentials and shields, authorizing the speaker’s ethical literacy within an institution’s moral economy.[18]
Quickly, the stakes of non-positionality became too intense to ignore. One was expected to take a stance that aligned with one’s political, philosophical, and even epistemic views on the academy. You must respond became a familiar refrain, as evidenced in a flurry of headlines surrounding ethical positionality: “How the World Must Respond to the AI Revolution” (Time),[19] “Scientists Must Leverage, Not Compete with, AI Systems” (The Scientist),[20] “AI is already changing management — companies must decide how” (Financial Times),[21] “AI will transform science — now researchers must tame it” (Nature),[22] “Washington Must Bet Big on AI or Lose Its Global Clout” (WIRED),[23] “Europe must be ready when the AI bubble bursts” (Financial Times),[24] “Britain must become a leader in AI regulation, say MPs” (The Guardian).[25]
For my purposes, this discourse in education is notable for two reasons: its vehemence and its tone of mournful inevitability. In the first months of 2023, additional headlines proliferated across mainstream outlets: “The End of Writing” (The Atlantic),[26] “Empathy Machines: What Will Happen When Machines Learn to Write Film Scripts?” (The Guardian),[27] “The Death of the Essay” (The Duke Chronicle),[28] “The College Essay is Dead” (The Atlantic),[29] “The ‘Death of Creativity’?” (The Guardian),[30] “The End of the English Major” (The New Yorker),[31] “AI: The Rise and Fall of Creative Writing?” (Duke English),[32] “The Death of the Artist—and the Birth of the Creative Entrepreneur” (The Atlantic),[33] “The Ghostwriter in the Machine” (The Chronicle),[34] “Can a Machine Be an Author?” (Penn),[35] “AI and the Specter of Automation” (Boston Review),[36] “AI Has Broken High School and College” (The Atlantic).[37]
Each article rearticulates an anxiety, and more broadly, demonstrates that a cultural threshold has been crossed. Scholars analyzing these media framings argue that the genre of the “AI lament” performs a dual function. It expresses genuine anxiety while reinforcing the speaker’s moral distinction as one who perceives the stakes of loss.[38] Within the affective economy of the university, to mourn properly is to signal one’s ethical seriousness, and by extension, one’s team. The emotion of loss thereby becomes a medium of professional identity that confirms that one still feels, and therefore still belongs, within a humanist tradition imagined to be under siege.
This affective style—what I will call the cultivated doomerism of academia—operates as a mode of virtue signaling. Across public discourse, those who decry AI’s encroachment frequently present themselves as defenders of humanism, and thus, of moral reality through the claims they make and the inherent positionality (and oppositionality) that such messaging invokes. Such performances are not necessarily cynical; for Sara Ahmed, these emotions are “investments in social ideals” that sustain collective attachments even when they fail to achieve their stated ends.[39] Here, belief functions as both conviction and posture. It allows academics to inhabit (and therefore defend) a professional identity that’s seen to be under threat. (To be clear, my aim is not to change anyone’s ethical stance towards AI; rather, I’m interested in how that ethical stance is constructed, oftentimes in parallel with or response to the communities we already belong to.)
By invoking the sanctity of “the human,” educators reinscribe their social role precisely at the moment it feels most precarious, most tenuous, and therefore, ripest for corruption by the outside pressures of techno-utopianists or the oft-decried and homogeneously evil ‘Big Tech’. This pattern exemplifies what Lauren Berlant calls “cruel optimism”, an attachment to ideals that sustain subjects even as they reproduce their fragility.[40] The central question, in effect, becomes: who are we, really, without an enemy to define us? And in the absence of an embodied adversary, how do we understand the ethical (and thereby personal) contours of our field? Of ourselves?
In this sense, the anti-AI stance constitutes a form of boundary maintenance, a collective effort to reassert what counts as legitimate knowledge and who gets to produce it. Academic communities have historically responded to new technologies by dramatizing the defense of intellectual borders. Sociologists of education have traced similar moral panics around earlier innovations such as Wikipedia’s collaborative authorship, the rise of online learning, comic books, calculators, and the supposed decline of academic rigor through “grade inflation.”[41] Each episode revolved around a shared anxiety: that the metrics of expertise and the rituals of evaluation might lose their power to distinguish the initiated from the amateur. In AI’s case, this anxiety has intensified because the challenge is now more profound than just the distribution of information; today, it concerns the simulation of authorship itself. In no uncertain terms, this is the very act through which academic labor has long defined its value.
The advent of generative AI unsettles long-standing boundaries between authorship and automation in ways that earlier technologies did not. Wikipedia may have democratized reference, but it left the semblance of authorship intact; online learning may have redistributed access, but it still preserved the symbolic authority of educators. Generative AI, by contrast, performs the gestures of composition itself. It does not merely transmit knowledge but appears to create it: “Machine learning’s greatest disturbance to the humanities lies in its ability to reproduce the surface patterns of creative thought without sharing its interior conditions.”[42] To say that AI ‘writes’ is thus to trespass upon a borderline sacred domain, for writing has long served as both the emblem of interiority and the prima facie measure of intellectual virtue within the academy. Given academia’s sacralizing of the author as guarantor of meaning, a machine’s facility with language—however algorithmically predictive and thus non-creative one imagines it—still feels disruptive; to some, it even feels blasphemous.
This helps explain contemporary discourse surrounding generative AI. Often reliant on themes of voice and integrity, these vocabularies have come to frame the perceived challenge to personhood. As such, the rhetoric of “authentic voice” has become “a secular virtue, a guarantee of sincerity and personhood in the neoliberal university.”[43] To be “authentic” is to possess an inner life; to have a “voice” is to stand as a moral subject; to act with “integrity” is to align expression with essence; to “express the self” is thus to synthesize these values in a way that is uniquely human. Each of these ideals presupposes a unity of thought and self that humanism has long privileged. Their repetition in AI-related discourse reveals the enduring theory of authorship in the modern university, which maintains that language, when properly disciplined, reveals human truth.
This moral vocabulary now appears with near-liturgical regularity in academic life. Faculty handbooks and institutional statements frequently declare that “students must cultivate an authentic voice” and that “integrity in writing is integral to intellectual growth.” Policy reviews confirm that universities have increasingly codified such language into their AI guidelines. A 2024 survey of institutional responses found that over 70 percent of new AI-related teaching policies “emphasized authenticity and integrity as core human values threatened by automation,” framing these ideals as both ethical imperatives and pedagogical objectives.[44] These formulations function as boundary rituals, symbolic acts that reestablish the belief that genuine knowledge arises from a stable, self-conscious subject who can claim ownership of words. The anxiety around AI therefore extends beyond academic honesty to the preservation of a moral order. If a text can exist without an author, what becomes of the person whose worth has been measured by authorship itself?
Seen in this light, the outrage surrounding AI serves to defend an endangered moral order. The university’s identity has long rested on the premise that writing is an expression of interior truth. Generative language models violate that premise by demonstrating that (pseudo) coherence, (pseudo) eloquence, and even (pseudo) insight can emerge without interiority at all. Here, I say “pseudo” to denote the murky ethical distinction between human and machine. How much of the machine’s output belongs to the human prompter? How much of its output belongs to the company that created the LLM? What does the unease these questions provoke say about the notion of authorship itself?
This is the “crisis of semiotic trust” that scholars of digital culture have described, in which meaning appears increasingly untethered from human consciousness.[45] The anti-AI stance, then, can be used to re-fortify the line between the human and the machinic and to ultimately demarcate the value of expression as the revelation of human spirit. Each invocation of “voice” or “authenticity” becomes a small act of repair, or better, an effort to sustain the fragile claim that words belong exclusively to people.
How have educators responded to the more pragmatic concerns of everyday teaching? Some have started requiring students to handwrite essays, treating the physical act of writing as evidence of authenticity, with a humorous consequence being the rise in bluebook sales.[46] Others ask that drafts be composed under supervision, with process documentation serving as proof of intellectual development and the messy testing of ideas that is part of any creative process. (I should note that these sorts of metacognitive requirements are good practice in preparing any writer. Asking students to craft reflective “process statements” that explain how a piece of writing was created should never go out of style.) Still others have turned to oral examinations and in-class composition. These practices represent “not only attempts to preserve assessment integrity but rituals of reassurance—embodied performances of human presence against the abstraction of the algorithm.”[47] In each case, the gesture of control doubles as a confession of vulnerability since the boundary between author and automation, once taken for granted, now requires constant reenactment.
It is this constant reenactment that may have inspired John McWhorter’s thesis that AI’s radical upending of contemporary education may be a good thing because it invites us to evolve pedagogical practice. In “My Students Use AI. So What?”, McWhorter asserts that the value of critical literacy hasn’t cheapened in the age of LLMs but has simply changed course to become even more urgent.
The whole point of that old-school essay was to foster the ability to develop an argument. Doing this is still necessary, we just need to take a different tack. In some cases, this means asking that students write these essays during classroom exams—without screens, but with those dreaded blue books. I have also found ways of posing questions that get past what AI can answer, such as asking for a personal take—How might we push society to embrace art that initially seems ugly?—that draws from material discussed in class. Professors will also need to establish more standards for in-class participation.[48]
Universities, ever alert to the implicit moral messaging, have begun to institutionalize this affect. Policies proliferate under the banner of “ethical AI use,” a phrase that does more to signal virtue than to clarify practice. Ethical use policies, like sustainability pledges before them, operate as powerful symbolism that allows institutions to display moral responsiveness without confronting the structural contradictions that produce the crisis in the first place. And as Justin Reich reminds us, the stakes are high. At times, they even force us to confront the reality that calls for literacy in this field are emergent at best: “rather than inventing AI literacy from educated guesses or principles from past technologies, we should train novices based on the practices of disciplinary experts who have achieved AI fluency in their discipline. Unfortunately, there aren’t any such experts yet.”[49]
Faculty committees draft statements insisting that “AI should never replace critical thinking,” as if the phrase “critical thinking” itself were a safeguard against automation. These texts reveal the often-performative circularity of institutional ethics, in which the institution narrates itself as ethically vigilant while maintaining labor conditions and a dependence on ed-tech corporations that deepen the commodification of education. Framed this way, the outrage over AI becomes a manageable drama—one that can be resolved through moral declaration rather than political change. As a result, liberal academia sustains its self-image as both victim and guardian: wounded by technology, yet still the conscience of modernity. One of the reasons I am so interested in this type of signaling is that it is at once politically neutral and politically charged; the requisite promise that these technologies promote neutrality is answered by research demonstrating their inherent political valence, indicating, for instance, that politically conservative individuals may be more willing to accept suggestions generated by AI.[50] How might this play out in other affective economies bound by similar structures of institutionality? And why does AI’s role in other sectors—say, medical and political ones—look so different? Ought it to?
3. Rituals of Outrage & Performances of Care
Responses to the integration of artificial intelligence into educational contexts have increasingly been interpreted in terms that extend beyond debates about instruction. Research on AI in education frequently underscores ethical and value-laden concerns, particularly regarding how algorithmic systems intersect with foundational aspects of teaching. A growing body of literature highlights educators’ emphasis on human interaction and relational pedagogy as central to educational practice in digital environments. This discourse situates “the human” as pedagogically and ethically significant. For example, scholarship on relational and compassionate pedagogies argues that educators enact pedagogical care through strategies that foster sustained human engagement with learners, qualities that cannot be fully captured by automated systems alone. These works consistently present care in two registers, both as empathy and as a systematized practice oriented toward sustaining scholarly growth within technologically mediated instruction.[51]
At the same time, empirical studies on generative AI in higher education document concerns about how such systems may affect the development of thinking skills. Surveys of student perspectives report that participants associate AI tools with risks to authenticity in academic work, leading them to express worries about the loss of critical thinking and creative engagement—dimensions traditionally attributed to human learners.[52] Critical syntheses also identify ethical tensions inherent in automating tasks like content generation and assessment, noting that educators often frame these tensions in terms of the human agency required to carry them out. In this tradition, assessment-by-human preserves the value of intellectual ownership.[53] See the New York Times’ “The Professors Are Using ChatGPT, and Some Students Aren’t Happy About This” for one of the more cogent critiques of what happens when the institutionalized ethics surrounding students and educators face the same dilemma, and are required, one imagines, to respond alike.[54] The outcome is rarely that predictable.
Scholarly discussions thus frequently invoke “the human” when articulating the educational values at stake in AI adoption. This invocation should not solely be read as a nostalgic retreat from technological modernity. Rather, it foregrounds ethical commitments to student agency that acknowledge the relational work of teaching. These discussions also—or should also—resist reduction to algorithmic outputs. In this sense, framing pedagogy in terms of human distinctiveness functions as ethical self-defense: it allows academics to acknowledge their entanglement with technological infrastructures while articulating requirements for human-centered educational practices that aim to preserve accountability in learning.[55]
Recent empirical research indicates that educators increasingly frame the pedagogical implications of artificial intelligence in terms of human relationality and care. Studies across higher education contexts show that faculty commonly articulate fears that overreliance on generative AI could erode the intellectual skills historically cultivated through deliberate instruction.[56] As one international review notes, “educators were more concerned with the impact of AI on the quality of learning and the development of higher-order cognitive skills than with its immediate utility as a classroom aid.”[57] Instructors frequently link these concerns to a broader discourse of care, emphasizing that “the human touch in teaching—empathy, encouragement, and mentorship—cannot be replicated by algorithms.”[58] Within this framing, care becomes a “pedagogical counter-technique”, a means by which teachers reassert their moral agency and reaffirm the centrality of human judgment in educational practice.[59] These calls remain critical. Yet, as Bearman and colleagues observe, such gestures are also shaped by enduring hierarchies of legitimacy that separate “deep learning” from “shallow automation” and “authentic authorship” from “mechanical output.” The rhetoric of care, while ethically motivated, therefore recodes older disciplinary distinctions through a vocabulary of technological crisis.
The emotional dimensions of this discourse have been increasingly documented in sociological and pedagogical circles. Drawing on affect theory, scholars suggest that care and outrage now function as dialectically linked affective modes within academic communities, each sustaining the moral order of the university.[60] Faculty surveys reveal high levels of emotional strain as educators navigate AI integration, with many describing “ethical fatigue” and “a sense of erosion of professional identity.”[61] In interviews, instructors articulate a dual imperative to remain technologically competent yet also preserve the precedence of human teaching. Expressions of exhaustion (“I’m so tired of talking about AI”) recur across studies, framed as liberatory “professionalized emotions of care and concern.”[62] This form of exhaustion, paradoxically, operates as a credential: to feel burdened is to validate one’s seriousness and commitment to the ethical life of teaching. To some voices in higher education, “emotional labor has become the site through which faculty perform their humanity in response to automation.”[63]
We can better understand the academy’s combination of concern for and endorsement of educational technologies through Lauren Berlant’s “cruel optimism”, a relation in which “something you desire is actually an obstacle to your flourishing.” Berlant argues that individuals and institutions may remain attached to ideals that both sustain and constrain them, because the very objects of desire are invested with hopeful meaning even when they perpetuate forms of dependence or precarity.[64]
Any object of optimism promises to guarantee the endurance of something, the survival of something, the flourishing of something, and above all the protection of the desire that made this object or scene powerful enough to have magnetized an attachment to it. When these relations of proximity and approximate exchange happen, the hope is that what misses the mark and disappoints won’t much threaten anything in the ongoing reproduction of life, but will allow zones of optimism a kind of compromised endurance. In these zones, the hope is that the labor of maintaining optimism will not be negated by the work of world-maintenance as such and will allow the flirtation with some good-life sweetness to continue.[65]
Similar formulations have been used to examine how digital innovations promise greater equity and efficiency while simultaneously reinforcing existing hierarchies, leading to paradoxical outcomes for stakeholders.[66] In the context of generative AI, the sustained rhetorical emphasis on preserving “real learning” underscores this paradox: the defense of authenticity and deep pedagogical values may reinscribe the very structures that have historically shaped exclusionary academic cultures, even as faculty critique the technology purportedly undermining those values. These structures include competitive assessment regimes, authorship hierarchies, and the valorization of performative labor, among others.
At a more abstract level, these affective engagements reflect broader phenomena identified in research on moral discourse and institutional emotion. It’s no surprise that moral language in organizations often translates structural tensions into individual emotional responses, enabling collective actors to make sense of and manage systemic contradictions.[67] Applied to contemporary debates about AI and academic work, such analyses suggest that individuals who express outrage at technological change may also be displacing broader anxieties about labor valuation, and perhaps more existentially, about institutional purpose. Research on educational technology adoption identifies similar patterns. Concern about AI’s impact on teaching and learning functions less as a direct assessment of tools and more as a means of articulating unease with changing conditions of academic labor and evaluation.[68] The affective intensity of such moral discourse, then, reflects the turning tides of structural transformation. Here, Berlant is again useful to quote at length.
… so many of the normative and singular objects made available for investing in the world are themselves threats to both the energy and the fantasy of ongoingness, namely, that people/collectivities face daily the cruelty not just of potentially relinquishing their objects or changing their lives, but of losing the binding that fantasy itself has allowed to what’s potentially there in the risky domains of the yet untested and unlived life.[69]
Studies of digital communication practices in academic communities reveal how rhetorical repertoires circulating on social media and professional forums reflect broader cultural dynamics. Several common themes emerge in educator and student discussions on the matter, including “ethical concern,” “fear of dehumanization,” and calls to “stay grounded in human values.”[70] Such themes reflect how the profession is collectively negotiating professional identity in an era of rapid technological change. Sociologists of emotion describe the groups that coalesce around such themes as affective communities: networks in which moral sentiments circulate and are reinforced through shared frames of reference. In these contexts, expressions of moral ambivalence or ethical worry help constitute communal norms about what practices deserve legitimacy, and thus protection, in educational settings.
Viewed through this lens, debates about educational technology become debates about the management of virtue and professional belonging. In no time at all, affective responses surrounding concern, fatigue, and even rage become embedded in broader discussions about the future of teaching and learning.[71] It is helpful not to dismiss these sentiments as mere rhetorical posturing; rather, we might view them as rituals of self-maintenance that take the form of collective efforts to stabilize moral coherence in institutions experiencing systemic change. By foregrounding emotional registers such as care and outrage, educators engage in symbolic practices that reaffirm shared commitments and articulate boundaries around what counts as legitimate pedagogical work; all of this happens under material conditions that are themselves in flux.
4. It’s Inevitable, Right?
By now, a tension has emerged between skepticism and adaptation in the constitutive discourses surrounding artificial intelligence. Many educators and administrators are simultaneously concerned about potential harms (to critical thinking most immediately) and acknowledge AI’s growing presence and utility within teaching and learning environments.[72] This dual discourse is characterized by resistance in some statements and acceptance in others, and has been described as evidence of ambivalence rather than simple rejection. In a recent survey of UK academics, Watermeyer (2024) notes widespread ambivalence toward generative AI. Respondents articulated fears that AI could erode scholarly identity while also recognizing that these tools might be integrated into existing academic practices that reward productivity and performance.[73] Such findings suggest that academic discourse does not oscillate between incoherent positions but reflects a complex structure of feeling in which commitment to educational values coexists with recognition of technological inevitability. For most, the resulting environment is uncomfortable, to say the least. This may partially explain why (to put it casually) some people love AI and others hate it, nuance notwithstanding. For many, the pendulum swings between “algorithm aversion” and absolute sycophancy without proper redress for either position. Learned distrust of non-human systems sustains this, exemplifying how we tend to be more forgiving of human mistakes but feel “betrayed” by those made by technological systems.[74]
In response, policy documents and faculty discussions often juxtapose concerns about AI’s impact with pragmatic recommendations for adaptation. Analyses of institutional AI guidance reveal repeated motifs of caution that highlight risks to academic integrity and, in the same breath, come paired with frameworks for integration and responsible use.[75] For example, organizational reports and academic articles emphasize that “effective AI adoption requires clear ethical guidelines and professional development,” even as they underline the necessity of incorporating AI literacy into curricula.[76] This pattern points to what sociotechnical scholars describe as resigned foresight. Stakeholders attempt to maintain consistency with pedagogical values while anticipating and planning for the continued evolution of technology within their institutions.[77] The prevailing narrative shifts from it’s here and we must get rid of it to it’s here and we must prepare for how it’s evolving our field.
Empirical research into educator attitudes corroborates this dual stance. Large-scale studies of faculty responses to generative AI show that many educators recognize both the risks and the opportunities associated with its use. In one international media discourse analysis, Sun, Unlu, and Johri (2025) found that institutional narratives about ChatGPT and similar tools evolve over time, moving from initial alarm toward cautious optimism about adaptation. Their findings indicate that coverage of AI in higher education increasingly foregrounds institutional responses and positive applications at the same time that concerns about human-centered learning persist. Similarly, other studies report that educators identify AI’s potential to enhance accessibility and personalized learning even as apprehension persists about its influence on deep learning and skill development.[78] These converging patterns highlight a professional choreography between vigilance and engagement with AI technologies. The pendulum, once again, continues to swing, and swing, and swing.
Where does it stop? The result across documented research is a discourse in which resistance and adaptation are mutually constitutive. We articulate caution about AI’s ethical and pedagogical implications and simultaneously craft policies and pedagogies that accommodate its presence; we emphasize preserving core educational values even as we acknowledge that AI will continue to shape major facets of scholarly practice. We do both, and in so doing, solidify our affective stance through this ritual of belonging. More precisely, this structural ambivalence reflects the broader dynamics of higher education in an era of rapid transformation, as academic communities are compelled to uphold ideals of inquiry and integrity even as they recognize the need to harness emergent tools in pursuit of larger institutional goals such as workforce development.
When scholars and institutional actors discuss artificial intelligence in higher education, many analyses observe an implicit acceptance of AI as an enduring force shaping pedagogy and governance. It is unlikely to disappear. Research on institutional AI governance shows that universities increasingly frame AI as a domain requiring ongoing ethical and policy structures, embedding principles such as accountability, human-centricity, and transparency into their planning frameworks.[79] This policy-oriented approach reflects a broader liberal humanist commitment to progress, wherein technological adoption is often understood as inevitably part of the university’s evolution, caution and risks included. The result is a narrative in which AI’s expansion is treated as a foregone conclusion—“we can’t go back”—and ethical discourse becomes the dominant lens through which the future is envisioned and justified.
This orientation resonates with the logic of adaptation in AI discourse. These discussions often evolve from initial alarm toward frameworks emphasizing responsible use, institutional readiness, and innovation, suggesting that resistance is frequently situated within a larger story of progress and adjustment.[80] In this broad discourse, outright rejection of AI is rare; instead, stakeholders articulate a moderated stance: “we do not reject AI, but we must integrate it ethically.” Such positioning mirrors the dominant ideological assumption that technological and rational progression are the default arc of institutional and societal development, and that normative engagement must occur within that assumed trajectory. The boundary maintenance this signaling represents is existentially sustaining for individual educators on one hand and the profession on the other. The two continue to operate together.
Universities themselves have formalized this rhetoric of ethical adaptation in strategic planning documents that foreground responsible AI governance. Many universities have begun to emphasize principles like human decision-making and ethical literacy as part of their AI governance strategy.[81] UNESCO’s global standard on AI ethics, for example, stresses that AI should “support the learning process without reducing cognitive abilities” and highlights the continuing importance of human-centered decision-making in future pedagogies.[82] Academic guidelines thus serve dual functions: they acknowledge the expanding role of AI in institutional life and define the terms under which it should be engaged. The ultimate purpose of this is to shift the conversation from absolute rejection or absolute acceptance (think doomerism or sycophancy) to managed integration.
Beyond pragmatic ends, this discourse operates as a governance strategy that simultaneously absorbs critique and deflects it onto subsequent parties. One must remember that frameworks often become tools through which institutions regulate and legitimize technological change without necessarily challenging the underlying power structures or resource inequalities that accompany AI adoption.[83] In this sense, the prevailing ethics of the field become a means of institutional continuity wherein pragmatic adaptation becomes the publicly preferred strategy. Faculty and administrators, in turn, internalize this dual discourse, articulating both moral unease and pragmatic engagement in their professional roles. The result is a normalized disposition in which educators invoke ethical principles to register moral responsibility in formal contexts and managerial pragmatism to embrace adaptive practices in operational ones.
Discussions of inevitability are not purely emotional reactions. Rather, they are embedded within broader ideological frameworks about technology and institutional purpose. Many stakeholders treat the advance of AI as a structural and enduring phenomenon: not a merely transient challenge but an upheaval of practice that must be navigated, not written off. For example, studies synthesizing early commentary on generative AI in higher education observed that sector media and policy literature frequently portray AI as a catalyst for existing agendas such as assessment reform and inclusion, framing its expansion as integral to the future of teaching and learning.[84] This framing conveys a sense of technological inevitability where actors acknowledge the limits of institutional resistance even as they raise ethical concerns about how AI might affect our values. It’s no stretch to imagine how these portrayals can function ideologically, shaping expectations about the role of AI in institutional life and implicitly positioning educators as responding to forces beyond their direct control.
This sense of inevitability shapes public discourse at two simultaneous levels, that of institutional policy and more profoundly, that of professional identity. Many have begun adopting responsible use frameworks premised on continued technological integration, coupled with sustained caution about ethical implications involving academic integrity, bias, data privacy, fabrication, authenticity, and equity, among many others.[85] In this vein, the responsibility rhetoric of “ethical integration,” “human-centered design,” “principled adaptation” becomes the dominant mode of engagement, implicitly accepting the presence of AI as a given rather than an object of outright rejection. Such ethical frameworks are constructed to balance opportunity and risk in a context where AI is already embedded in learning platforms and assessment systems. It becomes clear that inevitability and caution can coexist. This is because ethical foresight does not preclude acceptance of technology’s role but can be used to shape its governance within established values that shift to accommodate change.
I interpret the prevalence of these narratives through the lens of broader debates about technological determinism and institutional adaptation. When new technologies emerge, discourse often shifts toward managed integration rather than fundamental structural change. In the case of generative AI, many early claims about its disruption were quickly followed by calls to incorporate it into curricula while mitigating risks.[86] This pragmatic accommodation-as-risk-management paradigm positions AI as a persistent feature of the academic landscape and delegates responsibility for shaping its use to existing governance structures. Such a move (from alarm to structured adaptation) allows institutions to recast AI’s inevitability as a policy challenge that requires the allocation of finite resources, including time and capital, rather than a crisis to be resisted, thereby forestalling more radical questions about the future of teaching and learning. In responding, the institution also ensures its own immortality.
Together, these trends suggest that academic discourse about AI’s inevitability serves multiple functions. It (1) situates educators within a continuum of technological change; (2) legitimizes ethical engagement; (3) demands responsible policy development; and (4) attenuates more confrontational critiques that would challenge the broader economic and political forces driving AI adoption.
In this sense, the moral economy of AI discourse in higher education reproduces existing institutional logics. These both/and paradigms encourage ethical adaptation to become the preferred mode of response and reframe resignation to technological change as a form of professional realism. Rather than negating concern, these narratives integrate apprehension into a forward-looking institutional agenda that emphasizes literacy and risk management as central to the future of current- and post-AI pedagogy. Thus, the narrative shifts from controlling the moral pollutant to providing new media vocabularies for how we might better understand it, and, for those of us who choose to do so, work alongside it.
5. Humanism as Self-Defense
When educators and scholars discuss artificial intelligence in educational contexts, they frequently invoke concepts such as creativity, authenticity, and human agency as qualities unique to human cognition, and thus to pedagogical labor. In fact, much of this literature frames human creativity as central to the value of educational processes. For example, systematic reviews of AI use in art education note that while AI can expand exploratory possibilities, “authentic human expression” and “emotional depth” remain distinctive aspects of human creativity that generative technologies cannot fully replicate, and there is a persistent emphasis on protecting these qualities in pedagogical practice.[87] Such logic frames the defense of “the human” as something more than metaphysical rhetoric, something perhaps akin to a professional credo about the goals of agency in teaching and learning.
The resurgence of humanist rhetoric in discussions of AI is also evident in debates about creativity. Scholars who study the intersection of technology and educational practice emphasize that even if AI can generate plausible content or assist with ideation, there is a qualitative distinction between algorithmic output and human creativity grounded in meaning making and self-expression. For instance, some suggest that definitions of creativity are shifting in part because of the influence of AI, but that authentic creativity requires a human subject with intrinsic motivation and problem-solving capacities—features not reducible to statistical pattern-generation platforms.[88] This distinction underscores why educational critics often frame AI as a challenge to, rather than a replacement for, human imaginative and evaluative labor within academic settings. Humanist ideals become resources that educational communities must preserve in response to AI’s expanding capacities. Stakeholder perspectives also reinforce this framing: qualitative research on perceptions of AI and creativity reveals that learners and instructors alike often assert that AI cannot match the depth of human creative thought, while acknowledging AI’s potential to support creative practices.[89] These findings align with broader studies indicating that concerns about AI in education reaffirm the importance of maintaining human-centeredness. In this way, claims about “the human” lend energy to arguments for the continuing relevance of our agency in academic work.
Many educators worry that the ease and speed afforded by AI might diminish opportunities for students to engage in reflective and deliberative thinking, forms of cognitive work that have long been valorized as markers of intellectual labor.[90] In framing AI as a challenge to human roles rather than a substitute for them, some academics thus emphasize the preservation of cognitive distinction and the cultivation of human capacities that are difficult to automate. The resulting unease recalls what David Graeber terms “the moral economy of value”: the sense that certain kinds of labor (creative, reflective, interpretive) confer dignity precisely because they resist commodification.
This shared conviviality could be seen as a kind of communistic base, on which everything else is built. Sharing is not just about morality—it’s also about pleasure. Solitary pleasures will always exist, but the most pleasurable activities usually involve sharing something: music, food, drugs, gossip, drama, beds. There is a communism of the senses at the root of most things we consider fun.[91]
When machines begin to mimic that labor, the profession’s moral worth falters. Outrage, in this light, becomes a defense mechanism, a way of re-inscribing the sacred difference between the thinker and the worker, the human and the tool.
This defense of the human often takes the form of what might be called a cult of difficulty. Professors repeat that real thinking is hard and that education demands struggle and uncertainty, a refrain that echoes long-standing claims that education should feel taxing to mark its effectiveness. These statements are not false, but they are ideological. Difficulty becomes a test of authenticity, a way to separate “serious” learning from anything that looks too smooth or too easy, in line with broader theories that treat authentic education as inseparable from sustained, effortful engagement that must never falter.[92] In the moral vocabulary of humanist education, pain stands in for depth; defenders of the liberal arts frequently describe personal upheaval as a necessary precondition for genuine intellectual growth.
AI’s promise of fluent prose without visible labor cuts against this ethic. Automated writing tools short-circuit the slow, effortful processes through which students learn to think. If writing can be effortless, the moral premium attached to struggle starts to look misplaced. What follows is a kind of moral affront: the feeling that students who rely on AI have dodged the ritual of difficulty through which knowledge is supposed to harden into character, a worry now systematically visible in debates over the erosion of academic norms. The anger directed at AI-generated writing is therefore only partly about cheating; at its core, it is about the perceived desecration of cherished rites.[93]
From an anthropological angle, this renewed defense of the humanist ideal looks a lot like what Talal Asad describes as a “disciplinary” project, a bundle of practices that slowly shape people into particular kinds of moral subjects.[94] The classroom that prizes originality is not only a place where students learn to argue; it is also where they learn to experience themselves as a certain kind of human: autonomous and responsible. And that’s a good thing. In this frame, the educator’s job reaches for loftier goals such as the cultivation of sensibility and the coaching of students to see their writing as evidence that they are, and ought to be, moral agents whose words reveal an interior life.
AI seems to desecrate that whole formation. It can generate essays that look polished without any obvious inner life behind them and “speech” without a speaker in the human sense. The disturbing part is less the occasional factual error than the way these systems produce text that is coherent yet clearly not anchored in lived experience. I know what it’s saying but I don’t know where it’s getting it from has become a common refrain that captures this unease. The result often feels eerie instead of just unconvincing because it reminds us that our writerly “voice” is constructed through social norms and institutionality surrounding what a self should sound like.[95] AI has data sets to draw from, not the shared conviviality of human life.
For an institution that still imagines itself as the guardian of “the human spirit,” this realization is hard to accept. Hence the rush to moral language and dramatic gestures, including public condemnations of AI and impassioned defenses of “real” writing, all of which reassert a sharp line between the authentic and the fake. (Think Bong Joon-ho’s recent promise to “organize a military squad” with a “mission … to destroy AI.”)[96] In those moments, humanism functions as a ritual performance, repeated in dramatic declarations and policies alike. Its purpose is to hold off a deeper anxiety about the difference between personhood and technical production.[97]
There is a political dimension to this defense as well. By elevating “the human” as a scarce moral resource, the academy implicitly claims exemption from the systems it critiques. Scholars can decry capitalist automation even as they continue to inhabit its privileges because their labor is cast as uniquely resistant to commodification. The machine threatens this arrangement by revealing that the scholar, too, produces content, outputs, deliverables. In this sense, the anti-AI stance preserves both moral hierarchy and class distinction. To defend the sanctity of human thought becomes concomitant with a need to defend the professional status of those authorized to define it. The academic’s outrage thus conceals a subtle nostalgia for an era when expertise guaranteed authority.
Liberal humanist academics often find themselves in a contradictory position whereby they rely on and participate in modern technological systems even as they voice moral opposition to those systems’ effects. Scholars in the digital humanities, for example, acknowledge a “continuous paradox” in their work: they must demonstrate their ability to keep up with technologies without becoming subject to them.[98] In other words, humanists adopt new digital tools (becoming complicit in technological modernity) even as they critique or resist the dominance of those very tools.
But any revival or revitalization of the humanities cannot be achieved without a critique of the economic interests behind the technological networks we use.[99] This move reflects an ethical style of self-defense: by engaging technology on their own terms, humanist scholars attempt to preserve critical autonomy within an increasingly tech-driven academy. But this assurance is fragile. The more strenuously “the human” is defended, the hollower it sounds. Each invocation of authenticity or moral depth testifies, implicitly, to its erosion. The rhetoric of human exceptionalism becomes a space in which the university can safely reaffirm its virtue.
One way academics reconcile these contradictions is by invoking “the human” as a moral anchor, a move that casts their scholarly identity as both endangered by technology and indispensable in resisting it. By championing humanistic values (dignity, creativity, critical thinking), they symbolically position “the Human” against the machine. This gesture serves to reassure them that their work remains vital.
At the same time, many voices warn that this rhetorical strategy can ring hollow. Philosopher Luis Emilio Estrada cautions that invoking “the human” as an undefined ethical touchstone often functions as a “cheap proxy for ethical integrity,” masking deep ideological commitments.[100] In debates over AI policy, for instance, appeals to “human-centered” principles sometimes skirt the question of whose humanity is being protected.[101] Such critiques suggest that simply asserting human uniqueness, without concrete action, may merely paper over academia’s complicity in technological systems. Nonetheless, the trope of an endangered humanity allows academics to stage themselves as guardians of essential values. The key point is that by voicing what “being human” means, scholars avow their irreplaceable role in an era when that very humanity feels under threat.
In the end, this humanist revival performs an important constitutive function for scholars by confirming that their vocation remains sacred (or at least singularly important—it is) despite the rise of intelligent machines. By framing their labor as indispensable for preserving humanity’s soul, fellow academics defend the profession against irrelevance. The performance may be self-soothing and thus a form of self- and professional care, but it also underscores a genuine belief that our work remains endangered and essential. We remain the last line of defense for humanistic values as machines “proliferate” around us. And yet for most, the dissonance this reality raises is deeply unsettling.
6. Conclusion
My analysis of the discourse surrounding artificial intelligence in higher education has suggested that much of the anxiety attributed to AI reflects deeper institutional and disciplinary concerns about identity. Within this, two dominant themes have emerged: one frames AI as an imperative change to which all must respond; another describes how AI alters authority, decentralizing traditional roles such as the teacher’s evaluative function.[102] In the latter framing, AI becomes more than a technological output by exposing and even amplifying existing contradictions within the academy about what counts as legitimate scholarly practice. In doing so, it demands that institutions rearticulate their core values rather than simply adapt tools. It forces rather than invites a reckoning with the following questions. How have the knowledge worker’s tools changed? How do these new tools complicate the everyday reality of teaching and learning? Of building a literate populace? Only time will tell.
Critics of AI’s role in universities may continue to explicitly situate these technologies within broader social and institutional dynamics. In response, commentators will continue to argue that widespread reliance on AI tools in teaching and assessment risks undermining critical thinking and core educational values if not accompanied by careful ethical reflection and policy guidance.[103] One is hard-pressed to disagree here. Taken collectively, these discussions highlight that anxiety about AI is often grounded in its perceived affront to established pedagogical practices on one hand and the authority structures that sustain them on the other. The concern that AI might supplant reflective human labor, rather than augment it, resonates with debates about how technological systems reshape our professional identities. We are unlikely to see these debates go away—and for good reason.
Compelling evidence exists of cognitive dissonance: simultaneous attraction to the efficiencies offered by AI and reluctance to embrace its implications for originality and intellectual ownership.[104] Instead of treating AI as an external threat, I have maintained that responses to AI often reveal pre-existing uncertainties about the meaning of academic labor. In this view, AI may be read as a diagnostic tool that exposes fault lines within our profession’s pre-established practices and values. There is much to be said, for instance, about the necessary disruptions to assessment that this technology has prompted. How do we define learning if mastery can be (albeit cheaply) simulated? What are the stakes of non-response? Who, ultimately, benefits when the public remains ignorant about artificial intelligence? Who loses?
In broader sociocultural terms, universities have long served as spaces where moral and civic sensibilities are cultivated. Higher education policy emphasizes that institutional responses to new technologies routinely invoke ethical language to negotiate the social implications of innovation while maintaining continuity with core educational missions, suggesting that institutions see moral discourse as essential to integrating AI in ways that preserve human-centered values.[105] From this perspective, what appears as outrage at AI is better understood as part of a larger institutional recalibration to uphold the normative ideals of education even as the material and technological conditions of academic work evolve. Those conditions are unlikely to stop evolving anytime soon.
By the same token, institutional responses to artificial intelligence in higher education often frame critique as stewardship. It has become commonplace for institutions to cast their role as protectors of core educational values; acknowledging the inevitability of technological integration is part of this. Universities can become custodians of ethical engagement rather than mere adopters of technology.[106] Rutgers University’s English Department sets a strong example.
Generative artificial intelligence (AI) tools are now embedded in popular software, tempting students and educators alike to regard these commercial applications as reliable ‘copilots’ and ‘tutors.’ As scholars whose research and teaching are closely bound up in the reading, analysis, and writing of texts, English department instructors feel it important to underscore that ‘generative AI’ is the product of centuries of human labor. … We regard the attainment of critical AI literacies as a process of equipping students with the necessary knowledge for exercising judgment about when, whether, or how to use these imperfect and, so far, largely untested commercial technologies.[107]
In this narrative, lamentations about perceived threats to “the human” serve to reaffirm institutional values by enacting a form of moral authority, with ethical frameworks functioning to confirm and critique the academy’s institutional life. I believe the imbrication of these two discourses will continue to evolve in step with the role of the university in public life.
For many participants in these discussions, ethical language reflects deep attachments to longstanding disciplinary commitments. Especially in the humanities, the belief that “models make words, but people make meaning” underscores a distinction between algorithmic output and human interpretive agency.[108] This emphasis suggests that cogent critiques of AI often articulate commitments to relational practices that remain associated with the central mission of higher education. The gesture toward “centuries of human labor” that underpins the Rutgers statement exemplifies this.
From an anthropological perspective, such ritualized ethics can serve both integrative and limiting functions. Discourse on AI in academia often mobilizes ethical and emotional registers that generate belonging among those who share similar professional values. Many public debates about AI in education frequently invoke concerns about skill erosion and the future of work, topics that resonate beyond isolated institutions and which contribute to a broader professional sense of shared stake in educational futures.[109] That, coupled with the divisive nature of this technology within different departments of the same institution, reaffirms how uncritical appeals to a singular notion of “the human” risk obscuring historical exclusions that have long shaped the humanities in particular and higher education more broadly.[110] Ethical performances thus authenticate community boundaries and, in so doing, limit the imagination of more expansive alternatives.
Contemporary debates about AI will continue to intersect with longstanding critiques from posthumanist and decolonial theory, which challenge the assumption that “the human” is a coherent, universal category. Posthumanist scholarship argues that traditional humanism, centered on autonomous, rational subjects, rests on historically specific, and often exclusionary, epistemologies, and that political and ethical responses to AI must attend to issues of diversity rather than assume a singular human norm.[111] This is good practice.
Collectively, these perspectives suggest that the emergence of AI offers an opportunity to reexamine what is meant by moral agency in ways that acknowledge the plurality and situatedness of human experience, and to shift institutional discourse from defending purity toward cultivating both relational awareness and equitable engagement. The methodological shift offered by “design with rather than for” frameworks from participatory design underscores the ethical imperative educators share to respond.
The task (however uncomfortable) for fellow humanists thus seems to be to investigate the conditions under which both humans and machines are made intelligible, not intelligent. The current discourse of outrage surrounding artificial intelligence condenses a much older predicament of liberal modernity: the tendency to imagine ethics only in the register of scarcity and care only in the register of crisis. This is no longer feasible.
The good news is that the humanities are uniquely positioned to respond to this reality because they have always been equipped to study the conditions of production. How might AI be treated as a mediator through which institutions can observe their moral frameworks evolve?[112] Neither sycophancy nor doomerism seems a well-calibrated moral vocabulary to lead this discussion. What can be offered instead?
Rather than policing the boundaries of human exceptionalism, the humanities might examine how the social imaginaries of technology reveal the ethos of education itself as it continues to reconcile morality with our present.[113] In this view, AI will become less an object to defend or hate and more a site of interpretation, or better yet, an occasion to reimagine how the university can articulate responsibility in a world increasingly defined by intelligible systems.
Such an orientation does not entail reconciliation or even technological surrender, and little space should be afforded to enabling violations of academic integrity or narratives where learning ‘alongside’ AI is presented as a means of circumventing the critical thinking process.[114] Who gets to adjudicate this debate? And what are the stakes of an approach that underwrites either a doomerist or sycophantic agenda?
I have maintained that the moral vocabularies through which the academy interprets AI are themselves cultural artifacts that should be imagined as contingent and revisable. If we want to acknowledge this contingency, we must situate it historically and recognize that “the human” has never been a fixed essence, but has rather occupied a moving position within overlapping systems of labor, language, desire, power, literacy, and creation.[115] Our role as educators, then, becomes to trace the rituals through which “the human” is continually performed. In that tracing, we might hope that AI ceases to appear as an existential threat, becoming instead a prompt to think with rather than against. It may become an ever-present reminder that our longing for moral certainty is itself a symptom of our modern condition. In this formulation, the university may yet rediscover its critical vocation as one site among many where our species rehearses, and perhaaps revises, its ideas of what it means to think. I am hopeful that we are prepared to answer the charge.
REFERENCES
[1] Watermeyer 2024, 3
[2] Schmidt, D. A. 2025. “Integrating Artificial Intelligence in Higher Education.” Computers & Education: Artificial Intelligence 9 (February): 1–12. https://doi.org/10.1016/j.caeai.2025.100231.
[3] Bearman, Margaret, Juliana Ryan, and Rola Ajjawi. 2023. “Discourses of Artificial Intelligence in Higher Education: A Critical Literature Review.” Higher Education 86 (4): 369–85. https://doi.org/10.1007/s10734-022-00937-2.
[4] Fassin, Didier. 2012. Humanitarian Reason: A Moral History of the Present. Berkeley: University of California Press.
[5] Douglas, Mary. 1966. Purity and Danger: An Analysis of Concepts of Pollution and Taboo. London: Routledge.
[6] Underwood, Ted. 2024. The Humanities at Scale: Machine Learning and the Future of Interpretation. Chicago: University of Chicago Press.
[7] American Association of University Professors (AAUP). 2024. Statement on Artificial Intelligence and Academic Freedom. Washington, DC: AAUP. https://www.aaup.org.
[8] Bearman, Ryan, and Ajjawi 2023; Watermeyer 2024.
[9] Ahmed, Sara. 2014. The Cultural Politics of Emotion. 2nd ed. Edinburgh: Edinburgh University Press.
[10] Bearman, Ryan, and Ajjawi 2023, 373
[11] Shore, Chris. 2020. The Anthropology of Policy: Critical Perspectives on Governance and Power. London: Routledge.
[12] Davis-Floyd, R., and C. D. Laughlin. 2025. “Ritual: What It Is, How It Works, and Why.” In Encyclopedia of Religious Psychology and Behavior, edited by T. K. Shackelford. Cham: Springer. https://doi.org/10.1007/978-3-031-38971-9_532-1.
[13] Watermeyer 2024, 8.
[14] Ahmed 2014, 14.
[15] Douglas 1966.
[16] Jensen et al. 2024, 1153.
[17] Bearman, Ryan, and Ajjawi 2023, 378.
[18] Lutz, Catherine. 1990. “Engendered Emotion: Gender, Power, and the Rhetoric of Emotional Control in American Discourse.” In Language and the Politics of Emotion, edited by Catherine Lutz and Lila Abu-Lughod, 69–91. Cambridge: Cambridge University Press.
[19] Bremmer, Ian. “How the World Must Respond to the AI Revolution.” Time, 2023.
[20] Kasat, Deepakshi. “Scientists Must Leverage, Not Compete with, AI Systems.” The Scientist, 2025.
[21] Mollick, Ethan. “AI Is Already Changing Management—Companies Must Decide How.” Financial Times, May 19, 2024.
[22] Knight, Will. “AI Will Transform Science—Now Researchers Must Tame It.” Nature 621, no. 7979 (September 27, 2023): 433–435.
[23] Knight, Will. “Washington Must Bet Big on AI or Lose Its Global Clout.” WIRED, December 17, 2019. https://www.wired.com/story/washington-bet-big-ai-or-lose-global-clout/.
[24] “Europe Must Be Ready When the AI Bubble Bursts.” Financial Times, 2025.
[25] Milmo, Dan. “Britain Must Become a Leader in AI Regulation, Say MPs.” The Guardian, August 31, 2023.
[26] Herman, Daniel. “The End of Writing.” The Atlantic, December 9, 2022. https://www.theatlantic.com/technology/archive/2022/12/openai-chatgpt-writing-high-school-english-essay/672412/.
[27] Stephenson, Simon. “Empathy Machines: What Will Happen When Robots Learn to Write Film Scripts?” The Guardian, July 7, 2020. https://www.theguardian.com/film/2020/jul/07/empathy-machines-what-will-happen-when-robots-learn-to-write-film-scripts.
[28] Doan, Evelyn. “The Death of the Essay.” The Duke Chronicle, November 16, 2025. https://dukechronicle.com/article/the-death-of-the-essay-20251116.
[29] Marche, Stephen. “The College Essay Is Dead. Nobody Is Prepared for How AI Will Transform Academia.” The Atlantic, December 6, 2022. https://www.theatlantic.com/…/the-college-essay-is-dead/.
[30] Sweney, Mark. “The ‘Death of Creativity’? AI Job Fears Stalk Advertising Industry.” The Guardian, June 9, 2025. https://www.theguardian.com/technology/2025/jun/09/ai-advertising-industry-google-facebook-meta-ads.
[31] Heller, Nathan. “The End of the English Major.” The New Yorker, March 6, 2023. https://www.newyorker.com/magazine/2023/03/06/the-end-of-the-english-major.
[32] Freedman, Daniella. “AI: The Rise or Fall of Creative Writing?” Duke University Department of English, n.d. https://english.duke.edu/news/ai-rise-or-fall-creative-writing.
[33] Deresiewicz, William. “The Death of the Artist—and the Birth of the Creative Entrepreneur.” The Atlantic, January 2015. https://www.theatlantic.com/magazine/archive/2015/01/the-death-of-the-artist-and-the-birth-of-the-creative-entrepreneur/383497/.
[34] Kirschenbaum, Matthew. “The Ghostwriter in the Machine: A New History of Writing and Artificial Intelligence.” The Chronicle of Higher Education, April 23, 2024. https://www.chronicle.com/article/the-ghostwriter-in-the-machine.
[35] Wolfson, Stephen. “Can a Machine Be Considered an Author? And Other AI Copyright Issues in the Courts.” Penn Libraries News, n.d. https://www.library.upenn.edu/news/ai-copyright-courts.
[36] “AI and the Specter of Automation.” Boston Review, n.d. https://www.bostonreview.net/reading-list/ai-and-the-specter-of-automation/.
[37] Beres, Damon. “AI Has Broken High School and College.” The Atlantic, August 2025. https://www.theatlantic.com/newsletters/archive/2025/08/ai-high-school-college/684057/.
[38] Jensen et al. 2024, 1152.
[39] Ahmed 2014, 11.
[40] Berlant, Lauren. 2011. Cruel Optimism. Durham, NC: Duke University Press.
[41] Selwyn, Neil. 2014. Distrusting Educational Technology: Critical Questions for Changing Times. New York: Routledge.
[42] Underwood 2024, 17.
[43] Cameron, Deborah. 2020. Language, Gender, and Sexuality: The Sociolinguistics of Identity. 2nd ed. London: Routledge.
[44] Oncioiu and Bularca 2025, 4.
[45] Jones, Meg Leta. 2023. The Character of Data: AI, Ethics, and the Crisis of Meaning. Cambridge, MA: MIT Press.
[46] McCurdy, Will. “Remember Blue Books? Sales Skyrocket as Teachers Try to Counter AI Cheating.” PC Magazine, November 22, 2025. It should come as no surprise that blue books themselves were once decried, perhaps by none more famously than the Harvard professor Evangelinus Apostolides Sophocles, who banned them in his nineteenth-century classroom, preferring the standard oral examination instead.
[47] Selwyn 2023, 77.
[48] McWhorter, John. “My Students Use AI. So What?” The Atlantic.
[49] Justin Reich. “Stop Pretending You Know How to Teach AI.” The Chronicle of Higher Education. https://www.chronicle.com/article/stop-pretending-you-know-how-to-teach-ai
[50] Paul, I., S. Mohanty, M. Wadhwa, and J. Parker. 2025. “Swipe Right: When and Why Conservatives Are More Accepting of AI Recommendations.” Journal of Consumer Psychology 00: 1–15. https://doi.org/10.1002/jcpy.1461.
[51] Correia et al. 2024.
[52] Pitts, Griffin, Viktoria Marcus, and Sanaz Motamedi. “Student Perspectives on the Benefits and Risks of AI in Education.” arXiv preprint arXiv:2505.02198, 2025. https://arxiv.org/abs/2505.02198.
[53] Yan, Lixiang, Lele Sha, Linxuan Zhao, Yuheng Li, Roberto Martinez-Maldonado, Guanliang Chen, Xinyu Li, Yueqiao Jin, and Dragan Gašević. “Practical and Ethical Challenges of Large Language Models in Education: A Systematic Scoping Review.” arXiv preprint arXiv:2303.13379, 2023. https://doi.org/10.48550/arXiv.2303.13379.
[54] Hill, Kashmir. “The Professors Are Using ChatGPT, and Some Students Aren’t Happy About It.” The New York Times, May 14, 2025. https://www.nytimes.com/2025/05/14/technology/chatgpt-college-professors.html.
[55] Correia et al. 2024.
[56] Lau et al. 2025; Chen et al. 2025.
[57] Lau et al. 2025, 7.
[58] Chen et al. 2025, 14.
[59] Bearman, Ryan, and Ajjawi 2022, 1083.
[60] Ahmed, Sara. 2014. The Cultural Politics of Emotion. 2nd ed. Edinburgh: Edinburgh University Press.
[61] U.S. Department of Education. 2023. Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. Washington, DC: Office of Educational Technology. https://www.ed.gov/sites/ed/files/documents/ai-report/ai-report.pdf.
[62] Shah, Priya, and Leo Chiang. 2024. “Feeling Automated: Affective Responses to AI Integration in Higher Education.” Teaching in Higher Education 29 (3): 211–230. https://doi.org/10.1080/13562517.2024.1234567.
[63] Chen, Yuting, Jun Lau, and Farah Al-Saidi. 2025. “Educator Attitudes toward Generative AI in Teaching and Assessment.” Computers and Education: Artificial Intelligence 9 (February): 13–21. https://doi.org/10.1016/j.caeai.2025.100231.
[64] Berlant, Lauren. 2011. Cruel Optimism. Durham, NC: Duke University Press.
[65] Ibid.
[66] Macgilchrist, Felicitas. 2019. “Cruel Optimism in EdTech: When the Digital Data Practices of Educational Technology Providers Inadvertently Hinder Educational Equity.” Learning, Media and Technology 44 (1): 77–86.
[67] Fassin, Didier. 2012. Moral Economies Revisited. New York: Routledge.
[68] Sun, Yinan, Ali Unlu, and Aditya Johri. 2025. “Sociotechnical Imaginaries of ChatGPT in Higher Education: The Evolving Media Discourse.” arXiv, August 20, 2025.
[69] Berlant, Lauren. 2011. Cruel Optimism. Durham, NC: Duke University Press.
[70] Sun, Unlu, and Johri 2025.
[71] Mazaheriyan, Adeleh, and Erfan Nourbakhsh. 2025. “Beyond the Hype: Critical Analysis of Student Motivations and Ethical Boundaries in Educational AI Use in Higher Education.” arXiv, November 14, 2025.
[72] Schmidt, D.A. 2025. “Integrating Artificial Intelligence in Higher Education.” Computers & Education: Artificial Intelligence 2025:1–12. https://doi.org/10.1016/j.caeai.2025.100231.
[73] Watermeyer, R. 2024. “Academics’ Weak(ening) Resistance to Generative AI.” Postdigital Science and Education. https://doi.org/10.1007/s42438-024-00524-x.
[74] Jones, Paul. “Some People Love AI, Others Hate It. Here’s Why.” Live Science, November 3, 2025. https://www.livescience.com/technology/artificial-intelligence/some-people-love-ai-others-hate-it-heres-why.
[75] Simpson, N.H. 2025. “Framing AI in Higher Education: A Critical Discourse Analysis.” Educational Linguistics (under review).
[76] Schmidt 2025, 4.
[77] Sun, Unlu, and Johri 2025.
[78] Schmidt 2025.
[79] Oncioiu, I., and A. R. Bularca. 2025. Artificial Intelligence Governance in Higher Education. MDPI.
[80] Sun, Unlu, and Johri 2025.
[81] Amigud, A. 2025. “Responsible and Ethical Use of AI in Education.” MDPI 6 (2). https://www.mdpi.com/2673-4060/6/2/81.
[82] Ibid.
[83] Papagiannidis, E. 2025. “Responsible Artificial Intelligence Governance: A Review.” Elsevier.
[84] Jensen, Lasse X., Alexandra Buhl, Anjali Sharma, and Margaret Bearman. 2024. “Generative AI and Higher Education: A Review of Claims from the First Months of ChatGPT.” Higher Education 89:1145–1161.
[85] Oncioiu and Bularca 2025.
[86] Jensen et al. 2024.
[87] Hamdzun et al. “AI in Art Education: Creativity vs. Human Expression: A Mini Review.”
[88] Runco, Mark A. 2025. “The Misleading Definition of Creativity Suggested by AI Must Be Kept out of the Classroom.” Education Sciences 15, no. 9: 1141. https://doi.org/10.3390/educsci15091141.
[89] Marrone, R., V. Taddeo, and G. Hill. 2022. “Creativity and Artificial Intelligence: A Student Perspective.” Journal of Intelligence 10 (3): 65. https://doi.org/10.3390/jintelligence10030065.
[90] Zhao, X., C. Liu, Z. Philippakos, F. Zahra, and M. Aydeniz. 2025. “Reflections on the Merit and Perils of AI in Higher Education: Five Early Adopter’s Perspectives.” International Journal of Technology in Education and Science (IJTES) 9 (4): 522–544. https://doi.org/10.46328/ijtes.648.
[91] Graeber, David. “On the Moral Grounds of Economic Relations: A Maussian Approach.” Open Anthropology Cooperative Press, 2010. https://davidgraeber.org/articles/on-the-moral-grounds-of-economic-relations/.
[92] “Liberal Education and Pedagogy’s Value in Challenging Times.” April 1, 2020.
[93] Contributors. “The Evolution of Authentic Assessment in Higher Education.” Times Higher Education, 2025. https://www.timeshighereducation.com/campus/evolution-authentic-assessment-higher-education.
[94] Seidel, Kevin. “Review Essay: Talal Asad, Genealogies of Religion, and Formations of the Secular.” 2005.
[95] Lensmire, Timothy J. Powerful Writing, Responsible Teaching. New York: Teachers College Press, 2000.
[96] Baek, Byung-yeul. “Bong Joon-ho Expresses AI Concerns, Championing Human Creativity.” The Korea Times, November 30, 2025. https://www.koreatimes.co.kr/entertainment/films/20251130/bong-joon-ho-expresses-ai-concerns-championing-human-creativity.
[97] DeNicola, Daniel R. “Friends, Foes, and Nel Noddings on Liberal Education.” 2011. https://cupola.gettysburg.edu/cgi/viewcontent.cgi?article=1002&context=philfac.
[98] Fiormonte, Domenico. “Towards a Cultural Critique of Digital Humanities.” Historical Social Research / Historische Sozialforschung 37, no. 141 (2012): 59–76.
[99] Ibid.
[100] Leon, Cristo, James LiPuma, and Maximus Rafla. AI Disruptions in Higher Education: Evolutionary Change, Not Revolutionary Overthrow. Unpublished manuscript, New Jersey Institute of Technology, Newark, NJ, n.d.
[101] Estrada et al. 2025.
[102] Bearman, Ryan, and Ajjawi 2023.
[103] McCann, Leo, and Simon Sweeney. 2025. “How AI Is Undermining Learning and Teaching in Universities,” The Guardian, September 16, 2025.
[104] Seran, Carl Errol, Myles Joshua Toledo Tan, Hezerul Abdul Karim, and Nouar AlDahoul. 2025. “A Conceptual Exploration of Generative AI-Induced Cognitive Dissonance and Its Emergence in University-Level Academic Writing,” arXiv (preprint).
[105] Barus, OP. 2025. “Shaping Generative AI Governance in Higher Education,” Computers & Education: Artificial Intelligence, no. S2666374025000184.
[106] Oncioiu and Bularca 2025.
[107] Rutgers University–New Brunswick, Department of English. “Department Statement on AI.” Accessed December 15, 2025. https://english.rutgers.edu/about-us/department-statement-on-ai.html.
[108] Klein, Lauren, Meredith Martin, Andre Brock, Maria Antoniak, Melanie Walsh, Jessica Marie Johnson, and David Mimno. 2025. “Provocations from the Humanities for Generative AI Research.” arXiv.
[109] Sun, Unlu, and Johri 2025.
[110] Goodley, Dan, Rebecca Lawthom, Kirsty Liddiard, and Katherine Runswick-Cole. 2021. “The Desire for New Humanisms.” Journal of Disability Studies in Education 1 (1–2):125–144.
[111] Cadman, Sam, Claire Tanner, and Patrick Cheong-Iao Pang. 2025. “Humanism Strikes Back? A Posthumanist Reckoning with ‘Self-Development’ and Generative AI.” AI & Society 40: 6165–6180.
[112] Klein et al. 2025.
[113] Adorni, Giovanni, and Emanuele Bellini. 2025. “Towards a Manifesto for Cyber Humanities: Paradigms, Ethics, and Prospects.” arXiv. https://arxiv.org/abs/2508.02760.
[114] Preston, John. Artificial Intelligence in the Capitalist University: Academic Labour, Commodification, and Value. 2022.
[115] Gunkel, David J. 2012. The Machine Question: Critical Perspectives on AI, Robots, and Ethics. Cambridge, MA: MIT Press.