Helping leaders prepare for a future where AI is a part of life.

The Assignment is Dead. Long Live the Assignment.
Let’s hold a moment of silence for one of our favorite tools: the five-page essay on a classic text, assigned on Monday and due a week later (or at some other point in the semester). For decades, this assignment was a reliable workhorse. It was a decent proxy for whether a student did the reading, understood the core concepts, and could structure a coherent argument. It was gradable, scalable, and familiar. And now, it’s dead.

An AI can now read The Great Gatsby, identify the major themes, and write a B+ essay on the failure of the American Dream in less time than it takes a student to find the book on their shelf. Ok, ok, I’m exaggerating and underselling what AI does…because a student who knows how to use it even decently can have it write an A paper! Yes, that’s more accurate. The polished, final artifact, the very thing we’ve graded for years, has been rendered almost meaningless as a measure of individual effort and understanding.

The death of this classic assignment has triggered a wave of panic across higher education, and that panic has led to a predictable, and entirely wrong, response: a pedagogical arms race against the machines. This is the “AI-proofing” craze. It’s a frantic effort to create assignments so convoluted, so inconvenient, so analog that an AI couldn’t possibly complete them. We see it in mandates for in-class, handwritten essays in blue books, the…

The University Is Drowning in AI Memos. Faculty Need a Lifeline.
Another email arrives from the Provost’s office or some well-meaning soul in central administration. Subject: “Updated Guidance on Responsible AI Use.” How many does that make this year? You open it with a sigh. It’s three pages long, written in a dialect of corporate-speak that exists only in university administration (or drafted by GenAI, as though we can’t tell they were using it). It’s a masterclass in saying nothing, filled with toothless platitudes about “academic integrity,” vague suggestions to “innovate responsibly,” and ominous warnings about “unauthorized use.” The document’s primary function is clear: to absolve the university of liability, not to empower educators. It’s everything and nothing. It’s a document written by a committee to protect an institution, not to enlighten a single person standing in front of a classroom.

This is the state of AI in higher ed. While universities are busy forming task forces and issuing memos, faculty are on the front lines of a pedagogical revolution with no map, no compass, and certainly no useful air support. The gap between the view from the central office and the reality on the ground has never been wider.

The Two Students in Your Classroom

The memos from on high talk about AI as a single, monolithic threat to be contained. But you know the truth is far more complex. In your classroom right now, there are two dramas playing out. Student A is “getting by.” They use ChatGPT the way a college student a decade ago used Wikipedia for a…

Your University’s AI Strategy is Almost Certainly Backwards
Somewhere on your campus, in a sterile, beige conference room, the “Presidential Blue-Ribbon AI Task Force” is meeting for the third time this month. Around the table sit the CIO, a lawyer from the General Counsel’s office, the head of university communications, and a well-meaning Dean who’s been tasked with herding the cats. They’re looking at charts. They’re talking about server capacity, data security, enterprise licenses, and risk mitigation. They’re drafting another university-wide policy—a document destined to be equal parts threatening and useless.

Notice who isn’t in that room. There’s no art history professor who just discovered a student using Midjourney to create stunning, historically informed pastiches. There’s no nursing instructor figuring out how to use AI simulators to train clinical reasoning. There’s no philosophy Ph.D. wrestling with how to teach Kant’s Categorical Imperative when students can ask an AI to write a perfect essay on it in 30 seconds. In short, the people actually on the front lines of the AI revolution—the faculty—are conspicuously absent (or have, at best, a token representative).

And that is why your university’s AI strategy is almost certainly backwards. The dominant approach to AI in higher education has been a top-down, centralized, command-and-control model. It’s a strategy dictated by Administration rather than fostered and empowered by it. It’s a strategy focused on plumbing and policy. And it is a strategy that is doomed to fail.

The Two Failed Models of AI Strategy

Universities have defaulted to two modes of thinking when it comes to AI, both of…

AI Literacy is First Aid. Your University Needs Surgeons.
Imagine your entire faculty has just completed a mandatory First Aid certification. They can all define “aneurysm,” apply a tourniquet, and perform CPR on a dummy. They are, in a word, literate in emergency medical care. Now, ask one of them to perform open-heart surgery. The absurdity of that request is the exact situation facing higher education today.

We are in the midst of “Peak AI Literacy.” The landscape is saturated with awareness-level initiatives. These efforts aren’t useless. Like First Aid, they establish a baseline and can prevent immediate harm. They make administrators feel like they are moving the conversation from “What is AI?” to “AI is here.” But the diffusion of GenAI is unlike anything we’ve seen; adoption is broader and faster than for any major technology in modern history. Awareness is already here. But awareness is not a strategy. And literacy is not fluency. Acknowledging a challenge is not the same as being equipped to solve it. The bigger point here is that our piecemeal efforts are producing a campus full of first-aiders with a few survival tricks while the moment demands a generation of skilled surgeons.

The Glossary-and-a-Prayer Approach to Faculty Development

Let’s be honest about what most “AI Literacy” training entails. It’s a glossary of terms (LLM, generative, prompt), a list of popular tools, and a well-meaning but vague discussion about ethics and cheating. It’s a “Glossary-and-a-Prayer” approach: we give faculty a few new words and pray they can figure out the rest. This is insufficient because…