Helping leaders prepare for a future where AI is a part of life.

When AI Can Build the Course in 15 Minutes, What’s Left for Faculty?

Recently, UGA Online shared a powerful and innovative example of “AI in the middle” of teaching. They shared how an AI Course Starter they built can take a faculty member’s syllabus, rubrics, and course documents and auto-build a course shell in the LMS in about 10–15 minutes. Crucially, it doesn’t create the content or videos. It handles the structure and LMS plumbing so faculty can stay in their disciplinary lane. They can spend their time refining, editing, and making pedagogical decisions instead of wrestling with the Brightspace LMS. I love that direction. It’s innovative. It’s helpful. It’s exactly the kind of infrastructure that can reduce friction for faculty and support more coherent learning experiences for students. But what is impressive and helpful in the moment may also be a signal worth examining more closely. I’ll suggest it raises a deeper question that higher ed can’t afford to ignore. Today’s Win vs. Tomorrow’s Risk From an institutional perspective, leadership will see one very clear benefit today: “We can save faculty a lot of time and frustration.” True (check). Good (check). Much needed (check). However, in a world of declining enrollments and tightening budgets, those same leaders who are impressed today may see something very different 5–10+ years from now: “If this infrastructure can do more of the design work, maybe we don’t need as many people doing it…or teaching it.” If an AI agent can build a coherent course shell, align outcomes,

Read More »

The Best AI in Higher Education Resources, Ranked

I asked AI to rank the best AI in higher education resources, and told it to be brutally honest. It led to this… The flood of “AI in Education” guidance is a firehose of noise. Every university, consultant, and think tank is churning out webinars, white papers, and articles, each making big promises about AI in education. Much of it may have a novel point here and there. Much of it is just more tumbleweed blowing in the winds of what is the Wild West of AI in education. It’s either hopelessly abstract policy written by committees who haven’t been in a classroom in a decade, or it’s a list of “5 cool ChatGPT prompts” that is a novel bandaid when surgery is needed. After reviewing dozens of these resources, I realized the problem: we’re asking the wrong questions. We’re stuck asking the theoretical question, “What is AI?” Or we’re stuck in an immediacy-novelty-panic loop, asking, “What the heck do I do to teach with AI in my class on a Tuesday morning?” But the only way to answer that Tuesday question consistently is to unpack and expand one word mentioned above – teach. When we focus on the full meaning of teach, it leads us to ask a foundational question: “How do I build the deep AI fluency and pedagogical skills to effectively teach in the age of AI?” So, seeing this dilemma, I did something meta. I used AI to analyze and rank the top resources for AI in

Read More »

Your AI Syllabus Policy is a Teaching Manifesto. Write it Like One.

It’s late July or early January. The academic calendar, a document of serene certainty in a chaotic world, tells you the semester is approaching. And so, the ritual begins: “Syllabus Week.” You open last semester’s or last year’s document. You update the dates, tweak the reading list, and then you freeze. You’ve arrived at the section on Academic Integrity. A year ago, this was boilerplate. A simple, clear statement you hadn’t touched in years. Now, it feels like a minefield. The elephant in every classroom – Artificial Intelligence – is staring you down, and it demands a response. What do you write? This single question has launched a thousand panicked emails, department meetings, and frantic searches for a quick fix. The result has been a wave of AI syllabus policies that are, for the most part, pedagogically bankrupt. They are documents born of fear, not foresight. Before we build a better one, let’s dissect the three deadly sins of AI syllabus design that have become rampant across higher education. The Three Deadly Sins of AI Syllabus Policies 1. The Total Ban: The Futility of Prohibition The most common gut reaction is to simply forbid it all. “The use of any generative AI tools for any assignment in this course is strictly prohibited and will be treated as plagiarism.” It feels clear. It feels decisive. It is also a complete fantasy. Banning AI is like banning the calculator in a math class or the internet in a research methods course. You

Read More »

The AI Intern is Here. Your Entire Curriculum is Now Obsolete.

Let’s run a quick experiment. Imagine giving the following prompt to a student as a final capstone project for a business degree—a project that would typically take an entire semester, a team of four students, and dozens of hours of work. Prompt: “Act as a junior strategy consultant. Our client is a mid-sized, family-owned winery in upstate New York looking to expand its direct-to-consumer sales. Your task is to develop a comprehensive market expansion strategy. You have a budget of $250,000 for the first year. Your final deliverable, due tomorrow, must include: 1) A market analysis of the national DTC wine market, including key competitors and consumer trends. 2) A detailed customer persona for the ideal target market. 3) A multi-channel marketing and sales plan with a detailed first-year budget breakdown. 4) A 15-slide presentation deck summarizing your findings and recommendations.” In 2023, this would have been a rigorous, challenging project. By Fall 2025, it is a trivial, one-hour task. Because you are no longer giving this prompt to a student. You are giving it to “AgentAI,” the latest generation of autonomous AI models, and it will execute the entire project flawlessly while the student gets a cup of coffee. It will crawl the web for real-time market data, access business databases, build the persona, allocate the budget, write the report, and design a presentation deck that is more polished and professional than anything a team of exhausted 21-year-olds could produce. The AI will get an A+. The student will

Read More »

The Assignment is Dead. Long Live the Assignment.

Let’s hold a moment of silence for one of our favorite tools: the five-page essay on a classic text, assigned on Monday and due in a week (or maybe at another point in the semester). For decades, this assignment was a reliable workhorse. It was a decent proxy for whether a student did the reading, understood the core concepts, and could structure a coherent argument. It was gradable, scalable, and familiar. And now, it’s dead. An AI can now read The Great Gatsby, identify the major themes, and write a B+ essay on the failure of the American Dream in less time than it takes a student to find the book on their shelf. Ok, ok, I’m actually selling AI short there…because a student who knows how to use it even decently can have it write an A paper! Yes, that’s more accurate. The polished, final artifact it can produce – the very thing we’ve graded for years – has been rendered almost meaningless as a measure of individual effort and understanding. The death of this classic assignment has triggered a wave of panic across higher education, and that panic has led to a predictable, and entirely wrong, response: a pedagogical arms race against the machines. This is the “AI-proofing” craze. It’s a frantic effort to create assignments so convoluted, so inconvenient, so analog that an AI couldn’t possibly complete them. We see it in mandates for in-class, handwritten essays on blue books, the

Read More »

The University Is Drowning in AI Memos. Faculty Need a Lifeline.

Another email from the Provost’s office or some well-meaning soul in central administration. Subject: “Updated Guidance on Responsible AI Use.” How many does that make this year? You open it with a sigh. It’s three pages long, written in a dialect of corporate-speak that only exists in university administration (or drafted by GenAI, as though we can’t tell they were using it). It’s a masterclass in saying nothing, filled with toothless platitudes about “academic integrity,” vague suggestions to “innovate responsibly,” and ominous warnings about “unauthorized use.” The document’s primary function is clear: to absolve the university of liability, not to empower educators. It’s everything and nothing. It’s a document written by a committee to protect an institution, not to enlighten a single person standing in front of a classroom. This is the state of AI in higher ed. While universities are busy forming task forces and issuing memos, faculty are on the front lines of a pedagogical revolution with no map, no compass, and certainly no useful air support. The gap between the view from the central office and the reality on the ground has never been wider. The Two Students in Your Classroom The memos from on high talk about AI as a single, monolithic threat to be contained. But you know the truth is far more complex. In your classroom right now, there are two dramas playing out. Student A is “getting by.” They use ChatGPT the way a college student a decade ago used Wikipedia for a

Read More »

Your University’s AI Strategy is Almost Certainly Backwards

Somewhere on your campus, in a sterile, beige conference room, the “Presidential Blue-Ribbon AI Task Force” is meeting for the third time this month. Around the table sits the CIO, a lawyer from the General Counsel’s office, the head of university communications, and a well-meaning Dean who’s been tasked with herding the cats. They’re looking at charts. They’re talking about server capacity, data security, enterprise licenses, and risk mitigation. They’re drafting another university-wide policy—a document destined to be equal parts threatening and useless. Notice who isn’t in that room. There’s no art history professor who just discovered a student using Midjourney to create stunning, historically-informed pastiches. There’s no nursing instructor figuring out how to use AI simulators to train clinical reasoning. There’s no philosophy Ph.D. wrestling with how to teach Kant’s Categorical Imperative when students can ask an AI to write a perfect essay on it in 30 seconds. In short, the people actually on the front lines of the AI revolution—the faculty—are conspicuously absent (or have a token representative). And that is why your university’s AI strategy is almost certainly backwards. The dominant approach to AI in higher education has been a top-down, centralized, command-and-control model. It’s a strategy dictated by the administration rather than fostered and empowered by it. It’s a strategy focused on plumbing and policy. And it is a strategy that is doomed to fail. The Two Failed Models of AI Strategy Universities have defaulted to two modes of thinking when it comes to AI, both of

Read More »

AI Literacy is First Aid. Your University Needs Surgeons.

Imagine your entire faculty has just completed a mandatory First Aid certification. They can all define “aneurysm,” apply a tourniquet, and perform CPR on a dummy. They are, in a word, literate in emergency medical care. Now, ask one of them to perform open-heart surgery. The absurdity of that request is the exact situation facing higher education today. We are in the midst of “Peak AI Literacy.” The landscape is saturated with awareness-level initiatives. These efforts aren’t useless. Like First Aid, they establish a baseline and can prevent immediate harm. They make administrators feel like they are moving the conversation from “What is AI?” to “AI is here.” But the diffusion of GenAI is unlike anything we’ve seen. Adoption is broader and faster than any major technology in modern history. Awareness is already here. But awareness is not a strategy. And literacy is not fluency. Acknowledging a challenge is not the same as being equipped to solve it. The bigger point here is that our piecemeal efforts are developing a campus full of first-aiders with a few survival tricks while the moment demands a generation of skilled surgeons. The Glossary-and-a-Prayer Approach to Faculty Development Let’s be honest about what most “AI Literacy” training entails. It’s a glossary of terms (LLM, generative, prompt), a list of popular tools, and a well-meaning but vague discussion about ethics and cheating. It’s a “Glossary-and-a-Prayer” approach. We give faculty a few new words and pray they can figure out the rest. This is insufficient because

Read More »