Let’s hold a moment of silence for one of our favorite tools: the five-page essay on a classic text, assigned on Monday and due in a week (or maybe at another point in the semester).
For decades, this assignment was a reliable workhorse. It was a decent proxy for whether a student did the reading, understood the core concepts, and could structure a coherent argument. It was gradable, scalable, and familiar.
And now, it’s dead.

An AI can now read The Great Gatsby, identify the major themes, and write a B+ essay on the failure of the American Dream in less time than it takes a student to find the book on their shelf. Ok, ok, I’m exaggerating…but only about the grade, because a student who knows how to use AI even moderately well can have it write an A paper. The polished, final artifact – the very thing we’ve graded for years – has been rendered almost meaningless as a measure of individual effort and understanding.
The death of this classic assignment has triggered a wave of panic across higher education, and that panic has led to a predictable, and entirely wrong, response: a pedagogical arms race against the machines.
This is the “AI-proofing” craze. It’s a frantic effort to create assignments so convoluted, so inconvenient, so analog that an AI couldn’t possibly complete them. We see it in mandates for in-class, handwritten essays on blue books, the banning of all digital devices, or the creation of labyrinthine prompts designed to confuse a language model.
Now, there may be a time and place for that. But let’s be clear: many who think this way are trying to hold on to how education worked before generative AI. And this “holding on” approach is not the pedagogy our students need. This is panic. It’s a desperate defense of old methods, not a thoughtful engagement with a new reality. It’s a losing battle that wastes faculty time, frustrates students, and, worst of all, actively works against the goal of authentic learning.
The goal isn’t to outsmart the AI. The goal is to make the AI irrelevant to the core task of learning.
The challenge isn’t to design assignments that AI can’t do. The challenge is to design assignments that require students to think. It’s to become more process-oriented: to design a process in which AI can be a tool, a subject, a sparring partner, or a sidekick, but never a substitute for the human brain in the driver’s seat.
This requires a fundamental shift in our thinking about what an “assignment” is for. It’s a move away from evaluating the final artifact and toward assessing the student’s process. This is where the META AI Assignment Redesign Framework and the TEACH Framework provide lenses for radical, yet practical, redesign.

From Artifact to Process: Five Redesign Strategies
Instead of tweaking old assignments, let’s build new ones from the ground up, using a process-centric approach. Here are five detailed strategies you can adapt and use immediately.
1. The Process Portfolio: Making the “How” the “What”
The traditional essay assesses the final product. The Process Portfolio assesses the entire journey of creation. It’s a simple but powerful shift that makes the student’s thinking, revision, and reflection the primary object of evaluation.
The Old Way: “Write a 10-page research paper on the impact of social media on teenage mental health. Due in four weeks.”
The New Way (Process Portfolio):
- Week 1: The Proposal & AI Brainstorm. Students submit a one-page proposal outlining their research question. They are required to use an AI tool (like ChatGPT, Gemini, Copilot, or Perplexity) to generate three potential avenues of inquiry and a list of 20 keywords. They must submit the raw AI output and a simple reflection on which avenue they chose and why the AI’s other suggestions were less compelling.
- Week 2: The Annotated Bibliography. Students submit an annotated bibliography of at least five scholarly sources. For each source, they must include a paragraph written by an AI summarizing the article, followed by their own paragraph critiquing the AI’s summary. Did it miss the nuance? Did it fail to grasp the methodology? This forces them to read more critically than ever before.
- Week 3: The “Messy” First Draft. Students submit a complete but unpolished draft. The focus here is on structure, evidence, and argument, not on perfect prose. This is submitted alongside a record of their AI usage—prompts they used, outputs they received, etc.
- Week 4: The Final Polish & Metacognitive Reflection. Students submit the final, polished paper. However, it is accompanied by a final reflection that answers the question: “How did you use AI in this project? Where was it most helpful? Where was it a hindrance? How did your own thinking diverge from or build upon the AI’s outputs?”
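For instructors who want the Week 3 “record of AI usage” in a consistent, easy-to-review form, here is a minimal sketch of what a structured log might look like. This is purely illustrative: the filename, field names, and helper function are all hypothetical, not part of any framework named in this article.

```python
import json
from datetime import date

LOG_PATH = "ai_usage_log.json"  # hypothetical filename a student might use


def log_ai_use(prompt, ai_output, reflection, log_path=LOG_PATH):
    """Append one prompt/output/reflection entry to a JSON log file.

    Keeping the reflection alongside each prompt makes the Week 4
    metacognitive write-up much easier to assemble.
    """
    try:
        with open(log_path) as f:
            entries = json.load(f)
    except FileNotFoundError:
        entries = []  # first entry: start a fresh log
    entries.append({
        "date": date.today().isoformat(),
        "prompt": prompt,
        "ai_output": ai_output,
        "reflection": reflection,
    })
    with open(log_path, "w") as f:
        json.dump(entries, f, indent=2)
    return entries
```

A student would call `log_ai_use("...", "...", "...")` after each AI session; the instructor receives one readable JSON file per project instead of scattered screenshots.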
Why it works: Suddenly, the act of “cheating” with AI becomes part of the assignment itself. The focus is no longer on a pristine final draft but on the student’s ability to document their process, critique their tools, and articulate their own intellectual journey. You aren’t grading a paper; you’re grading their thinking.
2. The Live Critique & Defense
This model borrows from the graduate thesis defense and applies it to undergraduate assignments. It’s built on a simple premise: it’s one thing to submit a document; it’s another thing to own and defend its ideas in real-time.
The Old Way: “Submit your analysis of the marketing campaign.”
The New Way (Live Critique): Students work in small groups to produce a report, a proposal, or a piece of analysis. They are free to use AI tools extensively in the creation of the document. The document itself receives only a completion grade.
The real assignment is a 15-minute live presentation and Q&A with the instructor. During this session, the instructor can probe for understanding with questions like:
- “Your report mentions Porter’s Five Forces. Can you explain the ‘Threat of New Entrants’ in the context of this specific company, without looking at your notes?”
- “Paragraph three makes a strong claim. What’s the single best piece of evidence you found that supports it? What’s a counterargument you considered and rejected?”
- “If your core recommendation turns out to be wrong, what’s the most likely reason?”
Why it works: This method makes the AI-generated text a starting point, not an endpoint. It shifts the cognitive load from writing to synthesis, internalization, and oral communication. A student who simply downloaded the report from an AI will be exposed in seconds. A student who used the AI as a research assistant to build their own deep understanding will shine.
3. The Editorial Challenge
Increasingly, students encounter sophisticated machine-generated content in their academic and professional lives. Instead of simply consuming or producing this information, they must develop the skills to critically assess, refine, and authenticate it. By building assignments around the act of editing and fact-checking, educators can teach students to engage actively with digital texts, elevating their ability to identify subtle inaccuracies, strengthen arguments, and support claims with reliable evidence.
The Old Way: “Write a summary of the historical context of Shakespeare’s Macbeth.”
The New Way (Unreliable Narrator): Provide students with a pre-written, AI-generated document that is deliberately flawed. For instance, give them an AI-generated biography of a historical figure that contains three factual errors, one invented source, and a subtle but important misinterpretation of their primary motivation.
The assignment is not to write something new, but to act as a world-class fact-checker and editor. The student must submit a corrected version of the document using a “track changes” feature, with comments explaining why each change was made and citing the correct sources.
Why it works: This directly teaches digital fluency, critical consumption of information, and the crucial skill of verification. It inverts the usual dynamic: instead of asking the student to create, you’re asking them to critique and deconstruct. You are rewarding skepticism and accuracy, two of the most valuable skills in an information-saturated world.

4. The “Bring Your Own Bot” Debate
Instead of banning the bots, invite them into the classroom for a structured debate. This works exceptionally well for topics with clear opposing viewpoints.
The Old Way: “Write an essay arguing for or against the use of nuclear energy.”
The New Way (BYOB Debate):
- Preparation: Divide the class into two sides. Each side is tasked with using AI tools to build the strongest possible case for their position. They must gather evidence, formulate arguments, and anticipate counter-arguments, all with the help of their AI “teammate.”
- The Debate: Conduct a formal in-class debate. Students are not allowed to read directly from AI-generated scripts but must use the knowledge they’ve built to argue their points.
- The Reflection: The final written component is a post-debate reflection where students analyze the strengths and weaknesses of both their own and the opposing side’s arguments. They must also include a critique of their AI’s performance: “Where did my AI partner give me strong, well-supported arguments, and where did it provide weak or easily-refuted points?”
Why it works: This reframes AI from a solo cheating device to a team-based research tool. It emphasizes argumentation, public speaking, and critical evaluation of AI-generated content within a competitive, engaging format.
5. Reverse-Engineering the Prompt
This is a sophisticated assignment that focuses on one of the most important emerging skills: understanding how to communicate with AI models effectively.
The Old Way: “Write a short story in the style of Ernest Hemingway.”
The New Way (Reverse Engineering): Provide the students with a piece of AI-generated text. This could be a poem, a piece of code, a business plan, or a short story.
The assignment is for the student to determine and document the prompt that likely created this output. This requires them to:
- Analyze the style, tone, and structure of the output.
- Identify keywords and constraints that must have been in the prompt.
- Experiment with an actual AI, trying to recreate the output through iterative prompting.
- Submit a final report that presents their “most likely prompt” and a detailed analysis of why specific words and phrases were necessary to achieve the result.
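The “experiment with an actual AI” step above is essentially a search problem: try a candidate prompt, compare the regenerated output to the target text, refine, repeat. As a toy illustration of that comparison loop, here is a sketch that ranks candidate prompts by how closely their outputs overlap with the target. The similarity measure (Jaccard overlap of lowercase word sets) is a deliberately crude stand-in chosen for illustration; the function names and the structure of `attempts` are my own assumptions, not part of the assignment as described.

```python
def overlap_score(candidate_output: str, target_output: str) -> float:
    """Crude similarity: Jaccard overlap of lowercase word sets."""
    a = set(candidate_output.lower().split())
    b = set(target_output.lower().split())
    if not a and not b:
        return 1.0  # two empty texts are trivially identical
    return len(a & b) / len(a | b)


def rank_candidates(target: str, attempts: dict) -> list:
    """Rank candidate prompts (keys) by how closely the output each one
    produced (values) matches the target text, best first."""
    return sorted(attempts,
                  key=lambda prompt: overlap_score(attempts[prompt], target),
                  reverse=True)
```

A student might record each prompt they tried and the output it produced in `attempts`, then use the ranking to decide which prompt to refine next. A real writeup would, of course, also weigh style and structure, which word overlap can’t capture.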
Why it works: This is a masterclass in AI fluency and critical thinking. It teaches students how these models “think” and demystifies the process of prompt engineering. It’s a challenging, puzzle-like task that is almost impossible to complete without genuine engagement.
The Future of Rigor is Not Avoidance, It’s Engagement
Redesigning assignments for the age of AI isn’t about lowering our standards. It’s about elevating them.
It’s about having the courage to abandon our comfortable, familiar assessments when they no longer serve their purpose. It’s about shifting our focus from the easily automated work of producing polished artifacts to the deeply human work of critical inquiry, creative synthesis, and intellectual struggle.
This is a moment of opportunity. It’s a chance to build assignments that are more engaging, more authentic, and more focused on the durable human skills that will matter long after the next generation of AI tools arrives. Yes, it takes time. Yes, it takes effort. But let’s stop fighting a defensive battle and start designing the future of learning.
Next step for assignments: Check out the META AI Assignment Redesign Framework, a guide to moving beyond “AI-proofing” into meaningful redesigns.
Next step for strategy: Download the free TEACH Starter Toolkit to map your own approach across the five domains. You can get it by signing up for the newsletter.