There is a moment that rarely makes the headlines, because it does not announce itself. It happens quietly, in the gap between a tool becoming available and a tool becoming assumed. We crossed that gap with AI sometime in 2025.
The signal was not a product launch. It was a change in the questions people stopped asking. In 2023, every boardroom, every faculty meeting, and every marketing strategy session was consumed by some version of: can it do this? By 2025, that conversation had ended. Not because the tools stopped improving. They have not. But because capability ceased to be the bottleneck.
That shift is the subject of this essay. Because what we do with it in business, in marketing, and in higher education turns out to hinge on a single discipline that most organizations have not yet built, and that the AI industry has been surprisingly slow to name.
That discipline is verification.
I have been tracking how people actually adopt AI tools across three annual cohorts of students: not how they say they adopt them, but what behavioral data and longitudinal tracking reveal. Four hundred and sixty-four students. Three years.
In 2023, the dominant emotion was fear: what I call the Fear Era. In 2024, fear had been replaced by something that looked, from a distance, like progress: high conviction, low integration, what I call the Hype Era. By 2025 and into 2026, something genuinely different had emerged: the Utility Era, in which students stopped asking whether AI was legitimate and started asking whether it was useful for the specific task in front of them right now.
As AI became infrastructure, it became the assumed starting point rather than the exciting alternative. As a result, the instinct to verify its outputs quietly eroded. Familiarity had set in.
The same pattern shows up in every sector I study, though it manifests differently depending on what is at stake and who bears the cost when verification fails. Each of the three sectors below has a dedicated report in the Navigate AI 2026 State of AI series, with the full data, frameworks, and implementation guidance. What follows is the diagnostic case for why verification is the right frame for each one.
Higher Education: The Verification Deficit
The standard framing of AI in higher education went like this: AI had made cheating easier, students would cheat if given the opportunity, and the institutional response was to detect, deter, and discipline. Detection tools proliferated. Turnitin launched an AI classifier. GPTZero raised venture capital. The arms race was on.
What this framing missed was the structural problem underneath it. The question "did the student use AI?" was becoming not just difficult to answer but progressively unanswerable. More important, it was the wrong question. The question that has always mattered in education, the one on which accreditation, credentialing, and the entire enterprise of degrees rest, is: does this student understand what they turned in?
My longitudinal research adds a finding that may reframe institutional strategy entirely. In a latent class analysis of 275 students, I identified three segments of AI engagement: Believers (~45%), Adopters (~30%), and Skeptics (~25%). When I examined what predicted a Believer becoming an Adopter, the strongest predictor was not trust in AI, not perceived usefulness, not technical experience.
It was growth in ethical readiness (OR = 1.51, p < .001): the confidence to know how to use AI responsibly, where the boundaries are, and when to verify independently.
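To make that effect size concrete, here is a minimal sketch, assuming a standard logistic-regression reading of the odds ratio; the 30 percent baseline used in the example is hypothetical, not a figure from the study.

```python
# A minimal sketch, assuming a standard logistic reading of OR = 1.51.
# The baseline probability below is hypothetical, not from the study.
OR = 1.51  # reported odds ratio for growth in ethical readiness

def updated_probability(baseline_p: float, units: float = 1.0) -> float:
    """Probability of becoming an Adopter after `units` of growth
    in ethical readiness, given the odds ratio."""
    baseline_odds = baseline_p / (1 - baseline_p)
    new_odds = baseline_odds * (OR ** units)
    return new_odds / (1 + new_odds)

# A 30% baseline chance rises to roughly 39% with one unit of growth.
print(round(updated_probability(0.30), 2))  # -> 0.39
```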
Governance that creates clarity accelerates adoption. Governance that creates ambiguity suppresses it. The full implications of this finding, and the verification framework that follows from it, are in the Education report.
Marketing: The Verification Economy
Marketing arrived at the verification problem from a different direction, but it has landed in the same place. Adoption moved faster here than almost anywhere else: Jasper's 2026 survey found 91 percent of marketing teams actively using AI tools, up from 63 percent just one year prior. Content output tripled in some organizations. Production timelines compressed from weeks to days.
And then, quietly, the numbers that do not make the vendor slide decks started appearing.
More tools, more content, less delight. The gap between AI capability and customer experience is not closing. In some dimensions, it is widening. When every major brand uses the same foundational models, prompted with similar instructions and optimized for similar engagement benchmarks, creative output converges toward a statistical mean. The content is well-written. It is on-brand in a generic sense. And it is indistinguishable from what the three closest competitors produced last week.
I call the environment this creates the verification economy: a market in which consumers increasingly cannot tell the difference between what is genuine and what is generated.
My research adds a second dimension: the Confidence Paradox. In a pilot study of 71 future marketing practitioners, I examined the relationship between frequency of AI use and actual confidence in AI skills. The correlation: r = −0.02. Essentially zero. Using AI more does not make people more skillful. It makes them more practiced at using AI. Those are not the same thing.
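For a sense of just how small that relationship is, here is a hedged illustration, assuming the reported figure is a Pearson r.

```python
# A minimal sketch, assuming the reported figure is a Pearson r:
# r = -0.02 implies that frequency of AI use and confidence in AI
# skills share almost none of their variance.
r = -0.02
print(f"Shared variance: {r**2:.4%}")  # Shared variance: 0.0400%
```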
The marketing report details how to govern AI deployment in ways that protect the human capabilities it most threatens to displace: creative judgment, relational authenticity, brand coherence, and the editorial eye that decides which of the ten things AI generated is actually worth publishing.
Business: The Workflow Verification Gap
The business case for verification starts with a number that should be more widely quoted than it is: BCG reports that roughly 60 percent of organizations see minimal gains from AI despite significant investment. McKinsey corroborates this. While 88 percent of organizations use AI regularly in at least one function, the population of what BCG calls High Performers, organizations extracting measurable earnings impact from AI, sits at roughly 5 to 6 percent.
The gap between near-universal adoption and single-digit meaningful impact is not a technology gap. The models available to the 5 percent and the 95 percent are largely the same. The difference is operational discipline, and at the center of operational discipline is a specific capability: the ability to define what good looks like for an AI output, verify that the output meets that standard, capture failures when it does not, and feed that information back into the workflow.
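That capability can be sketched in a few lines of code. The rubric, function names, and brand below are hypothetical illustrations, not any vendor's API or a framework from the cited reports.

```python
# A minimal sketch of the verification loop described above: define what
# good looks like, check outputs against it, and capture failures so the
# workflow can improve. All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class VerificationResult:
    passed: bool
    failures: list[str] = field(default_factory=list)

def verify(output: str, rubric: dict) -> VerificationResult:
    """Check an AI output against an explicit definition of 'good'."""
    failed = [name for name, check in rubric.items() if not check(output)]
    return VerificationResult(passed=not failed, failures=failed)

failure_log: list[tuple[str, list[str]]] = []  # fed back into the workflow

def run_with_verification(generate, rubric):
    """Generate, verify, and capture failures rather than discard them."""
    output = generate()
    result = verify(output, rubric)
    if not result.passed:
        failure_log.append((output, result.failures))
    return output, result

# Hypothetical rubric: what 'good' looks like for a 150-word product blurb.
rubric = {
    "non_empty": lambda o: bool(o.strip()),
    "mentions_brand": lambda o: "Acme" in o,
    "under_word_limit": lambda o: len(o.split()) <= 150,
}
output, result = run_with_verification(lambda: "Acme widgets save time.", rubric)
print(result)  # VerificationResult(passed=True, failures=[])
```

The point of the sketch is the shape, not the specific checks: the definition of good is explicit, the check is automatic, and every failure is logged rather than silently discarded.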
The distinction that most usefully separates organizations on the value curve is not which AI systems they use. It is where on a three-category spectrum their deployment falls: Toy (AI as a helpful assistant that speeds up existing work), Tool (AI that eliminates human steps and redesigns how work is actually done), or Zombie (a pilot with all the surface characteristics of success and none of the measurable business impact, where Deloitte estimates most AI investment currently lives).
Almost every AI workflow redesign looks like it is failing at the 90-day mark. This is not failure. It is a well-documented pattern across a century of general-purpose technology adoption: productivity dips while work is rebuilt around the new tool, then climbs past the old baseline, tracing a J-curve. The organizations that set a six-month measurement horizon before launch and resist the urge to pull the plug at 90 days are the ones that reach the part of the J-curve that justifies the investment.
Three Sectors. One Discipline.
The verification deficit in education, the verification economy in marketing, and the workflow verification gap in business are not three separate problems. They are three expressions of a single transition that AI has forced on every organization that has adopted it seriously.
For the first fifteen years of the AI era, the hard problem was getting AI to produce something useful. That problem is largely solved. The hard problem now is knowing whether what AI produced is actually good, accurate, appropriate, trustworthy, and aligned with what you intended, and building the infrastructure to answer that question consistently, at scale.
The organizations pulling ahead are the ones that built verification systems before they were legally required to. Not after a public failure. Not after a compliance audit revealed the exposure. Before.
We used to ask: Is your organization using AI? That question has been answered.
Now the question is: What does good look like, who is checking, and what happens when the check fails?
Sources

Adobe (2026). Digital Trends Report, N=3,000.
Baltrip (2026a, 2026b). Working papers.
BCG (2025). AI at Scale.
Deloitte (2026). State of AI in the Enterprise, N=3,235.
HEPI (2025). Student Academic Experience Survey.
Jasper (2026). State of AI in Marketing, N=1,400.
Jisc (2025). AI in Higher Education.
McKinsey (2025). State of AI Global Survey.
Smartly (2026). AI in Advertising, N=450.
