We're Still Early: AI, Education, and the Biggest Opportunity Everyone Is Overlooking - My Thesis (Updated March 4th, 2026)
A lot is happening in AI right now. It's genuinely hard to keep up. But I still think we're in the early days - and I don't say that lightly.
Look at people outside the tech bubble. Most of them are still using ChatGPT as their first and only AI tool. Many first heard of Claude this past week - not because of a product launch or a benchmark, but because of politics. The Anthropic-Pentagon situation brought it into the mainstream. President Trump ordered every federal agency to cease using Anthropic's technology after Defense Secretary Pete Hegseth labeled the company a "supply chain risk to national security." Hours later, OpenAI rushed in to announce its own Pentagon deal - which Sam Altman later admitted "looked opportunistic and sloppy." The fallout was immediate: Claude surged to number one on the iPhone App Store, and Anthropic's user base has grown more than 60% since January.
That's what it took. Not a technical breakthrough - a political controversy. That tells you everything about where we really are in the adoption curve.
We're in March 2026. ChatGPT still holds approximately 68% of the AI chatbot market share and recently surpassed 900 million weekly users. Claude, despite being the favorite of Silicon Valley and embedded in 70% of Fortune 100 companies, draws roughly 3 million weekly web visitors compared to ChatGPT's 30 million. Only about 35% of people use AI tools daily.
Meanwhile, 78% of organizations say they've "adopted AI," and 83% call it a top priority - but Gartner now places generative AI in the "trough of disillusionment." The gap between enterprise adoption rhetoric and actual productive use remains enormous. The industry is shifting from capability races to deployment reality, and that transition is exposing how early we still are.
Bold statement, but it's true: we are still very early.
AI is going to be transformative for healthcare, information access, scientific research, and enterprise productivity. Those are well-covered narratives with billions in funding behind them. But the biggest thing we're collectively overlooking is education.
And I don't mean "AI tools in the classroom." I mean something more fundamental: we haven't figured out how AI fits into the human learning process. We're in infancy here - and the data backs that up.
Global student AI usage jumped from 66% in 2024 to 92% in 2025. An estimated 86% of college students now use AI as their primary research and brainstorming tool. ChatGPT and Grammarly dominate with 66% and 25% student usage, respectively.
But here's the uncomfortable truth: most of this usage is substitutive, not educational. Students are using AI to write the essay they procrastinated on. They're using it to get answers without going through the process of learning. And that process - testing yourself, failing, going back to the textbook chapter by chapter, working through practice problems, applying what you've learned - that is how knowledge actually sticks.
Harvard recently published research on "preserving learning in the age of AI shortcuts," directly confronting this tension. The OECD's Digital Education Outlook 2026 raises the same alarm. When AI makes it effortless to get the answer, the incentive to actually learn collapses.
Yes, you can use AI to pass the test. But then you haven't learned anything. You've just let AI do the job. And that's a profoundly different relationship with knowledge than anything we've dealt with before.
The coding world mirrors this perfectly. AI coding tools like Cursor, GitHub Copilot, and Claude Code are everywhere now. They're genuinely impressive - they can get you from zero to halfway there. If you have no idea what you're doing, you can describe what you want in natural language and get working code on localhost.
But the quality gap is real and measurable. AI-generated code creates 1.7x more issues than human-written code. Technical debt increases 30-41% after AI tool adoption. Cognitive complexity rises 39% in agent-assisted repositories. One study found that while experienced developers believed AI made them 20% faster, objective measurement showed they were actually 19% slower.
There's a deeper problem too: the "almost right" phenomenon. AI routinely produces code that looks 95% correct but hides subtle bugs - hallucinated library methods, off-by-one errors, security flaws - that require deep debugging. Sometimes it takes longer to fix the AI's output than it would have taken to write the code from scratch.
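To make that concrete, here's a hypothetical illustration - not output from any particular tool, just a sketch of the failure mode - of the kind of bug that survives a casual read: a pagination helper that runs fine on a quick test but silently drops the final partial page.

```python
# Hypothetical "almost right" output: a pagination helper that reads
# cleanly and runs, but silently drops the final partial page.

def paginate(items: list, page_size: int) -> list:
    """Split items into pages of at most page_size elements (buggy)."""
    # BUG: floor division discards the remainder, so a trailing partial
    # page is never emitted: 10 items at page_size 3 -> 3 pages, not 4.
    num_pages = len(items) // page_size
    return [items[i * page_size:(i + 1) * page_size] for i in range(num_pages)]

def paginate_fixed(items: list, page_size: int) -> list:
    """Correct version: ceiling division keeps the last partial page."""
    num_pages = -(-len(items) // page_size)
    return [items[i * page_size:(i + 1) * page_size] for i in range(num_pages)]

assert paginate(list(range(10)), 3) == [[0, 1, 2], [3, 4, 5], [6, 7, 8]]  # item 9 is gone
assert paginate_fixed(list(range(10)), 3)[-1] == [9]                      # item 9 kept
```

The broken version passes the "it runs on localhost" test; only a boundary-case check catches it. That's exactly the class of bug that quietly eats the time the tool appeared to save.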
And even when you get the code working on localhost, then what? How do you deploy it? How do you handle authentication, database migrations, CI/CD pipelines? Those are the next steps where the learning curve kicks back in, and the AI scaffolding starts to crumble. You can tell the difference between code written by someone who knows what they're doing and code generated by someone prompting their way through. The output reveals the understanding - or the lack of it.
I don't know how much people are actually learning through this process. There's still a learning curve - it's just a different kind, and it's still steep.
Here's where it gets philosophically interesting. We've opened Pandora's box, but we don't know what's inside or how to use it. New models, new tools, and new frameworks arrive every week, and the hard part is choosing what to absorb when the firehose never stops.
But for the people who do know exactly what they're doing? AI has enabled a textbook case of Jevons paradox.
The original paradox, from 19th-century economist William Stanley Jevons: when you make a resource more efficient to use, people don't use less of it - they use more. Coal-efficient steam engines didn't reduce coal consumption; they made coal so useful that demand exploded.
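The mechanism is easy to see in a toy model with made-up numbers (an assumption-laden sketch, not data): if demand for a resource has constant price elasticity and efficiency halves the per-unit cost, total consumption rises whenever that elasticity exceeds 1.

```python
# Toy model of Jevons paradox with illustrative numbers. Demand follows
# constant price elasticity: units_demanded = k * cost^(-elasticity).

def total_consumption(cost_per_unit: float, elasticity: float, k: float = 100.0) -> float:
    """Resource consumed = units demanded * resource cost per unit."""
    units_demanded = k * cost_per_unit ** (-elasticity)
    return units_demanded * cost_per_unit

before = total_consumption(cost_per_unit=1.0, elasticity=1.5)
after = total_consumption(cost_per_unit=0.5, elasticity=1.5)  # 2x efficiency gain

print(f"before: {before:.0f}, after: {after:.0f}")  # before: 100, after: 141
# Demand grows ~2.83x (2^1.5) while per-unit cost only halves, so total
# consumption rises ~41%. With elasticity below 1, it would fall instead.
```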
The same thing is happening with AI and cognitive labor. If you're a power user - if you already have deep domain expertise and know exactly what to ask for and how to evaluate the output - AI can 10x your output. But you're not saving time. You're spending more time doing more things. The efficiency gains don't translate into leisure; they translate into expanded ambition and expanded workload.
The data confirms this: companies that embraced AI tools early are now seeing their most aggressive adopters show signs of burnout. To-do lists expand to fill every hour AI freed up. The bottleneck has shifted from information access to attention, judgment, and prioritization. Some companies are now experimenting with "AI productivity caps" and four-day work weeks specifically for teams that adopt AI tools - an acknowledgment that the paradox is real and unsustainable.
Jevons paradox plays perfectly into this moment: AI doesn't reduce the amount of work. It raises the ceiling of what's possible, and ambition fills the gap.
So here's where all of this converges into a thesis.
The AI in education market is projected to grow from $9.58 billion in 2026 to nearly $137 billion by 2035 - a 34.5% CAGR. That sounds impressive until you look at the investment side: EdTech venture capital hit $2.4 billion in 2024, an 89% decline from the pandemic peak of $20.8 billion. That's the lowest level in a decade.
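Those endpoints and the growth rate are at least internally consistent - a quick sanity check with the standard CAGR formula, using the figures above:

```python
# Sanity check on the projection above: $9.58B (2026) -> ~$137B (2035),
# which spans 9 compounding years.

start, end, years = 9.58, 137.0, 2035 - 2026
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")                       # implied CAGR: 34.4%
print(f"2035 at 34.5%: ${start * 1.345 ** years:.0f}B")  # ~$138B
```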
There's a massive mismatch between the size of the opportunity and the capital flowing toward it. Everyone is funding AI infrastructure, foundation models, and enterprise SaaS. Almost no one is seriously funding the question of how humans actually learn in an AI-native world.
More than two-thirds of teachers (68%) have received zero formal AI training. The tools exist, but the pedagogy doesn't. The integration doesn't. The habit formation doesn't. People use AI once or twice - it's a novelty. But it's not becoming something they can't live without, because no one has cracked the experience that makes it indispensable for actual learning.
Think about the levels: K-12 students, university students, working professionals reskilling, autodidacts teaching themselves new domains. Each has different needs, different cognitive frameworks, different motivations. And right now, the dominant use pattern across all of them is the same: AI as a faster search engine. Not AI as a tutor. Not AI as a thinking partner. Not AI as a Socratic method engine that forces you to work through problems rather than handing you answers.
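To make that last contrast concrete, here's a minimal sketch of what a Socratic loop might look like - `ask_llm` is a hypothetical stand-in for any chat-completion API, and the prompt wording is mine, not a shipped product's. The design point is the constraint: the system prompt forbids final answers, and the loop keeps the student doing the work.

```python
# Minimal sketch of a Socratic tutoring loop. `ask_llm` is a hypothetical
# placeholder for any chat-completion API; the point is the constraint in
# the system prompt and the shape of the loop, not the model call itself.

SOCRATIC_SYSTEM_PROMPT = (
    "You are a tutor. Never state the final answer. Respond with exactly one "
    "guiding question or one small hint that moves the student a single step "
    "forward. If the student is correct, ask them to explain why before "
    "confirming."
)

def ask_llm(messages: list[dict]) -> str:
    # Placeholder so the sketch runs without dependencies; wire this to a
    # real model API in practice.
    return "What is the first quantity you'd need to know here, and why?"

def socratic_session(problem: str) -> None:
    messages = [
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": f"Here's the problem I'm working on: {problem}"},
    ]
    while True:
        reply = ask_llm(messages)        # a question, never the answer
        print(f"Tutor: {reply}")
        attempt = input("You (or 'done'): ")
        if attempt.strip().lower() == "done":
            break
        # The student's own attempt, not the model's answer, drives the session.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": attempt})
```

The difference from "AI as faster search" lives entirely in that loop: the model is constrained to ask, and the student is required to answer.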
You can live without AI search, because it's just faster search. You can use AI for writing, but then you're bypassing the learning process. The sweet spot - the thing that would make AI genuinely transformative for education - is still undiscovered. We haven't found it yet.
This is why I'm building PantheonAI. The thesis is clear to me as of March 4th, 2026:
"The biggest unlock in AI isn't making models smarter. It's making humans smarter through AI."
The tools are there. The capabilities are extraordinary. But the bridge between "powerful AI" and "AI that actually makes people learn, grow, and retain knowledge" - that bridge barely exists. And the market is not funding it at anywhere near the level it deserves.
The people who figure out how AI fits into the human learning process - not as a shortcut, but as a genuine amplifier of understanding - will build the most important companies of the next decade. Not the most hyped. Not the ones raising the biggest rounds. The most important.
That's where I'm focused. The rest is noise. I haven't figured it out yet - and that's why we keep building: to learn how to ask the right questions, the ones that could lead us closer to the right approach.