AI Has No Taste, Yet!
Steve Jobs once said something that I think about almost every week: "The only problem with Microsoft is they just have no taste."
He wasn't talking about aesthetics. He wasn't talking about fonts or rounded corners. He was talking about something deeper. The ability to look at a thousand possible directions and feel which one is right. Not calculate. Not optimize. Feel.
That gut-level instinct for what matters, what resonates, what will still be beautiful in ten years. That's taste.
And right now, AI doesn't have it.
Not even close.
The Mimicry Machine
Here's what the best AI models in the world can do in early 2026: Claude Opus 4.6 can sustain autonomous work for over fourteen hours straight. GPT-5.2 can write prose that sounds warm and conversational. Gemini 3 can reason across text, video, and audio simultaneously. These systems can code, analyze, summarize, plan, and produce at a speed and scale that would have seemed like science fiction five years ago.
But ask any of them to make something great, and you start to see the cracks.
A recent study from Wharton found that when people use AI to brainstorm, their ideas tend to converge on the same ones. Not because the ideas are bad. Because they're probable. AI language models are, at their core, prediction engines. They calculate what word, what sentence, what structure is most likely to come next based on everything they've consumed. And when millions of people feed similar prompts into similar models, they get back variations of the same patterns.
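To make the prediction-engine point concrete, here's a toy sketch. The vocabulary and the numbers are invented purely for illustration; no real model is involved. With softmax sampling at typical temperatures, the probable words dominate and the tail idea almost never surfaces, which is exactly the convergence effect at work.

```python
import math
import random

# Toy next-word distribution a model might assign after a prompt like
# "Our startup idea is a ..." -- made-up logits, purely for illustration.
next_word_logits = {
    "marketplace": 4.0,
    "platform": 3.8,
    "app": 3.5,
    "co-op": 0.5,  # the unusual idea lives deep in the tail
}

def sample(logits, temperature=1.0):
    """Softmax-sample one word; lower temperature sharpens the peak."""
    scaled = [v / temperature for v in logits.values()]
    z = sum(math.exp(v) for v in scaled)
    weights = [math.exp(v) / z for v in scaled]
    return random.choices(list(logits), weights=weights, k=1)[0]

# Simulate many users prompting the "same model" at typical settings.
counts = {}
for _ in range(10_000):
    word = sample(next_word_logits, temperature=0.7)
    counts[word] = counts.get(word, 0) + 1
print(counts)  # "marketplace"/"platform" dominate; "co-op" barely surfaces
```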
The result is a strange new kind of mediocrity. Technically competent. Grammatically flawless. And completely forgettable.
Research published in Science Advances confirmed this at scale: generative AI boosts individual creativity on specific tasks, but it compresses the diversity of what gets created overall. Everyone's short stories got a little better. But they also got a little more alike. The edges got sanded off. The weird, unexpected, only-a-human-would-think-of-this quality disappeared.
AI made the average better. But it made the exceptional harder to find.
What Taste Actually Is
Taste isn't preference. It's not liking blue over red or preferring minimalism to maximalism.
Taste is pattern recognition refined by lived experience, cultural immersion, and a willingness to be wrong a thousand times. Jobs described it as "trying to expose yourself to the best things that humans have done and then trying to bring those things into what you're doing."
It's the director who knows which scene to cut even though it's beautiful, because the film needs to breathe. It's the writer who deletes the cleverest line in the essay because it's serving their ego instead of the reader. It's the designer who chooses the slightly imperfect typeface because perfection would feel sterile.
Taste requires sacrifice. It requires knowing what to leave out. And that's precisely where AI falls apart.
AI models don't sacrifice anything. They don't feel the weight of a decision. When a model generates ten options, it doesn't agonize over which one to present. It ranks them by probability and hands you the most statistically defensible answer. That's optimization, not curation.
A study from Frontiers in Psychology tested this directly. The researchers gave ChatGPT-4o a classic creativity task and found something telling: the model could generate more ideas than humans, but it couldn't distinguish between its original ideas and its conventional ones. It didn't know which of its own thoughts were interesting. It lacked what the researchers called "differential evaluation." In simpler terms, it had no taste about its own output.
The Homogenization Problem
This matters more than most people realize.
Walk through the internet in 2026 and you can feel it. Blog posts that all hit the same beats. Marketing copy with identical cadences. LinkedIn posts that sound like they were squeezed from the same tube. AI-generated content has a texture to it now, a sort of frictionless competence that your brain learns to slide right past.
It's not that the content is wrong. It's that it's not anything. It doesn't take a position it's afraid to take. It doesn't make a joke that might not land. It doesn't risk being ugly in pursuit of being honest.
When everyone has access to the same tools producing from the same training data, sameness becomes the default. And in a world drowning in content, sameness is invisibility.
Research examining Italy's temporary ChatGPT ban found that when people lost access to the tool, the content they produced became more varied again. The homogenization wasn't coming from the humans. It was coming from the machine.
Agents Can Execute. They Can't Direct.
The hottest development in AI right now is agents: systems that don't just answer questions but take actions. They browse the web, write code, manage projects, send emails. Gartner predicts 40% of enterprise applications will embed AI agents by the end of this year.
And they're genuinely impressive at execution. Give an agent a clear objective with defined constraints, and it will work tirelessly within those boundaries. It will iterate, recover from errors, and optimize its approach.
But here's the thing nobody wants to say out loud: agents are the ultimate middle managers. They're reliable, tireless, and completely incapable of asking "should we even be doing this?"
Creative direction, strategic vision, the ability to look at a project and say "this is technically perfect and emotionally dead, start over." That's still entirely human territory. Not because we haven't built models smart enough. But because taste isn't a capability you scale. It's a disposition that emerges from caring about something in a way that machines simply don't.
The people building the most interesting things with AI right now understand this intuitively. They use models as instruments, not as composers. The AI handles the tedious middle. The human handles the beginning (what should we make?) and the end (is this actually good?).
But Here's Where It Gets Interesting
I said "yet" in the title, and I meant it.
There are real reasons to believe this won't be the permanent state of things. And the path forward isn't what most people expect.
The next frontier isn't making models that produce better outputs. It's making models that can evaluate their own outputs with something approaching genuine judgment.
Think about how taste develops in humans. You don't start with it. You develop it through massive exposure to great work, repeated failure, feedback from people whose opinion you respect, and slowly building an internal compass that tells you when something is right before you can articulate why.
Now look at what's happening in AI research. Models are getting better at self-evaluation. They're learning to critique their own reasoning, identify weaknesses in their arguments, and iterate before presenting a final answer. The gap between "generate" and "curate" is narrowing.
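Structurally, that generate-critique-revise loop is simple. Here's a minimal sketch of the shape; the `llm` function is a placeholder for whatever model call you'd wire in, and the prompts and stopping rule are invented, not any lab's actual method.

```python
def llm(prompt: str) -> str:
    # Placeholder: substitute any text-in, text-out model call here.
    raise NotImplementedError

def draft_with_self_critique(task: str, rounds: int = 3) -> str:
    """Generate, critique, and revise before presenting a final answer."""
    draft = llm(f"Write a first attempt at: {task}")
    for _ in range(rounds):
        critique = llm(
            "Critique this draft harshly. Name its weakest part and say "
            f"whether it is conventional or genuinely surprising:\n{draft}"
        )
        # Crude stopping rule: accept once the critic runs out of objections.
        if "no major issues" in critique.lower():
            break
        draft = llm(
            f"Revise the draft to address this critique.\n"
            f"Critique: {critique}\nDraft: {draft}"
        )
    return draft
```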
The shift from Large Language Models to what some researchers are calling Large World Models points in this direction too. Instead of systems that just process text, imagine systems that reason across sensory experience. That understand not just what words mean but what spaces feel like, how music builds tension, why a particular shade of light makes a photograph feel lonely.
We're not there. But the trajectory is real.
Multi-agent systems moving into production this year hint at another possibility: taste through collaboration. One agent generates. Another critiques. A third synthesizes. The creative tension that produces great work in human teams could, in theory, be simulated between specialized AI systems that are designed to disagree with each other productively.
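One way to picture that division of labor, sketched under the same placeholder assumptions as above (three role-prompted model calls, invented prompts, not any shipping framework):

```python
from typing import Callable

LLM = Callable[[str], str]  # any text-in, text-out model call

def taste_by_committee(task: str, generate: LLM, critique: LLM,
                       synthesize: LLM, n_drafts: int = 5) -> str:
    """Generate -> critique -> synthesize across three specialized agents.

    The roles could be three different models, or one model given three
    system prompts designed to disagree with each other productively.
    """
    drafts = [generate(f"Draft #{i + 1}. Take a different angle: {task}")
              for i in range(n_drafts)]
    reviews = [critique(f"What here is weak, safe, or derivative?\n{d}")
               for d in drafts]
    bundle = "\n\n".join(f"DRAFT:\n{d}\nCRITIQUE:\n{r}"
                         for d, r in zip(drafts, reviews))
    return synthesize(
        "Keep only what survives the critiques and discard the rest, "
        f"even if that means discarding most of it:\n{bundle}"
    )
```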
The models that crack this won't just be smarter. They'll be pickier. They'll throw away 90% of what they generate because they'll have internalized the understanding that most of what anyone produces, human or machine, isn't good enough. That willingness to reject your own work is the beating heart of taste.
What This Means For You
If you're building with AI right now, the temptation is to let it do everything. It's fast. It's cheap. It's good enough.
"Good enough" is a trap.
The people who will build things that matter over the next few years are the ones who use AI to handle what doesn't require taste and invest their freed-up energy into the parts that do. Strategy. Voice. The courage to make something that not everyone will like but the right people will love.
AI handles scale and speed. The bottleneck now, and for a while yet, is human judgment: not the precision of the answers we get, but the quality of the questions we ask.
The models will get better. They might even develop something that looks enough like taste to fool most people most of the time. But the gap between optimization and genuine creative vision is wider than the benchmarks suggest.
For now, taste remains the last unfair advantage. The one thing you can't automate, can't shortcut, and can't buy with more compute.
Develop yours while the machines catch up.
~Dakshay