TL;DR: AI generates high volume within conventional categories but cannot identify originality. Trained on existing ideas, it cannot recognize what lies outside its own training distribution.
The Short Version
You ask your AI tool for ten creative ideas. It delivers ten ideas. Volume. Fluency. Impressive idea count.
Then you look at them and realize: they’re all the same category. All within the same conceptual boundaries. All variations on the established approach.
This isn’t accidental. It’s not a flaw you can fix by asking differently. It’s baked into how AI works.
The Egg Task Study
Psychologists have a standard test for measuring divergent thinking and creative originality called the “egg task.” The prompt is simple: think of all the uses for an egg. The test measures not just how many uses you generate, but how creative and original those uses are.
A 2025 study examined how current-generation AI performed on this task.
📊 Data Point: The AI generated massive volume. Impressive fluency. Dozens and dozens of uses for an egg.
And virtually all of them fell strictly within conventional, pre-established categories: food, decoration, crafting, education, etc.
💡 Key Insight: The AI was phenomenally good at exhausting the obvious. It could generate infinite variations within established category boundaries. More egg dishes. More crafting ideas. More educational applications.
But ask it to break the category boundary entirely? To imagine a use for an egg that exists outside the conventional categories? It couldn’t do it.
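The mechanism is easy to see in miniature. Below is a toy sketch in plain Python — not how any real model works; the category names and weights are hypothetical — treating a generator as a probability distribution over the categories it saw in training. No matter how many samples you draw, nothing outside the trained support ever appears.

```python
import random

# Toy model: a "generator" is just a probability distribution over
# idea categories seen in training. Names/weights are hypothetical.
training_distribution = {
    "food": 0.55,
    "decoration": 0.20,
    "crafting": 0.15,
    "education": 0.10,
}

def sample_idea(dist, rng):
    """Draw one 'idea' (category) according to the training distribution."""
    categories = list(dist)
    weights = [dist[c] for c in categories]
    return rng.choices(categories, weights=weights, k=1)[0]

rng = random.Random(0)
ideas = [sample_idea(training_distribution, rng) for _ in range(10_000)]

# High volume, zero novelty: every sample lies inside the trained support.
assert set(ideas) <= set(training_distribution)
print(f"{len(ideas)} ideas generated, {len(set(ideas))} distinct categories")
```

Volume scales for free; the support never grows. That is fixation bias in a few lines of arithmetic.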
The Originality Evaluation Problem
Here’s where it gets worse.
Researchers asked the AI to evaluate which of its own ideas were original versus derivative. To apply metacognition—the ability to think about your own thinking, to distinguish between a genuinely novel concept and a well-dressed version of an existing one.
The AI failed dramatically.
It couldn’t distinguish between an original idea and a derivative one, even among its own outputs. It struggled immensely to identify which concepts truly broke convention and which merely refined it.
💡 Key Insight: Generative AI was trained on human creativity. It learned the probability distribution of existing human ideas. It can remix and recombine what’s been imagined. But it cannot evaluate whether an idea is novel because novelty, by definition, exists outside the probability distribution it was trained on.
You cannot identify something as original from within a system that only understands probability distributions.
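To make that concrete, here is a deliberately naive sketch (hypothetical names and probabilities) of an "originality scorer" that has only training probabilities to work with. The best it can do is measure rarity within the distribution — which is not originality — and for an idea outside the distribution it has no score at all.

```python
# Toy "originality scorer" with nothing but training probabilities.
# All ideas and probabilities below are hypothetical.
training_prob = {
    "omelette": 0.4,
    "easter egg": 0.3,
    "egg-carton craft": 0.2,
    "biology demo": 0.1,
}

def originality_score(idea):
    """Score an idea by its rarity under the training distribution.
    Undefined for anything the model never saw."""
    p = training_prob.get(idea)
    if p is None:
        return None  # no reference point: the model cannot judge it
    return 1.0 - p

# In-distribution ideas get a score -- but it measures rarity, not novelty:
assert originality_score("biology demo") > originality_score("omelette")
# A genuinely novel idea falls outside the distribution entirely:
assert originality_score("egg as a one-time encryption token") is None
```

The failure mode in the study is exactly this shape: "less probable" gets mistaken for "more original," and the truly out-of-distribution idea is unscorable.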
The Fundamental Limitation
Deep learning models are fundamentally incapable of experiencing human emotion, physical intuition, or lived contextual reality. They cannot “imagine” new paradigms because imagination requires this embodied experience.
They can only remix and repackage what has already been imagined.
This creates a specific cognitive failure: fixation bias. The model generates high volume within conventional categories (that’s what the data supports), and it cannot identify when it’s fixating within those boundaries (because it has no external reference point for what originality looks like).
You could ask it to “be more creative” or “think outside the box” and it will generate creative-sounding language about breaking conventions. But the underlying ideas are still locked in conventional categories. The vocabulary of originality doesn’t create the capability for originality.
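A rough analogy for why "be more creative" doesn't help, assuming a softmax-style sampler with a temperature knob (the logits below are made up): turning up the temperature flattens the probabilities across existing categories, but the set of possible outputs is exactly the same.

```python
import math

# Toy sketch: temperature scaling reshapes probabilities but never adds
# new outcomes. Logits are hypothetical.
logits = {"food": 3.0, "decoration": 1.5, "crafting": 1.0, "education": 0.5}

def softmax(logits, temperature):
    """Convert logits to probabilities at a given sampling temperature."""
    scaled = {k: v / temperature for k, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    return {k: math.exp(v) / z for k, v in scaled.items()}

p_default = softmax(logits, temperature=1.0)
p_creative = softmax(logits, temperature=2.0)

# Higher temperature spreads mass more evenly across existing categories...
assert max(p_creative.values()) < max(p_default.values())
# ...but the set of possible outputs is identical: no new category appears.
assert set(p_creative) == set(p_default) == set(logits)
```

Prompting for creativity works the same way at the idea level: different wording, different mix, same support.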
What This Means When You Use AI as a Creative Advisor
If you’re using AI as your creative advisor—asking it to help you evaluate ideas, suggest directions, identify the most creative concept—you’re trusting the evaluation to a system that cannot perform creative evaluation.
It will tell you which ideas are most probable, most refined, most polished. That’s not the same as most original.
It will push you toward ideas that fit within existing conventions because those ideas have the strongest probability distribution. It cannot recognize genuine novelty because genuine novelty falls outside that distribution.
💡 Key Insight: You’re using a tool optimized for conventional refinement to guide creative strategy. You’re asking a system that can’t think outside the box to tell you which ideas are outside the box.
It will consistently point you toward the creative center, not the creative edge.
The Portfolio Risk
Without strict, highly skeptical human oversight filtering and challenging AI outputs, you risk saturating your creative portfolio with ideas that are highly polished but fundamentally derivative.
They look original. They sound original. The language is original. The execution is sophisticated. But the underlying concept is a variation on an existing theme, because that’s the only thing the system could generate.
This is particularly dangerous when you’re building in domains where differentiation matters most: product design, marketing positioning, visual identity, brand strategy.
In these domains, originality isn’t a luxury feature. It’s your competitive advantage. And AI cannot provide it. It can provide variations on the conventional. Polish. Refinement. Optimization. But not breakthrough originality.
What This Means For You
You can use AI to generate volume within categories. Ask it to give you ten variations on a theme. It will do this brilliantly.
But don’t ask it to identify which idea is most original. Don’t ask it to tell you which concept breaks convention. Don’t use it as your creative advisor in domains where differentiation is the goal.
Genuine originality evaluation requires someone who understands the existing category boundaries, can imagine what’s outside those boundaries, has domain expertise to distinguish between incremental and paradigm-shifting creativity, and is willing to push ideas toward the edge, not the center.
These capabilities require human judgment. They require someone who understands not just the data distribution, but the context, the customer, the market, the missed opportunity. You cannot outsource this to a system that optimizes for probability.
Key Takeaways
- AI exhibits fixation bias: it generates high volume within conventional categories but cannot break outside established boundaries
- AI cannot evaluate originality even in its own outputs because originality lies outside its training distribution
- Using AI as a creative advisor pushes you consistently toward the center of conventional thinking, not the edge of novelty
- Domains requiring true differentiation (product design, positioning, visual identity) cannot rely on AI for originality evaluation
Frequently Asked Questions
Q: Can I train AI to break out of fixation bias? A: Not substantially. The fixation bias is structural—it stems from how AI learns from existing data. You can prompt it differently to get variations at the margins, but you cannot teach a system trained on existing ideas to recognize ideas that don’t exist in its training corpus.
Q: What if I use AI to generate ideas and then manually evaluate them myself? A: That’s a better approach, but with a critical caveat: you’re limited to evaluating what AI actually generates. If AI never generates truly novel ideas, then you’re choosing the most interesting derivative concept—which is still derivative. You need human brainstorming as the primary ideation source.
Q: Is it ever safe to use AI for creative strategy? A: Yes, but only after you’ve established your creative direction independently. Use AI to refine, expand, and polish ideas humans have already generated. Don’t use it to generate the ideas themselves.
Related: Why AI Is Killing Your Best Ideas | Algorithmic Monoculture and AI Creativity | Polished AI Output vs. Original Thinking