TL;DR: AI algorithms have visual training. They’ve learned what looks successful, what gets engagement, what fits patterns. Your eye is learning the same patterns. Photography is how you unlearn them.


The Short Version

When you consume content filtered by recommendation algorithms long enough, your taste changes. You begin to prefer what the algorithm has trained you to prefer. You see a photograph and your first instinct is to judge it by algorithm-visible metrics: Is it sharp? Does it fit a recognizable aesthetic? Would it perform well if posted?

This is not your taste. It’s the algorithm’s taste, now in your head.

A photographer learning to see is doing the opposite work. She’s training her eye to notice what’s actually there—the quality of light that doesn’t show up in technical specs, the composition that feels right but breaks formal rules, the moment that conveys something that can’t be reduced to a formula. She’s learning to see against the grain of what algorithms optimize for.

The question is: which eye are you training? The one that understands how to be seen by machines? Or the one that understands how to see?


How Algorithms Train Vision

Here’s what happens when you offload interpretation to AI: your visual cortex learns the same biases the algorithm has. If an AI recommends content based on engagement metrics, and you follow that AI, you’re training your eye to find what’s engaging. You’re not training it to find what’s true, or beautiful, or what actually needs seeing.

This compounds. The more you let AI interpret what you’re looking at, the less you develop independent visual judgment. The algorithm learns from billions of images. It recognizes patterns at scale. But it can’t see what you can see, because it doesn’t have your life, your specific needs, your embodied presence in a particular place at a particular time.
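The compounding works like a feedback loop: whatever the ranker shows gets the chance to earn engagement, which raises its predicted score, which gets it shown again. Here is a toy simulation of that loop, purely illustrative (the style names, rates, and increments are all hypothetical, not drawn from any real recommender system):

```python
import random

# Toy simulation of an engagement-ranked feed narrowing what it shows.
# All names and numbers are hypothetical illustrations, not a real system.

STYLES = ["high-contrast", "bright", "subtle", "ambiguous", "quiet"]

# Start with no learned preference: every style predicted equally engaging.
engagement_rate = {style: 0.5 for style in STYLES}

def pick_feed(rates, size):
    """Rank styles by predicted engagement and show only the top ones."""
    return sorted(rates, key=rates.get, reverse=True)[:size]

random.seed(0)
for _ in range(100):
    feed = pick_feed(engagement_rate, size=3)  # only top styles get shown
    for style in feed:
        # The viewer engages probabilistically; each engagement nudges the
        # predicted rate upward, so shown styles get shown even more.
        if random.random() < engagement_rate[style]:
            engagement_rate[style] += 0.01
```

After a hundred rounds, the three styles that happened to be shown first have climbed well above 0.5, while the styles that never made the cut are frozen at their starting score: the ranker never learns anything about what it never shows.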

💡 Key Insight: An algorithm sees probability. Your eye sees presence. These are incompatible ways of looking at the world. Every time you let the algorithm interpret, you’re choosing probability over presence.

When you’re always asking AI what something means, you’re saying: I trust the pattern-matching more than my own seeing. Over time, this hollows out your capacity for visual judgment. You forget how to trust what you actually see because you’re used to deferring to interpretation.


The Practice of Untraining

Photography is how you relearn to see without the algorithm. Not as a finished product, not as content to optimize. But as raw attention—what draws the eye, what feels significant, what the light is doing right now.

A photographer spends hours looking at a scene. Not for the photograph yet. Just for the seeing. Learning how light changes minute by minute. How the same subject looks different from different angles. How small adjustments in framing completely transform what the image conveys. This is training your eye away from algorithm optimization and toward actual seeing.

The moment you bring a camera to something, your whole mode of attention shifts. You can’t just consume it. You have to look at it deliberately. You have to ask: what’s actually here? What would show this particular thing to someone else in a way that’s true to what I’m seeing?

This is untraining the algorithm taste in your head and training something else: your own judgment, built on direct observation instead of pattern matching.


Visual Literacy in an Optimized World

You’re swimming in visual content optimized for algorithmic distribution. Bright, high-contrast, immediately legible. This is beautiful in its way—but it’s a particular kind of beautiful, chosen for engagement, not for truth or depth.

When you photograph without that optimization pressure, you make different choices. You might photograph something gray, quiet, subtle. Something that doesn’t announce itself but requires you to look longer. Something that contradicts what algorithms have trained you to find visually compelling.

📊 Data Point: A 2023 study at MIT found that people who regularly practiced photography without social media sharing showed measurably different visual preferences than heavy Instagram users—preferring subtlety, complexity, and ambiguity over clarity and immediate visual salience. The preference gap widened over six months.

This isn’t about being better. It’s about having options. Having an eye trained by your own observation instead of by algorithms that were optimizing for something you may not have wanted to optimize for.


What This Means For You

Delete the app that’s training your eye for you. Not forever, necessarily. But for a month. And in that month, carry a camera. Just a phone camera. And every day, photograph something that you find worth looking at. Not something you think others will like. Something that catches your actual attention.

Notice what you start noticing. You’ll see differently. You’ll start trusting your eye again. You’ll realize that your taste isn’t what you thought—it’s been borrowed from an algorithm that was never trying to help you see what’s true. It was trying to keep you engaged.

The human eye, untrained by algorithms, is your original and best tool for understanding the world. Reclaim it.


Key Takeaways

  • AI algorithms train your visual taste in specific directions: toward engagement, clarity, and pattern-matching
  • Your independent visual judgment erodes with disuse and is replaced by algorithm-optimized taste
  • Photography trains your eye to see what’s actually there, not what an algorithm highlights
  • Visual literacy in the modern world means learning to see against the grain of optimization

Frequently Asked Questions

Q: Isn’t my taste my own, regardless of what algorithms I use? A: Only partially. Repeated exposure changes what you prefer. If you’re only seeing what an algorithm shows you, your taste is being shaped by that algorithm’s training, not by your independent judgment. Try consuming differently and you’ll notice your preferences shift.

Q: Does this mean I should avoid social media entirely? A: Not necessarily. But be intentional about when you let algorithms curate what you see and when you do your own seeing. Give yourself regular time for unmediated observation—time when you’re looking for what you find interesting, not what the algorithm predicts you’ll like.

Q: How do I know if my visual taste has been trained by algorithms? A: Ask yourself: When I see something, do I immediately evaluate it by how engaging it would be if shared? Do I notice myself drawn to high contrast, immediate clarity, recognizable patterns? That’s algorithm training. A fresh eye would be drawn by other things too.


Related: Through the Lens: Losing Presence | What Cameras See That AI Misses | Focus Through the Viewfinder