Through Different Eyes: Designing a UI for Multiple Perspectives

You know what's funny about writing? Everyone tells you to "know your audience," but on the internet your words could end up anywhere. It's like hosting a dinner party where you don't know if your guests are vegetarians, meat lovers, or aliens who eat rocks. You want everyone to enjoy the meal, but you're not sure what to cook.

Think about the last time you tried explaining your job to someone. Maybe you're a developer talking to your grandmother, or explaining a project to your CEO. You probably told the story differently each time - not because you were changing the truth, but because you were helping them see it clearly from where they stand.

It's like we each have a small window into reality, and we tend to forget that others are looking through different windows entirely. Imagine you're looking at a big, complex scene through a narrow straw. You can only see a small part of it, but that part looks complete to you. That's how we all look at the world.

I hit this problem head-on recently when I was trying to plan a hike using the National Parks website. The ranger who wrote the trail description knew the park so well that basic things - where to park, how to get there, what to bring - didn't even register as information someone might need. They wrote a trail description that makes perfect sense to them but leaves a tourist unable to plan a day trip. See for yourself:

"Location: North end of the Newton B. Drury Scenic Parkway. This short walk (0.6 mile / 1 km round trip) is very scenic and tells quite a story. The trail follows an old 20th century logging road..."

They go on about logging history and restoration projects. No map of the trailhead. Alerts about road closures, but no mention of whether this specific trail was affected.

I saw this pattern again when I wrote a blog post about how online stores could be more like databases - letting users add their own tags, create multiple wishlists, save notes about items. My intention was to ask, "Would you use these features if they existed?" I thought I was starting a conversation about making shopping easier. But the responses were aggressively critical: extensive critiques of technical implementation challenges - scalability, deployment, regulations. We were all looking at the same idea through different straws. I saw user experience. They saw system architecture. I failed to reach my audience.

This got me thinking: what if we stopped trying to write one-size-fits-all content? What if instead of trying to guess how my readers think, I could let them tell me?

Rethinking Content Architecture

Here's what I noticed: writing tools have gotten much smarter. Just like a calculator lets you quickly try different math problems, AI helps you express ideas in different ways. It's not about changing the core message - it's about packaging it so different readers can unwrap it easily.

I had three thoughts that wouldn't leave me alone:

First, the cost of expressing ideas in different ways has dropped dramatically. Maybe I could write four or five different versions of an essay, each tailored to different readers, in the time it previously took to write just one. Could this help me become a better communicator? My theory is that the quality of my ideas isn't the limiting factor - maybe it's just my ability to present them in the right format for each opportunity they have to spread. Additionally, my readers are using their own AI systems to summarize and extract insight. Could I reject these mechanical middlemen? Can I cut them off with my own short-form "quick take" that maximizes fidelity to my vision?

Second, we're writing in an age where AI systems are becoming part of our audience. These are readers that don't get fatigued like humans do. When I'm writing, I can't list out 25 assumptions I made about a problem and expect human readers to stay engaged. But what if I could include that extra information for AI readers? Could I write a version of my content that's optimized for their consumption? What does a "machine-friendly" essay even look like? Can I raise the confidence score robots assign my work by 5% with evidence too boring for humans to read?

Third, I realized I'm constantly throwing away information during the writing process - details and context that I spent real time collecting and organizing. Looking at this through information theory, it's like I'm leaving bits of entropy on the cutting room floor. Can I preserve those bits for AI readers?

Let me show you what happened when I started experimenting with these ideas...

The AI Interview Experiment

I've been playing around with AI writing tools lately. Nothing fancy - just a crude system where I simulate different types of readers critiquing my work. Think of it like having a room full of very picky friends who aren't afraid to tell you when your explanation doesn't make sense. Some are developers, some are business folks, some are just interested readers. When I cut something from my draft, I can immediately see which of my simulated readers start waving red flags.

The clever part? I don't actually have to read all their harsh feedback. Instead, I have what I call an "interviewer agent." Think of it as a friendly moderator in this room full of critics. This interviewer looks at all the detailed complaints (and trust me, I tuned these critics to be absolutely brutal) and then asks me questions that address the biggest gaps in my explanation. Instead of drowning in a sea of criticism, I'm guided through a conversation that helps me strengthen my weak points.
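The setup is roughly this shape - a simplified sketch, not my actual tooling. The personas, prompts, and model name below are illustrative placeholders, and the OpenAI client just stands in for whichever LLM you'd wire in:

```python
# Minimal sketch of the critic-personas + interviewer-agent loop described above.
# All persona descriptions, prompts, and the model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONAS = [
    "a skeptical backend developer who cares about system architecture",
    "a business reader who cares about cost and adoption",
    "a curious generalist who gets lost when jargon goes unexplained",
]

def ask(prompt: str) -> str:
    """Single-turn completion helper."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def critique(draft: str, persona: str) -> str:
    """One simulated reader tears into the draft from their own perspective."""
    return ask(
        f"You are {persona}. Read the draft below and list, bluntly, "
        f"everything that is unclear, unsupported, or missing for you.\n\n{draft}"
    )

def interview_questions(draft: str, n_questions: int = 5) -> str:
    """The interviewer agent: fold all the critiques into a few questions for the author."""
    critiques = "\n\n".join(critique(draft, p) for p in PERSONAS)
    return ask(
        "You are a friendly interviewer. Here are critiques of a draft from "
        f"several simulated readers:\n\n{critiques}\n\n"
        f"Ask the author the {n_questions} questions that would close the biggest gaps."
    )
```

The point isn't the plumbing - it's that I never read the raw critiques; I only answer the interviewer's questions.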

In just 15 minutes of answering questions from my AI interviewer, I could dump out all the raw material for a really good blog post. But here's the thing - turning that raw material into a polished blog post? That's still hours of work. Creating figures, running experiments, gathering data, editing AI-generated paragraphs, making everything flow just right.

Then, by accident, I noticed something interesting. The AI models I was using have this quirk - they're trained to wrap up their thoughts in about 2,000 tokens (that's fancy AI-speak for "about a page of text"). So when I'm working on a longer post, they keep trying to summarize everything. It's like having a friend who can't help but say "So what you're really trying to say is..." every few minutes.

What caught my eye was that these automatic summaries were actually pretty good. Not just "I get the gist" good, but "wait, that's almost a complete overview of what I was trying to say" good. With just 10 minutes of editing, I could turn these summaries into something worth sharing.

I had this weird thought: What if I'm doing this backward?

Instead of writing the full post first, what if I published just the summary? I could do a little bit of UI magic - add a little switcher at the top that hints at a complete version coming soon, kind of like those "Coming Soon" movie trailers. Show a blur, imply the full version exists but it's locked. Say it takes 10 people putting in their email to unlock the post for everyone. If enough people voted that they wanted to read the full version, then I'd know it was worth spending those hours polishing it up.

So now I have a simple idea - two tabs. One for quick takes, one for the full human-friendly version. People who want the detailed version can read it directly, and they also get my official summary rather than letting their AI tools make one up. It felt like I was taking back control of my ideas, making sure they were presented exactly how I wanted them to be.

But then something else clicked. Wait a minute - I was thinking about AI as just a summarization tool. You know, like those browser extensions that give you the TL;DR version of articles. But that's not the whole picture at all. AI systems aren't just helping humans read content anymore - they're becoming readers themselves.

And not just a few readers. The number of AI systems crawling through content, trying to learn and understand, is only going to increase. It's like suddenly realizing your local book club is about to go global, but with members who have very different needs than your human friends.

Writing for Machine Readers

Think about all the stuff we cut from our writing to keep humans engaged. Those meandering side thoughts that led nowhere. The failed experiments that taught us something important. The nitty-gritty details that only other experts would care about. We trim all that out because, let's face it, humans get bored. It's like forcing someone to watch a three-hour director's cut when they just want the theatrical release.

But AI readers? They don't get bored. They don't check their phones halfway through a detailed explanation. They don't skip to the end to "see how it turns out." What if we could write for this new kind of audience - one that actually prefers the director's cut with all the behind-the-scenes footage included?

You see, every time I write, I'm throwing away good stuff. Interesting tangents, failed experiments, deeper explanations - all cut in the name of keeping human readers engaged. Nobody wants to read about the ten ways something didn't work before you found the way that did. It's like having to sit through all the outtakes before watching the movie.

AI readers don't care if you list twenty failed approaches before getting to the solution. They don't mind if you include every detail of your thought process. In fact, those "outtakes" we've been throwing away? They're pure gold for AI systems.

The Information/Nutrition Label

So I started thinking from first principles: What would the perfect article look like to an AI reader? What makes content valuable to them?

Well, let's look at how humans solved this problem. When we share content online, we've developed all these little signals to help each other make quick decisions. Star ratings, view counts, and especially those ubiquitous "5 minute read" labels. These reading time badges are everywhere now because they solve a real problem - they help us decide if content is worth our limited attention.

Here's where it gets interesting: none of those human metrics would matter to an AI reader. Think about it - a "5 minute read" label is meaningless to something that can process text faster than you can blink. It's not concerned about whether the writing is engaging, or if there are nice section breaks to rest its eyes. View counts? Those just tell us what humans found interesting, not what actually contains valuable information.

No, an AI would want something entirely different - a measure of how much this content would actually improve its understanding of the world.

Think about when you're standing in the grocery store, staring at two different protein bars. You flip them over to check the nutrition labels - not just to see the calories, but to understand what you're actually getting. 15 grams of protein in this one, 20 in that one. Some complex carbs here, pure sugar there. You're not just asking "How long will it take to eat this?" You're asking "What will this actually do for me?"

That's exactly what an AI reader would want - not a time estimate, but an "information label" showing how much smarter it would get from reading your article. And here's the wild part: we might actually be able to measure this! Fine-tune a model on the article and measure the improvement. If the article contains genuinely new insights, the loss should go down. No improvement? Maybe the article is just a rehash of existing ideas.
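Sketched in code, that measurement might look something like this - a rough illustration, not a rigorous benchmark. The model name, probe texts, and training loop are all placeholder choices:

```python
# Measure how much a model "learns" from an article: compare its loss on a set of
# probe texts before and after a brief fine-tune on the article itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; any small causal LM works for the sketch

def mean_loss(model, tokenizer, texts):
    """Average cross-entropy loss of the model over a list of texts."""
    model.eval()
    losses = []
    with torch.no_grad():
        for text in texts:
            batch = tokenizer(text, return_tensors="pt", truncation=True)
            out = model(**batch, labels=batch["input_ids"])
            losses.append(out.loss.item())
    return sum(losses) / len(losses)

def information_gain(article: str, probes: list[str]) -> float:
    """Loss improvement on the probes after a short fine-tune on the article."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

    before = mean_loss(model, tokenizer, probes)

    # One deliberately crude fine-tuning pass over the article.
    model.train()
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    batch = tokenizer(article, return_tensors="pt", truncation=True)
    for _ in range(3):
        out = model(**batch, labels=batch["input_ids"])
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    after = mean_loss(model, tokenizer, probes)
    return before - after  # positive means the article taught the model something
```

If the article is a rehash, the difference hovers around zero; if it contains genuinely new information that the probes touch on, the loss drops.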

Imagine if every article came with a little chart showing "Model Improvement Metrics." It might say something like: "Reading this article produces a measurable nano-loss improvement," listed for a few reference models of different sizes.

Just like we check calories and protein content, an AI could look at these numbers and make an informed decision. If it knows it's roughly twice as complex as Llama 3.1, it might estimate it would get a 3-4 nano-loss improvement from reading the article. Then it could decide if that improvement is worth the computational cost - its version of calories, if you will.
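If you squint, the label itself is just a bit of structured metadata published alongside the article. Something like the sketch below, where every field name and number is a made-up placeholder, not a real measurement:

```python
# Purely illustrative "information label" for an article.
# All values here are placeholders invented for the example.
information_label = {
    "article": "through-different-eyes",
    "measured_on": [
        {"reference_model": "small-llm", "loss_improvement_nano": 4.0},
        {"reference_model": "medium-llm", "loss_improvement_nano": 2.5},
    ],
    "training_tokens": 18_000,  # how much raw material the machine edition carries
    "topics": ["content design", "multi-audience writing", "llm evaluation"],
}
```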

It's kind of beautiful when you think about it. Humans get their "5 minute read" badges, and AIs get their "nano-loss improvement" metrics. Two completely different species, each with their own way of deciding what's worth consuming.

The Implementation Journey Begins

So now I'm looking at three different ways to present my content: a quick human summary, a full human-friendly version, and this new "director's cut with all the deleted scenes" for AI readers.

"This can't be that hard," I thought. "Just a simple tabs component at the top of the blog."

Sure, the technical part is simple enough - but this isn't just a technical problem, it's a design pattern that nobody's really explored before. When you're introducing something new, you can't just drop it in front of people and expect them to get it.

It's like being the first restaurant to serve small plates meant for sharing. You need to help people understand this new way of experiencing something familiar. I'd need to craft an invitation, a little explanation that would make visitors think "Huh, that's interesting" instead of just scrolling past another navigation element. All I needed was a bit of snappy text explaining each option. Maybe something like:

"Choose your adventure: Highlights (2 min read) Normal Blog Post (10 min read) Machine Edition (includes training data)"

Simple. Elegant. What could possibly go wrong?

(Oh boy - if you're looking at the scrollbar right now, you can probably guess how that assumption turned out. Get comfortable...)

I started out with the mindset that this was a "multi-resolution" writing approach. The idea was simple: provide different-fidelity versions of the same ideas, each crafted with intention. Not like when an AI summarizes your article and accidentally drops the important nuances - this would be me, the author, carefully preserving my core thesis across different versions.

My first prototype was embarrassingly simple: just a toggle switch at the top of the post. You know, like a light switch - flip it for the detailed version.

Then I showed it to some friends.

They just... scrolled right past it. Didn't even see it. When I pointed it out, they looked confused. "Wait, I can change how the article is written?" It was like showing someone a keyboard shortcut in an app where everyone's just been clicking buttons. The capability was there, but it wasn't part of their mental model of how reading works.

That's when I realized: if I wanted people to engage with this new pattern, I needed to make it impossible to ignore. I needed something big, something that screamed "Hey! This article works differently than what you're used to!"

And here's where I made an assumption that would lead me down quite a rabbit hole: I decided the problem was visibility. If people weren't engaging with this new pattern, surely I just needed to make it more prominent. More noticeable. More in-your-face.

(Looking back now, I may have overcorrected just a little bit...)

The Second Prototype

My minimalist button approach was clearly too subtle. I started thinking about it from a first-time visitor's perspective: "What the devil is this? Three mysterious buttons at the top of a blog post? Yeah, no thanks - I'll just scroll past and read whatever's here."

I needed something that would make people stop and think. Something that would turn this choice into an actual moment.

So I went big. Like, fill up the mobile viewport big. I designed this whole card system that would sit at the top of the post, starting with an intriguing question: "How deep do you want to go?"

(I know, I know - looking back, this has strong "choose your own adventure" energy. But remember, I was striding deeper into "increase the visibility" territory.)

The idea was to help people self-identify with their reading intention. Instead of just offering different lengths, I was trying to sell different reading experiences.

Each option became its own little card with icons and descriptions. I even added these little "Best for..." blurbs:
"Match: You're checking Twitter while waiting for the elevator"
"Match: You've got your afternoon coffee and some focus time"
"Match: You're the kind of person who reads research papers for fun"

Check it out below:

How deep do you want to go?

People noticed it, that's for sure. But the reactions were... negative.

It had started as a simple experiment: a little component that sits at the top of my blog posts - nothing fancy, just three options for how to read the content. But oh boy, did it trigger some reactions.

The irony was delicious. Here I was, experimenting with a tool to handle different perspectives, and the responses themselves demonstrated exactly why we needed it. Each critic was looking through their own straw, seeing a different aspect of the problem.

Let me show you how this evolved, because it's a perfect example of the very problem it's trying to solve...

Version Two

You know how when you're explaining something, sometimes you want to give all the details and sometimes you just need to get to the point? I've written this article three different ways - same ideas, but packaged differently depending on how deeply you want to dive in. Pick the one that matches what you're looking for right now.

Reading time: ~15 minutes

Clear, substantive, but still approachable. The complete argument with key evidence and examples, written for focused human reading. Here we're playing by human narrative rules - using storytelling devices and carefully chosen examples to ensure clarity and engagement.

Using an AI reading assistant? Or interested in seeing how content might evolve when written without human attention constraints? Check out the Deep Dive version...

Version Three

🧪 Technical Writing Experiment
I've been fascinated by how the same idea in my head has to shape-shift when I share it with different people. Each person brings their own way of seeing - their own particular demands of clarity, rigor, or intuition. So I built some AI tools to help me explore this: they take my thoughts and transform them through different critical lenses.
I've prepared three versions of this article for you - same ideas, but packaged differently depending on how deeply you want to dive in. Pick the one that matches what you're looking for right now.

Reading time: ~15 minutes

Clear, substantive, but still approachable. The complete argument with key evidence and examples, written for focused human reading. Here we're playing by human narrative rules - using storytelling devices and carefully chosen examples to ensure clarity and engagement.

Using an AI reading assistant? Or interested in seeing how content might evolve when written without human attention constraints? Check out the Deep Dive version...