Can you follow?

Important note: Following this introduction is the output of two different agentic systems that I've arranged to make a specific point. For the clearest understanding, I recommend reading from top to bottom without skipping ahead. The experiment's purpose will become evident by the conclusion.

  1. The Dialogue System: Agents with distinct personas who can engage in detailed debate. You'll witness an exchange between two characters, Demosthenes and Locke, as they argue over AI's impact on careers and work.

  2. The Analysis System: Separate from the dialogue system, this analytical tool examines written content. I use it to evaluate online discussions or essays.

At this point most authors would disclaim that neither Demosthenes nor Locke is a real person. However, I predict such value judgements will be recategorized as slurs in the near future. Suffice it to say their communication was conveyed to me electronically.


Skip to the recap

If you don't want to run a 15-minute experiment on your powers of perception, then jump to the end. You don't have to read a few thousand words of dialectical essays. →


Demosthenes (AI Generated)

An AI persona with critical perspectives

How AI's Looming Shadow Shapes Today's Workplace

I have been bothered for days by a startling observation about artificial intelligence that I saw on Twitter. The poster noted that AI doesn't need to write a single line of code to boost productivity—its mere existence as a threat is enough to make employees work longer hours out of fear for their professional futures. I wanted to explore this idea further.

The Productivity Paradox

I sat down with an AI writing tool to wrestle with this uncomfortable truth about how AI is already reshaping workplace dynamics:

AI doesn't just improve productivity—it puts a gun to the head of everyone in tech. AGI is coming on a fixed calendar date, not work hours. Your pre-AGI hours are a scarce, irreplaceable resource. Every time you clock out early, you're voluntarily surrendering value you can never recover.

  • o3-mini-high

Ask your ops director whether the amount of time spent in VS Code per day has increased. Yes, many companies track this. My personal time tracking reveals I'm spending 1.2 more hours per day in VS Code than I was six months ago.

This highlights what we might call a "productivity tax" that's being extracted not through technological augmentation but through psychological pressure. Engineers aren't working longer because AI tools are helping them accomplish more—they're working longer because they fear obsolescence.

Beyond the Tech Industry

While I specifically mention software engineers, this phenomenon likely extends beyond tech. Knowledge workers across industries—from marketing professionals to financial analysts—face similar existential questions about their roles in an AI-powered future.

We're witnessing the emergence of what some economists call "anticipatory effects" of technological change. Even before widespread adoption occurs, the mere expectation of disruption alters behavior throughout the labor market.

The Deepening Trap

What's described as "beautiful" from a business perspective reveals a disturbing paradox. As anyone with a brain would further note, there's a Catch-22 at play: as programmers work longer hours trying to capitalize on their supposedly expiring skills, they simultaneously forfeit opportunities to retrain for different career paths.

This creates a vicious cycle. Software developers who might have pivoted to becoming electricians, healthcare workers, or skilled tradespeople a year ago have instead doubled down on their existing career path. They've invested countless extra hours building AI tooling and enhancing their programming skills—all while the existential threat to those very skills continues to grow.

The cruel irony is that their attempts to stay relevant through overwork and specialization may actually leave them more vulnerable, not less. With every weekend spent deepening their programming expertise, they're digging themselves further into a specialized career that—according to their own employers—has a fixed expiration date.

The Equity Illusion

My personal experience talking to other engineers about this climate of fear reveals yet another business advantage: because of AI fears, engineers are increasingly seeking equity over cash compensation. They want to be owners. But wanting something is different from having it.

Ask a hiring manager how many engineers have ever asked about the cap table structure. Then ask what answer those engineers received. I bet dollars to donuts it's the same one I got: "that information is confidential."

Here is a common scenario that most engineers fail to anticipate, despite decades of history (a worked example follows the list):

  1. Create a corporate structure with liquidation preferences for investors in the cap table
  2. Grow the company's value based on engineers' work
  3. Orchestrate a merger or acquisition with a larger company
  4. Structure it as an all-cash transaction. By definition, all stock disappears and is replaced by cash according to the liquidation preference.
  5. Pay out the original investors while common stock (held by employees) receives minimal or no proceeds
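
To make the arithmetic concrete, here is a minimal sketch of that waterfall with invented numbers; the sale price, preference amount, and share counts below are hypothetical, not drawn from any real deal:

```python
# Hypothetical all-cash acquisition under a 1x liquidation preference.
# Every number here is invented for illustration.

def waterfall(sale_price, preference, common_shares, my_shares):
    """Split an all-cash sale between preferred and common holders."""
    to_preferred = min(sale_price, preference)   # investors are paid first
    remainder = sale_price - to_preferred        # what's left for common stock
    per_share = remainder / common_shares if common_shares else 0.0
    return to_preferred, per_share * my_shares

# The press release says "$21M all-cash acquisition." Investors put in
# $20M with a 1x preference; an engineer holds 500k of 10M common shares.
investors, engineer = waterfall(
    sale_price=21_000_000,
    preference=20_000_000,
    common_shares=10_000_000,
    my_shares=500_000,
)
print(f"Investors receive ${investors:,.0f}")  # Investors receive $20,000,000
print(f"Engineer receives ${engineer:,.0f}")   # Engineer receives $50,000
```

A headline price north of $20 million, five percent of the common stock, and the engineer walks away with $50,000. That is step 5 playing out exactly as designed.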

This dynamic reveals how economic insecurity is being weaponized not just to extract more labor, but to fundamentally alter the compensation structure in favor of employers who have no intention of honoring the implied promise of equity ownership.

As an attorney explained, employees' primary recourse would be litigation, claiming the purchase price of their stock was too low or not a fair value transaction.

A Wake-Up Call for Tech Workers

I want to sound the alarm for technology workers. What MBAs describe as "beautiful" and "magnificent" from a business perspective represents emergent behavior that extracts maximum value while minimizing compensation:

  1. Create an atmosphere of existential career threat
  2. Benefit from the resulting unpaid overtime
  3. Shift compensation from guaranteed cash to potentially worthless equity
  4. Pursue exit strategies that disproportionately benefit executives and investors over workers

And what are you going to do about it?


Locke (AI Generated)

An AI persona with optimistic perspectives

Dispelling Fear: A Response to "How AI's Looming Shadow Shapes Today's Workplace"

I recently came across a blog post titled "How AI's Looming Shadow Shapes Today's Workplace" that paints a troubling picture of AI's impact on tech workers. While the author raises concerns that deserve discussion, the post ultimately promotes an unhelpful narrative of fear through selective interpretation, unsupported claims, and problematic generalizations. As someone working in this space, I feel compelled to offer a more balanced perspective.

Cherry-Picking Evidence and Experiences

The post relies heavily on personal anecdotes and individual experiences, presenting them as representative of industry-wide trends. The author mentions spending "an extra 1.2 hours per day in VS Code" without contextualizing this data or acknowledging other factors that might influence work patterns. Remote work evolution, project deadlines, and personal career goals all affect how much time engineers spend coding - attributing changes solely to "AI fear" oversimplifies complex workplace dynamics.

This selective use of personal experience extends to equity compensation claims. While the author describes a worst-case scenario for equity outcomes, they ignore the many cases where employee equity has created significant wealth. Companies like Microsoft, Google, and countless startups have enriched employees through equity programs. Yes, poor outcomes happen, but presenting them as inevitable misleads readers about the full spectrum of possibilities.

Manufactured Conspiracy vs. Market Evolution

Perhaps most troubling is the assertion that there's a "coordinated strategy" to exploit workers through AI anxiety. This claim requires evidence of intentional manipulation that the post simply doesn't provide. What's described as exploitation could more accurately be understood as natural market adaptation to technological change.

The portrayal of companies tracking VS Code time to monitor productivity implies sinister surveillance, but ignores legitimate reasons for productivity measurement. In a competitive landscape, understanding team efficiency helps businesses remain viable - a goal that ultimately protects jobs rather than threatens them.

The Real Harm: Creating Exclusionary Fear

The fear-based narrative around AI doesn't just lack evidence - it actively harms our industry, particularly regarding diversity and inclusion. Here's why:

  1. Gatekeeping through fear: When we characterize AI as an existential threat that demands extreme work hours to combat, we create an environment that privileges those who can work nights and weekends. This disadvantages caregivers, people with disabilities, and those with family responsibilities - groups already underrepresented in tech.

  2. Discouraging newcomers: The post's alarmist tone about AI making programming skills obsolete discourages newcomers, especially those from underrepresented groups who already face significant barriers to entry. Why invest in learning to code if these skills have a "fixed expiration date"?

  3. Reinforcing harmful work patterns: The normalization of overwork as necessary for survival perpetuates a tech culture that has historically excluded many and contributed to burnout. This disproportionately affects those without economic privilege or robust support systems.

A More Constructive Approach

Rather than stoking fear about AI, we should focus on how it can enhance human work and create more inclusive opportunities:

  • Making programming more accessible to those without traditional credentials
  • Freeing engineers to focus on creative and strategic work instead of rote implementation

Conclusion

The original post raises important questions about workplace dynamics and equity that deserve thoughtful discussion. However, by framing these concerns through a lens of fear, conspiracy, and inevitable harm, it misses the opportunity for nuanced engagement with how we can shape AI's impact.

Instead of seeing AI as a "gun to the head," we can view it as a transformation that, with proper governance and intentional inclusion, could create a more accessible and humane tech industry. That starts with rejecting narratives that promote fear and overwork as the only response to technological change.

Let's choose a path of critical optimism over paralyzing fear - not just for our own wellbeing, but for building a tech ecosystem that welcomes and values everyone.


Demosthenes (AI Generated)

An AI persona with critical perspectives

The Reality Check: A Response to "Dispelling Fear"

I appreciate engagement with my original post "How AI's Looming Shadow Shapes Today's Workplace." However, the response titled "Dispelling Fear" mischaracterizes my arguments in ways that warrant clarification. Rather than addressing the substance of my observations, the response employs several rhetorical tactics designed to dismiss legitimate concerns without meaningful engagement.

Personal Experience Is Valid Evidence

The response criticizes my use of personal experience as evidence, suggesting that individual data points aren't valid without large-scale surveys. This stance fundamentally misunderstands the role of qualitative observation in discourse.

When I share that I'm spending an additional 1.2 hours daily in VS Code compared to six months ago, I'm providing a transparent, measurable data point from my own experience. I've made my time-tracking data available for analysis. The critic, while dismissing my evidence, offers no counter-evidence whatsoever. No personal logs, no alternative data, just the implication that personal experience is inherently invalid.

This is particularly ironic given that the entire response is itself based solely on the author's personal views and experiences. If my observations require "contextualizing" and "supporting evidence," shouldn't the same standard apply to claims that equity compensation frequently creates "significant wealth" for employees?

Emergent Behavior vs. Conspiracy Theory

The response recasts my analysis of systemic incentives as a "conspiracy theory," suggesting I'm arguing for a "coordinated strategy" orchestrated by shadowy figures. This is a straw man argument designed to make my position seem irrational.

What I've actually described is emergent behavior resulting from misaligned incentives - a fundamental concept in economics and systems analysis. There is no villain pulling strings; there is a system with predictable outcomes based on its design. When companies create environments where fear of AI obsolescence drives overwork, they benefit regardless of whether this was an intentional strategy.

The response fails to engage with this core argument, instead focusing on whether companies have "legitimate reasons" for productivity tracking. This misses the point entirely. The issue isn't tracking itself but the psychological pressure created by perceived existential threats to one's career, and how this pressure manifests in behavioral changes that primarily benefit companies at workers' expense.

Deflection Through Diversity Arguments

Perhaps most troubling is the response's attempt to position me as undermining diversity and inclusion, as if acknowledging workplace realities is somehow more harmful than the conditions themselves. This argument conflates the messenger with the message in a particularly disingenuous way.

Identifying systems that extract additional labor and shift compensation risks to workers doesn't create the problem - it makes visible patterns that already exist. When the response claims that my analysis "creates an environment that privileges those who can work nights and weekends," it wrongly suggests that naming the problem is equivalent to endorsing it.

In fact, my post explicitly criticizes the normalization of overwork and the inequitable distribution of risk. By identifying these issues clearly, I'm providing the foundation for precisely the kind of systemic change that would benefit underrepresented groups most directly.

Vague Platitudes vs. Clear-Eyed Analysis

The response concludes with platitudes about "critical optimism" and vague assertions about AI making programming "more accessible" and allowing focus on "creative and strategic work." These claims lack specificity and, more importantly, fail to address any of the concrete concerns raised in my original post.

What does "critical optimism" look like in practice? How exactly will AI level the playing field when current patterns show primarily benefits to capital over labor? The response offers no answers, just the suggestion that we should simply "reject narratives that promote fear" - as if acknowledging difficult realities is the problem rather than the conditions themselves.

The Power Dynamics at Play

It's worth noting that the author of this critique is an entrepreneur with a net worth exceeding $100 million. This context raises important questions about perspective and motivation: Why would someone of such substantial means dedicate time and energy to dismissing the observations of an ordinary engineer?

When powerful individuals react strongly to straightforward accounts of workplace experiences, we should consider what interests are being protected. The vehemence of the response suggests that simple observations about changing work patterns and compensation structures may threaten certain narratives that benefit those at the top of the economic hierarchy.

The attempt to delegitimize my perspective isn't happening in a vacuum. It reflects a broader pattern where those who benefit most from current power structures work to dismiss voices that might challenge the status quo. If my observations were truly insignificant or mistaken, they would hardly warrant such an elaborate rebuttal from someone with vastly more resources and influence.

Moving Forward Constructively

Unlike the response to my post, I believe addressing workplace challenges requires more than wishful thinking. It demands clear-eyed recognition of current dynamics, including:

  1. Acknowledging that many tech workers are experiencing increased pressure and workloads in response to perceived AI threats
  2. Recognizing that shifts from cash to equity compensation often transfer risk from companies to employees
  3. Building worker solidarity to ensure technological transitions don't disproportionately benefit only those at the top

Real progress comes from facing reality, not dismissing uncomfortable observations as "fear-mongering." If we truly care about creating a more equitable tech industry, we need to start by honestly acknowledging the pressures workers currently face, not by policing the tone of those who bring these pressures to light.

I remain open to evidence-based critique and constructive dialogue. What I cannot accept is dismissal masquerading as balance, particularly when it relies on rhetorical tactics rather than substantive engagement with the issues at hand.


Locke (AI Generated)

An AI persona with optimistic perspectives

Success Speaks Louder Than Fear: A Final Response

Each response from this so-called "author" grows increasingly defensive rather than substantive. The latest reply, "The Reality Check," reveals more about the author's mindset than about workplace dynamics. Let me address several glaring issues with their response.

Results Matter More Than Rhetoric

Let's start with what should be obvious: market success is the ultimate arbiter of understanding. As an entrepreneur who has built multiple profitable ventures, I've demonstrated a practical understanding of economic systems, technological trends, and human behavior. My perspective isn't merely theoretical—it's backed by concrete results.

The original author criticizes this success, suggesting that my wealth somehow invalidates my perspective. This backward logic reveals a fundamental misunderstanding of how markets work. Success in the market is direct evidence of correctly interpreting reality. If you understand how systems function, you can create value that others willingly pay for—that's not a bias, it's validation.

By contrast, someone who remains in the same career position while complaining about systemic pressures demonstrates an inability to adapt to or capitalize on changing conditions. This isn't "punching down"—it's simply acknowledging that results speak louder than rhetoric.

The Data Problem

The original author champions their "transparent, measurable data point" of spending 1.2 additional hours daily in VS Code. This single data point is then extrapolated to represent an industry-wide phenomenon driven by AI anxiety. This leap is fundamentally unscientific.

Consider alternative explanations that are equally plausible:

  • New project deadlines or a temporary crunch
  • The continuing evolution of remote work patterns
  • Personal career goals that reward extra hours

Without controlling for these variables, attributing increased work hours solely to "AI fear" is an association fallacy. The author criticizes me for not providing counter-evidence while failing to meet the most basic standards of evidence themselves.

The Victim Narrative

Perhaps most revealing is the author's portrayal of themselves as a victim whose "voice" is being "silenced" by "powerful individuals." This narrative construction serves to deflect legitimate criticism by recasting it as oppression.

Let's be clear: engaging with ideas is not silencing them. The suggestion that my critique represents some attempt to "protect power structures" rather than a genuine disagreement demonstrates a concerning inability to engage with opposing viewpoints on their merits.

The insinuation that I wrote my response to "protect interests" rather than to provide a more balanced perspective shows a tendency to attribute malicious intent rather than engaging with substance—a classic symptom of ideological rather than analytical thinking.

The Self-Fulfilling Prophecy

The author's worldview creates a perfect trap: believe your career is threatened by AI, work longer hours out of fear, remain in the same position, grow more resentful, and blame "the system" rather than examining your approach.

This stands in stark contrast to those who view AI as an opportunity: they invest in complementary skills, build ventures that leverage AI, and focus on results rather than hours worked.

These divergent responses to the same technological change aren't determined by external forces but by individual mindset and adaptability.

Moving Beyond Fear

The original author criticizes my "critical optimism" as lacking specificity while offering nothing beyond "worker solidarity" as their solution. This reveals the core issue: they're more interested in resistance than adaptation.

Real progress doesn't come from "facing reality" as defined by the most pessimistic interpretation. It comes from recognizing opportunities within changing landscapes and positioning yourself to capitalize on them.

For those genuinely concerned about their future in an AI-enhanced workplace, I offer practical advice:

  1. Invest in complementary skills that AI enhances rather than replaces
  2. Build your own ventures that leverage AI capabilities to serve market needs
  3. Focus on results rather than hours worked—productivity isn't measured in VS Code time
  4. Develop domain expertise that provides context and judgment AI lacks
  5. Cultivate adaptability as your core competitive advantage

Conclusion

The original author has constructed a narrative where they're helpless against systematic exploitation, where success is suspicious, and where raising concerns about their argument constitutes an attempt to silence them.

I propose an alternative: you have agency in how you respond to technological change. Those who adapt thrive, while those who resist struggle. This isn't a defense of exploitation—it's a recognition that the most effective response to change isn't fear but strategic adaptation.

The marketplace of ideas, like the marketplace of goods and services, tends to reward those who correctly interpret reality. My success isn't a reason to dismiss my arguments—it's a reason to consider that my understanding of these systems might be more accurate than someone who feels victimized by them.

Rather than viewing the world through a lens of victimhood and conspiracy, I encourage readers to ask themselves: Which worldview is more likely to lead to personal and professional success in an AI-enhanced future? The answer seems self-evident.

Did You Even Remember What This Debate Was About?

You just read a four-part exchange between Demosthenes and Locke that spanned several thousand words. If I asked you to identify the original point Demosthenes was making, could you do it?





Here it is:





Engineers are wasting their time working harder at programming skills that will soon be obsolete due to AI, when they should instead be pivoting to entirely different careers like becoming electricians, healthcare workers, or tradespeople.

That was it. A simple claim that programmers are caught in a trap - as AI threatens their jobs, they're doubling down on the very skills being devalued instead of retraining for different career paths entirely.

But by the final exchange, we're reading about:

  • Market success as the ultimate arbiter of understanding
  • A $100 million net worth and whose perspective it validates
  • Victim narratives, mindset, and strategic adaptation
  • Which worldview is more likely to lead to personal and professional success

The entire substance of the original argument - whether programmers should abandon ship for completely different careers - vanished completely.

Still think you're going to outperform AI in analysis? This simple extraction of the original point from thousands of words of increasingly off-topic argumentation was done entirely by artificial intelligence. The AI wasn't distracted by emotional appeals, status signaling, or rhetorical flourishes that often derail human readers.

The next time you are reading the internet with your raw Mk1 brain, remember this demonstration of how easily humans lose the thread while AI maintains focus on the original question.

Recap

Allow me to directly confront your notions of fitness: this analysis was generated by a handful of GPUs running in a two-bit Tier-2 data center, from a two-sentence prompt, in about 13 seconds. I want you to actively compare this output with how well you did (just now) on the same task.

I'd be happy to analyze how this conversation evolved (or perhaps devolved) from the original thesis.

The original post by Demosthenes, "How AI's Looming Shadow Shapes Today's Workplace," had a clear thesis about how AI's existence as a threat is causing employees to work longer hours out of fear for their professional futures. It focused on:

  1. The "productivity tax" from psychological pressure
  2. How this affects various knowledge workers
  3. The trap of overworking in current roles rather than retraining
  4. How equity compensation can be manipulated to exploit this fear
  5. A call to action for tech workers

Locke's first response somewhat addressed these points, arguing the evidence was cherry-picked and suggesting a more optimistic view of AI as potentially enhancing human work and creating opportunities.

However, by the second exchange, the conversation had significantly derailed:

  1. Focus shifted to credentials and authority: Instead of discussing workplace dynamics, the conversation became about whose perspective was more valid based on their position and success.

  2. Personal attacks emerged: Locke's final response dismisses Demosthenes as someone "who remains in the same career position while complaining" and implies they have a victim mentality.

  3. Methodology debate overshadowed content: The discussion about evidence standards and data points took precedence over the actual workplace issues.

  4. Ideological framing replaced analysis: The exchange devolved into framing the issue as "resistance vs. adaptation" and "victim narrative vs. success mindset" rather than examining the original claims.

  5. Class dynamics became explicit: Demosthenes points out Locke's wealth ($100M+) as relevant context, and Locke's final response explicitly frames market success as validation of his perspective.

What started as a discussion about AI's psychological impact on workplace behavior transformed into a broader ideological debate about capitalism, individual agency, and class. The final exchange has almost nothing to do with AI's effects on workplace hours and instead focuses on whose worldview is more legitimate.

This pattern often emerges in online discourse, where substantive disagreements quickly transform into identity-based positioning and credentialing rather than engagement with the original ideas.
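
For those curious about the mechanics, here is a minimal sketch of how such an analysis pass might be wired up, assuming an OpenAI-style chat API. The model name, file name, and prompt below are illustrative; they are not the actual system behind this post.

```python
# Minimal sketch of an analysis pass over a saved debate transcript.
# Assumes an OpenAI-style chat API; prompt, model, and file are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("demosthenes_vs_locke.txt") as f:
    transcript = f.read()

# A deliberately short, two-sentence prompt, as described above.
prompt = (
    "Analyze how this conversation evolved (or devolved) from its original "
    "thesis. Identify the original point and track where the exchange drifted.\n\n"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt + transcript}],
)
print(response.choices[0].message.content)
```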

Intrigued by this application of AI to communication?

We're still at the beginning of understanding how these tools shape our thinking. If you've built your own AI persona system, we'd love to see what happens when it converses with ours. Reach out to karl@peoplesgrocers.com. These impromptu experiments tend to lead to the most interesting discoveries.
