Workslop & Productivity

Two years ago, during the initial ChatGPT frenzy, I was leading a team of product managers at a promising, well-loved organization. Almost overnight, my PMs started generating Product Requirements Documents using AI. The reason? In 2023, using AI made you seem smart, cutting-edge, and forward-thinking.

The problem was immediately obvious to me and my design team lead: these PRDs looked professional but were hollow. They lacked the context, the customer insights, and the technical judgment that make a PRD actually useful for engineering teams. When I tried to persuade the PMs to engage their critical thinking, I was met with resistance.

“But look how much faster this is,” they’d say. “I worked with AI,” they’d add, pointing to beautifully formatted documents that said nothing meaningful about implementation complexity, user flows, or specific product needs.

Soon after, tools like ChatPRD started trending on LinkedIn. The discourse shifted to whether “Product Management was dead” – as if the profession could be reduced to document generation rather than the hard work of translating user problems into buildable solutions, or the insights that emerge from real conversations between teams.

As someone who’s been both an engineer and a product leader, I recognized the pattern immediately. What I saw were the early symptoms of what Harvard Business Review researchers now call “workslop”: AI-generated content that masquerades as meaningful work but lacks the substance to advance any real purpose.

And it’s spreading across our entire software development ecosystem.

We’ve all seen it by now. The code reviews that check syntax but miss architectural flaws. The user stories that sound comprehensive but leave developers guessing about business logic. The technical specs that hit all the buzzwords but provide zero guidance on implementation trade-offs.

This is workslop in its natural habitat: output that performs productivity without producing value.

Recent research from BetterUp Labs and Stanford found that 41% of workers encounter such AI-generated output, costing nearly two hours of rework per instance.

Meanwhile, 95% of organizations see no measurable return on their AI investments despite doubling usage since 2023.

Why Software Teams Are Particularly Vulnerable

Here’s the uncomfortable truth: software development teams are especially susceptible to workslop because we’ve been trained to optimize for signals that AI happens to be very good at mimicking.

We value consistency and standards. AI produces remarkably consistent output: same formatting, same patterns, same structure. But consistency without context is just uniformity, and uniformity without understanding is just conformity.

We trust tools and automation. Our entire profession is built on leveraging tools to amplify human capability. When a new tool promises to make us 10x more productive, our instinct is to adopt first, question later.

We measure the wrong things. Lines of code, story points, documentation coverage: all metrics that AI can game effortlessly while contributing nothing significant to actual software quality or user value.

We’re framework-obsessed. Whether it’s design patterns or agile ceremonies, we love systematic approaches. AI can replicate these frameworks flawlessly while completely missing the engineering judgment that makes them valuable.

The Three Flavors of Software Workslop

1. The Code That Compiles But Doesn’t Solve 

I’ve seen pull requests with AI-generated code that looked so professional that senior engineers approved them without realizing they solved the wrong problem entirely. The code compiled, the tests passed, but the feature didn’t work for users.

2. The Spec That Documents Nothing

Technical documentation that describes what the system does but explains nothing about why those decisions were made or what happens when things go wrong.

3. The Architecture That Scales Nothing

System designs that use all the right patterns and buzzwords but ignore the actual performance requirements or user constraints.

The diagrams look impressive in design reviews, but the implementation reveals the gaps where real system thinking should have been applied.

When Productivity Theater Meets Software Development

When engineering leaders mandate AI adoption without guidance on quality or meaningful application, they create systems that reward the appearance of velocity over actual delivery.

We started measuring the wrong things: code coverage instead of bug rates, story completion instead of user satisfaction, commit frequency instead of deployment success. Teams began optimizing for AI tool usage rather than software quality.

It’s like optimizing for the number of database queries instead of query performance, or lines of code instead of working features. You get impressive numbers that make everything slower and buggier.

The Way Forward: Treating AI Like Any Other Powerful Tool

The solution isn’t to abandon AI tools. They’re genuinely useful when applied thoughtfully. Instead, we need to treat them like we treat any powerful engineering tool: with respect, understanding, and appropriate safeguards.

Review AI suggestions like junior engineer contributions. Don’t be swayed by the formatting and style. Look for logic, edge cases, error handling, and architectural consistency. Ask: “Does this solve the right problem?”

Product specs should answer ‘why’ not just ‘what’. AI can generate the requirements; humans need to provide the user context, the business logic, and the trade-off decisions that inform implementation.

Preserve the learning loops. When AI generates code, make sure someone on the team understands how it works. When AI writes specs, ensure developers can build from them without guessing.

Measure outcomes, not outputs. Instead of tracking AI adoption rates or story points, track deployment frequency, lead time, error rates, and user satisfaction: the things that actually matter.

Build cross-functional AI literacy. Train product managers to prompt for implementable requirements. Help engineers evaluate AI code for correctness, not just syntax. Teach everyone to iterate meaningfully with AI tools.

Focus on handoff quality, not document quality. Ask whether the person receiving your work, whether it’s code, specs, or documentation, can actually use it to move forward effectively.
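To make “measure outcomes, not outputs” concrete, here is a minimal sketch of computing DORA-style delivery metrics (deployment frequency, lead time, change failure rate) from deploy records. The record format and the `outcome_metrics` helper are hypothetical; in practice the data would come from your CI/CD system’s API.

```python
from datetime import datetime

# Hypothetical deploy records: (commit time, deploy time, succeeded?).
# In a real setup, pull these from your CI/CD or deployment tooling.
deploys = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 15, 0), True),
    (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 3, 11, 0), True),
    (datetime(2024, 5, 4, 8, 0), datetime(2024, 5, 4, 20, 0), False),
]

def outcome_metrics(deploys, window_days=7):
    """Outcome metrics: what shipped, how fast, and how reliably."""
    # Hours from commit to production for each deploy.
    lead_times = [(deployed - committed).total_seconds() / 3600
                  for committed, deployed, _ in deploys]
    failures = sum(1 for _, _, ok in deploys if not ok)
    return {
        "deploys_per_week": len(deploys) / (window_days / 7),
        "median_lead_time_hours": sorted(lead_times)[len(lead_times) // 2],
        "change_failure_rate": failures / len(deploys),
    }

print(outcome_metrics(deploys))
```

None of these numbers care whether AI was involved; they only move when working software actually reaches users, which is exactly the point.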

The Choice We’re Making Right Now

Every time we accept workslop as “good enough,” we’re making a choice about the kind of software development culture we want to build. We’re choosing speed over understanding, compliance over craftsmanship, the appearance of productivity over actual user value.

The workslop epidemic forces us to confront fundamental questions about our craft: What is the irreplaceable human contribution to building software? How do we maintain our capacity for system thinking while leveraging AI’s pattern matching? How do we structure our teams to reward deep problem-solving over shallow output generation?

These are the questions that define what kind of software professionals we want to be and what kind of systems we want to build for users.


Software development is fundamentally about translating human problems into working code that solves those problems. When we let AI do the translating without human understanding, we lose the very thing that makes software valuable: the thoughtful connection between user needs and technical solutions.

We’re at a crossroads where we can either let AI make us lazy developers or help us become more thoughtful ones. The choice is ours, but we have to make it consciously.

We still have software to build. Real software. The kind that actually works and solves real problems.
