The future of work won’t be defined by AI, but by human integrity.


To me, slop is already one of the words of the year. It’s digital pulp: AI-generated content without a soul. Meta seems eager to build its business on it, providing an endless feed of “vibes”: skateboarding cats, synthetic influencers, and motivational quotes that mean nothing. OpenAI’s new social app, Sora, shows what happens when machines start to dream. It offers perfectly lit images without meaning, stories without humans. It’s not pulp fiction, it’s AI pulp non-fiction. Technological pig feed. Slop, in other words.

Workslop

That same noise is seeping into our workplaces. The Harvard Business Review recently gave it a name: workslop. It refers to memos, emails, and reports that look flawless but say nothing. Texts that seem thoughtful yet aren’t. Sentences that are correct yet lead nowhere. The result: confusion, frustration, and the uneasy feeling that someone let ChatGPT do the thinking for them. If this sounds familiar, you’ve clearly been workslopped.

Research among U.S. office workers shows that 40% encountered workslop in the past month. Each time, it took them nearly two hours on average to wade through and fix it. What was meant to lighten our workload has only added to the noise.

All this is happening at a moment of deep insecurity. Fear of AI simmers everywhere, from job seekers to seasoned professionals. Business leaders boast that thanks to AI, they can “do more with fewer people.” Salesforce CEO Marc Benioff recently bragged that he had cut his customer service team from 9,000 to 5,000 through automation and in the same breath said he was hiring “thousands of salespeople.” The message is clear: people are still needed, just not all of them, and not everywhere.

The worth of humans

Beneath that runs something deeper. The suggestion that a machine might surpass the worth of a human challenges the foundations of our understanding of identity and value. It erodes self-worth and breeds self-doubt. Some employees turn to AI in secret, so-called “shadow AI.” Others pretend to master it, afraid of being left behind.

Workslop is not a technical issue, it’s a cultural symptom. It’s the illusion of productivity: lots of text, little meaning. It shows that AI’s promise doesn’t automatically produce better work. A recent MIT study found that 95% of corporate AI pilots fail, not because the tech is weak, but because it’s used without direction or strategy.

The AI experimentation trap

That aligns with a warning from Nathan Furr in Harvard Business Review: companies risk falling into the “AI experimentation trap.” In their rush to prove they’re “doing something with AI,” they launch endless initiatives with no clear goals, ownership, or outcomes. The result? Fragmentation, confusion, and, yes, more workslop.

The way out isn’t more tools or experiments. It’s more discipline. Don’t use AI as a magic wand; use it as a magnifying glass, something that sharpens, not distorts. Technology should serve a clear purpose, solve a real problem, and have someone accountable for it. The organizations that get this right start small, learn fast, and kill bad ideas as soon as they appear.

AI isn’t a miracle cure. It’s a mirror. It amplifies who we already are. Those who think will gain thinking power. Those who bluff will gain workslop.

The question isn’t whether AI will change our work; it already is. The real question is: do we have the courage to decide how? Do we want to write faster, or think better? Do we want more text, or more meaning? The future of work won’t be defined by artificial intelligence, but by human integrity.

This piece was first featured in Dutch in Trends.