4 Comments
Pawel Jozefiak

The Round-Trip Economy framing is sharp. The dangerous version isn't the obvious case where AI generates nothing -- it's where AI produces something technically correct that then moves through four more human steps before anyone asks whether the original task was worth doing.

I've been running a structured experiment tracking actual outputs vs costs over 90 days precisely because 'sessions used' was meaningless. The honest answer is the numbers are messier than expected, but they're pointing the right direction. Early data here: https://thoughts.jock.pl/p/project-money-ai-agent-value-creation-experiment-2026

Jean-Paul Paoli

Will have a look, thanks! 👀

Pierre-Eric Jacoupy

There are so many insights in that article -- thanks for sharing. It really does feel that when everybody expedites content production through genAI, the temptation to skip "the last mile" of human judgment is real -- and suddenly more work done starts to feel like a snowballing avalanche of things to do, while losing sight of the real value. The initial example made me think of hiring with ATS: twice as many tailor-made CVs going through fewer job openings and automated ATS screening. Ultimately, time to hire (or time to find a job) did not improve. Inconsistent agent-to-agent interactions, in that case, made none of the human parties happier.

Om Prakash Pant

Yes, metrics can give a false sense of progress.

In my experience, teams chase numbers that look good in dashboards, but the real friction is in how the AI behaves in context. Accuracy, speed, or engagement stats don’t tell the full story. You have to measure the actual impact on the workflow.