Something big is happening
Not the kind of "big" you might be looking for, but read on: you might have missed some important signals coming from the industry at large.
Hello, dear readers,
Today I’m going to experiment with a much shorter article than usual.
I love to write long-form pieces, but I also know many people out there have lost the ability to read more than 280 characters in a single session. Besides, I'm Italian and Latin, which means my cultural upbringing tends to make me verbose.
For once, I've decided to spare you the task of submitting the article to a token machine¹: I'll force myself to be concise.
But before we get into the core of the article, I have an important reminder about a great opportunity :)
I’m running a promo until the end of March, offering a 30% discount for 12 months to members of the Women In Tech community.
This is a wonderful opportunity to join our thriving community. Secure your seat here.
If you’re not ready to sign up just yet, I’ve got you covered!
On April 16th at 5PM CET, I'll host a free open session of the Sudo Make Me a CTO Community. It will be just like one of the regular sessions reserved for members of the community, except that anyone can attend!
If you’re interested, sign up here and you’ll receive an invite shortly after.
Now, back to the article.
Let's start with a recent study that didn't get the attention I believe it deserves.
📉 An interesting longitudinal study

The most important thing about the study is that it comes from a company that has close ties with Microsoft/GitHub and has notably been in the booster/enthusiastic camp of AI-assisted software engineering: DX.
DX has long been a reference in the DevEx space, with some of its founders having contributed directly to the SPACE and DevEx frameworks for developer productivity. I always appreciated that. But, like many other companies in the big tech space, they have been riding the wave of LLMs sprinkled everywhere², deliberately evolving their tools both to take advantage of the ongoing revolution and to help other companies do the same.
So I was delighted when Justin Reock published the article titled AI productivity gains are 10%, not 10x.
Though it shares only preliminary results, those results are interesting, and so far it is the only longitudinal study attempting an objective measurement of productivity improvements. Obviously, no study is conclusive. Just remember that studies tend to be more conclusive than anecdotal, subjective opinions.
Even if their measure of productivity, PR throughput, is questionable, being just a proxy for real productivity, the fact that it shows an impact of about 10% is remarkably low compared to the expectations we've been primed to hold through large-scale Pavlovian conditioning³.
I found most comments on the article so hilarious that I did something I rarely do: I posted a comment and a follow-up note expressing my feelings about those reactions.
I’m hoping that DX will soon release more data and publish a full-blown paper on the subject.
Meanwhile, we can look at other signs of historically enthusiastic supporters starting to come to terms with reality.
🤔 Notable boosters coming to terms with the real impact
There have been two notable examples of similar reality checks in the past weeks, coming from two authoritative voices in the space.
The first one is none other than Gergely Orosz from the renowned The Pragmatic Engineer newsletter.
In a recent deep dive, eloquently titled Are AI agents actually slowing us down?, Gergely covers a broad set of reported negative side effects of the use of agentic AI at big-name companies.
The article is paywalled, and I didn’t read the full content, but in the summary, what caught my attention was the following point on how to solve the issue:
How do we solve it? Engineers with strong architectural sense become more critical than ever, proposed solutions include formal validation methods, and perhaps reviving some old school QA ideas.
It sounds like the world of software engineering is discovering that solid software engineering practices are what make the difference.
So it's not enough just to adopt fancy tools?
The second, and probably most notable, is a recent article from Steve Yegge.
Yes, THE Steve Yegge, who literally co-authored a book titled Vibe Coding⁴.
In a recent piece published under the unmistakable title The AI Vampire, he touches on the human and psychological aspects of the way of working (or rather, the way of living) he has contributed to promoting through his unfiltered, and dare I say uncritical, promotion of AI tools above all else:
I regret the unrealistic standards that I’m contributing to setting. I don’t believe most people can work like I’ve been working. I’m not sure how long I can work how I’ve been working.
It’s refreshing to see the tone evolving even in the booster camp.
The only thing I regret is that none of them seem to mention, quote, or credit the authors, journalists, and writers who have been warning about these issues for a long time. 🤷‍♂️
💸 Where is the money?

One of my favourite authors of all time in the techno-critics space, Cal Newport, has recently started to cover more topics related to AI in his podcast.
In his most recent episode⁵, Newport did two things that I found particularly interesting:
First, he mentioned an article from Cory Doctorow⁶, focusing on the unit economics of the GenAI industry:
Now, some exceptionally valuable technologies have attained profitability after an extraordinarily long period in which they lost money, like the web itself. But these turnaround stories all share a common trait: they had good “unit economics.” Every new web user reduced the amount of money the web industry was losing. Every time a user logged onto the web, they made the industry more profitable. Every generation of web technology was more profitable than the last.
Contrast this with AI: every user – paid or unpaid – that an AI company signs up costs them money. Every time that user logs into a chatbot or enters a prompt, the company loses more money. The more a user uses an AI product, the more money that product loses. And each generation of AI tech loses more money than the generation that preceded it.
Then, he mentioned an article from the notably socialist reporting site Reuters.
The article, like many similar ones from Ed Zitron, highlights the creative accounting and reporting practices that many private tech companies use, a trend that started with the SaaS business and has become borderline criminal with AI companies.
The way they tend to report annualised revenues based on "good time windows" shows one thing: the need to make things look better than they are in reality.
Kudos to Karen Kwok for a wonderful closing on the article:
No one is being misled. Until AI companies standardize how they report revenue and are upfront about potential volatility, however, their metrics risk looking like a plausible hallucination.
Investors, be warned!
🛡️ Insurers know something about risk

If that wasn’t enough, a recent article from The Register also caught my attention. It’s an interview with the co-founders of an AI company. Not your typical left-wing activists living barefoot in the woods.
The article goes by the title AI still doesn’t work very well, businesses are faking it, and a reckoning is coming, which is a bit too clickbaity for my taste, but that's the reality of most media these days.
Besides the now well-known issues they describe, one in particular caught my attention.
It’s about how the insurance industry is seeing the whole GenAI space:
Deeks said “One of our friends is an SVP of one of the largest insurers in the country and he told us point blank that this is a very real problem and he does not know why people are not talking about it more.”
Insurers, he said, are already lobbying state-level insurance regulators to win a carve-out in business insurance liability policies so they are not obligated to cover AI-related workflows. “That kills the whole system,” Deeks said.
Now, Deeks's company, Codestrap, welcomes you on its homepage with the message, "AI Systems You Can Actually Underwrite." Surely they have their own incentives to promote such a narrative, but what they're talking about seems to have some basis in reality.
Not that I’m a big fan of insurers. They’re notably risk averse, as that’s their whole business.
But they are also experts in risk assessment.
We might want to listen to what they have to say about an industry that is driving an unprecedented amount of investment and euphoria.
💩 Enshittifying squeeze
A small- to medium-scale drama has recently unfolded among the users of one of the well-known AI-assisted editors in the space: Windsurf.
For those who don't know it: it is, or used to be, the main contender to Cursor among AI-assisted VSCode-fork IDEs. The company was the object of a tumultuous acqui-hire operation by Google last year, while "the remainder" went to the folks over at Cognition, the creators of Devin AI.
Windsurf recently announced a drastic change in its pricing scheme, causing nothing short of a riot.
They messed it up in so many ways it's not even funny, including announcing the change 24 hours before it went live, despite their Terms of Service explicitly stating they would give 30 days' notice of any change.
Needless to say, the change in pricing (or rather, in what people get as part of their subscription, and how they can use it, now subject to daily and weekly quotas) significantly cripples the value existing paying customers get out of the tool. This is basically product suicide, and there's only one reason to do it: the need to stop the bleeding in a business that doesn't have sustainable unit economics.
We’ve seen other players do the same, and more will follow, as they progressively run out of VC money.
What then? I guess that the survivors, once freed of competitive pressure, will follow suit and significantly raise prices to cover their insane losses.
What’s next?
OK, this wasn’t as short as I originally intended, but I guarantee you I have removed some additional points, including some AI-related (or AI-induced) open-source drama that hit close to home. I’ll save that for another time.
Now, unlike private AI companies, I don’t lie about my revenues.
As of today, the annualised recurring revenues (ARR for the initiated) for this newsletter are exactly €0.
I.e., nobody so far has upgraded to the paid tier. That's not entirely surprising, as there isn't a single piece of content behind a paywall.
My experiment proved something we all already knew: why pay when you can get something for free?
So, I had an idea.
Since I'm having a lot of fun following the news and articles that help demystify the mainstream AI discourse⁷, I'm thinking about launching a recurring column titled "This week in AI"⁸, which, this time, will only be available to paying subscribers.
Or maybe it can become a recurring monthly issue of the newsletter.
I’m still unsure about the exact format and frequency.
What I know is that those articles will look a lot like this one, requiring a lot of research work and combining different sources into a coherent and interesting narrative.
Before I get started, I’d like to get a sense of the general interest in it.
So, please comment with “Interested”, or just reply to this email with your comments, if that’s the case.
I promise you won’t be charged against your will.
Or just upgrade to the paid tier by using the button just below.
1. As I need to keep the article short, I expect the footnotes to expand. Specifically, on the token machine: with slot machines, the more tokens you get out, the more you're winning. With LLMs, it's arguably the opposite: you pay both for what goes in and for what comes out, and increasingly so, as we'll see through the article. The house always wins, even when you believe you're winning.
2. Just go to their homepage. It literally says "Developer intelligence for the AI era". Whatever the AI era is.
3. A funny anecdote: when I shared that article with someone I know (a practitioner with decades of experience), their reaction surprised me. Instead of just taking the data in, they said something along the lines of, "I believe that's what happens when you're not taking full advantage of these tools. John Doe (made-up name of a shared acquaintance, not a tech guy but a CEO) told me that Acme Inc. (made-up name of a company we both know about) is targeting 90% of code written by AI by the end of the year." If you flagged the use of "I believe", and the comparison of PR throughput with the percentage of code written by AI, as red flags, you still haven't completely given up your ability to process information. Congrats.
4. For the record, I'm going to read that book soon. If all goes according to plan, it'll be on the list of books for April. I don't avoid getting in touch with "dangerous" reads, as I don't believe there is any such thing as a dangerous read. There are only dangerous readers/writers, but that's a topic for another time.
5. It came out on the various streaming platforms just a few days ago under the title "Did AI Just Become Sentient?" and, at the time of writing, is not yet available on the official website. I don't want to promote any specific platform, so either search for it on your favourite one or just wait a few more days.
6. In an interesting turn of events, after publishing my review of Doctorow's latest book in last week's newsletter, I randomly discovered he'd be in town for his book tour on Friday. So I met him for the first time and got my copy of the book signed. Life is good.
7. This is a polite way of saying "calling bullshit on the broligarchs", if you will.
8. I know I can come up with a much better title. This one is just illustrative, to give you an idea of what to expect. I might as well call it "Hallucinations from the Other Side" or "Decrypting the Cartel". You get the drill.