Accelerating drift into failure: today's most widespread "AI Strategy"
What can go wrong when the CEO pushes technical decisions, shifts focus from users' problems to unreliable technical capabilities, and tries to persuade everyone else to follow suit?
Today's article collects my latest reflections on the most popular TV show being broadcast on every social network and news outlet: tech bros' love affair with energy-hungry stochastic models they tend to refer to as "intelligence" or "revolution".
I plan to make it my last on the topic for a while, as I'm increasingly getting tired of it, almost to the point of missing the '80s, when the zeitgeist focused more on Michael Jackson, Madonna, and people trying to kill the Pope.
I remember when we used to ask CTOs to focus more on the business

Until recently, we used to ask CTOs to focus more on the business and less on the technology. If you've been in the industry for more than two weeks, you should remember that.
Today, we're facing a paradox: as more and more CTOs have taken it upon themselves to become effective business leaders, the opposite is happening elsewhere in the executive room.
I don't know if they're trying to overcompensate for CTOs who have become too effective in that shift, and maybe stopped talking about Cloud, Big Data, Microservices, and other esoteric concepts in the boardroom, but CEOs across the globe seem to have embarked on the opposite journey. They've stopped talking about, and in many cases caring about, their business, and today it looks like the only thing they can talk about is AI. AI here, AI there. Solutions in search of a problem. Output over Outcome.
Did we go too far as tech leaders? Do we need to reclaim the space of technical buzzwords and hype so that our CEOs can go back to doing their job?
Or do we need to set up for them the same 12-step program we all went through: remind them of the basic responsibilities of executives, that technology is just a means to an end, and that they should focus on defining that specific end with clarity and strong arguments?
I don't know about you, but when I see CEOs pushing, and sometimes even imposing, technical decisions on their organization, I'm worried.
Worried about their users, their shareholders, and occasionally about their mental health.
AI-first companies

We're seeing an increasing number of companies declaring their intention to become AI-first organizations. Out with the old-school mobile-first or — god forbid — people-first companies. That's so yesteryear.
From Shopify to Duolingo, we've all seen bold announcements on social networks and news outlets. They declare their new strategy, one that every other company seems to be adopting. So much for originality and differentiation.
I suggest a simple exercise for anyone who wants to understand how devoid of substance most of those proclamations really are.
Pick one of them, and replace every occurrence of "AI" with something more mundane, but still relevant, such as "electricity."
Someone saying, "We're an electricity-first company: before using manual tools, we'll try electric ones first, etc." doesn't sound all that smart and innovative.
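If you want to run the exercise mechanically, here's a minimal sketch. The proclamation text below is made up purely for illustration; paste in any real announcement you like:

```python
import re

def buzzword_swap(announcement: str, old: str = "AI", new: str = "electricity") -> str:
    """Replace whole-word occurrences of a buzzword with a mundane substitute."""
    return re.sub(rf"\b{re.escape(old)}\b", new, announcement)

# A made-up proclamation, purely for illustration.
announcement = (
    "We are an AI-first company. Before doing anything manually, "
    "every team will ask how AI can do it for them."
)
print(buzzword_swap(announcement))
# -> We are an electricity-first company. Before doing anything manually,
#    every team will ask how electricity can do it for them.
```

The output tends to read exactly as bland as the original, which is rather the point.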
Even worse, you, as a user, couldn't and shouldn't care less about how the company gets the job done.
What you care about is that it does a good job of solving your problem. You care about them making your life easier, not more miserable, solving the problem quickly and reliably.
That's why whenever I see the latest announcement of someone becoming an AI-first company, the way I read it sounds very different from the message they're trying to convey.
Such executives are putting the interests of the companies selling AI services to them (models, infrastructure, etc.) first and foremost, ahead of the interests of their users and shareholders.
They're taking shareholders' money and using their loyal users to finance big-AI companies in their attempt to build something that is slightly more reliable than a pathological liar.
That's the true meaning of AI-first companies in my book.
AI-first, as in they're putting the interests of companies selling AI technology first. Definitely not something I'd brag about if I were in their shoes.
Drifting into failure
A few years ago, I went through the daunting process of reading Drift into Failure, to date one of the most interesting, and most difficult, books I've ever read[1].
The main thesis from the book can be summarized as follows.
The Newtonian action/reaction model, which relies on the notion of a single root cause for every observable phenomenon, is not sophisticated and nuanced enough to explain the real world. The author argues that we need to take a systems-thinking approach and recognize that failures are generally the result of small issues piling up, compounding over time to cause a major disaster, such as a plane crash.
One human behavior is often at the core of those compounding minor problems, something the author describes as the normalization of deviant behaviors, or normalization of deviance. It's the slow but inescapable drift into complacency as humans get used to nothing bad happening. We become less careful. Routine checks get delayed. Safety measures are abandoned. Until the day when all those isolated actions conspire to cause a major disaster.
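To make the compounding mechanism concrete, here's a toy simulation of that drift. It's my own illustration, not something from the book, and every number in it is an arbitrary assumption:

```python
import random

random.seed(42)

skip_probability = 0.01  # chance a routine check is skipped on a given day (arbitrary)
fault_per_skip = 0.05    # chance a skipped check leaves a latent fault behind (arbitrary)
latent_faults = 0

for day in range(1, 3651):  # simulate ten years
    if random.random() < skip_probability:
        # Skipping and (visibly) getting away with it makes the next
        # skip feel safer: the deviance normalizes.
        skip_probability = min(skip_probability * 1.2, 1.0)
        if random.random() < fault_per_skip:
            latent_faults += 1
    if latent_faults >= 3:  # several small, dormant faults finally align
        print(f"Day {day}: major failure after {latent_faults} latent faults")
        break
else:
    print("No disaster yet; the drift simply hasn't caught up with us")
```

No single day's decision looks dangerous in isolation; the disaster only emerges from their accumulation.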
I think a lot of what is happening today could be classified just as that: normalization of deviant behaviors.
We see people trusting whatever a chatbot tells them to do, without stopping to question and challenge as they'd do with any regular person. They're effectively delegating their ability to make decisions to stochastic models. And we call that normal.
We see users being told "just try again; if you're lucky, you might get a better result," as if they were playing a slot machine rather than trying to get important and sensitive work done. And we call that normal.
We see people shipping code to production that has not been tested at all for vulnerabilities, by people who have no clue about what a vulnerability even is. And we call that normal.
We see companies believing they can replace people with decades of experience in narrow and complex spaces with unreliable statistical models. And we call that normal.
We see kids in school cheating their way through a degree and not learning shit any longer, because it's just easier to get AI to do your homework. And we call that normal.
Where many people see progress and innovation, I see an acceleration in the normalization of deviance.
Where many people celebrate the reckless "move fast and break things" ethos, I prepare myself for the disaster that will eventually follow.
I can't predict what the disaster will look like: will it be a bubble bursting? A long-lasting economic crisis? A runaway acceleration of climate instability? The AI race fueling an arms race between East and West, leading to further conflicts and victims?
I don't know, but from what I've learned about the topic, it looks like a major disaster will happen relatively soon, and most people, especially the biggest optimists, will be caught by surprise.
The 10x Engineer: from myth to industry-wide delusion

What seems to get everyone worked up in the industry is the promise of a virtually unlimited supply of synthetic equivalents of the mythical 10x engineer.
Except that, just as with the flesh-and-blood engineers they're dreaming of replacing with reasoning machines, they're falling for the trick of mistaking short-term bursts of output for long-term productivity.
True 10x Engineers aren't those who individually produce ten times as much code as the average. They aren't those with the highest number of PRs per unit of time. Software Engineering, and knowledge work more generally, is not manufacturing. Your clients aren't paying you based on the number of new lines of code produced. And I'm sorry to break it to you, but often they're not even paying based on the number of features you ship.
True 10x Engineers are those capable of building systems that make the entire team, and the organization as a whole, productive over the long term: easy to reason about, easy to maintain and modify, stable, and reliable. They can simplify complex systems and are often champions at removing code rather than adding it.
As of today, there is zero evidence that GenAI is having a positive impact at that level. In case you missed it, check this article on the topic.
Even the idea that you can get away with more junior profiles by pairing them with AI assistants is flawed at its core. Those assistants aren't capable of teaching junior engineers how to get better. They'll only support them in churning out even more bug-ridden and hard-to-maintain code.
In fact, they might even make the problem worse. As junior engineers fall for the eloquence of the models, they might believe they're improving their skills while, in reality, picking up plenty of dangerous practices.
Natural selection and survival of the fittest

In October 2022, I joined a company working in the Web3/Crypto industry. For those unfamiliar with the recent history in that space, that was about a month before the dramatic FTX crash[2].
It was so dramatic that it triggered what many called a major crypto winter, or bear market. The market took a massive hit from that event: drastic reduction in transacted volumes, the major cryptocurrencies losing most of their value on the markets, and VC money, still flowing abundantly in the weeks preceding the crash, suddenly becoming scarce.
I lived through all of that from the inside and could observe the industry's reaction.
What I found fascinating was a silver-lining narrative shared by the most optimistic actors in the space[3]. The narrative went more or less as follows.
What happened with FTX, and its consequences, is good for the industry as a whole. It'll clean it up by unmasking the fraudsters and con artists who have infiltrated the space.
What will be left after the winter cleaning will be a much more robust industry, made up of honest entrepreneurs and talented builders. This will, in the long term, benefit the whole ecosystem.
I have since left the crypto industry, but what I observed during the year I spent in that bubble stuck with me.
What's particularly relevant today is exactly that popular silver lining after the crash.
People believed in some Darwinian mechanism at play that would make only the fittest survive. They believed that some form of natural selection was at work. Most likely one that was distributed, decentralized, permissionless, and trustless.
I can't say whether that prophecy materialized, but I find the same narrative applies to the current moment.
What might natural selection look like when applied to today's tech industry?
A possible scenario is the one in which rich broligarchs will only get richer, with the support of conniving administrations. Plenty of people are writing about that, and I'm afraid they might be right. Even though that might be the final outcome, I'm more interested in what could happen on our way there.
Lately, I've been thinking about how natural selection might play out for all those companies that seem to be embracing AI as a silver bullet. We seem to be headed toward a world in which AI (specifically GenAI) is becoming a commodity. As such, it will cease to be a differentiator, and it probably already has. A world in which everyone will try to squeeze chatbot interfaces, hallucinatory content, and machine-generated mediocrity into all their digital products, built on increasingly shaky foundations.
While many enthusiasts believe this to be the next Cambrian explosion for businesses, I think it has a better chance of triggering the next mass extinction event, from a business perspective.
As all products start to look the same, talk the same, and make the same mistakes, the winners, the fittest, or should I say the survivors, will still be those who can differentiate themselves.
Those who aren't just copying what someone else has already done, but produce genuinely new ideas and products.
Those who can deliver compelling experiences and solve fundamental problems thanks to human ingenuity and creativity.
Those who resist the allure of the next gold rush and steadily focus on first principles and fundamental value.
Those who don't put the interests of big AI companies ahead of their users'.
And those are the companies I'm excited to work with in the upcoming months and years, as I believe they will be the ones coming out stronger once this bubble finally bursts.
If you enjoyed this
This newsletter is free, and I intend to keep it free forever.
Sharing it with others helps immensely in growing it.
Engaging with my professional services is a great way to ensure I can continue dedicating many hours each week to producing what I hope to be high-quality content.
Those services rest on three legs:
Fractional CTO or Advisory roles for startups, scaleups, and established tech companies. Find out more on this page.
Individual Mentoring and Coaching for Engineering Leaders. Find out more on this page.
A paid Community for engineering leaders. Find out more on this page.
If your needs fall into a different category, such as newsletter collaborations or sponsoring, please reply to this email or schedule a free call via this link.
[1] What makes it hard is mostly the style in which it's written. Despite that, I still recommend it, as it contains a lot of valuable lessons. You can get your copy here. Don't read it while driving, as you risk falling asleep.
[2] If you haven't heard about the FTX crash, I'll summarize it for you in a few sentences. A spoiled kid with a high IQ and influential connections fooled everyone into giving him their money, which he lavishly spent and used without consent to make silly investments that didn't pay off. All of that happened on FTX, the company he had created, which happened to be one of the biggest Crypto Exchanges in the world when it crashed. If you want to know more, this link can be a good starting point.
[3] And believe me, optimism seemed to be one of the most common traits in that industry, to the point of being reckless. Very similar to what I'm seeing in the AI space these days. But I digress. Well, at least I'm doing it in a footnote.
I disagree 🙂.
Let’s take this quote:
‘Someone saying, "We're an electricity-first company, before using manual tools, we'll try electric ones first, etc." doesn't sound all that smart and innovative.’
Being an “electricity-first” company is meaningless today. But what if broadly available electricity is *new*? What if people in the company are slow or hesitant to adopt it?
The whole AI-first framing is a push to overcome barriers to adoption. Barriers to adoption across all the functions in the company, not just engineering.
The point is to adopt it across the board. Not (just) into the product but into the “factory.” Into how finance does audits. How marketing does research. How sales tailors messages to leads.
So it should be the CEO, not the CTO, driving it.
And it should be done, because we are in the middle of a transformational shift in tooling. Just like electricity.