Is GenAI Digital Cocaine?
How a recent frustrating experience with AI-assisted tools led me to a deeper reflection on the impact of overreliance on such crutches.

It all started with a mundane and seemingly uneventful task.
I was wrapping up an Amplitude integration for product analytics on a new product we were launching for one of my clients. I had done all the event tracking and was building the first set of dashboards.
The dashboards were meant to generate insights and give the rest of the organization a sample of the tool's capabilities.
Like every other SaaS service these days, Amplitude offers an AI agent that is supposed to help you achieve your goals with the tool: building a custom dashboard or report, navigating the documentation, etc.
It's supposed to be a shortcut to help you get things done faster.
I don't know the specifics of how it works internally, but it shows all the visible signs of a common approach in this space: an LLM from one of the leading providers, most likely OpenAI, fine-tuned by feeding it the product documentation and probably the content of their support forum.
If you trusted the claims of every single LLM vendor on the planet, you'd expect the tool to be very effective at one thing: providing instructions on how to use Amplitude, and doing so more efficiently than having to go through the extensive and admittedly dull documentation.
Nothing could be further from the truth.
But let's first hear from this week's sponsor!
Lemon.io (Sponsored)
This week's newsletter is sponsored by lemon.io.
Ship Software Faster with Experienced Engineers
Finding skilled developers is hard—even for seasoned engineering leaders. Traditional hiring cycles can take months while critical features sit in the backlog.
Lemon.io removes the guesswork by rigorously vetting engineers and matching you with top talent in just 48 hours.
Need to scale your team quickly or find specialized talent that's hard to source locally? We match you with developers from Europe and Latin America who integrate seamlessly into your workflow—without the long hiring cycles or commitment of long-term contracts.
Unlike other platforms, we don't just check résumés—we put developers through a rigorous multi-step vetting process that assesses technical skills, problem-solving abilities, and communication.
Start building faster with lemon.io
Back to Amplitude.
I was trying to write a custom formula that didn't match any of the basic examples I had seen in the documentation. This seemed like a great use case for the assistant: it would give me the correct answer faster than my less sophisticated trial-and-error approach. By leveraging the power of GenAI, I would complete the task quicker and move on to something else.
The results? A disaster.
Despite many attempts at tuning the prompt, an activity that increasingly felt like trying to communicate with a donkey in plain English, the assistant consistently came up with wrong answers.
Not in the sense that it didn't understand my request. It seemed to get that right. However, the implementations it suggested simply would not work.
From unsupported syntax to long, tedious workarounds that failed to deliver the expected result, Amplitude's assistant kept lying about what the product did or didn't allow me to do. At some point, I felt I was talking to a sales rep trying to sell me a contract. Not a good one, but the annoying kind who isn't afraid of making up capabilities and lying to you to hit their quarterly quota.
At some point, I realized I was turning into one of the apes in the iconic scene from 2001: A Space Odyssey, screaming prompts at the AI monolith, hoping to trigger a meaningful and illuminating interaction from it.
I decided I'd had enough of persevering at such a dead end and resolved to do what I should have done from the beginning: plunge into the documentation, no matter how unengaging it was, and develop a firsthand, deeper understanding of the syntax provided by the tool.
Twenty minutes later, having invested the time to read the exact definitions of the syntax operators, I had found the solution. Two minutes later, it was implemented and working as I had intended.
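For the unfamiliar: Amplitude charts let you combine the events you select (referenced as A, B, C, and so on) into custom formulas built from functions such as UNIQUES and TOTALS. I won't reproduce my exact formula here, but a hypothetical example in that spirit could look like this:

    UNIQUES(B) / UNIQUES(A)

That is, a conversion-style ratio of the unique users who performed event B over those who performed event A. Simple once you know the operators' exact semantics; maddening when an assistant keeps suggesting ones that don't exist.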
I reprimanded myself and wondered why I had chosen the seemingly easy path first, even though it turned out to be less efficient. My explanation in the moment was that I wanted to move fast to deliver something non-critical. I had fallen for the temptation I often criticize in the world of knowledge work: the obsession with speed over everything else, including quality and personal growth.
Even if the shortcut approach had worked, I would have achieved the same external goal of producing a working dashboard, but I would have had a shallower understanding of its inner workings.
It was around that time that I saw this post from Beck.1
My recent experience and Beck's serendipitous post led me down a rabbit hole of reading, discussions, and reflections, culminating in the title of today's article:
Is GenAI digital cocaine?
Before we get into the specifics of that question and my proposed answer, let's explore the current understanding of GenAI's effects on our cognitive abilities.
Reliance on GenAI could be making us dumber
Contrary to the dominant discourse on AI, I will refrain from making bold claims unsupported by evidence.
Instead, I'll start by recognizing this is a complex and nuanced topic, and we're just beginning to see the first results from research and studies.
As is often the case with our life choices, we're sometimes called to make decisions with limited information. We must choose tradeoffs that offer the best rewards at the lowest risk. The definitions of reward and risk are highly subjective.
What follows is my subjective view on the topic, based on the information available thus far. It includes reflections on the dominant incentives in modern capitalist society, on what I deem worth the risk, and on what I'd rather protect at the cost of missing out on some other benefits.
One of humans' cognitive biases is the availability bias, or availability heuristic2.
In the past couple of years, we've been bombarded by overly enthusiastic claims about AI's powers: that it can do a better job than many humans, that AGI is just a few months away, and that LLMs can reason in ways that are becoming increasingly comparable to human abilities.
Anecdotally, a friend of mine recently observed that he wonders when Sam Altman finds time to work, given the number of statements and interviews he seems to be giving at a staggering pace.
I can only speculate on a possible answer to that, and I ultimately (Altimately?) don't care.
Still, it's a fact that we're constantly a click/scroll away from reading a big claim about a utopian (dystopian?) imminent future.
That's why it becomes increasingly important to read beyond the blatant marketing material and the statements of those who will benefit the most from AI's pervasiveness: the same people who are selling the technology.
For one thing, we can rely on individuals reporting their own experiences, especially when those experiences don't align with the dominant narrative.
I found a recent post3 from a self-described Gen Z'er who stopped using ChatGPT because she felt it was impacting her cognitive abilities. In her own words, Eline refers to it as brain rot, following Oxford's word-of-the-year decision in 20244. For the unfamiliar, brain rot is defined as follows:
‘Brain rot’ is defined as “the supposed deterioration of a person’s mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging. Also: something characterized as likely to lead to such deterioration”.
One of the most interesting points Eline makes in her article is this:
Instead of taking hold of genAI to get clearer ideas and structure sprawled thoughts, we’re allowing ourselves to be shaped by them. Soon, even if you are not using AI to write, those words, phrasal verbs and connectives will be so intricatedly(sic) attached to your brain that you will have to double your efforts to avoid inserting them in a text.
Eline's observations are not based exclusively on her own experience.
In her piece, she quotes an article by Lance Eliot published on Forbes a few months ago, explicitly titled Generative AI and Brain Rot5.
The article makes for fascinating reading. It tries to anticipate, and preemptively dismiss or at least mitigate, future claims that GenAI could lead to brain rot:
I don’t want to seem dour, and I realize those are all depressing recounts of how brain-rot via generative AI might occur. Remember that those are just theories. Little if any bona fide research backs up those conceptualizations. It is principally conjecture.
Although the article is superbly written, it naively compares this technological breakthrough to preceding ones, and in doing so misses one key point that differentiates GenAI from TV, social media, and video games.
What Eliot missed is that the incentives behind TV, social media, or video games are entirely different from those we see in the GenAI space, specifically in the areas where it's being applied.
But we'll discuss that in the next section.
Before exploring that aspect, let's lend more support to what Eliot dismisses as principally conjecture in his article.
A few recent studies have found significant correlations (not causality yet, mind you) between over-reliance on chatbots and declines in cognitive performance among students and knowledge workers6. These are early studies, and I expect we'll see more of them as research progresses. Their findings are interesting and worth some reflection:
Other studies revealed that regular utilization of dialogue systems is linked to a decline in cognitive abilities, a diminished capacity for information retention, and an increased reliance on these systems for information
And also:
[…] A key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.
These early studies provide insights into how excessive reliance on these new digital crutches might lead to severe cognitive consequences for those who frequently use them.
Specifically, it looks like people who have high trust in GenAI tools and tend to rely on them for daily tasks exhibit lower levels of critical thinking. They simply assume the results generated by the LLM are correct and move on to other tasks.
When we zoom out from the specifics of the research, it makes logical sense. With GenAI, we're not only outsourcing to a tool the retrieval and storage of information, as we used to do with search engines, or the acceleration of everyday tasks, an area where tools such as spreadsheets and more general automation have played a key role throughout the digital age.
We're also increasingly outsourcing our reasoning, the very skill that differentiates humans from all other animal species.
That's exactly what I was doing when trying to engage the support of Amplitude's assistant, or what Beck refers to in the post mentioned in the intro.
Our cognitive abilities are akin to muscles. Like any other muscle, if we don't use them, they will atrophy, and we'll lose the ability to call on them when needed.
That's what the irony of automation mentioned in the last quote is all about7.
While research catches up and uncovers more substantial, confirmed links between GenAI and a (still potential) decline in cognitive abilities, let's look at the factor I believe Eliot missed in his analysis.
We'll start by taking a look at a phenomenon that has played, and still plays, an important role in the modern world: the cocaine trade.
Cocaine, the System's drug
In his 2015 book ZeroZeroZero8, Roberto Saviano describes the worldwide cocaine trade in great detail. What stuck with me when I read it was not the detailed description of the mechanisms of production and distribution. It was not the violence of the Mexican cartels, the ingenuity of the mules carrying cocaine across the Atlantic, nor the ambiguous role played by the CIA and the FBI in the declared war on drug trafficking.
It was his analysis, in the opening of the book, of the key difference between cocaine and all other mainstream drugs, such as heroin and other opioids or cannabis derivatives.
Saviano argues that what makes cocaine an outlier compared to all other drugs is a simple fact: it's a performance drug. Unlike the known alternatives, it improves the performance of the person who uses it.
It doesn't dumb you down. It makes you feel stronger and capable of doing anything. It allows you to work longer hours, sleep less, and feel limitless. In a word, it's the system's drug. Its effects are aligned with the incentives of a competitive capitalist society, as it allows you to be more productive.
From the system's perspective, it doesn't matter that your cognitive abilities will decline over time from using the very substance that made you more effective in the short term. If you don't use it, someone who does will outsmart and outperform you. That's what makes this drug popular across every socioeconomic class.
Even though cocaine is no longer the upper-middle-class drug that few could afford regularly, having become widespread across all levels of society, it still carries significantly less social stigma than any of the alternatives.
What has changed since Saviano's book in 2015 is the emergence of new synthetic drugs that seem to have similar performance-enhancing effects, such as Ritalin or Adderall, but the principle remains the same.
Some of these have even been praised as cool by prominent tech figures, at least until things went south for them9.
We're finally equipped to explore the central thesis I'm arguing for.
GenAI is the digital equivalent of a system's drug
Taking inspiration from Saviano's remarkable work dissecting the complex machinery of the cocaine trade and its consumption, I argue that we might be witnessing the emergence of GenAI, or at least some of its widespread forms of usage, as a digital twin of what cocaine has represented in the physical world for about half a century.
The primary and secondary effects of GenAI usage bear a stark resemblance to those of the plant-based alkaloid, especially when it comes to relying on chatbots to perform work-related tasks.
The primary effect seems to deliver on the promise: it makes its users more productive. As with cocaine, productivity is mainly measured in terms of output generated per unit of time. If cocaine makes you work longer hours by reducing the impact of fatigue, chatbots make you produce more artifacts in the same amount of time. From the system's perspective, the net result is comparable: more units of stuff produced per unit of cost.
The fact that (ab)using such performance-enhancing crutches might lead to a progressive decline in the individual's cognitive capabilities is a side effect that doesn't directly prevent the system from achieving its stated goals. Once the productivity gains are achieved, the human cost becomes little more than an unfortunate externality.
Quantity is all that matters. Since the dot-com bubble of 1999, producing more marketing material, more sales emails, and more features has been the predominant driver of digital business growth.
This is no conspiracy theory.
Some of you might be tempted to dismiss this whole thesis as just another conspiracy theory about those bad guys in Silicon Valley. It is not. I don't believe there is a conspiracy or a grand criminal scheme behind what we're observing, in the same way I don't believe there is a conspiracy or a grand vision behind the worldwide cocaine trade.
The explanation is way simpler and a few orders of magnitude less glamorous. To be honest, it’s plain boring.
There are plenty of examples of dynamics that benefit a system while being detrimental to the individual elements participating in it.
The cocaine trade is a driver of wealth. If you look at the margins it offers, it's one of the most lucrative businesses in the world. It allows dealers to become rich while offering a short-term advantage for the consumer, too, as they become more productive in their jobs.
Similarly, GenAI, specifically the promise of productivity boosted by chatbots and coding assistants, generates wealth (arguably as concentrated in a few individuals as that of the drug trade) while making its consumers feel more productive and earning them short-term gains.
In both cases, the long-term effects on individuals seem no more than a bump in the road, a sacrifice worth making for the greater good.
Or are they?
Instead of a fancy conspiracy theory, I'm offering a more effective tool for making sense of reality.
I've often referred to Hanlon's razor10 since I first learned about it from one of the smartest software engineers I've ever met, someone I had the honor of sitting with for about a year. Thanks, Art!
The razor, in its original form, is formulated as follows:
Never attribute to malice that which is adequately explained by stupidity.
Douglas Hubbard offered a version that seems more appropriate for understanding the mechanics I'm describing here:
Never attribute to malice or stupidity that which can be explained by moderately rational individuals following incentives in a complex system
As I don't have the depth of thinking or the language mastery of either Hanlon or Hubbard, I've come up with my own version, which focuses on an aspect often associated with the public discourse on AI… or vaccines.
Do not attribute to a conspiracy that which can be easily explained by greed
I don't expect this to be quoted in future edits of Hanlon's Wikipedia page. However, I'd be more than satisfied if it helped a single person think more clearly about the dynamics we observe in today's complex intersection of society, technology, and economic incentives.
Where to go from here
I'm not suggesting we all turn into Luddites and start destroying machines all around us. Machines are part of our lives, for good and bad, and trying to deny it would only be anachronistic and unproductive.
Instead, I'm trying to keep practicing my critical thinking to make sense of the transformation we're all witnessing in this decade. That means not taking at face value the highly salient, shallowly inspiring, enthusiastic statements from the same people who benefit from selling you GenAI.
I'll keep reading papers from those whose job is to help us understand correlations and causal effects with the most neutral view possible: researchers and independent journalists who dedicate their lives to helping humanity develop a deeper understanding of the world around us, including the benefits and side effects of our own inventions.
History has proven time and time again that technological evolutions tend to have unexpected implications for society, and not always for the better. Because of that lesson, which history keeps teaching despite humans' short memory, I'm always skeptical whenever I hear someone, anyone, promise a future nobody can predict.
Regarding the use of GenAI specifically, I'm not abandoning it altogether. Instead, I'll be even more cautious than I've been in adopting it, especially as a substitute for my ability to reason and create. I'll do my best to resist the urge to opt for short-term speed and focus instead on quality and depth.
As an example, it took me an insane number of hours of researching, thinking, and then writing to produce this article. In the same amount of time, I could have published about a dozen GenAI-assisted shallow articles on vaguely the same topic. That would have amounted to more content created per hour. But I'd have missed the experience of learning about a subject, forming a more precise picture in my mind, mulling it over for weeks, and then being able to talk and reason about it without the need for any conversational crutch.
Only one article was produced instead of a dozen, but countless more neural connections were created in the process of producing it.
In my view, that capital is worth orders of magnitude more than any volume of written content I could have published.
If you enjoyed this
I don't expect you to agree with me; I'm just inviting you to be open to different views on a subject that has dominated the public discourse way too much in the past couple of years.
Conversely, I'm always open to hearing arguments and views that will help me refine my perspective. Please share your notes or links to references or articles in the comments section. I don't often answer comments, but I read them all.
Finally, this newsletter is free, and I intend to keep it free forever. Sharing it with others helps immensely in growing it. Visiting my sponsors is a great way to support the work financially. Engaging with my professional services is a great way to ensure I can continue dedicating many hours each week to producing what I hope to be high-quality content.
Those services are organized around three pillars:
Fractional CTO or Advisory roles for B2B companies. Find out more on this page.
Individual Mentoring and Coaching for Engineering Leaders. Find out more on this page.
A paid membership/community for engineering leaders. Find out more on this page.
If your needs fall into a different category, such as newsletter collaborations or sponsoring, please reply to this email or schedule a free call via this link.
For full disclosure, no GenAI tool was used to create the content for this article. However, I use Grammarly Premium (not an endorsement) to help me write better English. As a non-native English speaker, I found this tool effective in helping me improve. My method for making it work is simple: I always write all the text first, and only then do I review suggestions from Grammarly. I treat it like an editor, not a co-author.
I'm not sure I'll use it in the long term. As my writing improves, I intend to eliminate the need for it.
Happy reading, and see you all next week!
See this Wikipedia page for more details.
In this part of the article, I'm referring mainly to two recent papers; I recommend reading them both if you're interested in this space.
One of the most blatant cases is that of the former poster child of the crypto tech-bro sphere, Sam Bankman-Fried, who made extensive use of Adderall and promoted it among FTX's employees. See an example here.
Read more on Hanlon's razor on this Wikipedia page. You might even ask your favorite chatbot to tell you about it, but I can't guarantee the results.