Interview with Baldur Bjarnason on writing, AI and media
I recently had the pleasure of meeting Baldur, the author of one of my favorite books of 2025, and we turned that chat into the interview you're about to read.
This is the second time I've ended up interviewing the author of a book I really liked1. This time, I had the pleasure of chatting with Baldur Bjarnason, the author of The Intelligence Illusion. In our chat, we discussed writing, AI, and the media industry at length: an eclectic mix that closely reflects Baldur's personality and background.
Let's get into it!
Welcome, Baldur. Can you briefly introduce yourself to those who don't know you?
I'm Baldur Bjarnason: a writer, researcher, and freelance software developer. I've always been interested in understanding technology trends and their impact on society, which led me to start researching AI, or rather, generative models. I published a book in 2023 about the business impact of generative models titled The Intelligence Illusion, and I've just recently released an updated second edition. All of my books are self-published.
Let's get into more detail about your journey. How did you get involved with technology?
You could argue that I'm “an old”, compared to the children dominating tech these days. I guess my journey really started when I got the first modem for my Mac.
And of course, as any self-respecting nerd would do, one of the first things I did was make a website. I was a bit of a, and still am, a comics geek. So, I made a website about comics and submitted it to all of the search engines at the time. I went to bed, and when I woke up the morning after, I had emails from people who had read the website, and they liked it.
At that time, the web was so small that a new website would get discovered by people simply browsing around.
At that point, I was hooked.
Initially, though, for me, the web was mainly a medium. My family has a long history and tradition of being in media: TV, radio, and publishing. At that time, around 1999, I was facing the dilemma of either following my family tradition of joining traditional media or taking the risky bet of focusing on the web, which is what I did. I decided to go study interactive media in the UK.
I never got a formal computer science education, though I'd always been a STEM nerd throughout my school years.
As I was following the media path, I realised that to use the web properly as a medium, you had to understand the underlying technology. People around me didn't seem interested in that aspect, so I ended up becoming the guy who knows how to code websites, and quickly became the go-to person in my circle for coding. A lot of my photographer or actor friends who needed a website would reach out to me, asking me to build one for them.
All this was happening as I was going through my university studies.
My academic background is quite research-heavy, which has given me the ability to do deep research into complex problems, and that became one of my main assets as a freelancer, too. For instance, when a company needed to add support for PDFs in their app, I could do the research to develop expertise on some of the internals of PDF documents to help them out.
I constantly apply the research methodology that I learnt during my PhD to my work. That has led me to often and quickly become the subject matter expert in different topics. One of those topics was the ePub format, again an example of something sitting at the intersection of tech and publishing.
I’ve mostly focused on tech problems in publishing and education, often with non-profit organisations funded by research grants, experimenting with new things that rarely end up becoming anything, as that's the nature of research. The low success rate is baked into the overall approach.
And then a few years ago, I was wrapping up a project with a Canadian not-for-profit organization when I heard the first GitHub Copilot announcement and started focusing more on researching generative models.
You have been writing for a while. Can you tell us more about how you got started with it and why?
I've always been writing, ever since I was a child. I started when I was around five or six years old. That has to do with the overall influence in my family, of course, but also a natural interest in thinking and writing.
I've been practicing for so long that I can write very quickly. When I was in academia, some of my colleagues got annoyed because I could produce 3000-word articles quickly, making it look almost effortless. In my haste, I sometimes made a hash of it, though, as they delighted in pointing out.
I got into the habit of writing regularly very early, and I've been practicing that for about 40 years.
Can you tell us more about the journey of publishing your first book?
The first books I self-published were a series of fantasy novellas over a decade ago. They never sold particularly well, but did get a few solid reviews. Overall, though, it was a very discouraging experience. I was working in UK publishing at the time, and when I asked for feedback (just feedback, not blurbs or reviews or anything), all I got from friends and acquaintances in publishing were comments along the lines of “the typography is quite nice”.
This is, if you aren’t familiar with the British style of feedback, utterly devastating. It means they couldn’t find a single positive thing to say about the writing itself.
After a few responses like that, I just stopped and took the novellas off sale.
A few years later, during the COVID pandemic, I got hit pretty hard by brain fog due to long COVID, so I felt driven to condense all of my research and experience in software development into a book, as I fully expected the fog to only get worse. That and shifts in the software development industry meant it really felt like time was running out. So, I wrote first drafts of each chapter on my lucid days, edited and typeset during my foggy days, and used maps, notes, and a marker board to keep track of things, and the end result was my first non-fiction book, Out of the Software Crisis. I started work on The Intelligence Illusion pretty much immediately after that.
What led you to make the transition from fiction to non-fiction writing?
My first foray into self-publishing was in fiction. In a nutshell, the issue was that it’s fine if friends, acquaintances, and family won’t touch or even try to read your specialised non-fiction. Those are texts geared to help people work, so it’s not a surprise that people around you might not bother to read them.
But it’s very disheartening when that same thing happens with fiction. If you write a novella or a novel and it’s hard to get people to read it or say anything positive about it, that just takes all of the energy out of you.
Ironically, now that I’m back in Iceland and less involved in the publishing community, I might try my hand at fiction writing again at some point. Or, I might not. We’ll see.
How do you balance your work as a writer with your work as a freelance developer?
This balance might well be taking care of itself. I’ve been a freelance consultant, focusing mostly on web development, for just over five years now, and I generally rely on a network of contacts for finding new projects. And many of those projects have been with grant-funded not-for-profits.
All of that is changing. Most of my tech industry contacts have either been laid off or are facing budget cuts and are themselves trying to find freelance gigs. Most of the not-for-profits are facing existential funding crises. So, the writing has ended up being a more reliable income stream, even though it’s generally lower. The freelance development work pays well, but the projects are becoming fewer and farther between.
If I want to stay in software development, I might have to change specialities. Maybe learn another programming language or something. Or, I might just focus on the media business. I haven’t decided yet.
Why did you choose the path of self-publishing, and would you recommend it to aspiring new authors?
Having worked in publishing means I have a slightly different understanding than most people of the tradeoffs involved in self-publishing versus traditional publishing.
Trade publishing (academic publishing is whole ‘nother ballgame) gives you broader reach but at a substantially lower income. The broader reach also affects the writing. You can’t get away with focused, specialised writing if you’re getting published by a regular trade publisher. It needs to have a broader appeal.
But the only kind of writing that has an outright, measurable business value—of the kind where you can say “reading this book helped me save or earn this amount of money”—is exactly the focused and specialised kind.
Self-publishing makes sense for that kind of writing—the kind that has business value.
Sort of. There is a major caveat.
If you have the kind of specialised skillset and knowledge that benefits businesses—and if you do, you know, because you’re probably already selling access to that skillset through consulting—then either self-publishing or selling online courses directly through your website, promoted with a newsletter, is the only way to go. Trade publishing would water the book down and make it less useful to businesses. Academic publishing would make it inaccessible. So, it might make sense if you’re in the consulting business and have skills businesses value.
The major caveat is that it gets very expensive if you don’t have the expertise to put the book together yourself. And it gets even more costly if you don’t have people with editorial experience around you willing to volunteer their time. This means that I can’t recommend it to most new authors. If I had paid for typesetting, cover design, or line editing, then the books wouldn’t have been financially viable.
When and how did you get your first ideas for what eventually became The Intelligence Illusion?
Before GitHub Copilot arrived, I had already been working on a project that involved a deep dive into the state of machine learning. When GitHub made the announcement, I was very curious to see if they had really managed to solve the various problems and issues I knew affected the tech. I decided to do a deep dive, as another of my research projects. I just assumed the research would pay for itself down the line, as that's one of the things I need to take care of as a freelancer.
I came into the project with optimism, as in my prior work, I had been toying with some ideas on how you could use machine learning to improve the software development process, specifically debugging. Unfortunately, I saw none of that when I started looking under the hood. I quickly realized that the whole premise of Copilot and similar tools was based on the flawed myth of the 10x engineer: someone who churns out code at incredible speed, with no worries about the overall design and the end user's needs.
The more I looked into generative models, both LLMs and diffusion models, as part of my research, the more I was horrified by what I was seeing. I was shocked to realize how the industry was presenting something that produced mediocre results as a groundbreaking tech advancement. The results were also mediocre in a very uninteresting way, contrary to the output of earlier models, which were wrong in a way that could at least be considered creative. They made errors humans would never make, and that was interesting in its own way. But the modern diffusion models that got rid of the earlier weirdness produced results that were just plain mediocre. These images look like generic street or model photography. There is a homogenization that is contrary to creativity and art.
In essence, that's how the book came to be. I was in sheer horror at the disconnect between the promises and the reality, and I felt that I had to put this research together in a form that would make it accessible to a broad audience.
Some books focus on the academic aspect of AI, and others focus on the fraud and deception. I just wanted to boil down my research into a single book that just focused on the fact that these systems and tools do not really work for the tasks we're supposed to use them for.
Some of the notes for the book were three years old, but the bulk of the writing took place over about two months, not counting proofreading and production.
In your book, you talk about the fact that there's rarely a first-mover advantage in tech, and you provide more than a handful of examples for that. Why do you think we're still collectively falling for FOMO as an industry?
Because, to use the words of the programming pioneer Alan Kay, tech is a pop culture, not engineering. It is fundamentally a business driven by fads and fashion, not reasoning or engineering. A catchy concept will conquer the entire tech industry before sound engineering manages to tie its metaphorical shoelaces.
Tech is also heavily financialised. Most of the money in tech comes from swings in the stock market or cryptocurrencies. For many of these companies, their actual products don’t matter, only that they can tell a plausible story about those products.
In the book, you break down people's attitudes towards AI into three major groups: True Sceptics, True Believers, and Sceptical Believers. You argue that being in this last bucket, Sceptical Believer, is the riskiest position, which to many might sound counterintuitive. Can you elaborate on that?
It has to do with gaming out the consequences of the technology, and that only works if the participants are rational, which they rarely are in practice.
If the technology is revolutionary, then it will grow incredibly fast and be dominated by fraud and speculation because the technology is genuinely lucrative, and if it’s a true revolution, most of the early innovations will rapidly become outdated as revolutionary tech tends to grow and mature very quickly.
If it promises to be revolutionary, but isn’t, then the same will happen, but probably at a smaller scale.
So the true believers and the true sceptics are in the same boat. If they game out how things are likely to play out if their beliefs are true, it should become obvious quite quickly that the early years are likely to be extremely risky, and they will adjust their behaviour accordingly. A rational believer and a rational sceptic will both be very careful about adopting the “revolutionary” tech into their core business. The rational believer will still be very involved in research and experimentation, but won’t move their business wholesale until it starts maturing.
The sceptical believer, on the other hand, is the manager who thinks there might be something to the “revolution” but maybe not to the degree promised. They generally adopt the tech quite early because they don’t want to lose out, but trick themselves into thinking that they can limit their exposure to risk by only adopting the tech into parts of their business.
What they’ve done is make their business reliant on a technology that will quickly become outdated, flawed, or even insecure, and they won’t have the expertise to spot the issues until it’s too late.
But, nobody involved in “AI” is even remotely rational at the moment, so none of this really matters.
You make the point that one of the biggest challenges with LLM-related research is that training data are not available for peer review, which means all available papers are written by the vendors and not peer-reviewed by other scientists. Do you think that with the emergence of open-source models, this might change in the near future, or not really, since the models are open but the training datasets are not? Is it smoke and mirrors?
I originally had some hope that smaller open-source models might be a path out of the bubble, but that hasn’t turned out to be the case.
The open source models are not really open source at all. You can’t take the published code and recreate the models. The training process is usually either undocumented or just outright nondeterministic. The licenses do not conform to any of the traditional free software or open source principles. They often have usage and commercial restrictions that mean that any business use is risky.
Generative models are still a field dominated by secrecy and hype, and “open source” models have, genuinely, only made the problems worse.
I recently published an article titled “Is GenAI Digital Cocaine”2 exploring the early signs of negative effects on cognitive abilities caused by overreliance on GenAI-based tools, both for students and knowledge workers. A few months ago, I also published another article warning about the environmental impact GenAI is having, at a point in our history when trillions should be spent saving us from extinction rather than on fancy technology3. You didn't touch on either of these aspects in your book. Was it a decision to focus on the false promises rather than the side effects, or a matter of time or interest?
Partly cynicism, to be honest. It was primarily because I don’t think businesses and managers care. If it were legal to sell heroin or meth, most managers wouldn’t hesitate. They don’t care about the environmental effects either.
But it was also because I know that others, like Emily M. Bender, Alex Hanna, and Dan McQuillan are doing a much better job of covering the ethical or environmental issues. I decided to focus on pragmatic business use because that’s where I can apply the same expertise I use in my consulting. I can speak with authority on business use. I can’t speak with authority on the ethics or environmental effects.
On page 21, where you describe the structure of the book, you close with the following sentence: “If you want to be depressed, read the whole thing from beginning to end”. I appreciate the style, and the fact that there are so many interests – read: billions of dollars – at play in this space that people could feel helpless in their quest to resist the “enshittification” of products. What are some actions you recommend our readers take to counter this trend?
No clue. Would love to hear suggestions, though.
Do you use any form of AI in your life, or are you actively trying to avoid them?
So, there’s a major difference in practice between various forms of machine learning and generative models. Machine learning is fantastic. Generative models are unreliable, error-prone, and volatile.
Machine learning is a major part of modern computing. It’s a big part of photography and video workflows today. Using Lightroom to automatically create a mask for the sky in a photo, for example, uses machine learning but not a generative model. That’s useful. I use that kind of technology every time I work on a photograph or video.
LLMs and diffusion models, however, do not work the same way and are bolted onto products but haven’t yet been meaningfully integrated into software workflows because of their high error rates and volatility.
Of the various generative models, the only one I use is OpenAI’s Whisper to create a first pass at captions for my videos, but the error rate is massive and unpredictable. Some videos transcribe cleanly. Others transcribe into gibberish. The work involved in fixing the ones that go bad is such that I would be better off transcribing them manually. So, not an entirely positive experience.
AI is spitting out mediocre content. What do you think about the consumer side? Aren't we also becoming more accustomed to mediocre content? Doesn't this predate AI in a way?
If you look at Western media, and specifically North American and English-language UK media, the trend is real. There is an undeniable drift towards the mediocre.
But alongside that trend, there is also a trend towards massive financial losses and media companies absolutely destroying themselves because they're losing audiences, revenue, and profit. So I don't think the end user is actually satisfied with the quality of the content being offered to them.
The pattern I see is that executives in big media companies are trying their best to avoid having to deal with labour. They try to cut costs as much as possible. That's why, instead of having writers' rooms, they basically do what Netflix does with their TV series: shoot the first draft of the script rather than pay for multiple iterations and polish.
That's just madness, as working on the writing is exactly the easiest and most cost-effective way you can improve the quality of a TV show or a movie.
Experienced writers know how to frame scenes in a way that preserves their core effect but drastically reduces costs, such as skipping scenes in a swimming pool and the like. Writers with less experience would not know how to optimize for the cost of shooting, which ends up increasing the overall production cost while the quality of the story remains low.
The vision of the executive producer is dominant, and they seem to believe that the audience will follow.
The second problem is that technology platforms have been looting the whole media industry. They're undervaluing it by offering a huge catalog for a flat fee. Today, there are no incentives for making a show a blockbuster hit, as the revenues you'll generate are more or less the same. There is no reward for quality or success; only quantity matters.
The Netflix model is becoming more and more the equivalent of the gym model, where what matters to them is that people sign up and pay, regardless of whether they come to the gym or watch anything on their platform.
It shouldn't be surprising to discover that WeTV and IQ.com are among the fastest-growing streaming services in the world, and they're both based in mainland China. The West, as it has in so many other industries, is effectively ceding the media landscape for much of the world to those with the greatest capital, and, for many areas, that’s going to be China.
What's surprising is that despite these services being based in a country famous for all sorts of authoritarianism, outside of China they have way more queer content than all of the North American streamers combined. And I mean queer content where all of the leads in the series are queer couples with a happy ending, and the entire plot revolves around a queer community.
Media industries outside the US are becoming good at tackling niches and turning them into the mainstream. Take South Korea, for example, with the whole K-pop music movement. It went from being a niche interest 10 years ago to the mainstream phenomenon of today.
In my view, that happened because Western media and Western pop culture have let executives drive them into the ground.
I worry that the same thing is happening to tech today. Tech is so obsessed with automating mediocrity and adding minor features that lock users into SaaS products, just to make it hard for them to drop out and switch to something else.
Meanwhile, take a look at what has been and is happening in China. Ten years ago, they had an incredibly bad reputation for electric cars, and now they're dominating much of the global market. Twenty years ago, they had an incredibly bad reputation for making computers, and now the majority of computers are built in China, along with a substantial portion of the world’s semiconductors. Maybe not the most cutting-edge chips, but they make a lot of the rest.
Right now, China is investing a lot of money and a lot of research in technology, while everybody else seems satisfied with basically extracting money from a captive audience.
Seeing that the United States is destroying research funding, and knowing how much the tech industry relies on that government funding, the US is deliberately sacrificing its future. They are assuming they've already won, and that will lead them down the path of decline, if not collapse, and will make the experience very unpleasant for us all in the meantime, as we depend on their technology.
Thanks for the great conversation, Baldur. Finally, where can people find out more about you?
People can follow my work on my website.
I'm also on LinkedIn and Bluesky.
If you enjoyed this
This newsletter is free, and I intend to keep it free forever.
Sharing it with others helps immensely in growing it.
Engaging with my professional services is a great way to ensure I can continue dedicating many hours each week to producing what I hope to be high-quality content.
Those services rest on three pillars:
Fractional CTO or Advisory roles for startups, scaleups, and established tech companies. Find out more on this page.
Individual Mentoring and Coaching for Engineering Leaders. Find out more on this page.
A paid Community for engineering leaders. Find out more on this page.
If your needs fall into a different category, such as newsletter collaborations or sponsoring, please reply to this email or schedule a free call via this link.
A few months back, I interviewed Cate Huston; you can find the whole article here.
You can find that article here.