What the latest DORA report says about GenAI and Software Development
While the AI broligarchs make bold claims about the products they're selling, expert researchers help us develop a more nuanced understanding of the impact of GenAI on Software Development.
This won't be the "I've spent X hours reading up on Y so that you don't have to" kind of article some of you might have gotten used to reading.
I want you to read the latest DORA report[1] in its entirety.
It's good for your brain, your ability to process complex topics in your head, and ultimately for your career. Beware of anyone trying to convince you to delegate those activities and just consume stripped-down summaries written by them.
In most cases, they're doing it in their own interest, exploiting innate human laziness to the point where you end up thinking you should thank them for the service.
You shouldn't.
Feel free to read summaries, but then read the original work and form your own ideas and mental representation of it. In fact, my primary intention with this article is to convince you to read the original report.
Please let me know in the comments if I succeeded.
Stories and opinions about Generative AI in software development are now everywhere.
The DORA report confirms that "89% of organizations are prioritizing the integration of AI into their applications, and 76% of technologists are already relying on AI". The pressure to adopt it is significant. Crazy investments are flooding the sector, and the promise of a revolution in which software engineers will become obsolete and anyone will be able to vibe-code their way to production-ready applications is repeated to exhaustion.
On one side, you have many vendors trying to convince you that their tools can do magic. Those bold claims aren't new, and though they're grounded in some level of truth, they can't conceal the inherent conflict of interest of those making them. On the other side, you have independent researchers, such as the folks at DORA, who have been relentlessly working to help our entire industry make sense of the nuanced, complex, and still largely misunderstood discipline of software engineering.
While it's easy to fall for the former, as they tend to take up all the media space and require minimal brain capacity to process, I strongly encourage any engineering leader to put in the effort to find and consume the work of researchers whose main incentive is to improve our understanding of the world, not to make billions and feed their egos.
When we move past vendor promises and examine the data presented by DORA, a more nuanced and, frankly, more complex picture emerges. A picture that can't be fully captured in 280 characters, or whatever the limit on X-formerly-Twitter is these days.
Let's look at one of the most interesting findings from the report.
The Productivity Paradox
On the surface, the report notes some appealing individual benefits. Developers using GenAI often report feeling more productive and experiencing enhanced flow states. These subjective improvements in the developer experience are undeniably part of the overall impact of GenAI on software development.
However, the data reveals a significant paradox.
Despite these feelings, the report also finds that increased AI adoption correlates with developers spending less time on tasks they find valuable. Time spent on toil remains largely unchanged.
In essence, it suggests that GenAI might currently be better at accelerating preferred tasks than at eliminating shallow and tedious work. Let's take a moment to let that sink in, as it runs counter to the repeated promises of the technology.
We were promised a world in which we'd have plenty of time for meaningful, creative, and fulfilling work, but the current findings suggest the opposite is happening.
We might be accidentally creating the conditions for software engineers to spend even less time building products and even more time sitting in meetings and filling out tedious reports.
Furthermore, the report highlights that "developer trust in gen AI is currently low"[2], which acts as a significant barrier to maximizing even the perceived benefits. There are reasons for that, and we need to understand them fully. Simply deciding that developer trust is the problem might be an exemplary case of focusing on the proverbial finger pointing at the moon[3].
Process Gains vs. Performance Pains
Perhaps the most subtle and surprising finding is the disconnect between process metrics and overall delivery performance. While AI adoption is linked to some improvements, such as potentially better documentation and faster review cycles, the report finds that "AI adoption negatively impacts software delivery performance, particularly delivery stability"[4].
How can internal processes seem faster while actual delivery outcomes suffer?
This is one of those cases where focusing on individual productivity can have unexpected and undesired side effects[5]. The report suggests a plausible explanation rooted in fundamental DevOps practices.
The ease with which AI generates code might be creating incentives for teams to abandon the good old practice of small batch sizes. As the report theorizes, this could lead to larger, riskier changes that directly undermine stability.
It appears "AI improves process measures but hurts performance measures without adherence to fundamentals,"6 underlining that technology doesn't replace sound engineering principles – it makes them potentially even more critical.
Extrapolating from the report, we can see a potential scenario that many people in the industry underestimate or misunderstand. What the DORA report implies is that solid DevOps principles and practices, the same ones that were found to drive overall organizational performance before GenAI, are a crucial requirement for increasing overall performance when GenAI enters the mix.
So, contrary to what many voices seem to suggest, though GenAI might be lowering the bar for writing code, it's not a key differentiator when it comes to outcomes.
This means that companies with strong engineering practices and cultures in place are the ones most likely to see the biggest positive impact from introducing GenAI into their processes. Not the romanticized junior engineer who suddenly gains superpowers.
In other words, GenAI is more of an equalizer than a differentiator.
In recent conversations, I've used an analogy that even people with no knowledge of how software is built, operated, or maintained can understand: Money in the world of entrepreneurship.
Let's assume that suddenly, any single person on this planet can easily get access to funding if they want. Like ZIRP on steroids.
Does that make every single person a successful entrepreneur? Is money everything required to build a successful startup? Arguably not.
We've seen a good number of companies that were extremely good at burning money very quickly but unable to turn a profit.
The same goes for code. Code is a key factor in building a successful tech startup, but code alone has little intrinsic value. It's a liability. More code doesn't mean more value, exactly as more money invested doesn't necessarily translate to increased ROI.
There are many things you need to get right to turn those raw inputs into actual value. Hard work is still hard work, even with a lot of money or with AI monkeys writing most of your code.
Key Takeaways for Engineering Leaders
Like with any other major technological shift, we should be careful not to throw out the baby with the bathwater[7]. The DORA report also provides a set of pragmatic, evidence-based recommendations to help you guide your teams in getting the best out of AI adoption. They include the following:
Look Past the Surface: Don't accept claims of immediate and guaranteed performance gains at face value. The data shows a more complex reality in which process speed doesn't guarantee delivery success. Maintain healthy skepticism and focus on verifiable outcomes.
Double Down on Fundamentals: Now is the time to reinforce, not relax, core practices like small batches, robust testing, CI/CD, and thorough code reviews. These are both essential guardrails for managing the risks AI can introduce and key drivers of improved outcomes. AI makes them even more relevant than they used to be.
Address the Productivity Paradox: Recognize the nuance behind "productivity." Understand that AI might change how work feels without necessarily improving what gets done or reducing toil. Engage with developer concerns about skills and job satisfaction honestly. Find ways to dedicate the freed-up time to activities that take advantage of human creativity, knowledge, and ability to solve hard problems.
Build Trust Methodically: Acknowledge that low trust is a key inhibitor but also a symptom. Improve confidence through clear usage policies, strong quality controls (testing, reviews), transparency about AI limitations, and empowering developer choice rather than mandating tools. More importantly, do not try to "fix" people's lack of trust in the tool. Fix the tool and the way it's used instead.
Measure Real-World Impact: Track core DORA metrics rigorously and consistently, and watch out for unexpected deviations. Is AI adoption leading to tangible improvements in how software is delivered, or just altering internal perceptions and processes? Use those findings to start conversations with your teams and diagnose potential root causes and mitigations.
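If you don't already track those numbers, even a rough but consistent computation is enough to spot trends. Below is a minimal sketch of how the four key metrics could be derived; the Deployment and Incident record shapes are assumptions, and in practice the data would come from your deployment pipeline and incident tracker rather than hand-written objects.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median


@dataclass
class Deployment:
    finished_at: datetime
    commit_created_at: datetime  # when the change it ships was first committed
    caused_failure: bool         # did this deployment degrade service?


@dataclass
class Incident:
    started_at: datetime
    resolved_at: datetime


def dora_metrics(deployments: list[Deployment], incidents: list[Incident], window_days: int) -> dict:
    """Compute the four key DORA metrics over a reporting window."""
    lead_times = [d.finished_at - d.commit_created_at for d in deployments]
    restore_times = [i.resolved_at - i.started_at for i in incidents]
    return {
        "deployment_frequency_per_day": len(deployments) / window_days,
        "median_lead_time": median(lead_times) if lead_times else timedelta(0),
        "change_failure_rate": (
            sum(d.caused_failure for d in deployments) / len(deployments) if deployments else 0.0
        ),
        "median_time_to_restore": median(restore_times) if restore_times else timedelta(0),
    }
```

Computed over the same window month after month, even this rough version will tell you whether growing AI usage coincides with the drop in delivery stability the report warns about.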
Finally, the report emphasizes the need for transparency and a clear vision to ensure the maximum benefits from GenAI adoption, while addressing concerns about job security and quality of work that may be present in the organization.
GenAI will never solve fundamental cultural or transparency issues at your company, but it might become the incentive you need to address those systemic problems. It's better to improve the situation for the wrong reasons, much like companies that improve their privacy or accessibility practices just to comply with regulations, than not to improve it at all.
Think about how you can use the hype surrounding this technological shift to address long-standing issues in your engineering practices, such as documentation, testing, continuous delivery, and the long list of capabilities DORA has been promoting for over a decade.
Regardless of what the future of AI tools in software development looks like, your organization will be better positioned to benefit from them.
If you enjoyed this
This newsletter is free, and I intend to keep it free forever.
Sharing it with others helps immensely in growing it.
Engaging with my professional services is a great way to ensure I can continue dedicating many hours each week to producing what I hope to be high-quality content.
Those services rest on three pillars:
Fractional CTO or Advisory roles for startups, scaleups, and established tech companies. Find out more on this page.
Individual Mentoring and Coaching for Engineering Leaders. Find out more on this page.
A paid Community for engineering leaders. Find out more on this page.
If your needs fall into a different category, such as newsletter collaborations or sponsoring, please reply to this email or schedule a free call via this link.
[1] You can download the full report here. If you prefer not to fill out the form and share your personal data, you can get the PDF from this direct link. You see, I'm making it easier for you to read the original report. You have no excuses.
[2] This is a summary of the findings presented on pages 23-29.
[3] In case you're unfamiliar with the old saying attributed to Confucius, "When a wise man points at the moon, the imbecile examines the finger", you can read up on it here.
[4] Again, this is a summary of the findings presented on pages 5-14.
[5] I wrote about another similar issue, the convenience principle, in a prior article.
[6] Summary capturing the argument on pages 12-14.
[7] I love how crude this saying is, but it's also very effective!