Good morning, {{ first_name | AI enthusiasts }}. A week ago, Spotify’s co-CEO claimed their best devs haven’t written a “single line of code” this year, echoing a wave of execs who describe AI coding agents as the future of software.
The shift is happening – there’s no doubt about that. But bringing AI into real engineering workflows is more nuanced than hitting a switch and going on autopilot.
To better understand what’s changing (and what isn’t), we sat down with Rajeev Rajan, CTO of Atlassian, the company behind popular collaboration tools such as Jira, Confluence, and Loom. We got his insights on Atlassian’s recently launched software development agent, Rovo Dev, and discussed how human roles evolve in an AI-native world.
In today’s AI rundown:
The non-negotiable cost of coding with AI
When AI writes code, what will an engineer do?
Building agentic AI for developer joy
What happens when AI makes a mistake?
Truth about the “death of SaaS” theory
Quick hits with Rajeev
LATEST DEVELOPMENTS
WORKFLOW REDESIGN
The Rundown: As more enterprises adopt coding agents, Rajan says teams will have to redesign their workflows with both human ownership and safety systems, so that AI moves faster without sacrificing consistent, production-grade quality.
Cheung: AI is generating more code than ever, but research shows 45% of it still contains security flaws. How exactly should teams leverage AI coding agents without trading quality for speed?
Rajan: There’s an undeniable trend toward an AI‑native software development lifecycle. But if you let quality slide, you’re just moving faster toward incidents and customer pain. It’s less about “AI vs. quality” and more about “how do we redesign the workflow on the assumption that AI is in the loop by default?”
In code review, we are entrusting AI to catch bugs, enforce coding standards, and explain complex changes. For example, Rovo Dev helped reduce PR cycle time by 45% and auto-resolved 51% of potential security vulnerabilities. The nature of review is changing here: instead of humans reading every line of a peer’s code, it’s about a human owner reviewing an agent’s work.
Rajan added: At deployment, if AI is helping you generate and ship more code, your safety systems have to keep pace. Think: smaller batches, heavier CI, stronger observability, and fast rollbacks. You can’t operate in a black‑box scenario.
Why it matters: Speed only becomes an advantage when the system behind it is airtight. The right approach with AI coding agents is to treat them as core enterprise infrastructure — which means designing workflows around them, building in safety from day one, and holding output to the same standards as human-written code.
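Rajan’s point about safety systems keeping pace — smaller batches, stronger observability, fast rollbacks — can be made concrete with a toy post-deploy guard. This is a sketch only, not Atlassian’s actual tooling; the metric names and thresholds are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class DeployMetrics:
    """Observability snapshot taken after a small-batch deploy (hypothetical signals)."""
    error_rate: float      # fraction of requests failing
    p99_latency_ms: float  # tail latency

def should_rollback(baseline: DeployMetrics, current: DeployMetrics,
                    max_error_increase: float = 0.01,
                    max_latency_ratio: float = 1.5) -> bool:
    """Trigger an automatic rollback if the new batch regresses key signals."""
    if current.error_rate - baseline.error_rate > max_error_increase:
        return True
    if current.p99_latency_ms > baseline.p99_latency_ms * max_latency_ratio:
        return True
    return False

baseline = DeployMetrics(error_rate=0.002, p99_latency_ms=180.0)
bad = DeployMetrics(error_rate=0.05, p99_latency_ms=190.0)   # error rate spiked
ok = DeployMetrics(error_rate=0.003, p99_latency_ms=200.0)   # within thresholds
print(should_rollback(baseline, bad))  # True
print(should_rollback(baseline, ok))   # False
```

The point isn’t these particular thresholds; it’s that when AI ships more code, the rollback decision itself should be automated rather than left to a human watching dashboards.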
ROLE SHIFT
The Rundown: With writing code no longer the bottleneck, Rajan believes the next big opportunity for engineers is stepping into more strategic functions (from planning to execution) and designing better systems with AI in the loop.
Cheung: How much code will be AI-written by 2028, and what does the role of a software engineer look like with that change?
Rajan: By 2028, I would not be surprised if most new code in large companies is AI-generated. I say that as someone who fell in love with writing code early in my career and still remembers the joy of seeing something work for the first time.
We’re seeing a shift where every engineer is a tech lead, orchestrating systems and agents. Engineers now spend more time driving clarity and owning what happens “left of code” and “right of code” – from planning and design on one side to testing, rollout safety, and operations on the other.
Cheung: What about new grads entering the field — does AI help them or hurt them?
Rajan: Focusing on the right fundamentals and adopting the AI-native way of working will give new grads a big advantage — potentially allowing them to leapfrog senior developers who haven’t adopted AI ways of working yet. Your edge will come from judgment: knowing when to trust the AI and when to challenge it.
Why it matters: As AI writes more code, the moat for engineers moves from actually typing to instead framing problems, designing systems, and maintaining oversight. Rajan’s leapfrog point is especially interesting: the advantage may not go to the most senior person, but to whoever learns to orchestrate AI fastest.
DEVELOPER JOY
The Rundown: Atlassian kicked off its internal journey to improve “Developer Joy,” raising developer satisfaction scores from 49% to 83%. With teams moving faster and feeling more empowered to make changes, Rajan shared how this renewed sense of ownership led to direct product improvements with Rovo Dev.
Cheung: Why did you decide to focus on developer joy, and how did you actually measure it?
Rajan: When I joined Atlassian, we chose to frame developer productivity as ‘Developer Joy’. If developers are frustrated, blocked, or taken out of their flow, it doesn’t matter what productivity metric you pick — you’re not going to get great outcomes.
We track this with regular satisfaction surveys and hard metrics tied to pain points. Developer satisfaction has gone from 49% to 83%, and we see that show up in the work. For example, by focusing on one of the biggest friction points, the Confluence backend team cut its full server build time by more than 60%. We see these kinds of investments as core to our ability to ship value to customers.
Cheung: When you tested early versions of Rovo Dev internally, what did engineers push back on?
Rajan: Early on, the feedback we got on Rovo Dev was that parts of the experience felt like ‘magic’ in the wrong way. It would do something useful, but you could not see enough of the work it did to get there.
We actually scrapped and reworked an early ‘one click, do it all’ flow because our own teams would not touch it without more transparency and control. They wanted a way to understand each step the agent took, how their instructions led to different outcomes, and the ability to steer the agent. That pushed us hard toward agent sessions you can inspect and experiences that keep developers in the loop.
Why it matters: Rajan’s early Rovo Dev story highlights how critical internal feedback loops are when adopting agentic AI. The more teams listen to users (and iterate on what feels opaque, risky, or frustrating), the stronger and more trustworthy the system becomes. Iteration goes hand-in-hand with developer trust in an AI-native world.
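The “agent sessions you can inspect” idea can be sketched in a few lines. Everything here (class names, the transcript format, the example steps) is hypothetical — it just shows the shape of a step-by-step session a developer can audit and steer:

```python
from dataclasses import dataclass, field

@dataclass
class AgentStep:
    action: str     # what the agent did
    rationale: str  # why it did it, so a human can audit the decision

@dataclass
class AgentSession:
    """Keeps every step an agent takes visible, instead of a one-click black box."""
    steps: list[AgentStep] = field(default_factory=list)

    def record(self, action: str, rationale: str) -> None:
        self.steps.append(AgentStep(action, rationale))

    def transcript(self) -> str:
        return "\n".join(
            f"{i + 1}. {s.action} -- {s.rationale}"
            for i, s in enumerate(self.steps)
        )

session = AgentSession()
session.record("read tests/test_auth.py", "locate the failing assertion")
session.record("edit src/auth.py", "fix the off-by-one in token expiry")
print(session.transcript())
```

A transparent log like this is what lets a reviewer challenge any individual step rather than accepting or rejecting the whole change.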
ACCOUNTABILITY
The Rundown: Rajan says powerful AI agents should be deployed to production only when there’s clear human ownership and a way to track, monitor, and steer the system’s behavior, creating an accountability layer that moves as fast as the AI itself.
Cheung: As AI agents take on more autonomous work, how do you maintain accountability, and why can’t “the AI did it” ever be an excuse?
Rajan: When something goes wrong, ‘the AI did it’ can’t be the answer, because AI doesn’t own customer trust – we do.
As we bring autonomous AI into our workflows, we have to be explicit about accountability: every AI-assisted decision or action has to have a clear human owner. If we can’t understand or observe how an AI is behaving, it doesn’t belong in a critical path.
We put guardrails and observability around AI, we log and audit its actions, and we make sure teams treat it like any other powerful tool: you understand failure modes, you monitor it, and you don’t ship without ownership. AI can help move faster, but it doesn’t replace judgment and responsibility.
Why it matters: With something as powerful as agentic AI, accountability is non-negotiable. If you get it right, you unlock speed, trust, and durable customer confidence. But if you get it wrong, the consequences can be massive — because autonomous systems can amplify mistakes just as quickly as they amplify progress.
SAAS STORY
The Rundown: Despite the rise of AI coding agents, Rajan argues SaaS tools aren’t going anywhere. In fact, he believes these tools will get stronger, with AI tapping the context they already hold to work across projects, workflows, and controls.
Cheung: What’s your take on the whole “saaspocalypse” narrative that AI agents will kill SaaS altogether?
Rajan: The idea that one person on your team can vibe code an in-house solution over a weekend and replace a mature SaaS solution you’re paying for is, in my view, overrated.
When customers buy software-as-a-service, they are not just buying the code; they buy workflows, shared context, security, compliance, and reliability. That is where well-designed SaaS products still matter a lot. What AI actually does is make great SaaS even more valuable.
Rajan added: Your projects, docs, tickets, and conversations live in those systems, and AI can now move across them, automate the boring parts, and orchestrate agents around trusted workflows and controls. So I am more interested in SaaS that becomes AI-native than in hot takes that SaaS is dead.
Why it matters: The debate is still on, but one thing’s clear: as AI agents grow more capable, the foundation of SaaS will become even more important. AI will evolve the platforms that already hold an organization’s workflows and institutional knowledge, serving as trusted systems of record.
LIGHTNING ROUND
Most underrated AI trend right now?
Rajan: Lots of AI products today are designed for a single-player system. We see much greater potential in how AI helps entire teams work better together — allowing important context to flow across a team of humans and agents.
Something you believe about AI that most people in tech would disagree with?
Rajan: I think AI will make engineering more human, not less. A lot of people worry we will lose the craft — I believe we will spend less energy on repetitive implementation and more time on strategic, creative work and collaboration.
Advice for teams struggling with developer burnout?
Rajan: Start by fixing one concrete, high-friction problem that impacts your team. You’d be surprised how quickly chipping away at issues like slow build times and noisy tooling compounds into a much bigger impact.
You worked at Microsoft for 20+ years, then led engineering at Meta. What did each teach you about building great teams?
Rajan: Microsoft taught me the value of deep technical rigor and building platforms that stand the test of time. Meta taught me how powerful it is when you pair strong engineering talent with a bias for fast iteration and learning. At Atlassian, I try to combine both: long-term architecture with a culture that ships, learns, and adapts quickly.
That's it for today!
See you soon,
Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown