#257 - The Future of Code Review: Stop Reviewing Line-by-Line, Start Governing AI Agents - Itamar Friedman
“Instead of reviewing lines of code, you’re actually reviewing rules, skills, quality workflows, integrity workflows, agent traces, etc.”
What does code review mean when AI writes most of the code? The answer isn’t to review more carefully. It’s a fundamentally different process, one built around rules, agents, and governance rather than diffs and comments.
In this episode, Itamar Friedman, founder and CEO of Qodo.ai, shares how AI is forcing a complete rethink of code review — from inline comments on code diffs to multi-agent governance systems that verify intent, architecture, and business logic at scale. He traces the evolution of code review through successive generations, explains why traditional static analysis is no longer sufficient, and lays out what a modern quality and governance layer actually looks like. Itamar also introduces the concept of “shift up” — extending quality checks into the planning phase so that technical product managers can contribute directly to shipping features — and explains how teams can move from vibe coding to viable, grounded development. The conversation also covers the race between AI labs, the role of open-source models, and a frank look at where the software developer role is heading by 2030.
Key topics discussed:
- Why line-by-line code review doesn’t scale with AI-generated PRs
- The generational evolution of code review tools (Gen 1 to 3.5)
- How multi-agent systems surface only what needs human attention
- Turning tribal knowledge into enforceable rules and skills
- Shift-left and shift-up: embedding quality earlier in the workflow
- What the new agentic code review UI will look like
- Vibe coding vs. viable coding: the governance layer in between
- Where the software developer role is headed by 2030
Timestamps:
- (00:02:50) How Has AI Driven the Evolution of Code Review to Multi-Agent Systems?
- (00:07:53) How Do We Move from Vibe Coding to Viable, Grounded Development?
- (00:12:35) Are Traditional Static Analysis Checks Still Sufficient in the AI Era?
- (00:16:27) How Do We Handle Exploding PR Volume Without Sacrificing Code Review Quality?
- (00:22:11) How Do We Evolve Code Review from Simple Comments to Senior-Level AI Reviews?
- (00:28:51) What Will the New Agentic Code Review UI Look Like?
- (00:33:32) How Does Qodo Differentiate Itself as an AI Code Review and Governance Platform?
- (00:37:15) What Do Shift-Left and Shift-Up Mean for the Future of Code Quality?
- (00:41:23) How Do We Maintain Quality When Running Multiple AI Agents in Parallel?
- (00:48:11) How Are Chinese AI Models Reshaping the Open-Source vs Closed-Source Race?
- (00:55:25) Which AI Models Excel at Code Review, and Are We Heading Toward Specialization?
- (01:03:16) Will Software Developers Still Be Needed as AI Automates More of Engineering?
- (01:08:50) 3 Tech Lead Wisdom
_____
Itamar Friedman’s Bio
Itamar Friedman is the CEO and Co-Founder of Qodo, an AI code review platform used by 1M+ developers. Before founding Qodo, Itamar was a founder of Visualead, which was acquired by the Alibaba Group. He then worked for Alibaba Group for 4 years as the Director of Machine Vision. Now, Itamar is dedicated to quality-first code generation.
Follow Itamar:
- LinkedIn – linkedin.com/in/itamarf
- X (formerly Twitter) – @itamar_mar
- Qodo.ai – qodo.ai
Mentions & Links:
- 📝 Attention is All You Need - https://arxiv.org/abs/1706.03762
- ARCHITECTURE.md - https://architecture.md/
- Vibe coding - https://en.wikipedia.org/wiki/Vibe_coding
- Test-driven development - https://en.wikipedia.org/wiki/Test-driven_development
- Behavior-driven development (BDD) - https://en.wikipedia.org/wiki/Behavior-driven_development
- Spec-driven development - https://en.wikipedia.org/wiki/Specification-driven_development
- Root-cause analysis (RCA) - https://en.wikipedia.org/wiki/Root-cause_analysis
- V-shape model of software development - https://en.wikipedia.org/wiki/V-model_(software_development)
- Worktrees - https://git-scm.com/docs/git-worktree
- Figma - https://en.wikipedia.org/wiki/Figma
- Notion - https://en.wikipedia.org/wiki/Notion_(productivity_software)
- GitHub - https://en.wikipedia.org/wiki/GitHub
- GitLab - https://en.wikipedia.org/wiki/GitLab
- Bitbucket - https://en.wikipedia.org/wiki/Bitbucket
- Gerrit - https://en.wikipedia.org/wiki/Gerrit_(software)
- Azure DevOps - https://en.wikipedia.org/wiki/Azure_DevOps
- Claude Code - https://www.anthropic.com/product/claude-code
- Cursor - https://en.wikipedia.org/wiki/Cursor_(code_editor)
- DeepSeek - https://en.wikipedia.org/wiki/DeepSeek
- Nemotron - https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/
- Nemotron 3 Super - https://developer.nvidia.com/blog/introducing-nemotron-3-super-an-open-hybrid-mamba-transformer-moe-for-agentic-reasoning/
- Gemini - https://en.wikipedia.org/wiki/Google_Gemini
- Kimi - https://en.wikipedia.org/wiki/Kimi_(chatbot)
- Qwen - https://en.wikipedia.org/wiki/Qwen
- Andrej Karpathy - https://en.wikipedia.org/wiki/Andrej_Karpathy
- Jensen Huang - https://en.wikipedia.org/wiki/Jensen_Huang
- Alibaba - https://en.wikipedia.org/wiki/Alibaba_Group
- NVIDIA - https://en.wikipedia.org/wiki/Nvidia
- OpenAI - https://en.wikipedia.org/wiki/OpenAI
- Anthropic - https://en.wikipedia.org/wiki/Anthropic
Tech Lead Journal now offers you some swags that you can purchase online. These swags are printed on-demand based on your preference, and will be delivered safely to you all over the world where shipping is available.
Check out all the cool swags available by visiting techleadjournal.dev/shop. And don't forget to show them off once you receive any of those swags.
[00:02:03] Introduction
Henry Suryawirawan: Hello everyone. Welcome back to another new episode of the Tech Lead Journal podcast. Today I have with me Itamar Friedman. He’s the founder of a new code review tool called Qodo.ai. I don’t know whether I pronounce it correctly, but maybe later on you can correct me. So today we are going to talk a lot about code reviews, what it means to do code review in this current AI era, and what new best practices we can adopt after the introduction of AI into our software development life cycle. So Itamar, thank you so much for your time. Looking forward to this conversation.
Itamar Friedman: My pleasure, Henry. Really excited to be here. And yeah, we can talk about AI code review now, soon, and in the future, because it’s coming really fast. So we can talk about all of them at once.
[00:02:50] How Has AI Driven the Evolution of Code Review to Multi-Agent Systems?
Henry Suryawirawan: Yeah. So one thing I realized as well is that keeping up with the pace of change — AI models, AI tools, software development tools — is really hard. I’m struggling myself to actually keep up with that. But maybe let’s start here: after the introduction of all these AI-assisted software development tools, what kind of problems do you start seeing in terms of code review?
Itamar Friedman: Yeah. So generally, about dealing with all the noise and the many tools and papers that are coming out, I think each one of us needs to hold two buckets. One is a view of the future that you believe in. At the same time, having really good channels that are low noise. So I recommend spending at least a little time on that: how do I see the future, where do I wanna spend my time, and how do I get low-noise signal? For example, podcasts like yours — I think it’s one of the best.
So about code review, I think it’s important to agree on two things. One is that code review is essential, but it will evolve, okay? It’s like code generation: code writing is essential, but it will evolve. I remember the days of TabTabTab — remember, like 2023, everybody was excited about TabTabTab — and I said it’s gonna be irrelevant, nobody’s gonna do that. And people thought I was crazy, hallucinating, you know? But it happened, right? Is code writing still important? Yes. But the way we do it actually moved from generation 1 to generation 3.5: quickly writing lines of code; then writing full classes with a Q&A style, that’s gen 2; gen 3 is agents; and now we are at agent teams, so 3.5. I also have gen 4 and 5, you can ask me later.
And I think the same thing is going on with code review. In the beginning, let’s say gen 1 of code review is giving you recommendations on code snippets. It’s the equivalent of TabTabTab, right? Then generation 2 manages to provide a more complete code review — for example, on a full pull request as a whole. And generation 3 manages to do that across multiple repos, across multiple PRs. And 3.5 is where we are right now with code review. It moved much faster, by the way, than code generation, to catch up. We’re actually at multi-agent code review systems, where you have multiple agents, each one of them in charge of a different aspect of the jobs to be done in code review, and then trying to surface only what requires attention — only what’s important and critical for the human developers to look at.
For example, one agent might be focused on verifying that your UX/UI matches the ground truth in Figma. Another agent verifies that your code is according to your intent in Jira. Another agent verifies that your architecture matches your ARCHITECTURE.md or your existing architecture. Another one checks requirements in Notion, et cetera. Another one makes sure you’re not gonna ruin your database — because you’re gonna be fired, not your AI, if you do something wrong there, right? Et cetera. And if you think about the user interface, it’s evolving from inline comments, to full PR review, to a new review paradigm where you actually need to see a story that surfaces what’s important to look at. That’s where we are right now. We can talk about the future.
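The multi-agent idea Itamar describes — aspect-specific reviewers whose findings are filtered so only the critical ones reach a human — can be sketched in a few lines. Everything here (the `Finding` shape, the agent functions, the severity threshold) is a hypothetical illustration, not Qodo's actual design; in a real system each agent would be an LLM call grounded in its own context (Figma, Jira, ARCHITECTURE.md, database policies).

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Finding:
    agent: str     # which aspect agent raised it, e.g. "db-safety"
    severity: int  # 1 = nitpick ... 5 = critical
    message: str


# In this sketch an "agent" is just a function from a diff to findings;
# the real counterpart would be a context-grounded LLM reviewer.
def intent_agent(diff: str) -> List[Finding]:
    if "TODO" in diff:
        return [Finding("intent", 4, "Change looks unfinished vs. ticket intent")]
    return []


def db_safety_agent(diff: str) -> List[Finding]:
    if "DROP TABLE" in diff.upper():
        return [Finding("db-safety", 5, "Destructive DDL in application code")]
    return []


def review_pr(diff: str, agents: List[Callable[[str], List[Finding]]],
              surface_at: int = 4) -> List[Finding]:
    """Run every aspect agent, then surface only findings that truly
    need human attention (severity >= surface_at)."""
    findings = [f for agent in agents for f in agent(diff)]
    return [f for f in findings if f.severity >= surface_at]


surfaced = review_pr("def migrate():\n    cur.execute('DROP TABLE users')",
                     [intent_agent, db_safety_agent])
```

The key design point is the last filter: the agents may produce hundreds of findings across a large PR, but the human only sees the ones above the attention threshold.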
Henry Suryawirawan: Yeah, wow. So hearing what you’re saying, it seems like the evolution is happening quite fast, right? I haven’t even experienced some of the later parts, the 3.5 that you mentioned. But nevertheless, for those who have tried using some of these AI code review tools, you would have seen some of this implemented in some shape or form, depending on the tool. And I think it can be quite exciting.
[00:07:53] How Do We Move from Vibe Coding to Viable, Grounded Development?
Henry Suryawirawan: But before we dive deeper into the code review aspects, one challenge that some people still think about when they use AI is the hallucination problem. And speaking about code review and code generation, they think that AI would not be able to produce code as great as a human can. So from your experience, what do you see so far? Are the models already capable of producing good-quality code, without, you know, irrelevant bugs and things like that?
Itamar Friedman: Yeah. So I think when you’re thinking about AI assistants for software development, they have layers. The foundation is built on top of a large language model, an LLM, that was trained and specialized for coding, and it’s getting better and better. If you just use it plainly, you will get a lot of hallucination. It’s kind of similar to hiring a new developer into your company and, one minute after they finish reviewing a piece of code, asking them to spit out a new function or a new feature in five minutes, right? It will hallucinate — or he or she will, in this case. But there is more to an AI assistant for software development. There are tools and products and even platforms — for example, shameless plug, Qodo is one of them — and they help us move from vibe coding to viable coding, or grounded coding. I’ll explain. Vibe coding is all about the flow. For the first time as a developer, the goal is to flow with my work. Before that we had, for example, test-driven development, where the goal was correctness. In BDD, the goal is behavioral correctness. In spec-driven development — each one of them had a different goal. And finally, with vibe-driven development, it’s so fun: the flow is the goal. But that’s not sustainable. It probably will not produce high-quality code. But we don’t want to lose the flow. It’s so fun, and efficient to some extent. We wanna make it also really efficient for the long term, for example, with quality.
So in order to turn it from vibe-driven development to viable or grounded development, you actually need to add proper planning that is already reviewed at the planning stage. Then code in a flow manner, like vibe coding to some extent, but with background agents and processes that are always nudging the coding agent toward the right direction, because it could go in so many directions that are not high quality. And finally, a high-quality review that is opinionated — not an LLM with just its internal knowledge learned from the entire open-source world, but rather a code review that was already trained and tuned to the company’s specific standards and biases. So it’s really a governance layer.
That’s how you turn vibe coding into viable coding: proper planning, already reviewed. Vibe coding, yes, but grounded with quality workflows that make sure almost every class decision, et cetera, is according to what is expected. And then having an even deeper, more subjective analysis at the point that you review, and closing the loop if things need to be refined and fixed.
Henry Suryawirawan: Right. You just opened up the term vibe coding. So many people are raving about it, right? Non-tech people can also now code. And we have seen in the news that there are popular inventions that some of these people create, but also some popular disasters, right? So vibe coding is definitely associated with a certain level of quality of the code produced.
[00:12:35] Are Traditional Static Analysis Checks Still Sufficient in the AI Era?
Henry Suryawirawan: Before we go into vibe coding itself, you mentioned the governance, the rules and things like that we have to put in place. And you mentioned BDD, tests and all that. Usually when we did software development in the past, we actually used static analysis checks, like our linters, and maybe tests as well. Is this something that is still relevant, or is this something that has to evolve as well?
Itamar Friedman: So yes and no. Maybe so, in a sense — exactly what you say: they need to evolve. They’re relevant because we still want to run all those more deterministic checks that can really help improve quality. But they’re not sufficient. The problem with them is that they’re very heuristic, about specific lines or functions. These tools don’t have the big picture of what’s the intent, what are the requirements, what’s the architecture design. These tools have to be encapsulated — they need to be part of a bigger platform, okay? Back in the day when we started Qodo, that’s the opportunity that we saw: we have more than 10, maybe even 100 tools, each one in charge of something small related to quality in our code — quality in the biggest sense of it, including security, compliance, et cetera.
And there’s an opportunity to build one platform — let it be Qodo or someone else, just, you know, not to advocate only for ourselves — where you get a one-stop solution for a governance layer. Because you do need to have all these linters and static analysis; that’s part of what needs to be checked. But without a more semantic understanding, you are not covering all the jobs to be done during the code review. And then code review will be the bottleneck. Quality will be a bottleneck. And we cannot really harness AI for software development.
A good example is verifying versus intent in Jira. Another good example is architectural design. Another good example is specific rules and policies that were learned the hard way, you know, that are very semantic — written sometimes in RCA documents, root cause analysis, and some of them living in developers’ minds. And they’re very semantic: this is how we do things, because that’s how we maintain performance, or manage to avoid a single point of failure. That is so hard for an LLM or static analysis to analyze without the tribal knowledge of what happened in the past or what people experienced.
Henry Suryawirawan: Yeah, I think you brought up something kind of novel, I would say. In the past, people never codified all these RCAs and architectural discussions, right? So now, with the invention of LLMs, we actually have the capability of analyzing language — the written text or the verbal — and checking that against our code base, maybe our architecture and all that. So it definitely opens up a lot of use cases.
Speaking back to vibe coding: many vibe coders are kind of solo developers, right? Maybe they don’t care so much about the quality aspect; they just wanna do one shot and create the application.
[00:16:27] How Do We Handle Exploding PR Volume Without Sacrificing Code Review Quality?
Henry Suryawirawan: But maybe let’s put those kinds of people aside. We wanna talk more about the enterprise kind of use case. One challenge with AI now is that a lot of people are using it and can generate a massive ton of lines of code, so to speak. This can be associated with the number of pull requests that get raised as well. So if we keep the traditional way of doing code review, there will be a massive bottleneck, simply because humans cannot review all the lines of code that are produced by AI. What do you think about these challenges? Should we have some new invention or new workflow for tackling this problem?
Itamar Friedman: Yes and yes. So basically what we’re seeing — and we are processing millions of PRs a month at this point, you know, with Fortune 500 companies like Walmart, Intuit, Texas Instruments, Red Hat, et cetera — is that the size of PRs is growing and the amount of PRs is growing. So it’s a really huge mental load on developers to deal with all those lines of code. And basically, if you wanna properly harness AI, you kind of need to skip reviewing line by line. But that’s really challenging, in the sense that you care about your customers, you care about your users, and you don’t wanna have catastrophes. I just read yesterday that one of the cloud giants said they attributed a big portion of their downtime to code written by AI, et cetera. So it’s challenging, mentally, to give up on reading line by line, but you have to. And that’s why we do need to change the paradigm of how we review code. It doesn’t mean we do not review. It doesn’t mean that humans do not take part — I actually don’t think that’s going away. It’s just how it’s going to change.
And the big change is: instead of reviewing lines of code, you’re actually reviewing rules, skills, quality workflows, integrity workflows, agent traces, et cetera. You build them, you use tools to create them, to create verification loops, et cetera. The future looks like a huge dashboard such that, for a bunch of PRs that are stacked together, you have hundreds if not thousands of workflows that were run and verified — an accumulation of rules and relevant skills and quality workflows that verify those, could be thousands or tens of thousands of, lines of code — surfacing only those lines that require human attention. I think that’s where we’re going.
That won’t happen in a day. We need a very mature product and technology that takes the tribal knowledge we have in our heads and in the code base and codifies it into rules and skills and workflows. And these need to be continuously learned. So the UX/UI is gonna be totally different from the UX/UI of reviewing lines of code. It’s like a dashboard — just an example to imagine — where rules are surfaced to tech leads because the AI has newly learned that a rule would help the review, and the tech lead will say, I agree with that rule or not. And the day after, the developer sees how this rule actually holds water for new code against the existing code base and decides if that’s good enough or wants to update it, et cetera.
So it’s similar, to some extent, to how we progressed with cloud. I’m old enough to remember the days when we needed to build racks and monitor them. Today you monitor completely different things, and you don’t build anything at the hardware level. We don’t look at the cables anymore; we look at the cloud. Likewise, we won’t look at every line — not each line; I’m not saying not at all. So I think that’s a paradigm shift: turning into herding agents, rules, skills, verification loops at scale, with a very designated, independent governance and review layer.
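The rules-and-approval loop described above — tribal knowledge codified into rules, a tech lead approving or rejecting each one, and only approved rules being enforced — can be sketched as follows. The `Rule` shape, the example rules, and the `enforce` function are all invented for illustration; they are not Qodo's API.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rule:
    name: str
    rationale: str                # e.g. learned the hard way from an RCA
    check: Callable[[str], bool]  # True when the code violates the rule
    approved: bool = False        # the tech lead's decision


def enforce(rules: List[Rule], code: str) -> List[str]:
    """Return the names of approved rules the code violates.
    Unapproved (still-pending) rules are learned but not yet enforced."""
    return [r.name for r in rules if r.approved and r.check(code)]


# Two hypothetical rules extracted from past incidents; one is already
# approved by a tech lead, the other is still awaiting review.
rules = [
    Rule("no-print-logging", "Prod incident: stdout logs were dropped",
         check=lambda code: "print(" in code, approved=True),
    Rule("single-retry-policy", "RCA: nested retries caused a thundering herd",
         check=lambda code: code.count("retry(") > 1, approved=False),
]

violations = enforce(rules, "retry(retry(print('starting up')))")
```

The point of the sketch is the governance split: the system continuously learns candidate rules, but humans review the rules (once) rather than every line of code, and only sanctioned rules gate merges.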
Henry Suryawirawan: So, yeah, maybe for those people who are still kind of stuck in their old habits of doing code review and traditional software development: usually what they do is raise pull requests or merge requests, and then they have, I dunno, a pre-push check or something like that which runs to verify their code is working okay. Some are using CI/CD as well, with build steps to verify depending on the checks that they want, which kind of gives a thumbs up or thumbs down on whether their code changes are actually okay.
[00:22:11] How Do We Evolve Code Review from Simple Comments to Senior-Level AI Reviews?
Henry Suryawirawan: So maybe you mentioned a few things just now, like you mentioned about rules, skills, you know, quality workflows, maybe intent workflows, those kind of stuff. How do you actually implement that or how do you actually implement that in the new way of doing code review?
Itamar Friedman: Yeah. Amazing. So I think it’s a great framework. At the beginning, we don’t need to change much, right? For example, code generation started with autocomplete: I start to write a line and it’s completing it for me. At this point, many of us — not everyone, and in some cases, by the way, it doesn’t work at all — are doing code generation by prompting agents, supposedly without even looking at the code, right? So that’s a totally different paradigm. So in code review, imagine what you just said: the basic user interface is the GitHub pull request, or GitLab merge request, or Bitbucket, Azure DevOps pull request page, et cetera. This is where developers are surfaced with the diffs, the chunks of code that were changed, and a chat interface where they discuss with each other — whether by marking the code or outside it — the things that they think need to be changed.
And the first step is basically simply saying: use an LLM to go over those changed lines and surface comments the same way that a human reviewer would. That’s gen 1. Gen 2 is not simply running those diffs through an LLM; it’s much more comprehensive and capable of reviewing complete functions in a complete PR, but it will still push comments the same way as a human reviewer. The good thing about it is that you get that review in five minutes, or 15 or 20, depending on the tool you use. You don’t need to wait, sometimes a day or more. Then gen 3 is where these comments are very knowledgeable. They are designated to catch different aspects. There’s much more tool usage, context engineering, and AI harnessing, et cetera, such that those suggestions get to the level of a senior developer. But the user interface is supposedly the same. To get to the level of a senior developer, that’s where you do need to do some work.
For gen 1 and 2, you use tools that just broadly apply an LLM to your context. But in gen 3, if you want those agents to get to the level of the seniors, you need somehow to extract the tribal knowledge that’s within the heads of your senior developers and in the code base. And then you can either do it by yourself or — sorry for the shameless plug — you can use Qodo. Qodo is the only tool out there that automatically learns the rules and skills for you. It first learns, then surfaces the rules and skills, enables tech leads to agree or disagree or edit, and then enforces those rules and tracks how much these rules are actually being caught and used and enforced, et cetera. So you can do it by yourself, or — that’s the whole idea of a gen 3, 3.5 code review tool — it turns the same comments into comments at the level of a senior.
But that’s where we are today, and it’s not scalable enough. Because the amount of code that some companies are already getting — and other companies are gonna get in half a year — and the amount of PRs, so, so many of them, mean that maybe this step we are at today, where Qodo and other code review tools leave comments as humans do, is not enough, and we need to completely change the UI. Like we changed the UI recently with coding — we moved out of the IDE, out of even the CLI, into a whole new app, an agentic development environment — we will also move to an agentic review environment. That’s where we’re going, okay? So that’s how you should expect the evolution of your team as well. I wouldn’t start with the end. If you’re new to it and you now want to start harnessing AI for code review, start with simple tools that give you comments on your pull request. Then the most advanced tools that are out there. And then shift to agentic review outside of GitHub, GitLab, Bitbucket, et cetera.
Henry Suryawirawan: Yeah, thanks for such a good explanation of the evolution as well. I like the way you always explain it — gen 1, gen 2, gen 3, and so on — so that people can see a gradual progression in how sophisticated code review can become.
So a few things that I like. You mentioned the kind of UI of how we are doing code review now. It’s still pull-request based, where you are surfaced change requests with a diff and maybe some comments, and people are commenting with each other. I think some people might have seen coding assistant tools that can also review the PR and give comments as if they were human. This is quite popular these days, right?
And the other one is agentic review, which is probably something that is coming up. And I like that sometimes when we run static code analysis, it can take a long time because it has to build a syntax tree and things like that — it’s just heavyweight. But somehow the AI model seems to run much faster. So I think this is another aspect that will probably come: code review might not be such a heavyweight process anymore.
[00:28:51] What Will the New Agentic Code Review UI Look Like?
Henry Suryawirawan: So speaking about the UI: do you think this current UI of pull requests, and how we actually raise code review requests, will still continue, or will it move away?
Itamar Friedman: Yeah. So as we said, we’re gonna see every PR probably growing in size, because agents are gonna complete more end-to-end tasks. You don’t necessarily need to break a user story into five — just inventing a number — dev stories. Maybe you can do an end-to-end user story with five dev stories, but in one PR, because the breakdown was done all at once and implemented all at once. And then getting those comments one after another might be very exhausting and not healthy for developers’ minds. So we might want to do two things. One, make it a little bit more like a story, to focus on the business logic. The PR will surface all the facts that this code should work according to, and if there are issues, let’s go through them. So it’s much more focused on verifying that the requirement and the intent and the business logic are there, rather than verifying the lines of code.
I believe that PRs are very soon going to much more clearly include all the requirements for them, including the traces of the coding agents, because that’s part of how you verify the code is working as expected. But moreover, where I used to wake up in the morning and see five PRs, now I wake up and see 57, if not 120, PRs. So I need a new UI. Because — I don’t know if you’re familiar — many team members, before they leave work, send some agents to work. And on the weekend, why waste time? You send some agents, and come Sunday or Monday morning — that’s the worst case scenario, in a good way — you have hundreds of PRs waiting for you. And if you’re not there yet, don’t worry, you’ll get there by the end of 2026, that’s my prediction. And now we need to start reviewing — to have a UI that fits everything all together.
So for example, at Qodo — this is really coming soon, maybe even by the time we publish this — there’s a new user interface where Qodo stacks a few PRs into one review, one interface, or actually stacks PRs according to mutual issues. For example, PRs 7, 17, 27, and 47 had an issue with how our agents are doing logging, and you see those issues in one thread, across otherwise totally different PRs, and then you can ask to fix all of them at once, okay? So that’s a whole new interface for how we’re going to deal with this amount of features and code. Why do we have so many lines of code and so many PRs? Because we’re trying to complete more features quickly. So we have to verify and surface the business logic much more clearly — that’s one UI change. For example, our PRs are gonna be full of videos and images and verification flows that you can click to get a proof: you click here, you get a proof that the agent ran that flow for you, and then go back to the PR if you like, et cetera. And the second big change is that stacking — looking at things much more holistically. This is my prediction on the UI change.
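The stacking idea — one review thread per shared issue across many PRs, instead of one thread per PR — reduces to a simple grouping step. The data shapes below are hypothetical stand-ins for real agent findings, just to make the example from the conversation (a logging issue spanning PRs 7, 17, 27, and 47) concrete.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# (pr_number, issue_category) pairs as a stand-in for agent findings
findings: List[Tuple[int, str]] = [
    (7, "logging"), (17, "logging"), (27, "logging"), (47, "logging"),
    (7, "missing-tests"), (12, "sql-injection"),
]


def stack_by_issue(findings: List[Tuple[int, str]]) -> Dict[str, List[int]]:
    """Group PR numbers under each shared issue, so the reviewer sees
    one thread per issue (fixable at once) rather than one per PR."""
    threads: Dict[str, List[int]] = defaultdict(list)
    for pr, issue in findings:
        threads[issue].append(pr)
    return dict(threads)


threads = stack_by_issue(findings)
```

With this inversion, a reviewer facing 120 overnight PRs triages a handful of issue threads, and a fix-all action can target every PR in a thread together.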
Henry Suryawirawan: That sounds really cool. So I’m definitely looking forward to more such advanced cases. And especially now that LLM models understand multi-modal input — like you mentioned video, maybe audio as well — we might start seeing new patterns, new ways of doing code review. Very exciting.
[00:33:32] How Does Qodo Differentiate Itself as an AI Code Review and Governance Platform?
Henry Suryawirawan: Maybe now it’s a good time to actually talk about Qodo itself, you know, the platform that you’re building, the new way of doing code review. Maybe to give a high level: how does Qodo differ from other code review tools?
Itamar Friedman: Qodo is the code review and governance layer for code changes and code base health in general. Qodo is both better and different. Qodo is better in that its precision and recall are the best, and that’s really not easy to do, because some elements of code review are objective, but some of them are subjective. And the second thing is that we’re very unique in how we help enterprises understand their rules, their skills, their code base health, the quality of their code to begin with. In order to enforce code quality, including its subjectiveness, the first thing you need is to surface, semi-automatically if not automatically, what the standards are. What are the best practices? What are the rules? What are the skills of the organization? Most of them do not know. And even when they thought they knew, those were built for humans, not built for AI. And you need the mix of both, because that’s what creates clarity.
So Qodo is a tool that you can install in your Git platform — GitHub, GitLab, Bitbucket, even Gerrit and Azure DevOps, because we’re very enterprise focused, and that’s where enterprises live. And Qodo leaves comments that are at the senior level, because it’s really good at catching bugs, but also at finding and enforcing subjective standards and rules, et cetera. Basically, that’s the first interface of Qodo. The second interface is the dashboard, where tech leads, directors of platform, and directors of engineering continuously get insights from Qodo. Like new rules that need to be applied because something bad happened, or because Qodo noticed something before it became a mistake in production. There’s good and bad code that was pushed. A tech lead left a comment for a mid-level developer, and Qodo caught that and turned it into a rule. It’s continuously learning what the standards and rules are, and that’s a different dashboard with insights, including surfacing the current quality of the pull requests, et cetera. That’s where we’re different, to really enable code review.
And then of course there are also skills that enable connecting Qodo to Claude Code, to Cursor, et cetera. Connecting Qodo to Jira to get reviews already in the planning phase, et cetera. So a few skills and interfaces — MCPs, CLI tools, et cetera — to do shift left and shift up. Qodo’s first, flagship interface and capability is being the gateway and the gatekeeper at the point of merge in GitHub, GitLab, Bitbucket, et cetera. And then, after we establish what code quality means for an organization, Qodo also enables shift left and shift up, pushing those quality guardrails and rules, et cetera, towards writing code and planning.
[00:37:15] What Do Shift-Left and Shift-Up Mean for the Future of Code Quality?
Henry Suryawirawan: Wow, so when you mentioned shift left, I assume this can also run on the developer’s machine itself. Or maybe, I don’t know, using Claude Code these days, right, when you ask it to make the changes, can you actually do it that way? And when you mentioned shift up, what does shift up mean?
Itamar Friedman: Yeah, thank you for asking that. I was hoping you would. So shift left means exactly what you mentioned. Why wait until the end of the coding process to get those reviews from Qodo? In order to do shift left, we connect really well to Claude Code, for example, mostly, and also of course Cursor and Copilot, et cetera. They’re great. And I believe that we’ll have new ones, by the way, by the end of 2026. And Qodo basically runs on the machine. That doesn’t mean that Qodo doesn’t call a backend — just like Claude Code calls out to LLMs in the background, right, Qodo will call the Qodo backend, et cetera — but it runs on your worktree, on the local machine, if you like.
Shift up is where I think the future is very exciting, and where we’re heading. Imagine a world where we want to enable a technical product manager to write down a specification, launch a coding agent, get the code ready, push to production, and see the code in the hands of the users. In order to do that, two things are important. First, that the plan, already in the planning phase, is well reviewed — that’s shift up. And second, that the code is rigorously reviewed by all the designated agents that need to run, with a dashboard showing all the proof of work and everything that needed to be checked, designed and guarded and built by the developers, so that this system is up and running and we trust it. Shift up means the technical product manager gets an AI assistant to write proper plans with all the required details, such that it’s almost easy for a coding agent to just execute them.
If you think about it, it’s basically squeezing the V-model of software development. If you go on Wikipedia and look for the V-model of software development, what you will see is the following. The X-axis is time, the Y-axis is executability. At the top of the V you start with planning, which is not executable. Then you go down the V and you actually write code, which is executable, and it takes more time until you complete that. Then you go back up the V, where you write tests and do code review — these are either not really executable in the application or not executable at all. And what we’re saying is: squeeze the V. Basically, if you properly review the plan, then writing the code almost happens instantly right after, and you can run it. So that’s what we call shift up.
Henry Suryawirawan: Wow, this sounds like the next level of vibe coding, right? But with proper guardrails and governance and all that. So not just engineers can write the code, but maybe, like you said, technical PMs or non-technical people. As long as you have these guardrails, anyone can maybe push something to production. But yeah, this also requires the planning. You know, coming back to this trend of spec-driven development, I think some people are raving about it. You plan first instead of one-shot generating the code, right? So the plan actually helps you break down the tasks and figure out the aspects that are important before you let the AI agents execute them.
[00:41:23] How Do We Maintain Quality When Running Multiple AI Agents in Parallel?
Henry Suryawirawan: So I think this is definitely one way of doing software development. Another way that I can see rising as well: people now open up, I don’t know, 5 or 10 terminals, make changes all at the same time in parallel, and let each of them even submit a pull request. The other one is this trend of OpenClaw, where they build many small agents that run things dynamically. So how do you catch these trends? How do you ensure that the quality is actually up to the mark?
Itamar Friedman: Yeah. A small remark: you kind of compared shift up with spec-driven development, and I do agree to some extent. The small differentiation I’m making is that spec-driven development, to some extent, is still owned by the developers, written by the developers. Shift up is a level higher: the AI is doing the spec and verifying that it’s correct, and the technical person is basically prompting to get a spec. So it’s like spec-driven development, but one level up, managed by technical people who are not necessarily developers. I’m not saying that a non-technical CEO should do that — not in 2026, maybe in 2029 — but definitely technical people. It basically means that companies that right now have 20,000 developers and 20,000 technical people who are not developers maybe, by the end of the year, have 40,000 developers — or more accurately, 40,000 technical people who are able to contribute to shipping new features. It does require meaningful resources and thoughtfulness from the engineering team to put in all the guardrails that we talked about.
Using many agents — I’m coming from the machine learning world — kind of reminds me of the good old days when almost everyone trained models. Maybe I’m exaggerating: many people trained models, not just the foundation labs or boutique labs. At Qodo we fine-tuned models, et cetera. A lot of people were training models, and it was a little bit like slot machines, if you remember. You’d do five experiments; if you had a mental model and you were organized, you could go to 50. And if it wasn’t too costly, or you had the budget, you could run 50 trainings all at once during the night, or even during the day. It’s like, okay, let’s send those machine learning jobs and see what works. I think what we’re seeing now is similar. Like you’re saying: okay, imagine I need this feature. Let’s think about three different implementations — let’s go. Instead of me overthinking it for two days, let’s implement them, then test and see which performs, just as an example. So it’s not a bad idea, maybe even a good one. But that’s one paradigm that I see.
And the other one is parallelizing a few tasks that could be parallelized. And the interesting thing is that we see the rise of techniques or capabilities that looked a little bit unneeded in the past, like worktrees. How can I take the same branch, basically split it, and work on everything at once without conflicts, and just merge at the end? I think that’s another part where people are trying to do many more features all at once. And I think there’s a glass ceiling here, because the glass ceiling is the human capacity to carry the mental load: what are all the features I’m developing right now, who do I need to communicate with in the company, what is the target, do they conflict with each other, et cetera. But it’s much more than we could do in the past. If in the past we would do two projects at once — maybe I got stuck here, so I’ll do that — now maybe I’m doing 15. Or am I exaggerating? I’m doing five to seven.
The thing is, we do need to remember that right now these agents are basically subsidized. You’re paying 50 to 200 while, very probably, it costs the vendor two thousand or so. So either we’ll see a major reduction in compute costs, et cetera, which I believe will happen, and/or we’ll be a bit more efficient and thoughtful about which agents we send, because it’s not gonna cost as little as it does right now. It’s gonna meet somewhere in the middle — that’s my expectation. What I suggest is to be much more thorough about planning, and think about parallelism in the way you break down tasks. That was useful anyway, but now it creates superpowers. If you put effort into guiding your planning — whether you do it yourself or with an LLM — into work packages that can be parallelized, I think that’s the best. Because then you don’t need to run 17 different features all at once. You actually run five or three, but you do them much faster, because some of the work packages are parallelized and completed much faster. So that’s my recommendation.
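As a rough illustration of that recommendation — break a feature into work packages and run the independent ones in parallel — here is a sketch. The package names, the dependency table, and the `run_package` stand-in are invented for illustration; in practice each package would be dispatched to a coding agent:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical work packages for one feature; "deps" lists the
# packages that must finish before each one can start.
packages = {
    "schema": [],
    "api": ["schema"],
    "ui": ["schema"],
    "docs": [],
}

def run_package(name: str) -> str:
    # Stand-in for dispatching a coding agent on one work package.
    return f"{name}: done"

def run_plan(packages: dict) -> list:
    done, results = set(), []
    while len(done) < len(packages):
        # Every package whose dependencies are satisfied runs in parallel.
        ready = [n for n, deps in packages.items()
                 if n not in done and all(d in done for d in deps)]
        if not ready:
            raise ValueError("dependency cycle in the plan")
        with ThreadPoolExecutor() as pool:
            results.extend(pool.map(run_package, ready))
        done.update(ready)
    return results

results = run_plan(packages)
# First wave: "schema" and "docs" run in parallel;
# second wave: "api" and "ui", once "schema" is done.
```

The planning effort is exactly in shaping the `deps` table: the fewer cross-package dependencies, the wider each wave, and the faster the feature completes.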
And the last thing is: the more you do that, the less you can read your code yourself, so you have to put in place and use AI for code review and verification loops. This is becoming really critical at this point.
Henry Suryawirawan: Yeah, speaking about parallelizing work, I myself try my best to parallelize as much as possible, but my head just couldn’t cope. The task switching, and looking at each of the agents that comes back and asks me something — juggling those things is definitely very taxing, and you get tired quite easily. So I don’t know how to imagine having so many features being built at the same time. But I guess the way of working is probably gonna be different as well. Maybe we won’t reply to agents as frequently, or maybe we won’t look at the lines of code anymore. It’s all gonna be more high level, right?
[00:48:11] How Are Chinese AI Models Reshaping the Open-Source vs Closed-Source Race?
Henry Suryawirawan: So you mentioned something you did in the past with machine learning, right? I know that previously you were working at Alibaba, after your company got acquired. So I wanna switch gears a little bit and ask what you think about China’s advancement in AI. These days everyone knows about DeepSeek — open-source models that somehow produce similar results compared to the Western models that seem to be more expensive. So maybe from your view, having been there firsthand, what is your view on the Chinese models, and where is the trend going with them?
Itamar Friedman: Yeah. First, an anecdote. It looks like China versus the U.S. in LLMs is, supposedly, a lot about closed source versus open source. We wish it wouldn’t be like that, and sometimes we even have hope. For example, OpenAI open-sourcing one of their models. Or back in the day, when Meta put immense effort into being such a foundation lab — but it looks like it’s all gone, or they’re cooking something and we’ll get it later. There is a new promising open source effort from the U.S., which is Nemotron from NVIDIA. It’s still up and coming, with more to show. But I can tell you that at Qodo we did a lot of tests on their latest Supernova Three, which is being announced these days, and it’s actually very surprising how well it’s doing, especially considering its efficiency. So I still think that until today, anecdotally, U.S. versus China is almost closed source versus open source.
But that might change. Still, it does tell you a lot about the differences. I think a lot of the moat of OpenAI and Anthropic, et cetera, is the LLM itself. While if you think about the foundation labs in China, a lot of the moat is actually the business. I know there are some pure foundation labs there, but I think the biggest players are, for example, Alibaba, and their business is much more than that — it’s a cloud. It’s the equivalent of Gemini and Google: Gemini obviously is not the whole business. And that’s to their advantage. And it also pushes them towards going open source, et cetera.
I think China has great, great talent, and they have the capacity to think long-term. If you think about where they were in 2016 versus where they are now, you could claim you’re seeing a curve that is going to cross the other curve. But in my opinion, what’s really gonna happen is that these two curves close the gap and then run together. Because there’s also great talent in the U.S., and I think that, maybe not government-born, there are huge initiatives run by the companies. And then I think the only thing actually left for the government — although, I don’t know, maybe U.S. companies will take over that as well — is energy. I know we’re trying to improve energy efficiency. I know NVIDIA are doing their best. Look out for some competitors of NVIDIA, I don’t wanna mention names, that are up and coming in inference and training devices — they don’t call them GPUs like we do, they call them something different.
But having said that, I think energy is one of the next biggest frontiers, if not the biggest. And I don’t think you solve that at a company level, although I might be surprised. And here I think China has a very big advantage. So on one hand, on the algorithm side, on the AI side, I don’t think they’re really crossing over — I think it’s more like closing the gap. But at the point where you put AI at scale, when you need a lot of energy — and let’s assume they solve the problem of manufacturing GPUs themselves — I think there might be an advantage there, unless something is happening in the U.S. that I’m not aware of, or something changes.
I will say something related that I think is interesting. It looks like we’re on exponential growth in technology, but if you zoom in, it’s basically S-curves, such that the time from one S-curve to the next is shrinking. I’ll be a little bit romantic: the time between the invention of fire and the wheel was very long; between creating the first neural networks and “Attention Is All You Need”, the transformers, much shorter. So it’s shrinking. But when you look at it, it’s basically S-curves of innovation that burst, and there’s always a new innovation coming that’s required to continue that exponential when you zoom out. And I think energy is essential as one of those S-curves that could keep the exponential going. I would even claim that in order to let AI start creating AI — to let AI really innovate in math and science, et cetera — we probably need to go through an S-curve of energy, and only then be able to release AI to become the equivalent of millions, if not a billion, human brains running for a century. That’s why I think energy is so important.
Henry Suryawirawan: Yeah, you bring up a very interesting aspect about energy, right? Because I think many software developers, when we subscribe to these models — OpenAI, Anthropic, Gemini, and all that — we just spam them with our code base without actually thinking about the sustainability aspect, the energy required to produce the lines of code. Sometimes, even just to change a few lines, we’re lazy and ask AI to do it. So I think that’s definitely gonna be a turning point, where this might not be sustainable and we might need a new invention in the energy space, or a new way of doing things. So thanks for bringing that up.
[00:55:25] Which AI Models Excel at Code Review, and Are We Heading Toward Specialization?
Henry Suryawirawan: So speaking about models, right? There seems to be an arms race. Sometimes this model is ahead of the others; when another model releases a new version, it seems to pull ahead. Maybe from your point of view, also looking at your wide variety of customers: are there some models that perform really, really well in terms of code quality or code review? Maybe you can give us a little glimpse of the perspective from there?
Itamar Friedman: Yeah. The short answer is that right now, what we’re seeing is that different frontier models — the best models — are better at different properties. And I think it’s going to stay that way, and maybe even go towards more specialized models. This is a bet, okay? The longer answer is that for 20 years we had dozens of unique models, each one specializing in something else. Sometimes these models could not even compete with each other — they didn’t have the input-output capacity, like tabular versus visual models, for example. Today you can push all of that into one model. I think with GPT-3.5, and then 4.0, the most remarkable thing was that suddenly you had a model that was better at everything. If you remember those graphs OpenAI released: the X-axis was different professions — history, math, whatever — and the Y-axis was performance versus human competitors, practitioners, et cetera. And it was amazing. You didn’t even compare it to other models, because you already had to compare it to humans — it was better than any other model out there. And then I think we had that moment with Sonnet 3.5, right? And 3.7, where they were better at coding. That was the moment where, after two years of one model being better at everything, it started to be: maybe Anthropic is actually better at coding and OpenAI better at other things. And today I actually think that Claude models are better at some aspects of coding and GPTs are better at other aspects of coding. By the way, at this point it’s even, to some extent, subjective — which shows you how similar they are.
I have team members who prefer to write code with Claude — with Sonnet, for example — and plan with Opus, but write documentation and review with GPT. And I have team members who are the other way around. They’re saying: yeah, maybe GPT-5.4 is a little bit less creative than the Claudes, but it actually does what I’m telling it to do. Things like that. So I think we’ve entered an era of properties. Even more, you’re now seeing companies developing models for tabular data. And you’re seeing Qodo developing models specific to indexing metadata and code for the quality and governance layer, and even fine-tuning OpenAI models and fine-tuning Anthropic models, which makes them unique models for code review, et cetera. And we have more surprises that we’ll share later on about this. So I do think this is where we are heading.
Now, I have one caveat, or one prediction that I haven’t put a number on yet. There could be a breaking point where one of the labs actually changes the architecture. Right now their main paradigm is that data and compute are all you need, and you just need to scale. You hear Dario, the CEO of Anthropic, still saying that. But one of them might be secretly working on a new architecture — I don’t think we’ve said the last word on it. Thinking that the architecture that was found, quote unquote, in 2016-17, maybe a little improved later in ’18 with the transformers — the attention layers, et cetera — is all you need? I doubt it. I think we could put AI to work on this. It’s called AutoML to some extent, neural architecture search. Right now everybody has forgotten about it, but I think it will come back. And then maybe suddenly you’ll see another era where a new model — let’s call it Qodo v3 — opens a totally new ball game, and then it will last for two years or so until others catch up. Yeah, that’s my thinking about models.
And by the way, we didn’t talk about open source, like Kimi and Qwen and all that. They’re great. And maybe I’ll say this: I’d actually put my money on the innovation maybe coming from there. Why? Because when you’re number two — I’m a sailor, I’m a skipper — when you’re number two, the wind is the same wind, the architecture is the same architecture. You have to change your moves to do something different from number one. So actually, those working in open source, who are number two right now in accuracy, might actually innovate. And you see Alibaba and the Qwen models actually innovating there. Did they break the glass ceiling yet? They did not. But I actually think they might.
Henry Suryawirawan: Yeah, it seems like this arms race is very competitive, right? It keeps changing from time to time. Very tiring to keep up as well, knowing which models I should use. Sometimes I just wish everything were automatic, you know — something just decides which model is best for my case or my question.
Itamar Friedman: But you can do that. All you need to do — sorry, maybe I’m oversimplifying — is ask: what are you using these models for? For example, at Qodo, as part of our advantage of being a third party, we use a cocktail of models according to their best properties. If you use Qodo, unless you want to go under the hood and configure it differently, you’ll probably get a mix of Anthropic models, OpenAI models, and even Google models, depending on the deployment. We’re very enterprise-oriented, so we have cloud, on-prem, single-tenant, multi-tenant, et cetera. So if there are no issues from the client side — from the customer side, I mean — you’ll very probably get a mix. And then who cares? We will do the checks for you. Sometimes it’s really surprising: there’s a new model and it’s not better. I don’t wanna say which, but one of the Anthropic or OpenAI models did a big release and it wasn’t better for code review, so we just kept the older one until the next one was actually better, et cetera. So that’s just one piece of advice.
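The “cocktail of models” approach — routing each task type to whichever model currently benchmarks best for it, and only swapping when a new model actually wins — could be sketched like this. The model names, task types, and function names are placeholders, not Qodo’s real routing table:

```python
# Hypothetical routing table, updated only when internal benchmarks
# show a newer model genuinely beats the incumbent for that task.
ROUTING = {
    "code_review": "vendor-a-reviewer",
    "planning": "vendor-b-planner",
    "documentation": "vendor-c-writer",
}
DEFAULT_MODEL = "vendor-a-reviewer"

def pick_model(task_type: str) -> str:
    """Return the model that currently benchmarks best for this task."""
    return ROUTING.get(task_type, DEFAULT_MODEL)

def maybe_promote(task_type: str, candidate: str, beats_incumbent: bool) -> None:
    # Keep the older model unless the new release is actually better —
    # mirroring the "new model wasn't better, so we kept the old one" point.
    if beats_incumbent:
        ROUTING[task_type] = candidate

# A flashy new release that did not win the code-review benchmark
# leaves the routing table unchanged.
maybe_promote("code_review", "vendor-a-reviewer-next", beats_incumbent=False)
```

The point of the sketch is the indirection: callers ask for a task, not a model, so the vendor can re-benchmark and re-route without users caring which model is underneath.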
Henry Suryawirawan: Yeah. Speaking about model changes, I’ve experienced that myself. For example, I have the same skill, but applying it with a different model, even just a different version, can give you a seemingly different result. And if you layer the agentic behavior on top, the results can be even more different. So definitely, everyone’s use case is different. The insight here is that models will keep changing, and I guess you need to keep up with those changes as well.
[01:03:16] Will Software Developers Still Be Needed as AI Automates More of Engineering?
Henry Suryawirawan: So I wanna bring up one more discussion. We have seen so many advanced use cases and inventions, but in the end, developers are feeling anxiety. We’re feeling like: okay, what’s our task in the future? Are we going to be replaced? If I don’t write the code anymore, just the plan, how many developers do we really need? What’s your view on this? Maybe you have a take you can share with us?
Itamar Friedman: Do you remember that people said autonomous driving was coming in 2020, 2021, 2022, et cetera? It did come eventually, right? I mean, we’re in 2026, and it’s starting to scale, right? By 2030, which is 10 years later, I think it’ll be at large scale across the U.S. And I think Karpathy, who was on the Tesla AI team, said that he thinks we’re in the agents era, and software development automation will take about as long as autonomous driving took, which started around 2010 — it depends whether you count DARPA already announcing it in ’99 or something like that. But a lot of the big efforts started around 2010 and took until, let’s say, 2030.
I think software development will be solved by 2041. Notice how different that is from what other CEOs are saying. Three to five years ago, I heard Anthropic and OpenAI saying that by the end of next year there would be no need for developers. Then the year after: in six months, no need for developers, et cetera. Now they’re saying that for a window of time we might actually need more developers — if you heard that. They’re saying it for the first time, after three or four years of claiming there’s no need for more developers. Even Jensen from NVIDIA said two years, the AWS CEO said two years — and those two years are about to end, and I don’t think developers have disappeared. But hey, notice what I am claiming: I’m claiming that eventually it will happen. So what we need to do as developers is evolve really quickly with the profession. By 2030 it’s gonna look completely different than it did in 2025, which was quite different from 2020, right? So we do need to evolve with it. And interestingly, I think that by 2030 the knowledge we have will still need to be the same. This is a big claim, very different from what others are claiming, because I claim that in order to put in those rules, those skills, those guardrails, to monitor the coding agents and the verification agents, you need to know software development properly, okay? So not in 12 months, not in 24 months — I even think in three years you’ll still need to be a developer. But at the same time, one developer could do the work of 100 developers, right? So at some point, I think by 2030, we’re getting to maybe a tipping point where software is ubiquitous — anyone can develop simple software — and software development reaches a peak. Currently it’s actually growing, I believe, despite all the layoffs, et cetera.
But at that point, I think we’re gonna start seeing decay, and decay, and decay over time. And the software development role will evolve, and eventually — it’s really hard to predict — I think there are gonna be completely new jobs. It’s a totally different world; forget about software development. If we get to that future, it means we’re at a totally different level of automation. Software development, on one hand, is the easiest to automate because it’s formal to some extent, right? On the other hand, if you’ve completely automated software, you’ve probably automated a lot of other stuff that we do and use. And then we’ll see very new roles, and adopting and working with AI, I think, will bring you into those roles, in my opinion.
Henry Suryawirawan: Wow. Yeah, it’s always exciting to predict something, right? But definitely, the key message for people is to keep evolving, to reinvent yourself. Especially since this is not just a tooling change, right? This is a whole paradigm change. Capabilities will change, and in fact the software development profession itself will evolve by a lot. Maybe we won’t even call ourselves software developers anymore; maybe it will be something else, who knows. But yeah, thanks for sharing your perspective here. I appreciate that.
[01:08:50] 3 Tech Lead Wisdom
Henry Suryawirawan: So, Itamar, I think we are reaching the end of our conversation. But before I let you go, I have one last question. I call this the three technical leadership wisdoms. Just think of them as advice you wanna give to the listeners. Maybe you can share your version today; that would be great.
Itamar Friedman: Yeah, awesome. So one piece of advice, a very important one that I already gave: do keep learning software development. Even programming languages. Of course, architecture decisions — all those senior-level decisions that need to be made. Don’t give up; keep doing that. It’s gonna be very important for us as a community, and for you and your career, all the way to 2030. As you progress, a lot of soft skills, business skills, and product skills are gonna be really important: how you collaborate with your colleagues, with the world, with partners, with management, et cetera. So invest in that. That’s almost counter to the developer culture of the old world — you know, where’s my hoodie, I don’t wanna talk to anyone, gimme my task and let me code. Right now you need to be really good at communication and at business and product understanding, et cetera. So I definitely recommend that. And I would also recommend investing. Take part of your money and invest in those companies that you think are gonna be a big part of the future. It’s almost like insurance. If the future we’re describing comes — that in 2035, ’41, everything is automated, right? — and let’s assume you don’t have a job (I think it’s just gonna keep evolving, right?), then what’s your insurance? It’s betting on the right companies.
The thing is that these two tips go together. If you are a good developer and you learn your skills, soft skills, business and product skills, et cetera, you get better intuition. And being in the market, you understand who the promising companies are, so you can invest in them. I think many people invested in NVIDIA before it became a thing, when they realized this was gonna be something big. Being in this market, I think, gives you that opportunity as well. I'm not an investor per se, and I'm not allowed to give investment advice. So take it with a grain of salt, along with everything else I'm supposed to add to that. Well, that's my advice.
Henry Suryawirawan: Wow. Very interesting angle, about investing. This is probably the first time somebody has mentioned that, but I think you do have a point as well, right? If all this new innovation comes to fruition as predicted, the valuations these companies have will definitely be something quite massive. And if we can take the ride with them, I think that will also be something.
Itamar Friedman: Maybe one personal one that I missed: I talked a lot about developers. Let's go one level higher, to the tech lead, to the managers, et cetera. With everything that is on X and, you know, social networks, this future is coming. But knowing it's coming and being responsible for making it happen are two different things. With some subjectivity, and a shameless plug for Qodo as being one of, if not the, leader in this: you have to invest in quality workflows and in putting in the guardrails and rules, et cetera, to bring that future to you. Take a look at what Ramp is doing, what Block is doing. I mean, on the part of automating software development, not on firing people, et cetera. And notice how the tech leads are investing in it. Don't wait for, I don't know, Claude or Cursor to bring you the tools. Understand your entire software development lifecycle and bring the tools to the right bottlenecks. That's where Qodo, for example, is playing, with code review and governance. But it doesn't have to be that. Maybe you need an SRE agent, maybe you need product design agents, et cetera. That's my other advice. Quality and workflows are the moat and will be your advantage versus competitors. Don't wait for it to come to you. Invest in it.
Henry Suryawirawan: Thanks for the plug. I think that's really nice, right? For people who are in a leadership, manager, or tech lead position, you do have a responsibility as well to make this future happen. And also, yeah, to not get laid off simply because, you know, we don't need middle managers anymore. I think you guys play a big part as well.
So Itamar, thank you so much for this exciting conversation. I really learned a lot, and I especially loved hearing about the new advancements you shared. If people love this conversation as well and wanna reach out to you, connect with you online, is there a place they can find you?
Itamar Friedman: Yeah, of course. I’m on X, it’s itamar_mar. And on LinkedIn, Itamar Friedman. Generally you can just search Itamar Qodo, Q-O-D-O, and you’ll find me. By the way, Qodo stands for Quality of Development.
Henry Suryawirawan: Ah, nice! Thanks for adding that. So okay, thank you so much again for your time, Itamar. I wish you good luck in, you know, revolutionizing how we do software development, code quality, and code review. So thank you so much for spending your time today.
Itamar Friedman: Thank you, Henry. It was really a pleasure.
– End –
