#243 - CTO Coach: Why Tech Companies are Really Laying Off Developers (It’s Not Just AI) - Stephan Schmidt


“If a company lays off developers, it means they don’t have enough ideas. It’s not that we become more efficient. The bottom line is you don’t have enough ideas to feed a development engine that’s now two times, five times more productive.”

Why are tech companies really laying off developers? The uncomfortable truth has nothing to do with AI efficiency and everything to do with running out of ideas.

In this episode, Stephan Schmidt, CTO coach and author of “The Amazing CTO’s Missing Manual,” shares a perspective on AI adoption that most tech leaders aren’t talking about. Developer layoffs aren’t about AI replacing jobs; they reveal a deeper problem. Product management has become a bottleneck, creating shallow features just to keep developers busy rather than driving meaningful innovation. When AI accelerates development, this bottleneck becomes impossible to ignore.

Stephan explains why architecture must be AI-ready before teams can benefit from AI tools, how CTOs can manage unrealistic business expectations, and why junior developers actually have a massive opportunity right now. He also challenges the common belief that vibe coding will democratize software development, explaining why you need to be a strong developer to prompt effectively.

Key topics discussed:

  • Why AI layoffs reveal companies ran out of good ideas
  • Architecture must be AI-ready for real productivity gains
  • Vibe coding only works if you’re already a strong developer
  • Product engineering roles will replace traditional developers
  • MCP connections unlock AI value beyond code generation
  • Juniors have a huge advantage as AI-native engineers
  • Iterate on plans, not prompts, when using AI tools
  • CTOs can finally “rise and shine” using AI strategically

Timestamps:

  • (03:19) Transforming Into AI-First Organizations
  • (04:13) Managing AI Development Velocity Expectations
  • (08:35) AI Use Cases Beyond Code Generation
  • (12:04) Leveraging MCP for Organizational Impact
  • (15:04) Why Developers Resist AI Adoption
  • (18:35) AI, Layoffs, and the Product Bottleneck
  • (21:22) Opportunities for Junior Developers in the AI Era
  • (24:36) Critical Thinking and Moving Up the Abstraction Layer
  • (27:24) Vibe Coding: Benefits and Pitfalls
  • (31:59) Upskilling Toward Product Engineering Roles
  • (35:59) Building an Effective AI Adoption Strategy
  • (38:06) AI Adoption Strategy for Development Teams
  • (40:44) Avoiding the AI Tech Zoo
  • (44:48) Navigating Data Privacy and Security Concerns
  • (50:31) AI’s Impact on the CTO Role
  • (57:23) 3 Tech Lead Wisdom

_____

Stephan Schmidt’s Bio
Stephan Schmidt is a technology veteran whose journey began as a self-taught programmer in 1981 and evolved through roles as a serial startup founder, engineering manager and CTO of an eBay Inc. company. With deep expertise in artificial intelligence dating back to his university studies, Stephan now serves as a CTO Coach, helping technology leaders navigate the current industry disruption. His unique perspective bridges the evolution from early computing to today’s AI revolution, positioning him as an ideal guide for developers and managers facing strategic technological disruption.

Follow Stephan:

Mentions & Links:


Our Sponsor - Tech Lead Journal Shop
Are you looking for new cool swag?

Tech Lead Journal now offers merchandise that you can purchase online. Each item is printed on demand based on your preference and will be delivered safely to you anywhere in the world where shipping is available.

Check out all the cool swag available by visiting techleadjournal.dev/shop. And don't forget to show it off once it arrives.


Like this episode?
Follow @techleadjournal on LinkedIn, Twitter, Instagram.
Buy me a coffee or become a patron.


Quotes

Transforming Into AI-First Organizations

  • Their situation is basically: they get a lot of pressure from the CEO, from business, to introduce AI and to push really hard. Then they have developers who are experimenting with AI; they are using Cursor or Claude Code or Copilot, something. And then the CTO in the middle needs to somehow create a structure from the engineers, who experiment and do stuff at different speeds. Some are very fast, some are very slow. And there is the business pressure to deliver, and also managing the expectations: business expects everything to be twice as fast or 10 times as fast, which is difficult to deliver. So this is the situation they find themselves in.

Managing AI Development Velocity Expectations

  • Yes and no. Currently, I’m also doing a lot of AI stuff on my own, in private, outside of coaching. For example, I’m writing myself a coaching operating system to make my coaching and its operations smoother. I haven’t looked at the code for some time now; I just let Claude Code write it. And I feel like it’s five times faster or even more. I’m very productive on one hand. On the other hand, there are two things. First, it’s very straining. As an engineer, doing four features a day compared to one feature a day carries a high cognitive load and a lot of stress. And managing AI is kind of stressful as well.

  • It works for me because I have the right guardrails in place in my new project. And the architecture of the project is also done in a way that works for AI, because it was built with AI in mind. The struggle that CTOs have with creating five times more output is that the architecture is not in place, and the processes and the control structures are not in place either. Control is also very challenging with AI. So business needs to adapt as well.

  • Yes, they should communicate differently. I’m a big fan of framing. Framing is a very important skill for managers to have. And the frame would be: our architecture is not yet AI-ready and we need to get it AI-ready. We are not a greenfield project; we have an existing code base and we need to put things in place. So yes, we can benefit from AI, from generating code faster and more features, but we need to get into position. That’s one thing. The other thing is there is stuff that can be done already. One thing you can do is build many more prototypes. That’s something people at Google called prototype first. Don’t do requirements first or ideation first; go prototype first. This is somewhere CTOs can deliver already, if you have a pipeline of prototypes and MVPs. You can show the benefit of having five prototypes for great ideas per week. That’s something that wasn’t possible before and is now possible with AI. So you can deliver on some promises of AI to the business and keep them happy until your architecture is more in place to enable this boost in productivity.

AI Use Cases Beyond Code Generation

  • I tell all my clients: if you only think about generating code, you’re doing it wrong. I also do a lot of AI workshops, helping in the transition, mostly helping with the motivation of developers, because there is a group of developers who don’t want to use AI and a group who are very eager. So most of my workshops are around motivating people, and I show a lot of examples. You can do a lot of security scanning and bug scanning. You can use prompting to get into a new code base, to understand things. Claude is very good at finding bugs with the right prompts and the right guardrails. So a lot of stuff that was tedious for developers before is now much easier: fixing a bug, creating some documentation, creating some tests. If you do it right, Claude Code is even better at creating tests than developers are. So there are a lot of things that developers and engineering can do besides generating code.

  • On the other hand, if I connect GitHub issues and Zendesk with MCP into Claude Code, along with the code and everything, and I ask Claude Code what features I should develop, then it looks at Zendesk and customer requests and bugs and issues and all of these things, and perhaps a strategy document in Notion, and it comes up with a list of features that you should implement. Or you can say: this is my strategy, this is my roadmap; does the roadmap fulfill the strategy or not? I think that’s where business needs to adapt. They obviously adapt in marketing and sales, but they need to adapt in product development too. In product development, business is not focusing enough on AI usage for strategy, vision, features, and KPIs, like finding KPIs that tell us, if we build this, how we can find out whether it works. This kind of stuff is pre-development. There is huge potential for business to use AI in product development before it gets to a developer. Just focusing on generating code and saying, okay, we want developers to generate more code with Claude, is too narrowly focused.

Leveraging MCP for Organizational Impact

  • First of all, integrating systems with MCP is low-hanging fruit. Introducing MCP servers, or MCP bridges and API proxies, is low effort, big gains.

  • The second thing about MCP is that there are a lot of use cases my clients use. The very first one is to just connect your data warehouse with MCP to a chatbot or to Claude Code, and then show the CEO that he can ask questions about the data: what’s the best customer, what should we do, how can we upsell? There’s a huge benefit in connecting your data sources with MCP.

  • There are also other things. A very simple example: if you connect GitHub to Claude Code and a build process fails, you can just say, okay, the build process fails, what would we need to do to fix it? And if Claude Code comes up with the right plan, you say, okay, then execute the plan. So you can also do a lot of operational things with MCP and by integrating into Claude. Some of my clients also do a lot of compliance work with MCP. You connect various sources, your processes, your implementations, your code, and then you can pre-screen for compliance before your audit: am I compliant or am I not? And the AI agent will come up with a list of areas where you’re not compliant: there is a ticket that is not approved, there is this and this and this. So it really helps you stay compliant if you do this iteratively and fast, or continuously.
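
As a rough sketch of the mechanics behind these MCP integrations: MCP servers expose named tools over a JSON-RPC-style protocol, and the model calls them with structured arguments. The following is a deliberately simplified, standard-library-only illustration (not the real MCP SDK; the `top_customers` tool and its warehouse query are made up for this example) of how such a tool call is dispatched on the server side:

```python
import json

def query_top_customers(limit):
    # Hypothetical stand-in for a real data-warehouse query;
    # in practice this would run SQL against your warehouse.
    rows = [("Acme", 120_000), ("Globex", 95_000), ("Initech", 40_000)]
    return rows[:limit]

# Tool registry: an MCP server exposes named tools the model can call.
TOOLS = {
    "top_customers": lambda args: query_top_customers(args.get("limit", 10)),
}

def handle_request(raw):
    """Dispatch a JSON-RPC-style 'tools/call' request, as an MCP server would."""
    req = json.loads(raw)
    if req.get("method") != "tools/call":
        return json.dumps({"id": req.get("id"), "error": "unsupported method"})
    params = req.get("params", {})
    tool = TOOLS.get(params.get("name"))
    if tool is None:
        return json.dumps({"id": req.get("id"), "error": "unknown tool"})
    result = tool(params.get("arguments", {}))
    return json.dumps({"id": req.get("id"), "result": result})

# Example: the assistant asks for the top two customers.
request = json.dumps({
    "id": 1,
    "method": "tools/call",
    "params": {"name": "top_customers", "arguments": {"limit": 2}},
})
print(handle_request(request))
```

The real protocol adds capability negotiation, tool schemas, and transports on top of this, but the core loop is the same: the model emits a tool call, the server runs it against your systems, and the result flows back into the conversation.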

Why Developers Resist AI Adoption

  • I’ve been writing code for 40 years, so from what I see in people, I have some ideas. One idea is that the company comes to the developer and says: use more AI so you can have a more stressful job, produce five times the features, and, the kicker, same salary. In which world does this make any sense? It doesn’t. The other thing is who benefits. The product manager benefits, the CTO benefits, the CEO benefits; basically everyone benefits from the simple generate-code setup. The developer does not. The next thought is about your value proposition. If your value proposition is being a senior developer with five or ten years of React, well, if you tell AI like Claude Code to read this source and this documentation, it’s quite good at doing the hooks and components and all of that stuff you thought distinguished you. That knowledge distinguished you from other developers, and it’s being taken away. AI is taking away a lot of the things that distinguish you from other developers. So why should you like AI? It’s leveling the playing field, and if you’re at the top, that does not make a lot of sense either.

  • And the third thought is: there are coders and there are creators. When I was a kid, I wanted to write video games. So I taught myself programming in a department store, as a tool to write video games. I didn’t want to be a coder; I wanted to be a creator of video games. Over time I came to like coding: I liked the puzzles, I liked the intrinsic beauty of code. But nevertheless, I’m a creator. I want to create things with tools. If you define yourself as a creator who uses tools to create things, you flourish with AI. But if you define yourself as a coder who writes code, and you’re in the business because you love writing code, you don’t care too much about what you create; you care a lot about the beauty of the code. If you’re a coder, then I think it’s a challenge for you to accept AI as a new tool. So these are three reasons I see that developers resist AI adoption.

AI, Layoffs, and the Product Bottleneck

  • I’m a little bit doom and gloom. I think developers will lose their jobs. The reason is not AI per se, but what I’ve been saying for the last 20 years: product is a bottleneck. In a lot of companies I’ve seen, product is a bottleneck, and that manifests in product creating very shallow features. In a lot of companies, the most important thing is that product management keeps the developers developing. That’s the primary goal; product fails if developers have nothing to do when the sprint starts. So a lot of the work product does is to keep developers typing. They don’t have deep thoughts and great features; the features become shallow just to keep developers working. That’s what I see. In this way, product management has been a bottleneck, and I tell startups: you need more product managers, not more developers. You need better ideas, not faster development. But it worked, until now. With AI this is breaking down; product is becoming the bottleneck, and the number of good ideas they come up with is the bottleneck. And I say: if a company lays off developers, it means they don’t have enough ideas. Yes, we become more efficient, but the bottom line is you don’t have enough ideas to feed a development engine that’s now five times more productive. So the doom and gloom in me thinks, yeah, there will be layoffs, but the reason is not AI by definition; it’s because companies are limited on great ideas.

Opportunities for Junior Developers in the AI Era

  • There is a huge opportunity for juniors. Why? Some of my clients, and I think parts of the industry, are moving to a product engineering role, which means taking on engineering and managing AIs, but also understanding product and making all the minor feature decisions yourself, without a product manager. I wrote a long article about Amdahl’s Law and how it relates to product and AI; that comes into play as one of the drivers of the industry moving to a product engineering role.

  • I’ve been doing this for 40 or 45 years, and I’ve seen so many transformations in our industry. Not as big as AI, but like moving from machine language to C, from C to Java, perhaps from Java to Python. Those ancient Fortran developers, they don’t change. It’s not Fortran developers who jump into Java. If you look at it, it’s junior Java developers, juniors coming from university, who want to jump into Java, who want Java jobs. Whereas senior C developers had great resistance to moving to Java. So there was always an opportunity for juniors to get a head start in new technologies, because a lot of incumbents resist the change. I think there is a huge opportunity for juniors to become proficient in AI, to know about AI, to become product engineers. And then they will become senior. They will not become senior Java developers or senior Python developers; they will become senior product engineers.

Critical Thinking and Moving Up the Abstraction Layer

  • Critical thinking is important. You should think about what you’re doing and especially why you’re doing things. I’m a huge proponent of asking why you do things. That said, yes, you do lose something. I made that transition myself. I started by dabbling in BASIC, but then I did most of my programming in Assembler and partially in machine code, on 8-bit and 16-bit CPUs. When I moved to C, I forgot all of that: all of the really cool stuff you could do with registers and addressing modes, all the machine code and Assembly optimization skills in my head. I lost those, but I learned new things, and C enabled me to do things I could not do in Assembler. Same thing with Java. And with AI, we see the same thing. My decisions move one step up; it’s one meta level. I’m making meta decisions, and then meta meta decisions. I just move up the food chain. I make different decisions and have a different kind of thinking, just as when I write C code I have a different kind of thinking compared to writing Assembly code. That’s how I see it: we just move up the food chain. It’s not, as some people say, that we become dumber. I can’t remember phone numbers. Does that make me dumber? I don’t know, perhaps, but I can do so much more with my phone, and do so much more thinking, that I can live with that. I don’t remember any phone numbers except my parents’.

Vibe Coding: Benefits and Pitfalls

  • I recently wrote an article which says I no longer look at source code. But there are a lot of things that enable me to do this, because the projects are small. I’ve also been a developer for 45 years, so I probably know how to prompt things, because I know where Claude Code could derail, where there are challenges around caching and cache consistency, where to store data. A lot of the stuff that I know can go wrong, I put into my prompting. So I think I’m a good vibe coder on one hand, so it works. But I need to be a good coder to be a good vibe coder, because your prompting is very different. And I’m happy with the outcome. The application I’m currently writing is 35,000 lines of code, and I’m not looking at the code, and I’m happy. It works great, with a lot of tests.

  • The second thing about vibe coding: there is a danger if you start from scratch. I think people do not understand what an AI or an LLM is. An LLM is just a non-deterministic probability machine, and a very important input to it is your existing code. An LLM is not a senior developer who looks at bad code and says: oh, that code is bad; you want that feature? I first need to, A, refactor this, and B, build the feature in a great way. People say an AI trained on GitHub code is bad because it’s trained on bad code and all of these things. I think that’s a gross misunderstanding of how LLMs work. It’s much more useful to think of LLMs like this: they take your code as input, they take your prompt as input, and based on their training and probabilities, they create the most likely code that fulfills the prompt given your existing code base. So if your code base is bad, the added code will also be bad. It’s not like: oh, I read some really great code on GitHub, so I apply this here. No. If your code base is bad, it will create bad code. So I think it’s very important to start from a great code base with a great template. And I think it’s also important that at some point you refactor, to get the code base from here to there, and then you’re happy. But if you stay where you are, it gets worse and worse. These two things are reasons why vibe coding goes bad. If you are doing it right, it can be great for small projects with the right architecture, as I mentioned before. But it’s easy to shoot yourself in the foot if you don’t know what you’re doing. It’s really dangerous.
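
The "your code is the input" point can be made concrete with a toy model. The sketch below (a crude bigram sampler, nothing like a real LLM, with made-up token data) shows that a model conditioned on an existing code base can only reproduce patterns it saw there, so cryptic, tangled input yields cryptic, tangled output:

```python
import random
from collections import defaultdict

def train_bigrams(tokens):
    """Record which token follows which: a crude stand-in for training."""
    model = defaultdict(list)
    for a, b in zip(tokens, tokens[1:]):
        model[a].append(b)
    return model

def generate(model, start, n, seed=0):
    """Sample a continuation; it can only emit patterns seen in the input."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return out

# A "bad" code base full of cryptic names; the model can only echo its style.
bad_code = "x1 = f ( x2 ) ; x2 = f ( x1 )".split()
model = train_bigrams(bad_code)
print(" ".join(generate(model, "x1", 8)))
```

A real LLM generalizes far beyond bigrams, of course, but the conditioning argument carries over: cleaning up the code base improves what the model is likely to produce next.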

Upskilling Toward Product Engineering Roles

  • The industry, for various reasons, will move toward this product engineering role. It’s a sweet spot; it solves several problems. So I would upskill myself more on product thinking and business thinking than on algorithms. And the second thing is you need to work with AI, see its deficits, see its benefits, and learn about it. If you want to be a great prompter, you need to do a lot of prompting, just as if you want to be a great coder, you need to write a lot of code. So that’s what people should do: experiment and see what’s happening and what’s not happening. It’s kind of a transitional problem, because I really believe that today, for vibe coding, you need a good developer background to create a positive outcome. I’m not so sure about what happens in five years, because I believe code generation is a transitional technology. In the future there will be no source code, no software; there will just be AI, in some years, I don’t know when. But for the transition, taking on more product understanding is the direction I would go. In the long term, I think source code is not that important anymore. So don’t worry as a junior; it’s just a transition.

Building an Effective AI Adoption Strategy

  • First, you need a strategy. Do we want to lay off 20% of people so we make more profit, and we use AI for this? That’s not a strategy; that’s not an AI strategy. You need a vision of where you want to be and how AI helps you toward that vision, or how AI is part of that vision. From that you derive a strategy and say: okay, we need to do this and this and this, and this needs to be in place. For a strategy, you need two things: things you want to achieve and capabilities you need to have. What do I need to have in place, and what do I need to achieve? For example, you probably need to re-architect your code base, you need to change your processes, you need to add deep prototyping. So you need a lot of changes in place and say: I need to do this, this, and this, and that’s part of my strategy. Whatever you want to do with AI, you need a strategy that shows you can leverage AI to get there. That’s what I think is important: not just being reactive and saying, okay, let’s lay off 20% of people to increase profits. That’s not a strategy.

AI Adoption Strategy for Development Teams

  • Move to prototype first. Have a product funnel of prototypes, MVPs, and product-market fit, and use AI to drive all of that to a certain point before you take over the source code. That’s a very important part of an AI strategy today. Second, for the whole company, you should also have a strategy on how you want to automate, what you want to automate, and who owns it. Who owns AI? Does everyone own their own AI? Is there an AI officer who makes sure efforts are coordinated, compliant, and secure? That’s a decision to be made. Re-architecting, which I mentioned several times, is part of your strategy. Also part of the strategy: how do you get all developers to adopt this? If you want developers to become product engineers, they need to do A, B, C, and part of that is creating prototypes and using AI. That creates a natural push for everyone to adopt AI and think about how to use it, so they become product engineers. It’s not just a relabeling, which would not be the best thing; the strategy needs to explain how you get these people onto AI. Also, how do you do AI training and security? That ties into whether you have an AI officer or not. And what’s the strategy about data? Do you want to build your own models? Is there a benefit in owning your own models and training them, or not? A lot of these things need to be decided and put in place so that you can believe you will make it to your vision.

Avoiding the AI Tech Zoo

  • We have a model zoo. There is a slew of models and tools. When I talk to my clients, they have a license for Copilot because that comes with their Microsoft stack. Some developers prefer IntelliJ, so they have Junie; then some have Claude Code. That’s a challenge. And model drift is a challenge too. Models change, so something that works now does not work with the next model. Stuff that works in a certain way with Opus 3.x suddenly stops working with 4. So there is a new dimension to this: managing models is important, and all of that is part of your strategy. Personally, I’m using ChatGPT, Claude, and Gemini, and I feel they are different. What I would do is pay someone in my company to experiment with different models on coding tasks and other tasks, to find out what’s the best model for us. When I talk to my clients, they think there is no difference, but I feel there is a huge difference between models. If I argue with a model, I find ChatGPT and Claude are very good at arguing, taking my arguments, going back to sources, and arguing with me. Whereas I feel Gemini is very bad at arguing. It’s not arguing; it just says: no, no, you’re wrong, I’m right, because this is this. It’s not taking any input. So I would pay someone to have a great understanding of the tools, what’s there, and how they differ. If I think this is a competitive advantage, I would want to know what to use.

Navigating Data Privacy and Security Concerns

  • AIs are very good at broad knowledge. They know about zip codes in China; I don’t. And Chinese developers probably don’t know about zip codes in Germany. This kind of broad knowledge is something AIs are very good at. What you as a person are very good at is very deep knowledge about the problem domain of your company. And for AI companies, the most precious thing you have is this deep knowledge that they don’t have. So beside regulations, compliance, GDPR, and all of these things, I would be concerned that this deep knowledge, which only I as a company have, is getting out. Because that’s what the AI companies want: they have very broad knowledge, but they also want very deep knowledge.

  • It depends on what you believe and, as a CTO, how paranoid you are. It’s more a paranoia scale than science. If they say they use your data and your source code for training, then I probably would not use them, if I have some deep knowledge. If I have no deep knowledge, I might not care too much. Like this coaching operations thing: I don’t care if Claude Code trains on that code, because there are no really deep insights in it. So it depends on your business and on your paranoia level. Currently, I probably would not run my own models, because in the trade-off between being competitive and running my own models, I would rather be more competitive and use better models than the ones I can run on my own. But if model progress starts plateauing, I would think more about running my own models. I’m interested in running my own models; it seems no one else really is. There are no benchmarks. If I look on the internet, there are gazillions of benchmarks for games, like this hardware runs this game at 105 FPS. But I’m not seeing many benchmarks like this hardware runs this model at this size at 50 tokens per second. There are some, but not a lot. So I think there is not much interest currently in doing this. For now, I would probably use a better model rather than running my own. It also depends on your business case.

AI’s Impact on the CTO Role

  • You can write code as a CTO if you have some time. If everything works, if all the operations work, you have great business impact as an executive, you have time left, and you enjoy coding, then do write some code. I don’t have anything against coding in itself. But usually you need to do a lot of other stuff before you can come back to coding again.

  • One challenge of the CTO role is that you don’t have enough opportunities to shine. If everything is well, no one recognizes you; if things go bad, everyone is on your neck. That’s often the case because CTOs sometimes confuse their role with that of a VP of engineering. They see themselves as an execution machine, which is more like being the VP of engineering, whereas the CTO should have business impact and visibility in the board. Everyone should be able to answer the question: why do we have a CTO on the board? If you go into AI as a CTO, especially prototyping, you can show a lot of the stuff I said before: tying AI to your data warehouse, asking the AI why our best customers are our best customers, and showing that to the CEO or the top management board. Because you’re the CTO, because you have a technical understanding of what is possible, you can perhaps do things earlier than other people. That’s a way to shine. AI is a great opportunity for CTOs to shine in top management. From the time tech and product were split in two, the shining things were taken away from the CTO, and from that point on I think it was difficult for a CTO to shine. But AI brings it back, and you can shine more easily. Rise and shine is important for a CTO, for your career, and for your success. That’s where AI is really helpful.

3 Tech Lead Wisdom

  1. Be a leader.

    • The concept of leadership, sometimes people make it very complicated. I think leadership is simple and hard at the same time. It’s simple in that you think people or organizations should move somewhere, and then you get them to move there. That’s a leader. You decide we should go there; let’s go there. That’s leadership. A trap a lot of CTOs fall into is the servant leader. They identify as servant leaders, which would be fine if they didn’t concentrate on being a servant instead of being a leader. Servant leader is a trap. So be a leader. That’s important.
  2. AI is not software.

    • I strongly believe we’re going to move to a setup that favors AI without source code. As a tech leader, you should think today about what that means for you and which microservices you might already be able to move from code to AI.
  3. Iterate on the plan, not on the prompt. That’s the biggest prompting mistake people make.

    • I use Claude Code; it’s probably the same in other tools. You go into plan mode, and then you plan and plan and plan, telling Claude to recheck the documentation and recheck the code to see if the plan works. And after some iterations on the plan, you tell Claude Code to go. Whereas what I see is people iterating on the prompt. They say, do this, and the outcome is not what they expect. They say, no, don’t do this, do that. And it’s still not working. So they iterate on the execution, and that’s not working either. They just get deeper and deeper into the woods, to a point where the agent can’t find its way out anymore, and they’re lost. So iterate on the plan, but don’t iterate on the prompt.
Transcript

[00:02:03] Introduction

Henry Suryawirawan: Hello everyone. Welcome back to another new episode of the Tech Lead Journal podcast. Today I have with me a repeat guest, uh, Stephan Schmidt. Um, so if you still remember, it’s about a year, maybe a few months ago, that we talked about The Amazing CTO’s Missing Manual, from Amazing CTO, right? So Stephan today is coming back to continue the theme of talking about how to become an amazing CTO. But this time I’m sure all of us know that what is happening these days is about AI. So probably today’s topic will be predominantly AI based. Welcome back Stephan to the show. Looking forward to this exciting conversation.

Stephan Schmidt: Thank you, Henry. Looking forward to this conversation, and thanks for having me again.

Henry Suryawirawan: Right. So Stephan, about one year ago, if you still remember, we talked about the missing manual for the amazing CTO. So what made you want to come back this time? Is there something else still missing from your manual?

Stephan Schmidt: I’m rewriting parts of it and especially I’m adding a chapter on AI because that’s the elephant in the room, obviously. And that’s from my CTO coaching also one of the biggest challenges currently, how to structurally introduce AI and transform software development organizations into AI-first organizations. That’s a challenge for a lot of people.

[00:03:19] Transforming Into AI-First Organizations

Henry Suryawirawan: So maybe you can share a little bit from your customers: what are some of the main themes they are talking about, asking you about, or confused about?

Stephan Schmidt: Their situation is basically that they get a lot of pressure from the CEO, from business, to introduce AI and to push really hard. Then they have developers who are experimenting with AI, using Cursor or Claude Code or Copilot. And then the CTO in the middle needs to somehow create a structure from the engineers who experiment and do stuff at different speeds. Some are very, very fast, some are very, very slow. And there's the business pressure to deliver, and also managing the expectations. Business expects everything to be twice as fast or 10 times as fast, which is kind of difficult to deliver. So this is where they find themselves.

[00:04:13] Managing AI Development Velocity Expectations

Henry Suryawirawan: Yeah. So if we still remember from back then, one of the roles of the CTO is to be a bridge between business and tech, right? So this time, I would imagine the leaders, the executives, are seeing all the news, the crazy things a lot of AI vendors are promising. So the pressure is real: they expect development to be, I don't know, twice or 10x faster. So first of all, do you think it's true that development can actually be that much faster?

Stephan Schmidt: Yes and no, sorry for that answer. I'm currently doing a lot of AI stuff on my own, in private, not as a coach. For example, I'm writing myself a coaching operating system to make my coaching and its operations smoother. I haven't looked at the code for some time now and just let Claude Code write it. And I feel like it's five times faster or even more; I'm very, very productive. On the other hand, there are two things. First, it's very, very straining. As an engineer, doing four features a day compared to one feature a day has a high cognitive load and a lot of stress. And managing AI is itself kind of stressful.

I think it works for me because I have the right guardrails in place in my new project. And the architecture of the project is done in a way that works for AI, because it was built with AI in mind. The struggle that CTOs have in creating five times more output is that the architecture is not in place, and the processes and the control structures are not in place. Control is also a very, very challenging thing with AI. So business needs to adapt as well.

Henry Suryawirawan: Yeah, you mentioned something very interesting. When we have the process, the guardrails, maybe the rules, the architecture in place, AI can be good leverage. But I would imagine having all of that in place is a luxury for many software development teams, and people still expect the development team to speed up no matter what the situation. So do you think software development teams need to convey something different to the executives? And if so, how should they communicate it, in terms of managing the expectations?

Stephan Schmidt: So first, yes, I think they should communicate differently. I'm a big fan of framing; there is this book, Don't Think of an Elephant. I think framing is a very, very important skill for managers to have. And for me, the frame would be: our architecture is not yet AI-ready and we need to get it AI-ready. We are not a greenfield project; we have an existing code base and we need to put things in place. So yes, we can benefit from AI, from generating code faster and more features, but we need to get into position. That's one thing.

And the other thing is, there is stuff that can be done already. One thing you can do is have many more prototypes. People at Google call that, I think, prototype first. There's been discussion about this on LinkedIn and in other places. Don't do requirements first or ideation first; go prototype first. And I think this is something where CTOs can deliver already: if you have a pipeline of prototypes and MVPs, you can show the benefit of having, I don't know, five prototypes for great ideas per week. That wasn't possible before; it's now possible with AI. So you can deliver on some promises of AI to the business and keep them happy until your architecture is in place to enable this boost in productivity.

Henry Suryawirawan: Yeah, so definitely everyone needs to be trained in how to use AI, because there is such a wide spectrum of experience, for example seniors and juniors, who could all benefit differently, I would think. And especially if there's no guardrail, it becomes much more important for the seniors or the executives, the CTO, to actually put them in place.

[00:08:35] AI Use Cases Beyond Code Generation

Henry Suryawirawan: But setting coding aside for a moment, what do you think are some of the other use cases? I know people are talking to ChatGPT or Gemini, asking questions. Are there other use cases in your clients' situations where they could benefit from AI?

Stephan Schmidt: Yes. I tell all my clients: if you only think about generating code, you're doing it wrong. I also do a lot of AI workshops, helping with the transition, mostly helping with the motivation of developers, because there is a group of developers who don't want to use AI and a group who are very eager. So most of my workshops are around motivating people, and I show a lot of examples. You can do a lot of security scanning and bug scanning. You can use prompting to get into a new code base, to understand things. I think Claude is very good at finding bugs with the right prompts and the right guardrails. And some things that have been tedious for developers before are now much easier, like fixing a bug, creating some documentation, creating some tests. If you do it the right way, I think Claude Code is even better at creating tests than developers are. So there are a lot of things that developers and engineering can do besides generating code.

And on the other hand, I show a perhaps very naive example: if I connect GitHub issues and Zendesk with MCP into Claude, together with the code and everything, and I ask Claude Code what features I should develop, it looks at Zendesk, customer requests, bugs, issues, all of these things, and perhaps a strategy document in Notion, and then it comes up with a list of features you should implement. Or you can say, well, this is my strategy, this is my roadmap. Does the strategy map to the roadmap or not? Does the roadmap fulfill the strategy?

So that, I think that’s where business needs to adapt. They obviously adapt in marketing and sales, but that’s where business needs to adapt in product development. Uh, where I know something about, I dunno, about sales or marketing. But, um, I think in product development, business is not focusing enough on AI using or AI usage for strategy, vision, features and KPIs. Finding KPIs that if we build this, how can we find out if it works? This kind of stuff that’s pre-development. Um, I think there is a huge potential for business to use AI in product development before it gets to a developer. And just focusing on generating code and saying, okay, we want to use developers to generate more code with Claude. With Claude, I think that’s too narrow focused.

Henry Suryawirawan: Yeah, so I think even like for me, when you mentioned about, you know, checking your strategy, vision. Even communicating that strategy and vision itself can benefit from using AI to maybe structure your sentence, make sure it aligns with your strategies, your KPIs, like you mentioned, right? I think definitely it’s, um, up to your creativity on how you use AI. Uh, sometimes definitely it can give you some hallucination or something that is totally wrong. But it’s really up to you at the end to actually criticize or maybe accept the suggestions done by AI.

[00:12:04] Leveraging MCP for Organizational Impact

Henry Suryawirawan: You mentioned MCP, and I think many people might benefit a lot from using MCP. I read that Shopify, for example, mentioned the usage of AI everywhere within the company, and they connect everything through MCP internally. So tell us what kind of power someone can leverage in an organization if they have many MCP-enabled systems or sources they can connect to.

Stephan Schmidt: So first of all, I think integrating systems with MCP is low-hanging fruit. Introducing MCP servers, MCP bridges, or API proxies is low effort, big gains. That's the first thing.

The second about MCP is there are a lot of use cases that also my clients use. The very first one is just connect your data warehouse with MCP to a chat bot or to Claude Code, and then show the CEO that he can ask questions about the data and about what’s the best customer and what should we do, how can we upsell things. And so there’s a huge benefit of connecting your data sources with MCP.
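To make this concrete, here is a minimal sketch of the kind of read-only query tool such an MCP server could expose, with an in-memory SQLite table standing in for the data warehouse. Everything here is made up for illustration (table, columns, function name), and the actual MCP server wiring is only indicated in comments:

```python
import sqlite3

# Stand-in for the data warehouse: an in-memory SQLite database.
# The table name and columns are invented for this example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (name TEXT, revenue REAL);
    INSERT INTO customers VALUES ('Acme', 120000), ('Globex', 340000);
""")

def query_warehouse(sql: str) -> list[tuple]:
    """Read-only query tool. An MCP server would register this function
    as a tool so a chat client can answer questions like 'what's our best
    customer?' by generating SQL against the published schema."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("read-only: SELECT statements only")
    return conn.execute(sql).fetchall()

# The model would generate a query like this from a natural-language question:
best = query_warehouse(
    "SELECT name FROM customers ORDER BY revenue DESC LIMIT 1")
print(best[0][0])  # Globex
```

The guard clause matters: exposing a warehouse to a chat interface means the model's generated SQL runs unreviewed, so restricting the tool to SELECT statements (or a read-only replica) keeps the "ask the CEO's questions in natural language" use case from becoming a write path.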

But there are also other things. A very simple example: if you connect GitHub to Claude Code, perhaps not even via MCP, though you can also use MCP for it, and a build process fails, you can just say, okay, the build process fails, what would we need to do to fix it? And then if Claude Code comes up with the right plan, you say, okay, execute the plan. So you can also do a lot of operational things with MCP and by integrating into Claude.

And something some of my clients also do is a lot of compliance work with MCP. You connect various sources, your processes, your implementations, code, all of these things, and then you can pre-screen for compliance before your audit and ask, am I compliant or not? And then probably the AI agent will come up with a list of things where you're not compliant: there is a ticket that is not approved, there is this and this and this. So it really helps you stay compliant if you do this iteratively and fast, or continuously.

Henry Suryawirawan: Yeah, some good tips there. I like the one you mentioned about connecting your data warehouse, or maybe even a simple database, which lets non-technical users query it using so-called natural language. Maybe not everyone realizes this: by leveraging MCP combined with a chat interface, you can actually execute a lot of things through natural language, and the AI will somehow translate that into API calls, summarizing and making inferences and all that. So MCP is definitely a good thing, especially for non-technical users, I think. Previously it was difficult to get to the data source, analyze it, use the tools and all that, but now it's getting much, much easier.

[00:15:04] Why Developers Resist AI Adoption

Henry Suryawirawan: One thing you mentioned earlier is that part of your job is motivating developers to use AI. I'm a bit intrigued by this, because I would assume many developers would love to use AI to help them, maybe writing some code and things like that. Tell us why some developers are not motivated.

Stephan Schmidt: There are several reasons. I can't read people's minds, but I've been writing code for 40 years, so from what I see in people, I have some ideas. One idea is the company comes to the developer and says: use more AI, so you can have a more stressful job, produce five times the features, and, kicker, the same salary. That's where reticence makes sense. In which world does this make any sense? I think it doesn't. The other thing is, look at who benefits. The product manager benefits, the CTO benefits, the CEO benefits, everyone benefits from the simple generate-code setup. The developer does not. So that's one thing.

The other thinking is: if your value proposition is that you're a senior developer with five or 10 years of React (I hope React is that old, but I guess so), and Claude Code, if you tell it to read this source and this documentation, is quite good at doing the hooks and components and all of that stuff you thought distinguished you, then the knowledge that distinguished you from other developers is taken away. AI is taking away a lot of the things that distinguish you from other developers. So why should you like AI? It's leveling the playing field. And if you're at the top, that does not make a lot of sense either, you know?

And the third thought, as I see it: there are coders and there are creators. When I was a kid, I probably told you last time, I wanted to write video games. So I taught myself programming in a department store, as a tool to write video games. I didn't want to be a coder; I wanted to be a creator of video games. Over time I came to like coding, the puzzles, the intrinsic beauty of code. But nevertheless, I'm a creator. I want to create things with tools. And I think if you define yourself as a creator who's using tools to create things, you flourish with AI. But if you define yourself as a coder who writes code, and you're in the business because you love writing code, you don't care too much about what you create; you care a lot about the beauty of the code. I like the beauty of code a little bit, but mostly I'm a creator. If you're a coder, then I think it's a challenge to accept AI as a new tool. So these are three reasons I see for developers resisting AI adoption.

[00:18:35] AI, Layoffs, and the Product Bottleneck

Henry Suryawirawan: Yeah, very interesting reasons. It could also be seen as a threat to their skillset, to their job existence, I mean. We know a lot of people are saying we need fewer developers, team sizes are going to shrink, and we see a lot of layoffs. Personally, what do you think? Will a lot of developers lose their jobs? Or could something else happen with the usage of AI?

Stephan Schmidt: So I'm a little bit doom and gloom. I think developers will lose their jobs. The reason is not AI per se, but what I've been saying for the last 20 years: product is a bottleneck. In a lot of companies I've seen, product is a bottleneck, and that manifests in product creating very shallow features. In a lot of companies, the most important thing is that product management needs to keep the developers developing. That's the primary goal. Product fails if developers have nothing to do when the sprint starts. So there is a lot of work that product does just to keep developers typing. Product management is already spread thin; they don't have deep thoughts and great features. The features become shallow just to keep developers working. That's what I see. In this way, product management has been a bottleneck, and I tell startups: you need more product managers, not more developers. You need better ideas, not faster development. But it worked.

But now with AI, this is breaking down. Product is becoming the visible bottleneck; the number of good ideas they come up with is the bottleneck. And I say: if a company lays off developers, it means they don't have enough ideas. It's not just that we become more efficient. Yes, but the bottom line is you don't have enough ideas to feed a development engine that's now two times, five times more productive. So the doom and gloom in me thinks, yeah, there will be layoffs, but the reason is not AI by definition; it's because companies are limited on great ideas.

Henry Suryawirawan: Very interesting because I rarely hear this kind of perspective, right? So I think it kinda makes sense in a way, right? Because if you don’t have too much so-called innovation, product development, like good features to build, right? Obviously, if we can automate lots, like big parts of our software development, you probably won’t need a lot of developers, right?

[00:21:22] Opportunities for Junior Developers in the AI Era

Henry Suryawirawan: So definitely a very interesting perspective. And I find that a lot of developers actually think AI is taking a lot of their jobs, in terms of: now I don't need so many juniors anymore, or I don't need to hire developers for different stacks anymore. So what do you think? Is this valid, and what should juniors do if that's the case?

Stephan Schmidt: I think there is a huge opportunity for juniors. Why? Some of my clients, and I think parts of the industry, are moving to a product engineering role, which means taking on engineering, managing AI, but also understanding product and making the feature calls and all the minor decisions yourself, without a product manager. I wrote a large article about Amdahl's Law and how it relates to product and AI; that comes into play as one of the drivers of the industry moving to a product engineering role. And I've been doing this for 40 or 45 years, so I've seen many transformations in our industry.

Not as big as AI, but moving from machine language to C, from C to Java, perhaps from Java to Python, though for me it was more Perl, Python, Java, not the other way around. The ancient Fortran developers, they don't change. It's not Fortran developers who jump into Java. If you look at it, it's junior Java developers; it's juniors coming from university who demand to jump into Java, who want Java jobs. Whereas senior C developers had great resistance to moving to Java. So there was always an opportunity for juniors to jump into new technologies and get a head start, because a lot of the incumbents resist the change.

So I think there is a huge opportunity for juniors to become proficient in AI, to know about AI, to become product engineers. And then they will become senior. They will not become senior Java developers or senior Python developers, but they will become senior product engineers. So I think there is a huge opportunity for juniors.

Henry Suryawirawan: Yeah, I think that, that’s, again, very another unique insight, right? So I think the key message for juniors is don’t get discouraged, right, about this AI. They can actually become a much better AI native engineers. And in fact, in some random posts or articles that I’ve seen, right? And even personally myself, I’ve seen some of my juniors actually leverage AI differently than what I could think about. Simply because, you know, they are trained using AI in the first place, right? Like they are exposed to AI. I always think of it like the social media era, right? Sometimes, when you are not exposed from the very beginning about social media versus the youngsters who are very savvy with social media, the way that you can leverage on social media is so much different than, you know, the older people. So I think, don’t get disheartened, uh, use AI to the best of what you can do, actually. I think again, this is a great insights.

[00:24:36] Critical Thinking and Moving Up the Abstraction Layer

Henry Suryawirawan: So speaking about developers leveraging AI a lot: I personally feel that after some time, I get addicted, and every time I want to start work, I just leverage AI first. My worry is that a lot of people fall into this same trap, and the portion of thinking becomes less and less; we outsource a lot of thinking and decisions to AI. So do you think this is a valid concern? Will there be any issue with our critical thinking going forward?

Stephan Schmidt: I think, on one hand, critical thinking is important. You should think about what you're doing and especially why you're doing things; I'm a huge proponent of asking why you do things. That said, yes, you're losing something. As I just mentioned, I made a transition. I started dabbling a little in BASIC, but then I did most of my programming in Assembler and partially in machine code, on 8-bit and 16-bit CPUs. And when I moved to C, I forgot all of that. All of the really, really cool stuff you could do with registers and addressing modes, all the machine code and Assembly optimization skills in my head, I lost. But I learned new things, and C enabled me to do things I could not do in Assembler. Same thing with Java.

And I think with AI, we see the same thing. My decisions move one step up. I do not make the decision, should I create a caching layer here, or should I do this here or there. I just move one level up, one meta level; I'm making meta decisions, and then meta meta decisions. So I just move up the food chain. I make different decisions and have a different kind of thinking, just as when I write C code, I have a different kind of thinking compared to writing Assembly code.

So that’s how I see it. We just move up the food chain. It’s not like some people say they become dumber. Yeah, I, yes. I mean, I can’t remember, um, phone numbers. Does it make me dumber? I don’t know, perhaps, but I can do so much more with my phone and do so much more thinking and I think it enabled me a lot so I can live with that. I don’t remember any phone numbers except that of my parents. So that’s, uh, yeah.

Henry Suryawirawan: Yeah. I also don’t remember a lot of phone numbers, uh, a lot of birthdays, a lot of mathematics operations. So I think, uh, in one sense, yeah, you could say a little bit dumber, but at the other end, like what you mentioned, there are a lot of possibilities, how you can leverage the technology.

[00:27:24] Vibe Coding: Benefits and Pitfalls

Henry Suryawirawan: I like that you mentioned we are moving one level up, to a higher level of abstraction: thinking not just about the code itself, the programming language, but maybe about design, architecture, features, outcomes, tests, and all that. But one counterargument is that some people are into this vibe coding: you just type your thing and AI creates the code for you, without you even looking at the code itself. So what do you think about vibe coding? Have you tried it? Is it a feasible way of building software?

Stephan Schmidt: Some of my insights on vibe coding, some might be shallow and some deeper. I recently wrote an article which says I no longer look at source code. There are a lot of things that enable me to do this. On one hand, the projects I'm doing are small. But I've also been a developer for 45 years, so I probably know how to prompt things, because I know where Claude Code could derail, where there are challenges around caching, cache consistency, where to store data. A lot of the stuff that I know can go wrong, I put in my prompting. So I think I'm a good vibe coder, and it works. But I need to be a good coder to be a good vibe coder, because your prompting is very different. And I'm happy with the outcome. The application I'm currently writing is 35,000 lines of code, and I'm not looking at the code and I'm happy. It works great, with lots of tests and stuff. So that's one thing.

The second thing about vibe coding: I think there is a danger if you start from scratch. People do not understand what an AI is or what an LLM is. AI is this big thing; LLMs and neural networks are parts of it. An LLM is just a non-deterministic probability machine, and a very, very important input to it is your existing code. An LLM is not a senior developer who looks at bad code and says: oh, that code is bad, you want that feature? I need to first, A, refactor this, and B, build the feature in a great way. People talk about how an AI trained on GitHub code is bad because it's trained on bad code, and all of these things. I think that's a gross misunderstanding of how LLMs work. It's much more useful to think of LLMs like this: they take your code as input, they take your prompt as input, and based on their training and probabilities, they create the most likely code that fulfills the prompt and fits your existing code base. So if your code base is bad, the added code will also be bad. It's not, oh, I read some really great code on GitHub, so I apply it here. No. If your code base is bad, it will create bad code.
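This mental model, the LLM as a probability machine that imitates its input rather than judging it, can be caricatured with a toy bigram sampler. This is purely illustrative (real LLMs condition on vastly richer context), and all the token names are invented:

```python
import random
from collections import defaultdict

# Toy "probability machine": a bigram model that only knows which token
# tends to follow which. It has no notion of good or bad code, only likely.
def train(corpus: list[str]) -> dict:
    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)
    return follows

def continue_from(model: dict, token: str, n: int, seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        choices = model.get(token)
        if not choices:
            break
        token = rng.choice(choices)
        out.append(token)
    return out

# Two "codebases" with different conventions (invented token sequences)...
clean = "open_file read_file close_file open_file read_file close_file".split()
messy = "open_file open_file read_file read_file open_file read_file".split()

# ...and the sampler extends each in its own style: it imitates the context,
# it does not refactor it.
print(continue_from(train(clean), "open_file", 2))  # ['read_file', 'close_file']
print(continue_from(train(messy), "open_file", 2))
```

The clean corpus always closes what it opens, so the continuation is disciplined; the messy corpus sometimes forgets, and the sampler happily reproduces that. The "quality" of the output is inherited from the context, which is the point Stephan makes about starting from a good code base.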

So I think it’s very, very important to start from a great code base with a great template on one hand. And I think it’s also important that at some point you need to refactor it. From getting it from here to here, and then you’re happy. But if you stay here, it gets worse and worse and worse. And I think this is something, these two things are reasons why vibe coding goes bad, you know? So, um, if you are doing it right, I think it can be great for small right architecture that I mentioned before, uh, it can work. But it’s easy to shoot yourself in the foot if you don’t know what you’re doing. It’s really dangerous, I think.

Henry Suryawirawan: Yeah. So that’s why I think we hear and see a lot of different outcomes when people vibe coding, right? So I think definitely the few key things that you mentioned. First, you need to be a good coder, right? You really need to know how a good design, good coding looks like because, otherwise when AI suggests you a lot of code suddenly in one go, right? Uh, you would be able to understand whether it’s going in the right way or in the wrong direction, and you kind of like tweak along the way, right? And the danger here is like, let’s say many, many people sell this promise that now non-technical people can actually write code simply by vibe coding. And a lot of such tools are built, you know, like Lovable, Bolt and things like that. Uh, but I think for prototype, simple things that you mentioned, uh, maybe it would work. But to make it something that is more robust, enterprise ready, secure, and all that, probably needs a lot more engineering fundamentals, right?

Stephan Schmidt: Yeah.

[00:31:59] Upskilling Toward Product Engineering Roles

Henry Suryawirawan: So definitely, I agree with your approach. Speaking of which, you mentioned you have been around for 45 years; a lot of people don't have this luxury in their career. So with your CTO coach hat on, what would be good advice for coders with maybe five or 10 years of experience? How should they approach AI? What should they learn or upskill in to leverage AI the best way?

Stephan Schmidt: I think the industry, for various reasons, will move toward this product engineering role. It's just a sweet spot; it solves several problems. So I would upskill myself more on product, product thinking, and business thinking than on algorithms. That's what I would look into. And the second thing is you need to work with AI, see its deficits and its benefits, and learn about it. If you want to be a great prompter, you need to do a lot of prompting. If you want to be a great coder, you need to write a lot of code. So that's how I see it and what people should do: just experiment and see what's happening and what's not.

It's kind of a transitional problem, because I really believe that today, as I said, for vibe coding you need a good developer background to create a positive outcome. I'm not so sure about what happens in five years. I'm giving talks to companies and universities; one of them is called Beyond Software, or AI is Not Software. One example in it is playing tic-tac-toe, because I believe that code generation is a transitional technology. In the future there will be no code, no software; there will just be AI, in some years, I don't know when.

And some months ago, I wanted to play tic-tac-toe with ChatGPT, as an example: there is no tic-tac-toe game built into ChatGPT, so what is ChatGPT able to do? And I played tic-tac-toe and it didn't work. It made mistakes and needed a lot of handholding. Because I have a conference talk coming up this week, I tried the same tic-tac-toe example with Claude, and there was no problem. I just said, let's play a game of tic-tac-toe, and Claude played a game with me and made no mistakes. So I've seen some progress on that. A very, very low difficulty level, obviously; tic-tac-toe is a very simple thing. But I've seen progress from can't play to: I don't need to write a tic-tac-toe game, I can just go to the AI and say, let's play tic-tac-toe.

And because I do a lot of these experiments, I did a Minesweeper experiment some time ago where I vibe coded Minesweeper from a single prompt — one-shot prompting — and it worked. And now I thought, okay, let’s play Minesweeper directly with Claude Code or ChatGPT. That did not work yet; as of yesterday, that’s not there. But I believe we are moving to a no-source-code setup in the future. So — very long talk, sorry — for the transition, taking on more product understanding is the direction I would go. In the long term, I think source code is not that important anymore. So don’t worry as a junior. It’s just a transition.

Henry Suryawirawan: Yeah, so definitely there could be a possibility where we write software by leveraging natural language, just like vibe coding, right? Maybe in the future the AI could be much smarter in terms of writing a better design for what we want it to be. So definitely looking forward to that future.

[00:35:59] Building an Effective AI Adoption Strategy

Henry Suryawirawan: And I think a lot of organizations these days definitely want to roll out AI, right? So maybe from your experience as well, what do you think would be a good strategy for organizations to start rolling out AI, or to make sure they get the most effective benefits from using AI?

Stephan Schmidt: First, I think you mentioned it: you need strategy. And “do we want to lay off 20% of people so we make more profit, and we use AI for this?” — that’s not a strategy. That’s not an AI strategy. I think you need to have a vision of where you want to be and how AI helps you in that vision, or how AI is part of that vision. And then from that, derive a strategy and say, okay, we need to do this and this and this, and this needs to be in place. My example for strategy — which shows I have no clue about Mount Everest, but my example is about Mount Everest — is that for a strategy you need two things: things you want to achieve and things you have as capabilities.

For example, if you want to climb Mount Everest, there is Base Camp One and Base Camp Two and the North Col, the Ridge, and then the Summit. Those are the things you need to achieve. And then there are things you need to have, like an ice pick and a tent and an oxygen mask and the gear you need. That’s what a strategy is for me, and that maps to AI: what are the things I need to have in place, and what are the things I need to achieve? For example, you probably need to re-architect your code base, you need to change your processes, you need to add deep prototyping. So you need a lot of changes in place and say, I need to do this and this and this, and that’s part of my strategy. Whatever you want to do — climb Mount Everest, cross the world, whatever you want to do with AI — you need to have a strategy that shows you can leverage AI to get there. That’s what I think is important. Not just being reactive and saying, okay, let’s lay off 20% of people so we increase profits — that’s not a strategy.

[00:38:06] AI Adoption Strategy for Development Teams

Henry Suryawirawan: Right. I still remember back then when we discussed that part of the role of the CTO is to come up with a strategy, right? And you mentioned that a good strategy enables people to make better and easier decisions. So I think the same thing applies here: if you can lay out a good AI strategy, you can definitely help people make good decisions on how to leverage and benefit from AI.

Maybe you can give some examples, if you have something in mind. What are typical good AI strategies that people can adopt, especially for software development teams?

Stephan Schmidt: A lot of the stuff that I mentioned. I think you need to move to prototype-first. Have a product funnel of prototypes, MVPs, PMF, and use AI to drive all of that to a certain point before you take over the source code. I think that’s a very important part of an AI strategy today. Second — for the whole company, that’s not CTO but mostly CIO stuff — you should also have a strategy on how you want to automate, what you want to automate, and who owns it. Who owns AI? Is it everyone owning their own AI, or is there an AI officer who makes sure efforts are coordinated and compliant and secure? That’s part of the strategy. That’s a decision to be made, I think.

Re-architecture, which I mentioned several times, is also part of your strategy. Also part of the strategy: how do you get all developers to adopt this? Two of my clients just made the decision to create a new position, which is product engineer. And if you want to become a product engineer, you need to do A, B, C — part of which is creating prototypes and doing AI. So that creates a natural push for everyone to adopt AI and to think about how to use it, so they become product engineers. It’s not just a relabeling — relabeling the titles is something I might have done, but it would not be the best thing. Making the title something to earn is greater. The strategy needs to explain how you get these people onto AI.

But also: how do you do AI training and security? That ties into whether you have an AI officer or not. And what’s the strategy about data? I think part of the strategy could also be: do you want to build your own models? Is there a benefit in owning your own models and training them, or not? Are you a prompting company? Are you using prompts for 2025 and 2026, and then moving to your own model? I think a lot of these things need to be decided and put in place so that you can believe you will make it to your vision.

[00:40:44] Avoiding the AI Tech Zoo

Henry Suryawirawan: Yeah, a few things there are very interesting. I want to pick up the first thing first, because these days there are so many AI tools available, and even models, right? There’s Claude Code, there’s Codex from OpenAI, there’s Gemini, open models as well, Chinese models. And I remember back then we talked about the tech zoo — having a lot of tech in your stack. Do you think the same thing will happen, an AI tech zoo? And what would be your strategy? Because these tools are just going to keep exploding, right?

Stephan Schmidt: Yeah, totally with you. I didn’t make that connection — I will use that in the future. Thank you for coming up with that.

Yeah, I think we have a model zoo. There is a slew of models and tools. When I talk to my clients, they have a license for Copilot because that comes with their Microsoft stack. Some developers prefer IntelliJ, so they have Junie. Then some have Claude Code. So yeah, that’s a challenge. And model drift is also a challenge: models change, so something that works now does not work with the next model. Stuff that works a certain way with Opus 3.x suddenly stops working with 4. So there is a new dimension to this. Managing models, I think, is important, and all of that is part of your strategy.

What I would do — because personally I’m using ChatGPT, Claude, and Gemini, and I feel they are different. I still think Claude is the best coding agent — personal opinion, I’m not a scientist. Sonnet 4.5 is really great, I think. But I also use Gemini because it’s part of my Pixel phone; it integrates great with Pixel. And they are different. What I would do is really pay someone in my company to experiment with different models on coding tasks and on other tasks, and find out what’s the best model for us. Because when I talk to my clients, they think there is no difference — that it makes no difference whether we use Copilot or Claude or something else. But I feel there is a huge difference between models.

And a very minor thing, which has nothing to do with software development: if I argue with a model, I found ChatGPT and Claude are very good at arguing — taking my arguments, going back to sources, and arguing with me. Whereas I feel Gemini is very bad at arguing. It’s not arguing; it just says, no, no, you’re wrong, I’m right, because this is this. I bring another argument, and it says, yeah, I know why you think this, but you’re wrong. So it’s not taking any input. That’s how I use it — as a non-native speaker, I do a lot of English with Gemini — and I feel Gemini is bad at arguing and thinking, whereas Claude and ChatGPT are much better. So I would pay someone to have a great understanding of the tools: what’s there, how they differ. If I think this is a competitive advantage, I would want to know what to use. And quite frankly, my clients’ belief that all the tools and all the models are the same — I think that’s not true.

Henry Suryawirawan: Yeah, I personally think it’s not true as well. I think we can tell the difference, especially if you layer agentic AI on top — that becomes much more different. If you use Cursor versus Windsurf or Junie and all that, the difference could be really huge, because some can do more plan-based work; they can re-verify the task before they hand it over to you. So definitely there are a lot of things that could change, and even an upgrade of the model itself could introduce new behavior. Some may consume more credits and some less. So these things are forever changing — don’t forget to always keep researching on that.

[00:44:48] Navigating Data Privacy and Security Concerns

Henry Suryawirawan: So speaking about data privacy and security, I think this is one of the major concerns, especially for CISOs and the InfoSec people. As the CTO coach, what would you advise people so they don’t neglect this part, data privacy and security?

Stephan Schmidt: I have a mental model, which might be correct or incorrect, but at least it makes sense to me. AIs are very, very good at broad knowledge. They know about zip codes in China — I don’t, and Chinese developers probably don’t know about zip codes in Germany. This kind of broad knowledge is something AIs are very good at. It’s like those threads on Hacker News — I’m an avid Hacker News reader — about misconceptions developers have about money, about addresses. That’s where people are very bad: broad knowledge. This is where AIs are very good. Where you as a person in a company are very, very good is deep knowledge about your company’s problem domain. And with the AI companies, the most precious thing you have is this deep knowledge that they don’t have. TSMC, perhaps, has very special knowledge about how to produce chips on the newest node; other companies are struggling, other fabs are behind on that. That’s very deep knowledge only they have. But you as a startup might also have very deep knowledge about something, because you’re doing a lot of research. And beside regulations, compliance, GDPR, and all of these things, I would be concerned that this deep knowledge that only I as a company have is getting out. Because the AI companies have very broad knowledge, but they also want to have very deep knowledge. That is something I would be concerned about.

Henry Suryawirawan: Yeah, I think that’s a valid mental model, I would say. But one of the challenges for people is that these tools are so easy to access, and sometimes we inadvertently leak out something that is not supposed to get out. Do you think that just by subscribing to an enterprise or business plan, where they say “we have zero retention of your data,” you should still be worried, even with such an agreement or clause? Or if not, should everyone now start thinking about running models in-house?

Stephan Schmidt: It depends on what you believe as a CTO, and how paranoid you are. I think it’s more a paranoia scale than science. But what I would not do: if they say “we use your data and your source code for training,” then probably I would not use them — at least if I have some deep knowledge. If I have no deep knowledge, I might not care too much. Like this coaching operations thing I write — I don’t care if Claude Code is training on that code, because there are no really deep insights in it. So it depends on your business and on your paranoia level. But “we use your data to train our model” is probably what I would not accept as a company. And then it depends again on your paranoia: either you trust the company not to train on your data if they tell you so, or you say, I don’t trust them, I run my own models.

Currently, I probably would not run my own models, because in the trade-off between being competitive and running my own models, I would rather be more competitive and use better models than the ones I can run on my own. But I’m not sure where this is going. Before Sonnet 4.5 — like with Opus 4.1 — I thought the models had kind of plateaued. I think Sonnet 4.5 is a big jump forward again for coding, so it’s not plateauing. But if it starts plateauing, I would think more about using my own models.

But I am interested in running my own models. It seems no one else really is. There are no benchmarks. If I look on the internet, there are a gazillion benchmarks for games — this hardware runs these games at 105 FPS. But I’m not seeing a lot of benchmarks saying this hardware runs this model at this size at 50 tokens per second. There are some benchmarks here and there, but I don’t see a lot of them. So I think there is not a lot of interest currently in doing this. For now, I would probably use a better model rather than running stuff on my own. But I have some clients — and talk to people at conferences — who work in the defense industry, and they don’t have that option. So it also depends on your business case, perhaps.

Henry Suryawirawan: Yeah, so I think we have to understand our own trade-offs, our own situations and contexts. And what you mentioned about benchmarks — probably not a lot are available because running the benchmark itself could be expensive, right? You have to run a lot of context and tokens in order to produce a meaningful result. So we’re probably looking forward to the day when we have all this available.
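As a rough illustration of the tokens-per-second benchmark Stephan wishes existed, here is a minimal sketch that times one completion against an OpenAI-compatible local server (such as llama.cpp’s server or Ollama). The endpoint URL, model name, and the `usage.completion_tokens` response field are assumptions about the server’s API, not something from the conversation:

```python
import json
import time
import urllib.request


def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
    """Throughput metric: generated tokens divided by wall-clock seconds."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return n_tokens / elapsed_s


def benchmark_local_model(prompt: str,
                          url: str = "http://localhost:8080/v1/completions",
                          model: str = "llama-3-8b") -> float:
    """Time a single completion request against a local OpenAI-compatible
    server. URL, model name, and response shape are assumptions -- adjust
    for your setup."""
    body = json.dumps({"model": model,
                       "prompt": prompt,
                       "max_tokens": 256}).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    elapsed = time.perf_counter() - start
    # OpenAI-style responses report generated tokens under "usage"
    n_generated = data["usage"]["completion_tokens"]
    return tokens_per_second(n_generated, elapsed)
```

Averaging over many prompts and context lengths would be needed for the meaningful results Henry mentions; a single request only gives a noisy point estimate.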

[00:50:31] AI’s Impact on the CTO Role

Henry Suryawirawan: I want to go to the next thing, which ties back to the role of the CTO itself. We can definitely see the impact of AI on software development teams and organizations, but I want to understand from you: what is the impact of AI on a CTO?

And again, I remember back then when we had this conversation, you said that AI has the potential to make CTOs feel more creative and innovative. Now, maybe one year after that conversation, what do you think is the impact of AI on a CTO? Is it something a CTO should leverage a lot more? And should the CTO be hands-on coding again — which is something you didn’t actually agree with back then?

Stephan Schmidt: You can write code as a CTO if you have some time. If everything works — if all the operations work and you have great business impact as an executive and you have time left and you enjoy coding — do write some code. So I don’t have anything against coding in itself. But usually, I think you need to do a lot of other stuff before you can come back to coding again.

Yes, I’ve discussed this with several clients. One challenge of the CTO role is that you don’t have enough opportunities to shine. If everything goes well, no one recognizes you; if things go bad, everyone is on your neck. That’s how I see the CTO role. And that’s often the case because sometimes CTOs confuse their role with that of a VP of engineering. They see themselves as an execution machine, which is more like being the VP of engineering, whereas the CTO should have business impact and have visibility on the board. Everyone should be able to answer the question: why do we have a CTO on the board, or on the management board?

And I think AI is something where, if you go into it — if you do prototyping especially, as a CTO, or the stuff I said before, like tying AI to your data warehouse and then asking the AI why our best customers are our best customers — you can show that to the CEO or to the top management board. Because you’re the CTO, because you have a technical understanding of what is possible, you can perhaps do things earlier than other people. And that’s a way to shine.

I think AI is a great opportunity for CTOs to shine in top management. That was difficult from the time tech and product were split in two and the shiny things were taken away from the CTO; from that point on, I think it was difficult for a CTO to shine. But AI brings it back, and it’s easier to shine. And I think “rise and shine” is important for a CTO — for your career, for your success, for all of these things. So that’s where AI is really helpful.

Henry Suryawirawan: Yeah, I think that’s a great point. Because naturally, if you’re a good CTO, you have a good understanding of the business — the business impact — and also the potential of tech. And with that creative thinking, where you figure out which areas AI can help you in and tie it back to business impact, you probably have a good chance to actually rise and shine within your organization and show something that would never have been possible before. AI simply opens up a lot of room for innovation and for things that can now be easily done. Before, we would worry about how to execute that, because you’d need to develop code and you’d need more people to help. So I think that’s a very good point.

Are there other ways you think a CTO could leverage AI day to day, maybe from your conversations with your clients or what you’ve seen in the industry so far?

Stephan Schmidt: Otherwise, I haven’t seen too much. We would need to talk again in a year, I think. And to be clear, my clients have from about five to a hundred developers — that’s my kind of expertise and where I’m working. I don’t have insights into companies with 500 or 5,000 developers; those have different setups, so there might be something true there that I don’t know. But in this area, companies are just struggling to adopt AI in a meaningful way, with a lot of chaotic things going on — demand from developers, faster, slower, the business. That’s where most of my clients currently are with AI.

Henry Suryawirawan: Yeah, I think that sums up the situation for everyone very well: scrambling to adopt AI, but at the same time trying to rationalize the benefits of using AI. Some people talk positively about AI, some negatively. We are in this midst of chaos, I would say, trying to rationalize AI.

So we have talked a lot about AI. Are there any other things you think we should also discuss before we move on to the last questions that I have?

Stephan Schmidt: I mean, there is all this other CTO stuff. A lot of what has been there is not going away; AI is getting layered on top of it and makes some things more difficult. Most of my clients also struggle with proper organization and roles and accountability and all of these topics, and I feel AI is just making it more complicated — sorry, there is sun now reflecting off a window on the opposite house and shining directly in my face. So it makes it more difficult to decide what the role should look like and what the organization should look like. But it really is a lot about AI and strategy. I don’t have anything more interesting than what we discussed last time, I think, besides AI.

Henry Suryawirawan: Yeah, definitely what you mentioned is valid. For people who want to become a good CTO, I would still highly suggest reading your book, The Amazing CTO’s Missing Manual, because a lot of the stuff there is still relevant — if not more relevant. With AI we can amplify certain things and probably get more done, but there are other aspects that AI currently cannot cover: things like what you mentioned — accountability, providing good strategy, not being swamped by day-to-day tasks, always thinking strategically, and helping people grow. I think those are still a big part of the responsibility of the CTO.

[00:57:23] 3 Tech Lead Wisdom

Henry Suryawirawan: So Stephan, it’s been a great conversation. Before we wrap up, I would like to ask you the same question I asked last time, which I call the three technical leadership wisdoms. After this conversation, do you have a new version of the wisdom that you would like to convey this time?

Stephan Schmidt: Yeah, I have three. Two are about AI and one is not. The first one is: be a leader. Just today or yesterday, I read something on LinkedIn about how people sometimes make the concept of leadership very complicated. I think leadership is simple and hard at the same time — simple in the sense that you think people or organizations should move somewhere, and then you get people to move there. You decide “we should go there,” and then: let’s go there. That’s leadership. And a trap that a lot of CTOs fall into is the servant leader idea. They identify as servant leaders, which would be fine if they didn’t concentrate on being a servant instead of being a leader. So that’s a trap — I think servant leader is a trap. Be a leader. That’s important.

The second thing is: AI is not software. I strongly believe we’re going to move to a setup that favors AI without source code. And as a tech leader, you should think today about what that means for you and which microservices you might already be able to replace, moving from code to AI.

The third one is not a leadership thing per se, but something I think you should push with every developer, because I think the biggest prompting mistake people make is this: you should iterate on the plan, not iterate on the prompt. I use Claude Code — it’s probably the same in other tools. You can go into plan mode, and then you plan and plan and plan, and you tell Claude to also recheck the documentation and recheck the code to verify the plan works. Then, after some iterations on the plan, you tell Claude Code to go. Whereas what I see is people iterating on the prompt. They say “do this,” and the outcome is not what they expect. They say, “no, don’t do this, do that,” and it’s still not working, and they say “do this” again. They iterate on the execution, and that’s not working. They just get deeper and deeper into the woods, to a point where the agent can’t find its way anymore and they’re lost in the woods. So iterate on the plan, but don’t iterate on the prompt. That’s the third one.
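The workflow Stephan describes — refine the plan until it holds up, then execute once — can be sketched conceptually like this. All the names here (`iterate_on_plan`, `review`) are illustrative, not a real Claude Code API:

```python
def iterate_on_plan(draft_plan, review, max_rounds=5):
    """Refine a plan until a review step approves it, then hand it off.

    draft_plan: list of plan steps (strings).
    review:     callable taking a plan and returning (approved, revised_plan),
                e.g. a pass that rechecks the plan against docs and code.
    Returns the approved plan, ready for a single execution run.
    """
    plan = list(draft_plan)
    for _ in range(max_rounds):
        approved, plan = review(plan)
        if approved:
            # Only now do you tell the agent to execute -- once.
            return plan
    # If the plan never converges, rethink the task itself
    # instead of re-prompting the half-finished execution.
    raise RuntimeError("plan did not converge; revisit the task framing")
```

The anti-pattern — iterating on the prompt — would instead run the agent immediately and keep patching its output with follow-up prompts, which is exactly the “deeper into the woods” loop described above.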

Henry Suryawirawan: Nice tips! For people who still fiddle with the prompt — including myself sometimes, just trying to make the prompt better — yeah, making the plan better sounds better, because you iterate on the small things: you break down what you want to do into chunks, make sure the plan for those iterations looks good, and then let the AI try to solve them one by one. So definitely pro tips for using AI.

So Stephan, if people want to continue this conversation, ask you more things, or expect an updated version of your book, where can they find you online?

Stephan Schmidt: They should search for me on LinkedIn. They can find me there, connect, and ask me whatever they want, or share their opinion. I’m very open. So the primary method would be LinkedIn. The other thing is I have a website called amazingcto.com — people can also go there.

Henry Suryawirawan: Right. Thank you again for your time, Stephan. I’m very excited to have had this chance to learn about the possibilities of using AI — especially for those CTOs out there who are still struggling with how to adopt AI successfully and how to think about using AI much more effectively. I hope this conversation inspires you. Thanks again, Stephan.

Stephan Schmidt: Thanks. Thanks, Henry, for having me. And I hope I can be back in a year.

Henry Suryawirawan: Looking forward for that.

– End –