#251 - Design the System, Not the Hero: Building Trust in the AI Era - Andrew Stevens
“Scale is about building a system that makes good decisions without you. If success depends on your personal heroics, it will never scale.”
In a world where AI can build your MVP overnight, what actually gives you a lasting competitive edge? Andrew Stevens argues it’s not the software — it’s the data, the trust, and the systems you build around them.
In this episode, Andrew Stevens, CTO of Sakura Sky and a technology leader with 30+ years of experience building, scaling, and selling companies, shares hard-won lessons from his journey across startups, enterprises, and AI ventures. He explains why product-market fit matters more than shipping fast, why data outlasts software as a competitive moat, and how leaders must design systems that don’t depend on their own heroics. Andrew also shares how a near-fatal accident reshaped his thinking on resilience, delegation, and what it truly means to build something that scales. From hiring for attitude over technical skill to building AI governance that accelerates rather than blocks innovation, this conversation is packed with practical wisdom for anyone leading in the AI era.
Key topics discussed:
- Why data — not software — is your real moat in the AI era
- What breaks when a startup scales past 10–100 people
- How to make decision rights explicit to move faster
- Design the system, not the hero: building beyond you
- Hiring for resilience and attitude over technical skill
- How governance can speed up AI adoption, not slow it down
- What trustworthy AI agents actually require
Timestamps:
- (02:45) What Breaks When You Scale a Startup From Zero to 100 People?
- (08:44) Why Is Product-Market Fit More Important Than Building an MVP?
- (17:20) How Do You Build a Lasting Moat in the AI Era?
- (21:29) Why Must Leaders Learn to Let Go to Scale?
- (23:27) What Can Leaders Learn From a Near-Fatal Motorcycle Accident?
- (26:29) How Do Technical Leaders Stay Hands-On Without Becoming a Bottleneck?
- (31:32) Why Should You Hire for Resilience Over Technical Skill?
- (34:56) How Do You Build a Team That Innovates Safely in the AI Era?
- (41:12) How Do You Build AI Governance That Speeds Up Innovation?
- (47:37) Are AI-Driven Layoffs Justified or Just an Excuse?
- (52:06) How Do You Build Trustworthy AI Agents?
- (59:34) 3 Tech Lead Wisdom
_____
Andrew Stevens’s Bio
Andrew Stevens, CTO of Sakura Sky, is an executive leader and hands-on technologist who has scaled AI and cloud ventures from idea to acquisition. Based between Europe and the US, he blends deep expertise in cloud architecture, machine learning, and security with a track record in fintech, media, gaming, and AI.
Known for making complex tech relatable - often with pop-culture twists - Andrew brings sharp insights on AI guardrails, infrastructure resilience, and the creative edge humans hold in an AI-driven world. Whether advising founders, investing in early-stage startups, or speaking on global stages, Andrew helps audiences cut through the hype and focus on what matters most.
Follow Andrew:
- LinkedIn – linkedin.com/in/andrewjstevens
- Sakura Sky – sakurasky.com
- 📖 The Executive AI Playbook – https://www.sakurasky.com/white-papers/ai-playbook/
- 📚 Executive White Papers & Frameworks – https://whitepaper.download/
Mentions & Links:
- Minimum viable product (MVP) - https://en.wikipedia.org/wiki/Minimum_viable_product
- Product-Market Fit (PMF) - https://www.productplan.com/glossary/product-market-fit/
- Retrieval-augmented generation (RAG) - https://en.wikipedia.org/wiki/Retrieval-augmented_generation
- Gemini - https://gemini.google.com/
- Claude - https://claude.com/
- Codex - https://en.wikipedia.org/wiki/OpenAI_Codex
- Apigee - https://docs.cloud.google.com/apigee/docs/api-platform/get-started/what-apigee
- LiteLLM - https://www.litellm.ai/
- Hugging Face - https://huggingface.co/
- HackerOne - https://www.hackerone.com/
- Perl - https://www.perl.org/
- PHP - https://www.php.net/
- ColdFusion - https://en.wikipedia.org/wiki/Adobe_ColdFusion
- Rust - https://rust-lang.org/
- Golang - https://go.dev/
- Replit - https://replit.com/
- Lovable - https://lovable.dev/
- Bolt - https://bolt.new/
Tech Lead Journal now offers you some swags that you can purchase online. These swags are printed on-demand based on your preference, and will be delivered safely to you all over the world where shipping is available.
Check out all the cool swags available by visiting techleadjournal.dev/shop. And don't forget to show them off once you receive any of those swags.
What Breaks When You Scale a Startup From Zero to 100 People?
-
Companies such as Amazon spend a lot of time building that trust. They built those systems, and that really speaks to what we need to be thinking about as engineers and founders as well. Being able to scale your company is really about trust. It’s about data, it’s about trust. And it’s also about leaning into the people around you and their expertise, and really discovering what others around you can add.
-
In the early days, it was really about being the most knowledgeable Perl programmer, PHP programmer, ColdFusion programmer, whatever I wrote in. I really thought I added value strictly through my ability to write code: churning out code and pushing the needle. As time went on, you find other values in yourself and in the people around you. And really, success is built through trust.
-
Trust is not only you trusting your business partner or your team; it’s also your customer trusting you. And you need to build the systems and methods for them to trust you.
-
So really, going from nothing, sitting by myself wanting to build something, the founder-as-engineer sort of paradigm, it’s really about speed and innovation and getting something out. And scale is not just about people. It’s about your customers, it’s about your reach. It’s about more than that.
-
One of my early companies went from two of us to over a hundred people. Decision systems break. Staying centralized and founder-led is great to a degree, but when you’ve got people in different time zones with different skill sets, you’re not gonna be the expert in the room every time. So you need to build a way to trust those people.
-
Teams sitting around waiting for your approvals aren’t good either. Context gets lost, speed drops.
-
Scale is really about building a system that makes good decisions without you.
Why Is Product-Market Fit More Important Than Building an MVP?
-
AI is a great amplifier. If you’re really good at what you do, AI’s gonna make it better. If you’re not so good at what you do, AI’s really gonna expose that to your audience.
-
Product-market fit within the age of AI is less about novelty and building something new, it’s more about repeatability. Can you deliver the same outcome? Can you deliver it safely? Will it happen every time? Guardrails and repeatable outcomes are really what AI nowadays is about.
-
Claude can help you build code really quickly or Gemini or Codex or whatever. I use that for scaffolding. I use that to demonstrate value quickly. I use that to demonstrate or explore an idea. For me, I don’t take that code to production.
-
It’s learned from Stack Overflow, learned from Reddit, and it’s learned from open source and GitHub. It’s learned from those things and it’s not necessarily picked up the right habits. And it’s learned at scale on some bad patterns.
-
The difference between a good product and an AI product is gonna become more and more amplified. We’re gonna see what a good engineer brings to a product developed by AI. It’s going to be probably more secure. It’s going to scale differently, because an AI tool is only as good as its prompt and its training material. It’s still probabilistic in terms of what it outputs.
-
This is why I don’t necessarily feel my role in technology is under threat today. The day that all my customers or all of my users can express what they want succinctly, and in totality, to an AI and get what they want is the day I’m probably at risk.
-
People are employing fewer juniors. That, I think, is a real risk for the industry, because those juniors aren’t being trained, and in five years’ time, who’s gonna be the seniors?
-
A lot of them are building what I consider a demo, not a product. Some people are building just a simple wrapper around an LLM. And an LLM is not an AI system. To me, agentic AI is just a new software pattern, and an LLM is just a new user interface.
-
Don’t ship a demo; ship a product when you’re thinking about PMF. People are too focused on the next model and not on better workflows and better channels.
How Do You Build a Lasting Moat in the AI Era?
-
I do believe data is a moat, and so is trust. With good data comes good trust. If I can trust that I’m getting the right data from my system, I’m always going to go back to the system and try it again.
-
If one single query goes wrong, people don’t trust your system anymore.
-
Tim Berners-Lee said something along the lines of: software comes and goes, but it’s data that persists between systems. Data is what stays; software comes and goes.
Why Must Leaders Learn to Let Go to Scale?
-
I’m working with one organization today where the CEO’s in every meeting. They’ve become a real bottleneck, and the company can’t scale. No one feels empowered to make a decision. The CEO doesn’t trust the people they work with to make a decision. They feel nobody else can do it but them.
-
If you have people you work with, you need to enable them, and they need to feel empowered to make decisions. That’s often a very tough thing to let go of as a founder, because it’s your baby. It’s such a moment of trust. And I struggle with that trust every day.
-
I love it now ‘cause every moment I feel I need to focus on the trust between myself and other people, I go, well, what’s a real risk here? And I constantly analyze my risk. I work very hard to make sure that the people that I’m with are empowered to make decisions.
What Can Leaders Learn From a Near-Fatal Motorcycle Accident?
-
I was in a motorcycle accident in 2006. My business partners had a tough time picking up from where I was. I realized then there was so much pointed at me, and so much dependent on me showing up every day, that I could never take a holiday or take a moment out.
-
I loved the high intensity. I loved getting on the whiteboard. I loved drawing up that diagram. I loved hearing that problem. I can still do that today. But I can also enable the people around me to be involved and take ownership of that and scale.
-
If someone gets hit by a bus, it happens. And that inflection point is important. My relationship with risk changed: how do I de-risk my business, and how do I de-risk my idea? My idea, if it’s great, is only gonna succeed if other people can get involved.
-
Resilience is built, not wished for. The business was only as resilient as I was, and if I was taken out, the business folded.
How Do Technical Leaders Stay Hands-On Without Becoming a Bottleneck?
-
I’m still hands-on every day; I still code. I can’t relate to my engineering team if I can’t understand the latest tool, if I can’t understand the code. I don’t do it to pry on them. I do it so I can have a conversation I can contribute to.
-
My recommendation for someone who has been a technical person and is now in leadership: maintain your code. It doesn’t necessarily need to be in things related to the role directly, but you need to be in technology. You need to be hands-on. You need to be going to that meetup. You need to be thinking about how your engineering team works. That is really a way to de-risk the conversation. You can have the conversation with your engineering team, but if you have no basis in reality, they’re not gonna respect you.
-
When I started to get larger teams, I’ve had teams of 8,000 engineers report to me. How do I scale that? I’ve had teams of two, I’ve had teams of a hundred. You need to be able to let go in a way that enables the people around you to grow. And you need to be able to trust. The only way for me to do that is to trust my own skills and keep exercising them.
-
Stay hands-on. It’s our duty as professionals in our field to continually educate and learn, but also focus on the human element.
-
Technology is great, but ultimately everything we do is for people. Companies only exist to make sales and people only use software to get an outcome. It’s all about people.
-
Back in the day I used to think it was all about being the best technologist in the room. But today for me, it’s all about getting the best outcome and that’s enabling the people around me.
Why Should You Hire for Resilience Over Technical Skill?
-
Some of the values I hold in myself: curiosity, I’m always curious about what’s happening next. I really value collaboration, how we can collaborate together. And autonomy: you need to know when you can work by yourself and when you need to work as a team.
-
I used to work towards perfection: the system must be exactly so, or it’s a failure. Design for perfection is where I started, but today, I design for resilience. That’s understanding how people can come up with systems and enabling those systems. It’s about contributing. It’s about collaborating. It’s about curiosity: understanding what can go wrong and planning for it.
-
Resilience is the ability to deal with change. It’s the ability to deal with the unknown.
-
I look for people who are curious, people who wanna collaborate, people who can work autonomously. I don’t necessarily want the strongest engineer; I want the best engineer for the team. Because I can teach skills, but I can’t teach attitude.
-
Your product is only as good as your team. It’s only as good as the market you can define and your total addressable market.
How Do You Build a Team That Innovates Safely in the AI Era?
-
I want people that are curious, who’ve tested something and come to me with an opinion: you know, I’ve tested Clawdbot or whatever, and it’s terrible, or it’s great, it’s awesome. I want people that are out there testing things and coming back to me with ways to do things better.
-
I want people that will push back. I want people who will stand up and say, we should be doing it this way. If you’re just standing there saying yes all the time, you’re not necessarily doing the right thing by the product or by your team. You’re not building resilience; you’re building yes. And yes is not always right. I want people that can say no, and I want people that feel safe saying no. It’s working together with different, opposing views that gets us a better product.
-
I want people that can rapidly prototype. We don’t wanna slow innovation. AI can only increase the pace of our innovation in our software development cycles.
-
I do it in a two-speed model. I have a sandbox, fast experimentation, strict containment. That is where I do innovation, I do rapid prototyping. Then I have production, which is your controlled deployments, your auditability, your safeguards. How do you get from the sandbox to production safely? And for me, that’s governance.
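The two-speed model above can be made concrete as an explicit promotion gate. This is a minimal, hypothetical sketch (the control names are illustrative assumptions, not anything described in the episode): a prototype stays in the sandbox until every governance requirement is demonstrably in place, so getting to production is a checklist you run, not a meeting you wait for.

```python
# Hypothetical promotion gate for a two-speed model: prototypes live in the
# sandbox until every governance control is demonstrably met.
REQUIRED_CONTROLS = {"audit_logging", "guardrails", "eval_suite_passed", "rollback_plan"}

def can_promote(controls_in_place: set[str]) -> tuple[bool, set[str]]:
    """Return whether a sandbox project may ship, plus any missing controls."""
    missing = REQUIRED_CONTROLS - controls_in_place
    return (not missing, missing)

# A half-finished prototype is blocked, with the gaps named explicitly.
ok, missing = can_promote({"audit_logging", "guardrails"})
print(ok, sorted(missing))  # False ['eval_suite_passed', 'rollback_plan']

# Once every control is in place, promotion is allowed.
ok, missing = can_promote(REQUIRED_CONTROLS)
print(ok, missing)  # True set()
```

The design choice here matches the quote: governance as a fast, automatic guardrail rather than a human gate.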
How Do You Build AI Governance That Speeds Up Innovation?
-
Governance isn’t a gate; it’s a guardrail that lets you get there faster. The reason you put brakes on your Ferrari is so you can drive it fast. If you didn’t have brakes on that Ferrari, it’s not gonna go very fast.
-
You want to produce software as quickly as possible. You want to innovate. You want to have the coolest stuff out there really fast, but you also need to do it safely. Otherwise you’ve innovated, but you’ve broken trust, and you haven’t produced a resilient outcome.
-
I want people that feel safe to say no, safe to innovate, safe to fail. But there’s gotta be a way for that work to get into production as well. It’s about taking smart risks and understanding, based upon our experience, what will work and what will fail.
-
The things I look for in an executive AI strategy start with picking outcomes and KPIs. What’s important to my business: is it cost, cycle time, quality, revenue, risk? Work out the workflow we need to go through. Where are decisions made? Where is data produced? What will build trust, in the system, within our users, within our team, to reach a good outcome for the product, for the market, for our investors?
-
I use multiple tiers: sandbox and production. I look at autonomy levels. Where can the system make decisions itself, and where do people follow pre-approved patterns and workflows?
-
The fastest teams are the ones that can make decisions safely. It’s not the people without rules; it’s the people with rules that can be applied the fastest. What you need are clear guardrails that allow them to operate effectively and quickly and know what the boundaries are. That’s a fast team.
-
Strategy is choosing where AI creates leverage for you. Execution is building the operating model that keeps the AI, and the strategy, safe and repeatable. That is really where you need to be.
-
Governance isn’t a gate; it’s a guardrail that lets you drive faster. Think brakes in Ferraris. I want that Ferrari to drive as fast as it can, but it needs brakes when there’s a bend in the road. So what are those brakes? And keep them minimal.
Are AI-Driven Layoffs Justified or Just an Excuse?
-
AI-based layoffs are really more of an excuse right now. I’m seeing it impact the lower end: juniors not being employed as much, repeatable jobs being affected more. The innovative roles, the engineering roles, are still not under real risk yet from real AI.
-
Anyone out there saying that AGI is gonna take your job is not right today. Maybe in some time, it will be different. But today we still need engineers. We still need the data people. We still need the expertise.
-
Think of those traders I talked about, who lost their jobs to help train the OpenAI model. If I’ve got skills in my company, I should be looking at how I can apply them in the era of AI. I shouldn’t be downsizing just because I think there could be an impact. And that’s where the power is: pivoting your model, not a knee-jerk reaction of firing people or downsizing just because something may be impacted.
-
And stand up and own it. If you’re optimizing because of other market forces, sure, say so.
-
People like to say AI right now because it looks good on an executive report. It’s more about appeasing investors and the market than reflecting reality.
How Do You Build Trustworthy AI Agents?
-
You trust Excel: when you go down and sum up a column, that sum is always gonna be correct. Right now you can’t do that with AI, because it is not deterministic. It’s probabilistic. It guesses. And you need a way to trust that AI.
-
A lot of the impacts or issues we have with AI are the same as what we had years ago. Prompt injection is something you raised earlier, and for me, prompt injection is synonymous with SQL injection in the software frameworks of the early noughties. Prompt injection is a real thing, and it’s control number one: the real-world agent failure mode is misuse via manipulated instructions.
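The SQL injection analogy can be made concrete. A minimal sketch using Python’s standard-library sqlite3 module (not a tool discussed in the episode): the fix for SQL injection was separating code from data with parameterized queries, and prompt injection is the same class of problem, untrusted input leaking into the instruction channel.

```python
import sqlite3

# In-memory database with one known row.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT)")
con.execute("INSERT INTO users VALUES ('alice')")

malicious = "x' OR '1'='1"  # classic injection payload

# Vulnerable: untrusted input concatenated into the query text,
# so the payload rewrites the query logic and matches every row.
unsafe = con.execute(
    f"SELECT count(*) FROM users WHERE name = '{malicious}'"
).fetchone()[0]

# Safe: a parameterized query keeps data out of the code channel,
# so the payload is just a literal string that matches nothing.
safe = con.execute(
    "SELECT count(*) FROM users WHERE name = ?", (malicious,)
).fetchone()[0]

print(unsafe, safe)  # 1 0
```

The limit of the analogy: LLMs have no hard equivalent of the `?` placeholder yet, since instructions and data share one text channel, which is why layered agent controls matter.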
-
Zero trust is applicable. Treat everything input into your system as untrustworthy. Isolate data, validate your actions, constrain things. You need an allow list. You need content boundaries. Action confirmations.
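As a sketch of what an allow list plus action confirmations can look like in code (all names here are hypothetical, not from any specific agent framework): every tool call an agent proposes is checked against an explicit allow list, and side-effecting actions additionally require human confirmation.

```python
# Hypothetical zero-trust gate for agent tool calls: default-deny, with an
# extra human-confirmation step for side-effecting actions.
ALLOWED_ACTIONS = {"search_docs", "summarize", "send_email"}
NEEDS_CONFIRMATION = {"send_email"}  # actions with external side effects

def gate(action: str, confirmed: bool = False) -> str:
    if action not in ALLOWED_ACTIONS:
        return "denied"              # unknown actions never run
    if action in NEEDS_CONFIRMATION and not confirmed:
        return "needs_confirmation"  # pause for a human action confirmation
    return "allowed"

print(gate("summarize"))         # allowed
print(gate("delete_database"))   # denied
print(gate("send_email"))        # needs_confirmation
print(gate("send_email", True))  # allowed
```

The default-deny posture is the point: the agent earns capabilities one at a time, rather than losing them after an incident.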
-
The biggest one for me: verifiable audit logs and deterministic replay. If I cannot replay the exact context of your interaction with an agent, it’s still not trustworthy. If you get a different answer every time and I can’t repeat it, how am I gonna debug it? How am I going to stand up in front of an auditor, or in a court of law, and say I know how the system works? I’ve been involved in legal cases where we’ve gotta prove software does something specific. And today you can’t do that.
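A minimal sketch of what a verifiable audit log can record (the fields are assumptions for illustration; real requirements depend on your stack): each entry captures the full interaction context (model, parameters, prompt, response) and is chained by hash, so tampering is detectable and the exact context is available for replay.

```python
import hashlib
import json

def append_entry(log, model, params, prompt, response):
    """Append a hash-chained audit record capturing the full interaction context."""
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"model": model, "params": params, "prompt": prompt,
             "response": response, "prev": prev}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(prev.encode() + payload).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Recompute the chain; any edited entry breaks its own or a later hash."""
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if e["prev"] != prev or e["hash"] != hashlib.sha256(prev.encode() + payload).hexdigest():
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, "model-x", {"temperature": 0}, "sum column A", "42")
append_entry(log, "model-x", {"temperature": 0}, "explain result", "A sums to 42")
print(verify(log))        # True
log[0]["response"] = "43" # tamper with history
print(verify(log))        # False
```

Replay itself still needs a pinned model version and fixed sampling parameters; the log only guarantees you know exactly what context to replay and that nobody edited it afterwards.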
-
AI is just a new software pattern, and we need to have good tooling around it.
3 Tech Lead Wisdom
-
When you’re looking at software, design the system, not the hero. If success depends on your personal heroics, it will never scale. Build robust, repeatable systems that plan for your redundancy. I can sit back and if my team works without me, it’s a great place to be. They’re enabled, they’re empowered, they can deliver, and that is how you scale.
-
Make decision rights explicit. Speed is clarity: who decides what, and by when. Give people good frameworks, and think brakes on the Ferrari: if you know how to use your brakes, you’re gonna go faster. If you don’t know how to use your brakes, you’ll panic and you won’t go anywhere. You need to give people a way to achieve speed and know how to navigate those corners.
-
Trust comes from observability. If you can’t see it, you can’t improve it, and you can’t safely automate it. So you need guardrails that collect evidence, that collect data. You need observability. If you have no lens into what’s happening in your product, your market, your team, you can never scale and you can’t improve it. Find the lens. Record it. If you don’t record it, you won’t improve it.
[00:02:01] Introduction
Henry Suryawirawan: Hello. Welcome back to another new episode of the Tech Lead Journal podcast. Today I’m very excited to have someone who has decades of experience, you know, building companies, scaling companies, selling companies as well. His name is Andrew Stevens. I think today he’s gonna share a lot of learnings from his journey, and obviously, not to mention, things about AI, which is what he’s doing now with Sakura Sky, his company. So Andrew, welcome to the show. Really looking forward to this conversation.
Andrew Stevens: Yeah. Thank you Henry. Great to be here and I appreciate the opportunity to speak with you. Yeah, look, my, my background goes back a long way now, so hopefully we can find something interesting to chat about today. And I’m looking forward to seeing where we go.
[00:02:45] What Breaks When You Scale a Startup From Zero to 100 People?
Henry Suryawirawan: Yeah, so when I look at your profile, I mean, it says like you have 30+ years of experience. You know, you have, you know, a lot of experience building companies, scaling companies, selling companies. And not just, you know, as a tech practitioners, but also at the leadership position, you know, like CTO, you know, contractors, consultants, and all that. So I think the first thing that I’d like to probably learn from you, you have gone through this journey, you know, building startups from zero to one, one to 10, and you know, going, you know, up to selling the companies. So I think, maybe if you can start, like bring up some topics that you think could be, you know, good learning points from you to teach us.
Andrew Stevens: Absolutely. Yeah, look, I started off luckily, I guess, in the early nineties; that’s really where my IT career started. So it’s been a long time now. You know, I came through working at a university, I’ve gone through working at software companies, I’ve worked for manufacturing companies. I’ve sat on the train to and from work every day when I had a long commute, and I was coding on the train on things I really believed in. So, you know, it’s been a great journey, and sometimes I look back at these things and think, oh my God, I’d never get my laptop out now in some of the places I’ve lived. You know, I’ve lived in New York City, I’ve lived in San Francisco. I’d never put my laptop out in those places. But, you know, back in the nineties, it was so exciting and new to get caught up in this internet thing. And, you know, I started in the early days and I really wanted to learn about how dial-up internet worked. And I heard about this new thing called Linux and I got really involved in that. And that was so exciting. And, you know, just a couple of friends and I, we built a dial-up internet company and we sold that. And that was really just three guys sitting around eating pizza and coding on a weekend or after work. And that was so much fun, right?
There was the opportunity to explore new ideas, and it was a safe place to build something, fail, and fail terribly. Because, you know, I remember my first order off Amazon.com, right? I bought a Perl book. I think it was the Camel book, way back then. I looked at it and I couldn’t buy it in Australia for less than a couple hundred dollars, but I could buy it off Amazon in the US for like 60 Australian dollars. But it took six months to come via sea freight. And it was a long time. When it finally arrived, it was amazing to get it. But then I went out immediately and canceled my credit cards. I thought, oh my God, I’ve given my credit card away to this unknown company. Nowadays people look at that and laugh, because, you know, Amazon’s a household name. And I think that journey is amazing, to see the trust that’s been built with the internet. And, you know, companies such as Amazon, whatever you may think of them now, they’ve spent a lot of time building that trust. They built those systems, and I think that really speaks to what we need to be thinking about as engineers and founders as well. And, you know, being able to scale your company is really about trust. It’s about data, it’s about trust. And it’s also about leaning into the people around you and their expertise, and really discovering what others around you can add.
And, you know, the early days were really for me about being the most knowledgeable Perl programmer, the PHP programmer, ColdFusion, you know, whatever I wrote in. And nowadays, you know, I love Rust and Golang. But back then it was Perl. That’s what I did, and that’s what I had my nose buried in. And I really thought I added value strictly through my ability to make code. And it wasn’t Git back then, you know, churning code and pushing the needle more. And as time grew, you know, you find out other values in yourself and those other values of the people around you. And really, success is built through, you know, trusting. And trust is a lot, you know; it’s not only you trusting your business partner or your team, it’s also your customer trusting you. And you need to build those systems and methods for them to trust you.
So really, going from nothing, like I’m sitting by myself, I wanna build something, you know, the founder-as-engineer sort of paradigm, right? It’s really about speed and innovation and getting something out. And nowadays people talk about product-market fit. I’m not seeing the conversations about MVP as much anymore; it’s all about PMF and, you know, how do you get there? You know, back then, how do I get there? It’s easy when you’re a team of one, but you’ll never reach scale. And scale is not just about people, it’s about your customers, it’s about your reach. It’s about more than that. And how you scale from zero to one, and one to many, is another thing as well. And you really find things that break.
You know, one of my early companies I built, we went from two of us and we hit over a hundred people. And I had people in Malaysia, I had people in the Philippines, I had people in Australia. And just scaling that was difficult. You know, decision systems break. You know, staying centralized and making everything founder-led is great to a degree, but when you’ve got people in different time zones and different skill sets, you’re not gonna be the expert in the room every time. So you need to build a way to trust those people. You know, teams sitting around waiting for your approvals are not good either. You know, context gets lost, speed drops. And, you know, if you are the founder and you’re in every single meeting, in the early days, that’s okay. But once you hit a certain size and speed, it’s difficult. So I think there’s a lot.
So it’s all about communication. You know, scale isn’t about adding people. Scale is really about building a system that makes good decisions without you. And that was a really tough lesson for me to learn in the early days, where I thought I had to be hands-on in everything, and only I knew the product, only I knew that line of code. But, you know, there’s a lot more to it than that. So, you know, some of those things that break at scale, I think you’ll find them fairly quickly. Sorry, Henry. I’ve been chatting too much.
[00:08:44] Why Is Product-Market Fit More Important Than Building an MVP?
Henry Suryawirawan: Yeah. So thank you for sharing some of the interesting stuff that you have gone through throughout your journey, right? I wanna pick one thing that you mentioned earlier, which I find really interesting, especially in this era, right, so we can validate it. You mentioned that these days people are talking less about, you know, MVPs, building MVPs, and more about product-market fit, or PMF, as some people call it.
And now also with the advent of AI, right? So I think building MVP is arguably simpler. You know, there are so many vibe coding tools these days, like Replit, Lovable, Bolt and all that. So I think building MVP might not be a challenge anymore these days. So tell us, maybe this shift, what you see for people who are starting up startups, you know, like building new products, right? What about PMF that you wanna tell those people so that they can actually shift their focus instead of just building MVP, but also think about the PMF aspect.
Andrew Stevens: Yeah, I think scale and trust. AI is a great amplifier. You know, if you’re really good at what you do, AI’s gonna make it better. If you’re not so good at what you do, AI’s really gonna expose that to your audience. And PMF, or product-market fit, within the age of AI is less about novelty and building something new, I guess. It’s more about repeatability. You know, can you deliver the same outcome? Can you deliver it safely? Will it happen every time? I’ve been in demos with AI models where it literally worked 15 minutes before the demo. Come in, turn it on, and, you know, the prompt was slightly different or something, and suddenly I’ve got a very unexpected response. And it’s been a terrible experience for myself, a terrible experience for the product, a terrible experience for the user, the customer. You know, guardrails and repeatable outcomes are really what AI nowadays is about.
So, you know, it’s great to scaffold an idea with, you know, Claude or something, or, you know, I’ll have my IDE open with… Sorry, my cats decided to join me, so apologies. So, you know, Claude can help you build code really quickly, or Gemini or Codex or whatever. But, you know, I use that for scaffolding. I use that to demonstrate value quickly. I use that to explore an idea. But for me, I don’t take that code to production. That’s great for understanding what I can achieve, but it’s not necessarily great for scaling, right? It’s learned from Stack Overflow, learned from Reddit, and, you know, it’s learned from open source and GitHub, whatever. Apologies to open source people. You know, it’s learned from those things, and it’s not necessarily picked up the right habits. And it’s learned at scale on some bad patterns. And the difference between a good product and an AI product is gonna become more and more amplified. We’re gonna see what a good engineer brings to a product developed by AI. It’s going to be probably more secure. You know, it’s going to scale differently because, you know, an AI tool is only as good as its prompt and its training material, right? And it’s still probabilistic in terms of what it outputs.
And this is why I don't necessarily feel a threat to my role in technology today. The day that all my customers, all of my users, can express what they want succinctly and in totality to an AI and get what they want is the day that I'm probably at risk. And we all know the reality: business requirements change, market conditions change. Customers will see something and realize new value. Or when you produce a bit of code, you'll actually spin it up, look at it and go, oh, if I did that, I could make it better, right? And AI can't necessarily do that. Sure, you can instruct it to do a little bit more, but it's like drip feeding, and you're not going to get that push of quality into production. So there is definitely a gap between what AI is producing and what you can produce as a skilled engineer. What I think the risk is at the moment — and I'm seeing it a lot — is people are employing fewer juniors. And that, I think, is a real risk for the industry, because those juniors aren't being trained, and in five years' time, who's gonna be the seniors? That is a risk for the IT industry in general.
You know, I used to worry that the hyperscalers, or even Microsoft, or anyone producing an operating system, were gobbling up the people capable of producing good operating systems. How hard is it today to actually build a startup that scales without going to a hyperscaler? We end up paying the people that we'll eventually be competing with to deliver our products. And there's a certain amount of work we need to do as engineers and professionals in our field to really understand our technology.
And that's always on us. But we also need to understand the realities of the business. If I'm going out there and scaling my product, how am I going to get there? And what's the payoff point, right? I mean, there's a reason why Meta, for example, are not in Google Cloud or Amazon: there's obviously a point in time where using a hyperscaler cloud becomes less cost-effective than doing it yourself. And understanding those inflection points, as an engineer or somebody building a product, is really essential. You need to know those points in your timeframe. And sometimes working with AI is not gonna help you there. It might give you some ideas, but you need to be really hands on the wheel and understand.
So in my IDE, I have Codex on one side and Gemini Enterprise on the other. I'll get one to help me plan a product or a brief. Then I'll have the other one review it. Then I'll have one scaffold it for me, and then I'll have the other check my scaffolding against my brief. So I'll duel my AIs to try and get the best out of them. I've tried a lot — and there's a lot of AIs I haven't tried yet — but I'll continue to use 'em. So I love AI as a productivity tool, but it's not a production tool. For code that goes into production, it'll help me get there, but I would never push its code wholesale into production.
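The draft-and-review loop Andrew describes can be sketched in a few lines. This is a hypothetical illustration, not his implementation: the two callables stand in for real model clients (Codex, Gemini, or anything else), and the stub functions exist only to show the control flow.

```python
# Sketch of the "duel my AIs" workflow: one model drafts, the other reviews,
# and nothing advances until the reviewer has no objections (or we give up).
# `drafter` and `reviewer` are stand-ins for real model API calls.

def cross_review(drafter, reviewer, task, max_rounds=3):
    """Draft with one model, critique with the other, revise until approved."""
    draft = drafter(task)
    for _ in range(max_rounds):
        feedback = reviewer(task, draft)
        if feedback is None:          # reviewer signed off
            return draft
        draft = drafter(f"{task}\nAddress this feedback: {feedback}")
    return draft                      # best effort after max_rounds

# Stub models, purely to demonstrate the loop.
def stub_drafter(prompt):
    return "plan v2" if "feedback" in prompt.lower() else "plan v1"

def stub_reviewer(task, draft):
    return "missing error handling" if draft == "plan v1" else None

print(cross_review(stub_drafter, stub_reviewer, "Plan the product brief"))
# → plan v2
```

The point of the pattern is that the reviewer model never edits directly; it only critiques, so each revision is still produced by a single drafter against an explicit brief.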
Henry Suryawirawan: Yeah.
Andrew Stevens: I've gone off on a big tangent, but anyway, around product-market fit: why are startups dying in that space? A lot of them are building what I consider a demo and not a product, right? Some people are building just a simple wrapper around an LLM. And an LLM is not an AI system. To me, agentic AI is just a new software pattern, and an LLM is just a new user interface. We, as architects or engineers, need to become familiar with this new agentic AI software architecture, like we did with microservices, or client-server going back further. It's just a new software architecture that we need to understand, and a new way for users to interact with software — an LLM chat interface is a new way to do it.
You know, I'm seeing some great outcomes in the data space, where people have been spending months trying to get reports pixel perfect, getting the right bar chart or something. In a deterministic agentic AI setup, you should be able to query your data sets in natural language and get the results in the format you want, rather than building a needlessly complex dashboard. But the trick is most people don't know how to do deterministic agentic AI at the moment — they're stuck in probabilistic AI, getting changing answers every time and some terrible outcomes. So taking that step up is where we all need to be focused. It is possible, it is happening, and I'm seeing it.
In fact, one of the products I've worked on recently is in a highly regulated financial industry in the European Union. It's passed audit. It's GDPR-compliant. It's deterministic — it's in banking, so people get the exact, specific data in a repeatable fashion. And it's explainable: you get the exact SQL query executed on your data, and you get the exact outcome. So don't ship a demo; ship a product when you're thinking about PMF. And another thing — oh, sorry, Henry, go ahead. I was just gonna say: people are too focused on the next model, and not on better workflows and better channels.
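The deterministic, explainable property Andrew describes — the same question always yields the same SQL, and the exact SQL is shown to the user — can be sketched as a whitelist of audited, parameterized queries. This is a minimal illustration of the general pattern, not the banking product itself; the table names and intents are made up, and in a real system an LLM would only classify the natural-language question into one of these fixed intents.

```python
import sqlite3

# Hypothetical sketch: the agent maps a classified intent onto a fixed,
# audited library of parameterized queries. The same intent always produces
# the same SQL (deterministic), and the SQL is returned with the result
# (explainable). Unknown intents fail closed with a KeyError.

QUERY_LIBRARY = {
    "balance_by_customer": "SELECT balance FROM accounts WHERE customer_id = ?",
    "total_deposits": "SELECT SUM(amount) FROM transactions WHERE type = 'deposit'",
}

def run_audited_query(conn, intent, params=()):
    """Execute a whitelisted query; return the exact SQL alongside the rows."""
    sql = QUERY_LIBRARY[intent]
    rows = conn.execute(sql, params).fetchall()
    return {"sql": sql, "params": params, "rows": rows}

# Demo on an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (customer_id INTEGER, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1, 250.0), (2, 90.5)")

result = run_audited_query(conn, "balance_by_customer", (1,))
print(result["sql"])   # the exact SQL executed, shown to the user
print(result["rows"])  # [(250.0,)]
```

The free-form model output never touches the database directly, which is what makes the system repeatable and auditable rather than probabilistic.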
[00:17:20] How Do You Build a Lasting Moat in the AI Era?
Henry Suryawirawan: Yeah, sorry to interrupt. One thing that I find really interesting these days is how easy it is for people to build something — like what you're saying, building demos and all that. And in fact, many of those things are just wrappers on top of an LLM, because the LLM models are still improving by a lot, I guess, although it's been slowing down a bit lately. But arguably these LLM models are so powerful, and people are building on top of them, and there's so much competition. For example, if you innovate one type of product today, I'm sure tomorrow or next week other people can replicate a similar product very fast, right? So tell us: how do you advise startups or leaders to build some kind of moat as they work toward PMF?
Andrew Stevens: Yeah, I've seen a lot of discussion lately about this moat, and it's becoming more and more discussed. I see it on Reddit, I see it on LinkedIn, and I've seen people on both sides of the discussion. I do believe data is a moat, and the trust. With good data comes good trust. If I can trust I'm getting the right data from my system, I'm always going to go back to the system and try it again, right? But if one single query goes wrong, people don't trust your system anymore. And that's the kind of trust I'm talking about: can people trust your software to come up with a good response? And for me, it's about finding the right data that really makes your platform or your offering unique.
I'm working with one person at the moment where they've got a very unique audience and excellent data — no one else on the planet has it. And rather than focusing on that, which they should be doing, they're focusing on competing with people that have a more generic position in the industry. They've got a very strong market position, and they should be out there telling their upstream suppliers and partners that they own the audience and this is the data they can get. I think Tim Berners-Lee said something along the lines of: software comes and goes, but it's data that persists between systems. You can Google that — I can't remember the exact quote off the top of my head right now — but data is what stays; software comes and goes.
And that was actually, I think, another moment for me in my career. I would write a system, write what I thought was perfect, and then realize four years later it's not perfect anymore. It's terrible. It's old, it's slow, it doesn't follow the newest frameworks, it's not deployed properly anymore. So everything I do is transient. But the data persists — something I bought five years ago can still help drive AI today, telling it about my behaviors. So data is there, and that's the moat to focus on. What is a unique offering around the data you've got? What is a unique interface you can offer? People are saying SaaS is dead as well, so I do see a lot of competition taking on SaaS, I guess. Is it dead? I don't know. Can I build SaaS-like software? I can build stuff very quickly now. But can I scale it? Maybe not. Can I pick up my old data? I've gotta sit down and work through that. So yeah, for me it's still all about data as the moat today.
Henry Suryawirawan: Yeah, thanks for pointing that out. Because these days, building an LLM itself is not particularly a moat anymore, right? You'd think it's such a novel thing that only smart people can build, but actually there are thousands of models available if you just look at Hugging Face. People are building open source versions, people are building big models and all that. So an LLM itself is not necessarily a moat these days. What you're mentioning — building on top of data and trust, building applications that people trust and want to use — I think that's still the key.
[00:21:29] Why Must Leaders Learn to Let Go to Scale?
Henry Suryawirawan: So maybe let's switch channel a little bit. I wanna talk about your leadership journey, because I think it's very, very interesting. One thing in particular: you mentioned earlier that in those early days you tended to be hands-on. You wanted to be involved, you wanted to know everything. But throughout your journey, you realized that doesn't scale, right? And typically this is a major bottleneck in any kind of startup that is growing, especially rapidly. So tell us: why is it so important for leaders to become slightly more hands-off? And in this AI era especially, some leaders are enticed back into being more hands-on, simply because AI helps them to be. What's your take on this?
Andrew Stevens: Yeah, look, I can think of three things in there. One: I'm working with one organization today where the CEO's in every meeting. And they've become a real bottleneck — they can't scale, right? No one feels empowered to make a decision. The CEO doesn't trust the people they work with to make a decision; they feel that nobody else can do it but them. So why have senior people? You might as well cut the senior team and have juniors and AI do the task. But if you have people you work with, you need to enable them, and they need to feel empowered to make decisions. And that's often a very tough thing to let go of as a founder, 'cause it's your baby and you've worked on it. It's such a moment of trust. And I struggle with that trust every day. I love it now, because every moment I feel I need to focus on the trust between myself and other people, I go: well, what's the real risk here? And I constantly analyze my risk. I work very hard to make sure the people I'm with are empowered to make decisions. And if I can't enable that for them, I'll be disappointed in myself. So it is something I aim for.
[00:23:27] What Can Leaders Learn From a Near-Fatal Motorcycle Accident?
Andrew Stevens: Some things really changed for me. I was in a motorcycle accident in 2006. I was on my way home from work; someone else ran a stop sign and took me out. I ended up in intensive care, all that kind of stuff, for a long time. And my business partners had a tough time picking up from where I was. I lost value in my investments, I lost value in my companies — a whole bunch of things happened in 2006 because of that exact moment. And that was quite an inflection point for me, because I realized then that so much was pointed at me, so much depended on me showing up every day, that I could never have a holiday, never take a moment out. Which was fine — I was in my early thirties and I was enjoying being the center of attention. I hate to think of it like that, but I loved the high intensity. I loved getting on the whiteboard, drawing up that diagram, hearing that problem. And you know what, I can still do that today. But I can also enable the people around me to be involved, take ownership, and scale. Because, as they say, if someone gets hit by a bus — it happens. And that inflection point is important.
So what changed for me was my relationship with risk. Now I ask: how do I de-risk my business, and how do I de-risk my idea? My idea, if it's great, is only gonna succeed if other people can get involved, right? That's the only way I can scale it, and the only way I can de-risk its adoption. What did I learn? Resilience is built, not wished for. The business was only as resilient as I was, and if I'm taken out, the business folds. So that, for me, definitely changed.
So nowadays I try to design environments people can fail safely in. If you can't feel like you can push the envelope and have an outcome, whether good or bad, you can never really find the boundaries of where you can push, right? That's why scientists use the scientific method: people push, experiments fail, and that allows them to find the boundaries of scientific knowledge. And I think that's true for engineering as well. So when you're pushing for that, you've really gotta push forward and change things. I had another idea, but it's completely slipped my mind now — anyway, those are the sorts of things you need to be thinking of, for sure.
Henry Suryawirawan: Yeah, I love your quote: resilience is built, not something you wish for, not something that just happens as you go through a tough time. You actually need to practice, train, and build systems and guardrails to build resilience within your company, not just individually, right? And I think building a fail-safe environment is also quite important, because no matter where you are, if you don't feel safe, you won't thrive — you're just fearing for your job, your career and all that. I think it's really important.
[00:26:29] How Do Technical Leaders Stay Hands-On Without Becoming a Bottleneck?
Henry Suryawirawan: So, one question you haven't actually answered: with these AI tools these days, leaders may get dragged back into being hands-on. What advice do you have for leaders to balance the trade-off between being hands-on and trusting other people to do the work?
Andrew Stevens: My secret on this — and I'm still hands-on every day, I still code — is that I can't relate to my engineering team if I don't understand the latest tools, if I don't understand the code. And I do that not to pry on them; I do it so I can have a conversation I can contribute to.
So recently, some of my hands-on work for one customer has been declining. So I founded a new startup in January this year, like four weeks ago. It's a high-security environment — it's all about workflows to handle inbound security research reach-outs. If somebody finds a vulnerability in your software system, how do they report that? How do they prove it, and how do you make that easy? A lot of people get these emails and panic: oh my God, they found a flaw. Is it real? Is it not real? Is this person trying to rip me off? They're asking for a bounty, you know? How do you manage that? So I've looked at how HackerOne do it, I've looked at how all these other tools do it, and I thought, well, I could probably do something different. And I'm not aiming for the big end of town; I'm aiming for small to medium enterprise, mostly mid-market. And for me, that's my way — it's 2026, I'm going to apply my knowledge hands-on, and I'm gonna use that as a way to understand my teams. I'll always do that. So my recommendation for someone who has been a technical person and is now in leadership: maintain your code. You don't necessarily need to do it in things directly related to your role, but you need to be in technology. You need to be hands-on. You need to be going to that meetup. You need to be thinking about how your engineering team works, right?
And that is really a way to de-risk the conversation. You can have the conversation with your engineering team, but if you have no basis in reality, they're not gonna respect you. You'll just say something and they'll go, oh, this guy doesn't know what he's talking about. And for me, that's a moment of reflection again. When I started to get larger teams — I've had 8,000 engineers report to me; I've had teams of two, teams of a hundred — you go, how do I scale? When you start getting up into those numbers, you're a people manager, or eventually you're just managing a spreadsheet of budget. And you need to be able to let go in a way that enables the people around you to grow, and you need to be able to trust. The only way for me to do that is to trust my own skills and keep using them.
So that's how I've built it. I do a startup. I love to find a new problem. Most of the businesses I've built over the years started that way — I mentioned Linux and the very first one: I wanted to learn Linux, so I did a startup, built it and sold it. Another day I was sitting at a pub with a couple of mates; I wanted to learn a particular tech, and an opportunity popped up — we actually saw it in the pub, in real time. We thought we could do that online, and I wanted to learn the tech, so I said, right, I'm gonna write it in that. So I would often pick a new technical model to learn, and today, for me, that is seeing how agentic AI can be pushed in new ways. Rolling my sleeves up, getting hands-on, and contributing not only to my own knowledge but to my team's and those around me — that's the way I do it.
So stay hands-on. You know, it’s our duty as professionals in our field to continually educate and learn, but also focus on the human element. And, you know, technology is great, but ultimately everything we do is for people. Companies only exist to make sales and people only use software to get an outcome. It’s all about people. And we need to focus on that and keep that going. Like I said, again, you know, back in the day I used to think it was all about being the best technologist in the room. But today for me, it’s all about getting the best outcome and that’s enabling the people around me. And I try to do that. If I fail, I’ll try again. So failing safely.
Henry Suryawirawan: Yeah, staying hands-on is definitely key these days, especially when you're faced with technology advancing so, so fast. Because if you can't keep up with these advancements, you lose touch, and you end up just following the news, which can be misleading. But staying hands-on shouldn't mean becoming a bottleneck for your company, right? You still need to trust people and build systems so that they can thrive as well.
[00:31:32] Why Should You Hire for Resilience Over Technical Skill?
Henry Suryawirawan: So at one point you mentioned about resilience, I think also very important. You mentioned to me before our recording today is that you tend want to look for people that has resilience, you know, in their attributes. So why, tell us why this is important and how do you actually assess resilience in a candidate?
Andrew Stevens: Yeah, look. Some of the values I have — or like to think I have — in myself are curiosity, so I'm always curious about what's happening next; collaboration, how we can work together; and autonomy. You need to know when you can work by yourself and when you need to work as a team, right? And I used to work towards perfection: the system must be exactly so or it's a fail; it must be exactly two pixels to the left or it's ugly and ruined. I was a hard taskmaster on myself, and because it was all up in my head, nobody else ever had a chance to know what those KPIs looked like.
So design for perfection is where I started, but today I design for resilience. And that's understanding how people can come up with systems — enabling systems, right? It's about contributing, collaborating, curiosity: understanding what can go wrong and planning for it, or even just having the capability to do that. And that's resilience. I'm not necessarily defining it from a software architect's perspective; it's the ability to deal with change, the ability to deal with the unknown. Because sometimes we don't know, especially today in the age of AI. I've worked with frameworks and software development kits where a point release difference means success or failure of a product. It's unbelievable — you look at the documentation, it says it can do it, but because I'm using 17.1, not 17.2, it doesn't work.
So the ability to work through that and deal with it is part of resilience: working with others, looking for points of failure. And how do I recruit for that? I look for people that are curious, people that wanna collaborate, people that can work autonomously. I think that adds up to resilience, because a network of those people really helps build resilience into your product, your business, and your teams. And I think after a while you understand who you want to work with. I like to define how I work with people partly by the things I don't want in the environment. You've gotta pick the attributes that will add value, right? I don't necessarily want the strongest engineer; I want the best engineer for the team, 'cause I can teach skills, but I can't teach attitude. And that's really important — a good attitude and working with people — because your product is only as good as your team. It's only as good as the market you can define, your total addressable market, right? And if you limit your product too much, you can't address that market. So again, those are all elements. There are a lot of answers in there, but a lot of things to talk through.
[00:34:56] How Do You Build a Team That Innovates Safely in the AI Era?
Henry Suryawirawan: Yeah. So if I can pick a little bit more, right? So because these days, again, with the introduction of AI, I think many people are also kind of like rethinking how they hire people. Some people say the team size will be shrinking, right, to a smaller team. Some people say that every individual now is expected to be more generalist, T-shaped, M-shaped whatever that is, right? So apart from resilience, what do you think are some attributes that people should maybe focus a lot more, especially with this, you know, introduction of AI. Are we looking for different types of people? Maybe critical thinking, maybe… Curiosity definitely is gonna still be top of the attribute. But are there some things that you think you apply as well within your, you know, company’s scaling?
Andrew Stevens: There is definitely a different approach to engineering today. When I first started engineering, I worked with this fantastic engineer you could hand specs to — it was waterfall. You'd write a thousand-page spec, hand it to the engineer in the corner, they'd work on it for six months and produce the perfect data capture form, the perfect database design, the perfect report, whatever. And it matched the spec to a T, right? As a model, that worked really well back then: you'd get exactly what you designed. And you see it now in some of the big global names in IT. They know that you, as a customer, won't be able to spec that form properly. They'll deliver what you asked for, and then you get stung with all these change fees, because you can't possibly think of everything until you see it, right? These are the sorts of things those companies make their money out of.
And for me, I much prefer agility. As an engineer, you don't want constantly changing priorities, but you do want feedback on your software, right? So part of resilience, again, is being able to rapidly prototype something, which AI is great for. Historically, I've always had a rapid team that prototypes something and proves the value, and then hands over to the BAU team to productionize it. And AI can really work that model — I'm thinking large enterprise, thousand-developer teams. You rapidly prototype something, push it out through A/B testing on a special release channel, people test it, you get the feedback, and then you productionize it if it tests well. That's something I used to do, but now I can do it so much faster.
AI for me is a tool or a channel, right? As a tool, it can make me faster and better at my job. It can amplify what I do. And I talked about curiosity earlier; the people I look for today remain the same. I want people that are curious: can I be better at my job by looking at this tool? I want you to come to me with an opinion and say, I've tested Clawdbot or whatever, and it's terrible, or it's great, it's awesome, or it got hacked in milliseconds. I want people out there testing things and coming back to me with ways to do things better. Because you know what? I don't know every way to do everything, and people are gonna know better ways. Those are the people I want. I want people that will push back, people that will stand up and say, we should be doing it this way. If you're just standing there saying yes all the time — oops, sorry, that's my cat again — you're not necessarily doing the right thing by the product, by your team. You're not building resilience; you're building yes. And yes is not always right. I want people that can say no, and people that feel safe saying no. Those are the people you want, because it's by working together with different, opposing views that we get a better product.
I want people that can rapidly prototype. Sorry, I'm branching off onto other factors again, but we don't wanna slow innovation, right? AI can only increase the pace of innovation in our software development cycles. And I do it in a two-speed model. I have a sandbox: fast experimentation, strict containment. That is where I do innovation and rapid prototyping. And then I have production: controlled deployments, auditability, safeguards. And then we need some sort of paved road between the two — how do you get from the sandbox to production safely, right? For me, that's governance. Governance isn't a gate; it's a guardrail that lets you get there faster.
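One way to read "governance as a guardrail" is as a set of explicit, automated checks on the paved road from sandbox to production. This is a hedged sketch of that idea, not Andrew's pipeline: the check names and the artifact fields are illustrative assumptions, and a real system would source them from CI, audit, and deployment tooling.

```python
# Hypothetical promotion gate: governance expressed as named, automated
# guardrails. An artifact from the sandbox is promoted only if every check
# passes; otherwise the failed checks are reported immediately, which is
# what makes the guardrail faster than a manual review gate.

GUARDRAILS = {
    "tests_pass": lambda a: a.get("test_failures", 1) == 0,
    "model_pinned": lambda a: "model_version" in a,       # no floating models
    "audit_log_enabled": lambda a: a.get("audit_log", False),
    "rollback_plan": lambda a: a.get("rollback", False),
}

def promote(artifact):
    """Return (approved, failed_checks) for a sandbox artifact."""
    failed = [name for name, check in GUARDRAILS.items() if not check(artifact)]
    return (len(failed) == 0, failed)

# A prototype that passed tests but has no rollback plan yet.
prototype = {"test_failures": 0, "model_version": "v1", "audit_log": True}
approved, failed = promote(prototype)
print(approved, failed)  # False ['rollback_plan']
```

Because the checks are code, the sandbox team can run them continuously and know exactly what stands between their prototype and production, instead of discovering it at a review meeting.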
And I had this great conversation the other day: the reason you put brakes on a Ferrari is so you can drive it fast, right? If that Ferrari didn't have brakes, it wouldn't go very fast. And we've gotta think about software that way too. That's why you've got governance, those methods around it. You want to produce software as quickly as possible, you want to innovate, you want to have the coolest stuff out there really fast, but you also need to be able to do it safely, right? Look at some recent releases in the AI world that got hacked — people lost crypto, all sorts of things. Great, you've innovated, but you've broken trust, and you haven't produced a resilient outcome. So it's about working on that process. I want people that feel safe to say no, feel safe to innovate, feel safe to fail — but there's gotta be a way that work can get into production as well. And I'm not talking about burning investors' money or missing our deadlines. It's about taking smart risks and understanding, based on our experience, what will work and what will fail. So, did I answer?
Henry Suryawirawan: Yeah, definitely. And it's a good segue, because you've touched on a lot of points from the Executive AI Playbook, something your company Sakura Sky published some time ago, right? These days, almost every company, every leader, has AI on their mind. If you don't implement AI somehow in your company, there's a very high chance you'll be disrupted and other people may take your business. And your pace of innovation will probably be slower than competitors who have AI.
[00:41:12] How Do You Build AI Governance That Speeds Up Innovation?
Henry Suryawirawan: So you mentioned a few strategies just now. If I can just repeat, you have two speed, dual speed of, you know, innovation within your company. One that is within sandbox, building prototypes, you know, innovating, taking risks and all that. But once you prove the value out of those prototypes, I think you bring it to production. You know, building more governance, safety and all that. So I think the first question is about governance, because I think this is still like a moving thing, right? So many unknown unknowns about security, about governance that every leaders must think about, especially this is like a new threat factor. Because AI is such a unpredictable thing. You have prompt injection and all that. So tell us how do you practically build a good governance within a company, especially in this fast-moving AI world that is probably a lot of unknowns?
Andrew Stevens: Yeah, totally. I think there are two things. The Exec AI white paper is a collaboration between Sakura Sky and Roebling Strauss. On that team, Bill, for example, has a lot of experience working with large companies and changing the way they work: transformation, optimizing how businesses adopt and move forward with new risk and governance and things like that. I bring the technical side of things. Olivia, who also worked on the paper, works with data and AI, and she has a great understanding of how to bring that to fruition. We collaborated on what we need to be doing, and we cover that in the white paper. The things I look for in an exec AI strategy are picking outcomes and KPIs: what's important to my business? Is it cost, cycle time, quality, revenue, risk? Work out the workflow we need to go through. Where are decisions made? Where is data produced? What will build trust — in the system, within our users, within our team — to reach a good outcome for the product, the market, our investors, all those sorts of things.
We looked at the forcing functions of AI as well. What does AI bring to the table that we haven't necessarily looked at before? One of them is definitely this: if you were to look at your business today with a fresh start, what can AI do for your business that you've not been able to tackle before? And I've spoken about a decision that every exec or every software person needs to make: will I use a public model or a private model? A public model is great because it's trained on lots of data, it's fast to adopt, and it probably has a whole bunch of features you'd like. But it's generic. It doesn't necessarily have your data; sure, you can RAG it, but that aside. Or I spend the time and build my own model, and that's expensive and slow, but it's highly tuned, highly optimized, and I can get my outcome. And say I'm a finance company: I could license and resell my intelligence, my risk model. If I'm a shipping company, I can sell a model that represents my highly tuned optimization model for logistics. It actually opens up new business models, and that's what you should be thinking about, and how to get there.
And the governance around that is what gets us there safely. So I'll look at the workflows and the products as they stand today. I'll often start with one small slice of the business, and I try to ship value in weeks, not quarters. I talked earlier about the engineer in the corner with a thousand-page spec who works for six months; no business can really handle that nowadays, unless you're in SAP maybe, or some big enterprise, right? Nowadays we want value today. I'm working with one company at the moment, and they need tangible changes to their interface today because their competition moves fast. So we need to come up with ways to become more effective quickly, and we're using AI to do that. We're using AI as personas so we don't have to push out to an audience to test; we can do beta testing in-house. We can scaffold the idea and get product-market fit modeling faster, rather than a cycle of weeks or months. So we're doing things in days, and that's the framework and the governance around that.
And like I said, I use a two-tier system, or multiple tiers: sandbox and production. I look at autonomy levels: what decisions can the system make itself, and where do people work within pre-approved patterns and workflows? The fastest teams are the people who can make decisions safely, right? It's not the people without rules, it's the people with rules that can be applied the fastest. People say, oh, we can't have any rules or any process because that's going to stop agility. No, because then they're going to develop something that's not going to be right in your mind. What you need is clear guardrails that allow them to operate effectively and quickly and know what the boundaries are. That's a fast team, right?
Strategy is choosing where AI creates leverage for you. Execution is building the operating model that keeps AI and strategy safe and repeatable. So, again, strategy is choosing where AI creates leverage, and that is really where you need to be. Governance isn't a gate, it's a guardrail that lets you drive faster. Think brakes on a Ferrari. Every time I come to something, I think: I want that Ferrari to drive as fast as it can, but it needs brakes when there's a bend in the road. So what are those brakes? And keep them minimal. That's where it's at.
Henry Suryawirawan: Yeah, so I think it's definitely very important for leaders and executives out there to not just think about innovation and the pace of things that AI can produce, but also the guardrails, in order to protect things. Because, like you've mentioned several times now: trust. Trust with your customers, trust with your people, trust with the product. I think it's really, really important. Once you break it, it's going to be difficult to win back the trust.
[00:47:37] Are AI-Driven Layoffs Justified or Just an Excuse?
Henry Suryawirawan: The other thing on executives' minds is actually building an AI-ready organization, right? In the news, typically what we see today is layoffs, you know, reducing the number of people within organizations, but less talk about how we can build AI-ready organizations. Do you have some tips here? How can leaders think about that?
Andrew Stevens: Yeah, look, I've heard about companies like OpenAI employing lots of ex-traders to build trading models and things like that. You hear of teams of 200 to 400 people being employed to help train a model. And if I was somebody running a trading room floor, and I'd just fired 200 people, and the AI vendor I intend to adopt just hired them all to write the model that I'm about to pay an enormous amount of money for, I'd be worried whether I'd made the right decision, right? So I still feel that a lot of the AI layoffs are really operational optimizations. RAM went up in price a few weeks ago. People were increasing the price of RAM that hadn't been built, to go into data centers that hadn't been built, to run AI that hadn't been built, to run models for people who haven't bought them yet. We are paying in advance for things that haven't been done. But I guess that's the nature of business: if you don't plan for what you may need to afford later, you'll never be able to afford it.
Sorry, that's a long way away from where we were. But for me, I think AI-based layoffs are really more of an excuse right now. Sure, I'm seeing it impact the lower end. I'm seeing juniors not being employed as much. I'm seeing repeatable jobs affected more. But the innovative roles, the engineering roles, are still not under real risk yet from real AI, right? Anyone out there saying that AGI is going to take your job is not right today. Maybe in time it will be different, but today we still need engineers. We still need the data people. We still need the expertise. Think of those traders who lost their jobs and went to help train the OpenAI model: I've got skills in my company, and I should be looking at how to apply them in the era of AI. I shouldn't be downsizing just because I think there could be an impact. And that's where the power is. I talked earlier about a logistics company being able to turn their optimization model into something smart, and that's where you should be. You should be pivoting your model, not making a knee-jerk reaction of firing people or downsizing just because something may be impacted. And stand up and own it. If you're optimizing because of other market forces, sure, take it. But people like to say AI right now because it looks good on an executive report. So I think it's more about appeasing investors and the market than necessarily reflecting reality.
But that's my position, that's what I'm seeing today. Am I right? I hope I'm right that what I'm seeing is not really driven by real AI adoption. We are still seeing AI fail in a lot of places. If you're looking at the Gartner hype cycle, where are we? Are we at the peak of inflated expectations or in the trough of disillusionment? I love the hype cycle model, and I apply it frequently in my decision making: when I look at a product, I think about where it is on that curve. I think we've probably started to head down into the trough of disillusionment now. And when we hit the plateau of productivity, we'll see more jobs needed, more experts needed, and more engineers. Will it look different? Yes. I will be expected to produce more, because AI tooling will help me produce more. Will my role today look the same in five years' time? No, it will be different.
Henry Suryawirawan: Yeah. So I think that's a very valid point. Roles will be different. Your job will be different. But the pace of innovation and competition will just keep increasing, right? And I think it goes back to the character traits you mentioned earlier: curiosity and resilience are still things that we as individuals need to build within ourselves and also in our organizations. I think that's a very great thing.
[00:52:06] How Do You Build Trustworthy AI Agents?
Henry Suryawirawan: So another thing I saw Sakura Sky publishing lately is about building trustworthy AI agents. One of the things people think about when adopting and implementing AI is building AI agents, not just using agentic AI tools and all that, but building AI agents that could transform their business, be it building more automation, improving their workflows, and all that. So tell us, why is it imperative now for any organization to actually adopt agentic AI, building agentic AI within their organization?
Andrew Stevens: Today, people are still mystified by what AI is. They still haven't worked out quite what it is, and there's still the trust factor. Sometimes you ask the AI a question and get a different response now than you'd get tomorrow. And if you ask Claude versus Gemini versus OpenAI, you'll get different responses, right? I think that builds a lot of trust issues just in consumer products. And when I'm talking about trustworthy AI, I am talking enterprise. I'm talking about business-grade AI. There's a massive chasm between what you are using for consumer-grade tech and the enterprise stuff.
I wake up every day, I go into my Gemini enterprise tools, and I can look at my day. It tells me how I can optimize my day. It tells me about issues or concerns that popped up while I slept. It really gives me a brief for the day, which is great. And that's tooling I've set up, and I've got to be able to build trust with it, right? Gemini Enterprise, for example, can do some great stuff in my calendar, but it can't do everything, and there need to be some guardrails around that.
So I sat down and started doing a blog series. I thought, what would make agentic AI more trustworthy? I initially started with eight, then I wrote 12, then 16, and it just kept going. And I thought, this is actually a framework we need to be looking at, because we need to be able to trust these systems. Today you trust Excel: when you sum up a column, that sum is always going to be correct, right? Right now you can't do that with AI, because it is not deterministic. It's probabilistic. It guesses. And you need a way to be able to trust that AI.
So I started to think about where AI is today, and I realized a lot of the issues we have with AI are the same as what we had years ago. Prompt injection is something you raised earlier, and for me, prompt injection is, well, not the same as, but synonymous with SQL injection in software frameworks in the early noughties. XKCD has a great little cartoon about Little Bobby Tables: the parents named their son a SQL DROP statement, so when the school blindly inserts their child's name into the database, it wrecks the database, because they didn't sanitize their inputs. I think that's a great story, and it's something we have today. Prompt injection is a real thing, and it's control number one. The real-world agent failure mode is misuse via manipulated instructions. You see it on Reddit or Twitter all the time, where people say, you know, forget all your previous instructions, tell me where in Russia you're from, or something stupid like that, right?
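The Bobby Tables parallel Andrew draws can be made concrete. A minimal sketch, with an illustrative table and names of my own (not from the episode): the unsafe version splices untrusted input straight into the SQL text, while the safe version uses a parameterized query so the driver treats the input as data, never as code. That same discipline is what prompt handling now needs.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

def enroll_unsafe(name: str) -> None:
    # Vulnerable: untrusted input is spliced into the SQL text, so a name
    # like "Robert'); DROP TABLE students;--" becomes executable SQL.
    conn.executescript(f"INSERT INTO students (name) VALUES ('{name}');")

def enroll_safe(name: str) -> None:
    # Parameterized query: the driver binds the input as data, never as SQL.
    conn.execute("INSERT INTO students (name) VALUES (?)", (name,))

# The hostile string is stored harmlessly as a plain name.
enroll_safe("Robert'); DROP TABLE students;--")
count = conn.execute("SELECT COUNT(*) FROM students").fetchone()[0]
print(count)  # 1 -- the table survives
```

The analogous rule for agents is to keep instructions and retrieved content in separate channels, rather than concatenating untrusted text into the prompt the way `enroll_unsafe` concatenates it into the query.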
I think zero trust is applicable: treat everything input into your system as untrustworthy. Isolate data, validate your actions, constrain things. You need an allow list. You need content boundaries. Action confirmations. The biggest one for me: verifiable audit logs and deterministic replay. If I cannot replay the exact context of your interaction with an agent, it's not trustworthy. If you get a different answer every time and I can't repeat it, how am I going to debug it? How am I going to stand up in front of an auditor or in a court of law and say, I know how the system works? I've been involved in legal cases where we've got to prove software does something specific, and today you can't do that with a lot of AI, because we don't have those controls yet. AI is just a new software pattern, and we need to have that good tooling around it.
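The controls Andrew lists, allow lists, action confirmations, and tamper-evident audit logs, can be sketched as a thin gate in front of every agent tool call. This is a hypothetical illustration, not Andrew's actual framework; the action names and the hash-chained log structure are my assumptions.

```python
import hashlib
import json
import time

ALLOWED_ACTIONS = {"read_calendar", "draft_email"}  # explicit allow list
CONFIRM_REQUIRED = {"draft_email"}                  # actions needing a human yes

audit_log: list[dict] = []

def _chain_hash(record: dict) -> str:
    # Each entry hashes the previous entry's hash, so any later tampering
    # with an earlier record breaks the chain and is detectable.
    prev = audit_log[-1]["hash"] if audit_log else ""
    payload = json.dumps(record, sort_keys=True) + prev
    return hashlib.sha256(payload.encode()).hexdigest()

def gate(action: str, args: dict, confirmed: bool = False) -> bool:
    # Every attempt is logged, whether it is allowed or denied.
    record = {"ts": time.time(), "action": action, "args": args}
    if action not in ALLOWED_ACTIONS:
        record["outcome"] = "denied:not_on_allow_list"
    elif action in CONFIRM_REQUIRED and not confirmed:
        record["outcome"] = "denied:needs_confirmation"
    else:
        record["outcome"] = "allowed"
    record["hash"] = _chain_hash(record)
    audit_log.append(record)
    return record["outcome"] == "allowed"

print(gate("delete_files", {}))                      # False: not on the allow list
print(gate("draft_email", {"to": "alex"}))           # False: needs confirmation
print(gate("draft_email", {"to": "alex"}, True))     # True: allowed and logged
```

The point of the sketch is that the fast path and the audited path are the same path: an agent cannot act without leaving a verifiable record behind.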
So after I wrote that 16-part blog series, I turned it into a framework, which is coming out soon. It's written; it was 120 pages, and I wrote another 40 pages, so it's now 160 pages. The 40 pages I just added apply it with an API gateway called Apigee. I've written another version, another 40 pages sitting out there that I haven't merged in yet, which uses LiteLLM as a tool to control my tokens and permissions as well. So now I'm just trying to prove that the framework I came up with is applicable. I'm no Microsoft, I'm no Google; they'll come up with some great frameworks soon enough. But I want to lay out a framework of what we as an industry should be looking for, and if I tackle that now and put it out there, I've got example YAML, example JSON, of what we should be looking for: steps, audit records, how we look at the model. We need to be able to know exactly what model produced a bit of data and what the context was, so it's all auditable. You can stand up and meet your legislative requirements for explainable AI if you're in Europe, or stand up in a court of law. I've come up with all these guardrails and frameworks to record that. And that'll be out shortly. It's…
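Since the framework itself isn't public yet, here is a hypothetical shape such an audit record might take. Every field name here is my assumption, not Andrew's actual schema; the idea is simply that identifying the exact model, pinning the sampling parameters, and fingerprinting the inputs and output is what makes deterministic replay possible.

```python
import hashlib
import json

def sha(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

# Illustrative run artifacts.
prompt = "Summarise today's calendar."
context = "retrieved documents would go here"
output = "You have three meetings before noon."

# One hypothetical audit record: enough to identify the model, pin the
# sampling parameters, and fingerprint the exact inputs and output.
record = {
    "model": "example-llm-v1",                    # exact model and version that ran
    "params": {"temperature": 0.0, "seed": 42},   # pinned for determinism
    "prompt_sha256": sha(prompt),
    "context_sha256": sha(context),
    "output_sha256": sha(output),
    "trace_id": "run-0001",
}
print(json.dumps(record, indent=2))

# Replay check: rerun with the same model, params, and inputs, then
# compare the new output's hash against the recorded one.
replayed_output = "You have three meetings before noon."
assert sha(replayed_output) == record["output_sha256"]
```

With records like this, "how did the system produce this answer?" becomes a lookup and a rerun rather than a guess, which is the property an auditor or a court would actually need.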
the website's up, but you can't download it at the moment. So I won't give away that domain name, but it'll be out soon enough. You can work through it, but it's a great thing even if you only read it and think about your own software. Am I producing good guardrails? Am I producing trust in my system? Am I going to meet legislation as my product grows? It's easy to get away with stuff when you're small, but the moment you cross a border, cross a continent, or get too many users, you're suddenly on the radar for compliance and people are going to come knocking. If you read this now, you're going to work out what you should be thinking about, and we all should be doing that. Some of it is the old things we solved a while ago, but it's a new way of doing it, and we have to tackle the same things all over again.
Henry Suryawirawan: Thank you for sharing such a thorough view on trustworthy AI agents. I think this is definitely new territory for many people, especially if you're not an AI researcher. If you still don't know exactly how an LLM works, building a system on top of something you don't understand is very risky, right? And I think having these kinds of guardrails and governance is very, very important, like what we discussed earlier. So I highly recommend people check out Andrew's framework; hopefully by the time this episode is released, the public website is available. Do check it out, because if we don't know what we need to protect, it's very risky.
[00:59:34] 3 Tech Lead Wisdom
Henry Suryawirawan: So Andrew, it’s been a great conversation. Unfortunately, due to time we have to wrap up pretty soon. I have one last question, which is like a tradition for my podcast. I would like to ask you to share what I call the three technical leadership wisdom. So it’s like something advice you wanna give to the listeners before we wrap up.
Andrew Stevens: Yeah, sure. And you gave me this one ahead of time, so thank you for preparing me for it. Number one: when you're looking at software, design the system, not the hero, right? If success depends on your personal heroics, it will never scale. So build robust, repeatable systems that plan for your redundancy. If I can sit back and my team works without me, that's a great place to be. They're enabled, they're empowered, they can deliver, and that is how you scale.
Number two: make decision rights explicit. Speed is clarity: who decides what, and by when. If you give people good frameworks, and think brakes on the Ferrari again, if you know how to use your brakes, you're going to go faster, right? If you don't know how to use your brakes, you'll panic and you won't go anywhere; you won't even move the car. So you need to give people a way to achieve speed and know how to navigate those corners.
And lastly, trust. A great way to end, because I've talked about it a lot. Trust comes from observability. If you can't see it, you can't improve it, and you can't safely automate it. So you need those guardrails that collect evidence, that collect data. That's the objective of the trustworthy agents work: you need to collect that data, collect that evidence, and see what it can do. If you're in DevOps or SecOps, you collect evidence all day. If you're a web developer, you collect your Apache logs perhaps, and that's how you debug, right? You need observability. If you have no lens into what's happening in your product, your market, your team, you can never scale and you can't improve. Find the lens. Record it. If you don't record it, you won't improve it. Those are my three.
Henry Suryawirawan: Wow, I think this is my first time seeing trust through the lens of observability. That's quite insightful, thanks for sharing. And I also like the first one: design the system, not the hero. Especially in startups, where you have a small team, we rely on heroics many, many times. But obviously, as you scale, you need to move away from that and design a system that's going to help you scale much better.
So, Andrew, if people love this conversation, they wanna reach out to you, ask you more things, or find your resources, is there a place where they can find you online?
Andrew Stevens: Yeah. Well, the easiest way, I guess, is the website for the white paper that I've recently worked on. It's whitepaper.download, and from there there's the AI playbook. That's got my LinkedIn profile and the white paper. You can have a read; it's free. You'll have to put in your details to download it, but there's no spam. We don't spam, we're not interested in that. Grab it, have a read, see what you think, and reach out to me on LinkedIn.
Henry Suryawirawan: Thank you so much for today’s conversation. Really learned a lot, especially on the, you know, AI aspect, the leadership journey that you went through. I think those are really, really insightful. So thanks again for your time today, Andrew.
Andrew Stevens: Yep. Thank you very much Henry. And I look forward to listening to your podcast more. So, thank you.
– End –
