#226 - Ex-Google Duplex Eng Lead on Disrupting $2B Clinical Trials with AI - Patrick Leung
“AI won’t replace humans per se, but people who are able to effectively wield AI will definitely replace people who aren’t.”
Ever wondered how AI is being applied in the world of clinical trials where human lives are at stake?
In this episode, Patrick Leung, CTO of Faro Health and former Google Duplex Engineering Lead, reveals how AI is transforming the clinical trial process, a process that can cost up to $2 billion per drug and take over 10 years to complete. Patrick explains how Faro Health's AI systems generate complex clinical documentation in minutes instead of months, in a domain where hallucinations aren't acceptable, while navigating the strict regulatory requirements of the healthcare industry.
Patrick also reflects on the evolution of AI technologies, the realities of large language models, and offers practical advice on how to thrive in the rapidly changing AI-driven era.
Key topics discussed:
- The evolution of AI from image recognition and Google Duplex to LLMs
- How Faro Health uses AI to transform the clinical trial process
- The challenges of applying AI in highly regulated industries
- AI’s potential to save time and millions in clinical trials
- How to tackle AI hallucinations and ensure high-quality outputs
- Patrick’s thoughts on AGI and the future of AI beyond current capabilities
- The viability and limitations of vibe coding
- Strategies and advice for individuals to thrive in the AI era
Timestamps:
- (02:09) Career Turning Points
- (02:46) The Advancements of AI in the Past 10 Years
- (04:13) Non-LLM Types of AI
- (05:42) The Google Duplex
- (07:28) The Use of AI in Faro Health
- (09:44) Tackling AI Hallucination for Clinical Documents
- (12:25) Building the Evaluation Process on AI Results
- (14:28) AI as a Research Assistant
- (16:40) The Need for Building Custom AI Models
- (18:50) The Huge Impact of AI in Clinical Trials
- (21:15) The Regulations on Applying AI Technologies
- (23:28) AI Success Stories in the Life Science Industry
- (25:16) The Possibility of AGI
- (28:36) The Path to AGI Using LLM
- (30:43) Actions People Should Take in the AI Era
- (35:48) AI Engineers and AI-Enabled Engineers
- (38:37) The Viability of Vibe Coding
- (41:03) Hiring AI Engineers
- (42:26) Important Engineer Attributes in the AI Era
- (44:23) Important Leader Attributes in the AI Era
- (46:59) The Room for Juniors in the AI Era
- (49:04) Inspirational Story of a Successful Junior
- (51:33) 3 Tech Lead Wisdom
_____
Patrick Leung’s Bio
Patrick Leung is the Chief Technology Officer at Faro Health, a company at the forefront of optimizing clinical trial development through the use of artificial intelligence.
In his role, he is instrumental in applying large language models and other AI technologies to enhance protocol design and outcomes for clinical trials. A native of New Zealand, Mr. Leung holds degrees in Computer Science and Finance.
His career includes being a foundational member of an early e-commerce software company, where he played a key role in guiding the company from its initial stages to a successful initial public offering.
Follow Patrick:
- LinkedIn – linkedin.com/in/puiwah
- Twitter – x.com/puiwah
- Website – farohealth.com
Mentions & Links:
- 📝 A Method to Redesign and Simplify Schedules of Assessment and Quantify the Impacts. Applications to Merck Protocols – https://link.springer.com/article/10.1007/s43441-024-00666-x
- 📝 The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise – https://www.hbs.edu/faculty/Pages/item.aspx?num=67197
- 📚 The Innovator’s Dilemma – https://www.amazon.com/Innovators-Dilemma-Revolutionary-Change-Business/dp/0062060244
- The Google Duplex – https://research.google/blog/google-duplex-an-ai-system-for-accomplishing-real-world-tasks-over-the-phone/
- Large language model (LLM) – https://en.wikipedia.org/wiki/Large_language_model
- Generative AI – https://en.wikipedia.org/wiki/Generative_artificial_intelligence
- Machine learning – https://en.wikipedia.org/wiki/Machine_learning
- Recommendation system – https://en.wikipedia.org/wiki/Recommender_system
- Image recognition – https://www.ibm.com/think/topics/image-recognition
- Eval model – https://platform.openai.com/docs/guides/evals
- Retrieval augmented generation (RAG) system – https://en.wikipedia.org/wiki/Retrieval-augmented_generation
- Artificial general intelligence (AGI) – https://en.wikipedia.org/wiki/Artificial_general_intelligence
- Cyc – https://en.wikipedia.org/wiki/Cyc
- Vibe coding – https://en.wikipedia.org/wiki/Vibe_coding
- Google Assistant – https://en.wikipedia.org/wiki/Google_Assistant
- Gemini – https://gemini.google.com/
- ARC-AGI – https://arcprize.org/arc-agi
- AlphaGo – https://en.wikipedia.org/wiki/AlphaGo
- iPod – https://en.wikipedia.org/wiki/IPod
- Yann LeCun – https://en.wikipedia.org/wiki/Yann_LeCun
- Geoffrey Hinton – https://en.wikipedia.org/wiki/Geoffrey_Hinton
- Marty Cagan – https://www.svpg.com/team/marty-cagan/
- Steve Jobs – https://en.wikipedia.org/wiki/Steve_Jobs
- Clayton Christensen – https://en.wikipedia.org/wiki/Clayton_Christensen
- Faro Health – https://farohealth.com/
- US Food and Drugs Administration (FDA) – https://en.wikipedia.org/wiki/Food_and_Drug_Administration
- Merck – https://en.wikipedia.org/wiki/Merck_%26_Co
- Anthropic – https://www.anthropic.com/
- Apple – https://en.wikipedia.org/wiki/Apple_Inc
- Shopify – https://en.wikipedia.org/wiki/Shopify
Tech Lead Journal now offers some swag that you can purchase online. These items are printed on demand based on your preference and will be delivered safely to you anywhere in the world where shipping is available.
Check out all the cool swag available by visiting techleadjournal.dev/shop. And don't forget to show it off once you receive it.
The Advancements of AI in the Past 10 Years
- It's easy to forget now, because this large language model revolution has completely taken over everything. When most people on this planet think about AI, they're thinking about large language models, as opposed to AI in general or data science in general.
- What I saw when I was back at Google was that there was already an AI revolution in full swing, which was mostly related to image recognition. Because processing power passed a certain threshold, we became much more able to interpret images using AI. This resulted in all sorts of amazing advances that started showing up in Google Photos and other products like that. But now, all of that is forgotten, because it got completely superseded or overshadowed by this more recent large language model revolution.
- There have been multiple waves here. It's been really interesting to see how, as processing power has increased, new forms of AI come out and start solving certain problems really well.
Non-LLM Types of AI
- It's pretty extraordinary how general purpose and powerful these models have become. These multimodal large language models are pretty incredible, going beyond human language into things like genetic coding.
- Computer coding, like actually programming, has become a huge use case for these language models. And they originated out of the Transformer architecture that Google and others introduced a while ago, which was originally applied to the use case of translation from one human language to another. So from that initial use case, it's become much more general purpose.
- I continue to think that there will always be many use cases for which other forms of machine learning, largely overlooked during this LLM revolution, are going to remain useful, particularly for problems involving quantitative information, like predicting numbers.
The Google Duplex
- It didn't use LLMs at all; it predated that. It was almost a preview of what was to come, because people started freaking out when we launched this product, since it sounded so lifelike.
- Some people, like the press, had this huge media frenzy around it, because Google introduced some AI that sounds like a human being, and it's like, "oh, it's unethical," or "oh, game over, the bots are gonna take over the world."
- Some prominent members of the press thought that we faked the whole thing. So it was super interesting to be involved in a media frenzy like that. It was on the Good Morning Show. It was on Colbert. It really hit a very big nerve in popular culture.
- And yet, it wasn't using any of this modern LLM architecture at all. It was using other methods that were much simpler. Since then, it's just gotten way crazier in terms of what these systems are capable of. But it was a privilege and so much fun to be involved in a system that was very much at the forefront at the time in doing this kind of thing.
The Use of AI in Faro Health
- Faro has developed a SaaS platform for designing clinical trials. A clinical trial can take upwards of 10 years and costs on average $1 to $2 billion per drug to get to market. These are huge numbers and huge timeframes. So we developed a platform that allows biotech and pharma companies to design better clinical trials.
- We're applying AI in a couple of different ways. First of all, one of the big time and cost sinks in the whole development process is clinical writing: writing really complex clinical documents, like the clinical protocol that defines the drug trial. It lays out the whole design, the schedule, all the things that need to happen, all the details. This is a really complex document. It can be north of 150 to 200 pages, and it typically takes an entire team of people months to write.
- Starting last year, we began applying large language models to automatically generate various sections of the clinical protocol document.
- And you might think, "well, what's the big deal? It's easy to ask GPT to generate a document," and we do it all the time to write better emails or do homework or whatever the use case is. But it turns out that writing clinical grade documentation that passes muster with regulatory groups like the FDA, in a domain where the stakes are really high (we're talking about human lives and human safety), is incredibly challenging. That required a lot of new architecture, new processes, and all sorts of interesting things to make it happen.
Tackling AI Hallucination for Clinical Documents
- At first, we naively assumed that we could just adjust our prompts: every time we saw a hallucination, we'd go back and change the prompt. But it turned out that what we really needed was to evaluate the output comprehensively and systematically.
- So when you generate, say, a PK sampling section or an ECG section, some highly technical part of the protocol doc, you need to actually evaluate the output against a whole bunch of criteria. And you do this per subsection, because each subsection is so different and has different technical details that are fed in through our Faro Study Designer tool. We have this whole SaaS platform that models the trial in great detail, and we pass that data into the LLM to generate the protocol document sections.
- We found that we had to rigorously query the results and ask, "did you include this? Did you have this tone? Did you phrase this in this way?" And we have a very specific checklist for every single subsection that we run to make sure that the quality is high.
- We've also designed this in such a way that customers can make their own checks. So if they have additional requirements that are particular to them, they can augment or change the ones that we ship with our product. We've made it highly configurable to match the customer's needs.
- Over time, that initial evaluation architecture has evolved into more of an agentic architecture, where all sorts of different agents examine the content of the generated clinical protocol section, looking at stylistic consistency with the rest of the document, tone, level of detail, formatting, and so on, in addition to the factual content, to prevent hallucinations.
- In addition to hallucinations, we also found that sometimes the LLM just misses things. It doesn't make things up; it just misses things. So we have to check for that as well. There's a whole bunch of examination that occurs once you generate the output, and then you feed the feedback from the evaluation models back into another iteration of the generation cycle. So we developed a whole architecture that's way more complex than just sending queries to ChatGPT in order to get the system actually working.
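The generate-evaluate-regenerate cycle described above can be sketched in a few lines. This is a hypothetical illustration, not Faro's implementation: the check names, the toy generator, and the feedback format are all invented, and in a real system `generate` would be an LLM call and the checks would themselves be LLM-judged queries rather than string tests.

```python
# Hypothetical sketch of a generate-evaluate-regenerate loop.
# Check names and the toy generator are illustrative only.

def run_checklist(text, checks):
    """Run every check against the generated section; return the names of failures."""
    return [name for name, check in checks.items() if not check(text)]

def generate_with_evaluation(generate, checks, max_rounds=3):
    """Regenerate until the section passes all checks or we give up."""
    feedback = []
    for _ in range(max_rounds):
        text = generate(feedback)          # in practice, an LLM call with feedback appended
        failures = run_checklist(text, checks)
        if not failures:
            return text, []
        # Feed the failed criteria back into the next generation round.
        feedback = [f"Address this issue: {name}" for name in failures]
    return text, failures

# Toy usage: a "generator" that only includes the ECG detail after feedback.
checks = {
    "mentions 12-lead ECG": lambda t: "12-lead" in t,
    "formal tone (no contractions)": lambda t: "don't" not in t,
}

def toy_generate(feedback):
    if feedback:
        return "A 12-lead ECG will be recorded at screening."
    return "ECGs will be recorded at screening."

text, failures = generate_with_evaluation(toy_generate, checks)
```

The key design point, matching the description above, is that failed checks become structured feedback for the next iteration, rather than one-off prompt edits.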
Building the Evaluation Process on AI Results
- In this respect, what we've built with clinical protocol writing is similar to other AI systems I've worked on in the past: in the early stages, when you're first getting the AI model to produce interesting content, you need to be very hands-on and have a human examine everything with a fine-tooth comb. Then, after a while, you can codify that feedback into checks and queries that you can use to automate the process.
- After a while, you build enough confidence against a range of different inputs to make sure that the content is right. Because an oncology trial versus an immunology trial versus an infectious disease trial is really different. And so we want to make sure that the models we build are applicable across a range of different types of clinical trials.
- Once we reach a level of confidence that, "oh, it looks like, in the 80% case, the content is pretty good," then we're confident enough to share it with our early adopter customers and get them to evaluate it themselves. They'll probably come up with some issues as well. And then we gradually improve the quality of the model using that process.
- We have quite an in-depth process involving engineers, data scientists, clinical writers, QA, all sorts of different people, to bootstrap the process and get everything running. And then, after a while, it becomes more automated as we refine the model.
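The "80% case" confidence level described above amounts to a pass-rate threshold over evaluation results gathered across trial types. A minimal sketch, with invented numbers and function names:

```python
# Illustrative confidence gate: promote the model to early adopters only
# once the checklist pass rate across trial types clears a threshold.
# The 0.8 threshold and the sample results are assumptions for the example.

def pass_rate(results):
    """Fraction of evaluated sections that passed every check."""
    return sum(results) / len(results)

def ready_for_early_adopters(results, threshold=0.8):
    return pass_rate(results) >= threshold

# One boolean per evaluated (trial type, section) pair.
results = [True, True, True, True, False,   # oncology sections
           True, True, True, False, True]   # immunology sections
ready = ready_for_early_adopters(results)
```

In practice the results would come from human reviewers and automated checks rather than a hard-coded list.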
AI as a Research Assistant
- We're also looking at developing what we're calling an AI research assistant. The idea is that, right now, designing a clinical trial is quite laborious. Choosing what kinds of activities you want to include in the schedule, designing the population schema, objectives and endpoints, all these kinds of things. They all require a lot of care and, in many cases, a lot of research: going out on the internet, checking out the latest clinical research being published, identifying and taking a look at comparable trials that are out there, learning from them, reading through hundreds of pages.
- This is super laborious, and we can automate a lot of it. So we're applying AI to help automatically identify trials that are similar to the one you're working on, and then aggregate metrics and stats and do comparisons. Look at what happened with these trials. Did they require amendments? Were they able to enroll patients? Were they able to stay on track? Did they result in a successful outcome or not?
- Clinical scientists who are working on these trials can really accelerate the process by which they come up with an optimal model. We can provide insights, lots of different metrics, that help guide what you're doing and help you arrive at the right balance of patient experience, site feasibility, and overall cost and time to market.
- It staggers me that many people, many groups, still do this in Microsoft Word. Not even Excel. People use Microsoft Word to model, not just author, these trials. People deserve a better tool than that. So that's what we're developing with the help of AI: really modernizing and shifting the way people develop and design these trials, in such a way that it's much quicker and produces much better outcomes.
The Need for Building Custom AI Models
- We haven't found the need to do that yet. The good news is that a lot of the major LLM vendors, like Google and Anthropic, are focusing quite a bit on the life sciences space and producing models that are specific to that domain. Because we have this evaluation system, we can pretty easily evaluate new LLMs and figure out which ones will produce the best clinical output.
- It's quite possible that in the future we might want our own model with respect to vectorization, the process that all the text goes through before it gets indexed. We built this retrieval augmented generation (RAG) system that allows customers to upload documents, which we can then parse and use to generate certain pieces of the protocol document that require really technical information.
- That's an area where we might want to develop our own vectorization scheme that's really oriented around clinical terminology. But so far, we haven't seen the need to even fine-tune an existing model. Even fine-tuning these days can be quite an expensive and laborious process.
- What we've really seen is that this evaluation model, combined with the ability to fine-tune prompts, is enough to produce clinical grade documentation. That might change in the future as we get into different documents and domains. But so far, it's been really good for us.
The Huge Impact of AI in Clinical Trials
- The interesting thing is that when you're talking about a project that takes upwards of 10 years or more to execute, small changes in the design upfront can result in huge cost savings.
- We actually published a paper with Merck that analyzed the benefit of using the Faro Study Designer to optimize clinical trial design. We found that the savings over the lifetime of the trial can be north of a hundred million dollars. And this was even before some of the AI features we have coming up, like clinical writing. So the true savings could be even greater when you factor in the time saved by accelerating the clinical writing process and making the output better.
- Getting to market faster is hugely valuable to a pharma company: every day is hundreds of thousands of dollars of additional revenue that you can realize sooner when you launch the drug faster. So it's not only cost savings, it's also getting to market faster with the new treatment.
- There's also the potential over time, as this technology becomes widespread, for smaller players to come in and really try things out. It's going to enable a wider range of treatments and trials to become feasible, and we're really excited by that. There's so much innovation going on in very early stage drug discovery, but it gets blocked or delayed when it comes to actually going through trials. We want to widen the pipeline there, so that hopefully there will be a lot more treatments for more conditions, lives saved, suffering alleviated, and all the good things.
The Regulations on Applying AI Technologies
- By their very nature, regulatory bodies are very oriented around evaluating the quality of what's presented to them. If the quality of the output is good, at least as good as human generated protocols, and we strongly believe that it will be, then the regulatory bodies will be okay.
- There are various risks involved in using AI, many different types of risks, and we can mitigate a lot of them. For instance, we run our AI models in a private environment where data is not shared with any other groups. So we don't have the risks of leakage, of people being able to discover what other companies are doing or submitting.
- There's a bunch of other risks involving bad actors coming in and trying to discover things they shouldn't by manipulating queries and so on. That's mitigated by the fact that this is an enterprise tool, not a consumer tool, so only people who have signed up as Faro customers can use the system. We just have to go case by case, look at all the different risks involved in AI, and mitigate them.
- We have an AI governance policy that we've been working really hard on to alleviate our own customers' concerns. Because however concerned the FDA is, our customers are even more concerned about all these things, because it's their crown jewels, the intellectual property they're developing using our platform. We take this very seriously, and at some point we'll probably publish our AI governance policy so that other people can learn from it and we can help raise the bar across the board in our industry. We just think this is really important for everybody to get right.
AI Success Stories in the Life Science Industry
- The Merck study is what we have as far as publicly available case studies of how impactful this technology can be.
- Anecdotally, we have customers using this document generation system who are super impressed with how quickly it comes up with documentation that normally takes months; it can be generated in a matter of minutes. That's a startling, extraordinary multiplier in terms of productivity. We're just really excited about that.
- In some ways, the industry is waiting for a really widespread and impactful application of AI. Oftentimes, once you get past the surface level of "oh, I played with GPT, and it's really cool and interesting" and start to get serious about applying AI, that can rapidly turn into disappointment: "oh, it doesn't go deep enough."
- There are all these stories of people who start building with AI but then get disillusioned, because it turns out to be way harder than they think. That's where companies like Faro come in: we'll take it to the next level. We'll figure out all the hurdles, challenges, and risks involved in actually applying AI to do something really impactful, like automating clinical protocol documentation or optimizing trial design, so that our customers don't have to do it themselves. That requires a lot of data science, software engineering, and clinical design expertise, and they may not want to invest in becoming a tech company themselves.
The Possibility of AGI
- In the past, everybody thought AGI was really far away. The joke is that for the last 50 years, AGI has been 20 years away; it's always in the future. But now there are a lot of very prominent people in this field who believe it's actually only three to five years away, maybe even shorter, who knows? And I am not in that camp.
- There's a test out there called ARC-AGI, and when you look at what it actually involves, many of the questions are the kind of thing a bright kid could answer. Like moving blocks around on a grid in such a way that a block fills in a space. Stuff that a smart kid should be able to do. There are systems that are gradually getting better at this, but they're not yet at human level. And to me, that just means we're so far away from having something that's genuinely creative and intelligent.
- I will say that there have been brilliant moments in the history of AI. If you remember AlphaGo, which was a non-LLM based system, it was using reinforcement learning and other methods of AI that are now largely forgotten in the wake of the LLM revolution. But with AlphaGo, there was a moment when it was playing against the human world champion and made a move that nobody in the 3,000-year history of the game had ever seen before, and all the experts were gasping, "oh my God, this is incredible! What a brilliant move! How did it come up with that?"
- AI can, under the right conditions, be capable of behaving like a genius, but that was such a specialized example. I'm sure, as time goes on, there'll be more examples of that: "oh, there's a flash of brilliance, and AI came up with some mathematical proof beyond what any scientist has come up with." But in terms of just having an intelligent conversation and really being inspired, or looking at a piece of art and truly feeling, "this belongs amongst the classics," I think we're ways away from that. But I could be wrong.
- Definitely, the progress even since these large language models were first introduced has been nothing short of amazing in terms of their increasing capabilities.
- AGI in the sense of "this is a peer to humanity," a system we can have a truly intelligent conversation with about life and philosophy and things like that, I'm not so sure.
- I guess I come down on the Yann LeCun side of things, where, just knowing a little bit about how these systems work, it's like a super glorified, super well read autocomplete. I know there are efforts out there to introduce more real world knowledge and factual information into these LLMs to give them some grounding in reality. But I think it's going to be a long path to make sure that grounding is actually accurate and meaningful. We'll see.
The Path to AGI Using LLM
- I find it hard to believe that an LLM based architecture alone would be capable of producing AGI. However, there are groups looking to merge in knowledge systems that are aware of facts in the world and how they relate to each other. This goes back to some of the much older architectures for AI, like Cyc, where they were attempting to actually model reality: to create a big knowledge graph that links together all these different concepts and collectively models reality. This was a path that AI took a long time ago and abandoned when deep learning came along, in favor of more statistical, probabilistic methods, including, ultimately, LLMs.
- I can't help thinking, and I'm not alone in this, that we'll see some kind of hybrid, where the fantastic language generation capability of the LLMs is combined with an actual factual system that helps guide the LLM away from sometimes saying stupid things that ignore reality. We'll just have to see how that goes.
- It's one of those things where the more computing power and the more efficient these algorithms and architectures become, the more things we can try out. And of course, there's the whole wild card of quantum computing. Once quantum computing reaches a level of maturity where it can be practically applied to this, all bets are off as far as what's possible.
Actions People Should Take in the AI Era
- A little bit of an inspirational story. I love to give this example of radiologists. Seven or eight years ago, Geoffrey Hinton, who's a brilliant guy, one of the forefathers of deep learning and one of the people who helped make the first AI revolution happen, made this prediction that there would be no more radiologists within a few years; that we should forget about radiologists, because they can be replaced by computer vision based systems. Fast forward to today, and there are more radiologists than ever. So how could that be?
- The answer is that radiology is a high stakes field, in the sense that human lives are at risk, so there is a need for a human in the loop. And the fact that we automated so much using machine learning means that there are just more and more people out there looking to have radiology based procedures performed.
- These days, if you get an injury, it's like, just get a diagnostic MRI to find out if you might have some problem. It's become so commonplace, and that could not have happened without these computer vision advances. It's resulted in more radiology jobs, because there still needs to be a human in the loop. So it's a volume thing: even though a lot of tasks were automated, the volume increased. That's point number one.
- Point number two: there's this quote, by I think a former CEO of IBM, saying, "AI won't replace humans per se, but people who are able to effectively wield AI will definitely replace people who aren't." So if you're listening to this and you're going to take anything away from this whole conversation, it would be: really immerse yourself. Look into how you might be able to use AI to make your own job, and the way you do your job, better.
- There was a very recent study published by Harvard Business School called The Cybernetic Teammate that actually quantified this and found that somebody suitably trained, working with the help of AI, can perform the work of two people. And I think it might actually be higher than that. But it just goes to show: wow, you could be at least twice as productive using AI. So figure out how to do that. Because if you don't, someone else might.
- We've been looking into the use of AI internally, not just in our product but in the way we develop our technology and our software. At first, I was a bit of a skeptic, thinking, "hey, it's one thing to create a proof of concept or an early stage prototype using AI, but actual production code, no way! We're still going to need a whole bunch of software engineers doing things pretty much the way they've always done them." But now, I'm not so sure.
- We've had engineers do extraordinary things using the latest versions of these code generation tools. And I'm really starting to believe that the finding I mentioned before, that people can be twice as productive, absolutely holds for software engineering. We're using AI to develop software a lot more productively and faster than before. It's a pretty exciting time, because the underlying tools are also evolving so fast. We're finding that the latest versions of the code generation tools are way more capable than the ones from even a few months ago.
AI Engineers and AI-Enabled Engineers
- Even before the LLM revolution came along, it was always quite different managing data scientists or ML engineers, simply because a lot of what you're doing is exploration. It's not like traditional software engineering, where you have an idea, you design it, you specify it, and then you build it all according to spec, in a very deterministic process. With data science, you're trying to find out, "is this feasible?" Okay, I have some idea I can predict; that's a thesis you need to go out there and test. Kind of like science, right? That's why they call it data science. That's inherently quite different, because you have to embrace more uncertainty.
- What about all this AI vibe coding or AI enabled coding? How does that change things? Essentially, it's a force multiplier for people. Maybe to build a prototype these days, you don't necessarily have to be a software engineer anymore. You can get to a certain point without any software engineering skills whatsoever.
- What we're actually observing out there is that a lot of entrepreneurs, and even groups doing sales, are building prototypes where before they would have presented mocks or conceptual descriptions of what they intended to build. Now it's like, "no, we actually just went ahead and built it," because these automated coding systems make things so much easier.
- So there's this huge shift towards what a lot of the people behind the Agile process were advocating: you just have to prototype stuff, try it out, and test it with real customers. In the past, a lot of people didn't do that, because it was just too much work. Prototyping takes so much effort to make all these ideas real and then potentially throw them away, and we don't like throwing things away. Now the bar is so much lower in terms of the effort it takes to produce a working prototype that it's much more feasible: put a whole bunch of stuff in front of customers within two weeks or less, before you fall in love with the idea, then test it and throw it away if it doesn't work. That philosophy and that process will become way more feasible with AI, which is really exciting.
The Viability of Vibe Coding
-
Vibe coding is great, meaning just using these AI systems to generate code that you don’t even necessarily look at. You just prompt it and prompt it until something works. That’s great for doing proofs of concept and initial zero-to-one prototyping. I don’t think that’s a viable path to actually building enterprise software, simply because things will go wrong, as they inevitably do with AI-generated code.
-
I started playing with some of these vibe coding tools myself and, very quickly, I dug myself into a hole. Once I passed a certain threshold of complexity, it just became really hard to add new features. They had this magical button called “try to fix it,” and I was pressing that button a lot.
-
It’s got a place and I think it’s wonderful that people who don’t have a computer science background or are not experienced software engineers can now actually build technology. That’s magical, that’s incredible. But it only goes so far.
-
In the future, sure, vibe coding will become more powerful, it will become better. But I do think, at the end of the day, if you are producing enterprise grade software that’s used by real paying customers, you’re going to want to have people who actually understand the technology, even if they’re using AI to help them code, to actually fix things when they go wrong and to ensure that things are designed in a good way.
Hiring AI Engineers
-
We’re still at the early stages of this. What I observe, as a hiring manager, is that there are a lot of resumes out there with AI on them, because everybody wants to be perceived as someone who’s conversant in this, and they know that there’s a lot of demand. But when we start interviewing and testing people’s actual coding capabilities, because this is still relatively recent, it’s still difficult to find people who really do have experience with this and who have demonstrable ability in this area. I think that’s going to shift over time.
-
Just like in the old days, back in the nineties, it was hard to find people who knew HTML and who knew how to build full stack applications. And before too long, within a few years, everybody did. It was the norm. This is just a natural technology adoption curve where we’re moving into a more mainstream situation where everybody’s seeing the value. And the majority of software engineers out there are going to be conversant in this technology and before too long it’ll be ubiquitous. But we haven’t quite seen that in the hiring pool yet.
Important Engineer Attributes in the AI Era
-
I talk a lot about career development and about the qualities that I’ve observed, over the many years I’ve been working in this industry, that lead people to become successful and more senior. One of the big ones is curiosity: being inherently excited about learning new things and trying things out. That’s a quality that really serves people well in this new world of AI, because a lot of it is unknown, like trying to apply AI in ways that haven’t been done before.
-
Also people who can span multiple domains, like people who are able to think in terms of product, what makes a good product, what makes a well-designed product usability-wise, along with the ability to decompose that into commands that the AI is going to understand. This is a combination of different skill sets. It’s not just that you do a computer science degree and you’re good. It’s more like, hey, part of this is customer facing, knowing what customers would respond well to. Part of this is communication, being able to express those requirements well, like a product manager does. Part of this is UX design, and, of course, part of it is straight-up programming.
-
People who can think cross-functionally also will do really well in this new world, because they now have the force multiplier, in the form of these automated coding assistants and so on, to really be able to do the work of multiple people, potentially.
Important Leader Attributes in the AI Era
-
Every time there’s a technology disruption, it can be a little scary, simply because the processes and tools and behaviors that have served you well in your career to date might be under threat or about to be transformed. That can be exciting. It can also be scary and threatening. Like, “oh my God, what do I do? I can’t depend on the same set of skills and habits and behaviors that got me to where I am in my career.” The people who can take that uncertainty on board and step into the unknown are the ones who can thrive.
-
That applies at a company level as well. Every single company out there at this point is wondering, “what do we do about AI?” And it’s the ones that are bold and willing to disrupt themselves. That is the kind of thinking that is needed here. When this new technology wave comes along, what’s needed at the company level, as well as at the individual leader level, is, first of all, possessing that courage to step into the unknown yourself, but also inspiring your teams to do so and saying, “hey, if we don’t do this, someone else will, and we will get disrupted.” You can look at the history of this. Companies get disrupted because not all of them have the fortitude to jump on new technologies and disrupt themselves.
The Room for Juniors in the AI Era
-
There’s never a shortage of demand for people who are able to create technology. And AI makes creating technology easier. Yes, you could argue the field of programming is becoming more commoditized because of all this automation. But AI can’t necessarily take the place of having good product sense, or having the combination of programming skills and design skills and so on.
-
Think of it as a force multiplier: you’re able to do more, whether as an entrepreneur or as an engineer who’s part of a larger organization. If you’re able to wield these tools well, that’ll serve you very well.
-
We’re still in the early stage of this. So if you are able to, for instance, start a side project, a personal project, put it on GitHub or on your resume, and show that you have the ability to do this, even if it’s some personal project that you happen to be really excited about, definitely do that. Because that shows that you can do it, that you can adapt, and that you have some level of mastery or facility in this area, and that will serve you well as you look for jobs. Because there are employers looking for people who can do this right now, and they’re still hard to come by. Maybe not for too many more months or years, but there’s an opportunity to jump in if you’re really proactive, and potentially stand out from other candidates because of that.
Inspirational Story of a Successful Junior
-
Some of the most successful and interesting careers I’ve seen, of all the people I’ve worked with, have been people who really focus on just trying things out.
-
There was this one really junior product manager I worked with when I first joined Google. And, I encouraged them to become a user of our application, which at that time was this e-commerce product. And so I said, “hey, you should probably sign up as a merchant, start your own little fake business and try out the product.” And he ended up really getting into e-commerce. He built this company that ended up becoming successful and he ended up becoming a VP at Shopify.
-
I was giving my talk earlier this year at the company on career development, and I actually spoke to him and just said, “hey, how would you sum up your success?” Because it’s such an incredible career path that he followed, as an entrepreneur and now senior executive and all these things. And he just said, “you just got to try things out.” And he reflected back to me, “AI just enables so many more people to do this now.”
-
The kind of things that he did at the time required software engineering expertise, machine learning expertise. Now, you can potentially vibe code some of that stuff really easily. And so for people who have ideas, if they observe problems in the world, the gap between observing that problem and trying to solve it is now much smaller than it was. So whether you’re trying to impress some big company that you want to get a job at, or whether you actually want to start a new company, it’s just much easier to do that.
-
Learn from his experience and be inspired by his story. Because he was really underscoring the fact that it’s just much easier to try things out these days and trying things out is often what leads to great inventions and innovations in the world.
3 Tech Lead Wisdom
-
Curiosity. That’s super important.
-
Following your excitement in all things.
-
Just don’t settle for doing work that you don’t find personally inspiring, that you don’t feel like, “oh, I’m excited to get to work in the morning.” If you don’t feel that way, then think of other things that you might be more excited about.
-
And it sounds obvious, but I think that people get conditioned after a while to not doing that and just settling. It’s important to be excited about your work for a number of different reasons.
-
Find a mentor.
-
I think the majority of people don’t actually actively seek out a mentor. And it can be super rewarding.
-
I keep coming back to some of the people in my own career who have really helped me, even if it’s just a sentence or a catchphrase or a certain way of doing one thing, it can change your life over time, really. By just helping to reframe problems or deal with communication issues or whatever the case may be.
-
Over time in your career, that can have a really compounding effect. Seek out at least one mentor, ideally, multiple, over the course of your career and it’s really going to help you.
-
[00:01:27] Introduction
Henry Suryawirawan: Hello, guys. Welcome back to another new episode of the Tech Lead Journal podcast. Today, I have with me the CTO of Faro Health, Patrick Leung. So we have been hearing a lot about AI development recently. Faro Health itself is trying to apply AI on, you know, clinical trials and life sciences and health and all that. I think it’s gonna be really interesting to understand how AI can be applied in this type of work as well. So Patrick, welcome to the show.
Patrick Leung: Thanks, Henry. Great to be here.
[00:02:09] Career Turning Points
Henry Suryawirawan: Right. Patrick, I always love to invite my guests to share a little bit more about yourself, maybe, by sharing any career turning points that you think we all can learn from you.
Patrick Leung: Yeah, I mean, I guess the beginning of my career was very entrepreneurial. Like I joined a company that was in the very early stages and it went all the way to an IPO. And then back down to, you know, near zero again during the dotcom boom. And so that was a huge learning experience. And then sometime after that, I made the big decision to join a big company. I joined Google back in 2007. And so that was a huge career turning point, because I never thought I’d find myself at a big company, but like against all odds I did. And I ended up there for over 10 years.
[00:02:46] The Advancements of AI in the Past 10 Years
Henry Suryawirawan: So I think it’s really interesting as well. Back then when you were at Google, you worked on some of the AI technologies like Duplex, right? Since then you have been working a lot on AI-related stuff, even before this GenAI era. So what do you see in the last, I dunno, maybe 10 years or maybe eight years? What are some of the advancements that you’ve seen, working on AI in the past compared to now? What do you see as the big differences?
Patrick Leung: Yeah, I mean, it’s easy to forget now, because this whole large language model revolution has just completely taken over everything. And so when most people on this planet think about AI, they’re really thinking about large language models, as opposed to AI in general, like data science in general. What I saw back at Google was that there was already an AI revolution well in full swing, which was mostly related to image recognition. Because processing power passed a certain threshold, we became much more able to interpret images using AI.
And so this resulted in all sorts of amazing advances that started showing up in things like Google Photos and other products like that. But now, all that’s forgotten, because it got completely superseded or overshadowed by this more recent large language model revolution. There have been multiple waves here, and it’s been really interesting to see how, as processing power has increased, new forms of AI come out and start solving certain problems really, really well.
[00:04:13] Non-LLM Types of AI
Henry Suryawirawan: Yeah, it’s interesting that LLMs, or generative AI, have become like the bigger category, all the things that people talk about regarding AI. In the past, I remember people talked about machine learning, recommendation systems, image recognition, just like what you said. It seems like all of those kind of toned down a little bit. Do you still see advancement in those areas, or is generative AI the go-to AI strategy that we are pursuing as a human species, I guess?
Patrick Leung: Yeah, I think it’s pretty extraordinary how general purpose and powerful these models have become. And of course, with a lot of the image generation, there are different methods involved there. It’s not necessarily LLMs, it’s more like diffusion models. But these multi-modal large language models are pretty incredible, going beyond human language into things like genetic coding. And of course, computer coding, like actually programming, has become a huge use case for these language models. They originated out of the Transformer architecture that Google and others introduced a while ago, which originally was applied to the use case of translation from one human language to another. So from that initial use case, they’ve become much more general purpose. It’s pretty incredible.
But I do think, I continue to think that there’s always gonna be many, many use cases out there for which other forms of machine learning that have been sort of largely maybe overlooked during this whole LLM revolution are gonna continue to be really useful for solving certain other types of problems involving, particularly involving quantitative information. Like predicting numbers and stuff like that.
[00:05:42] The Google Duplex
Henry Suryawirawan: Right. So back then you were also in charge of building Google Duplex, right? I remember when it was introduced at Google I/O or something like that, that was really incredible to me. Did you already use something like LLMs, or was it a different technology altogether?
Patrick Leung: Yeah, I mean, it’s really interesting because it didn’t. It didn’t use LLMs at all. It predated that. And so in some ways, it was, I wouldn’t say a prototype, it was almost a preview of what was to come, because people started freaking out when we launched this product ‘cause it sounded so lifelike. And the press had this huge media frenzy around it, right? Because Google introduces some kind of AI that sounds like a human being, and it’s like, oh, it’s unethical, or oh, it’s game over, the bots are gonna take over the world. There were even some prominent members of the press who thought that we faked the whole thing. So it was super interesting to be involved in a media frenzy situation like that. It was on the Good Morning Show. It was on Colbert. It really hit this very, very big nerve in popular culture.
And yet, it wasn’t using any of this modern LLM architecture at all. It was using other methods that were much, much simpler. And so since then, it’s just gotten way crazier in terms of what these systems are capable of. But it was really a privilege and just so much fun to be involved in a system that was very much at the forefront at the time in doing this kind of thing.
Henry Suryawirawan: Yeah. I think Google itself has rebranded those kinds of things, like becoming Google Assistant. And now Google Assistant is becoming Gemini, right? I think all those speakers that they introduced are probably using some of the things that you used in Google Duplex as well, because the human tone is so natural when I chat with Gemini using voice, right?
[00:07:28] The Use of AI in Faro Health
Henry Suryawirawan: So I think it’s really cool technology that comes out of those innovations. And today, specifically, we are going to talk about how to apply AI in health, or life science and clinical development. So yeah, about Faro Health, can you share a little bit more about how Faro Health uses AI in terms of solving your problem?
Patrick Leung: First of all, just in brief, Faro has developed a SaaS platform for designing clinical trials. This is a really intensely complicated process. A clinical trial can take upwards of 10 years, and it costs on average $1 to $2 billion per drug to get to market. These are huge numbers and huge timeframes. And so we developed a platform that allows biotech companies and pharma companies to design better clinical trials. We’re applying AI in a couple of different ways.
First of all, one of the big time and cost sinks in the whole development process is clinical writing: writing really complex clinical documents, like the clinical protocol that essentially defines the drug trial. It lays out the whole design, and the schedule, and all the things that need to happen, all the details. So this is a really complex document. It can be north of 150 or 200 pages and typically takes an entire team of people to write. It takes months. And starting last year, we began applying large language models to essentially automatically generate various sections of the clinical protocol document.
And you might think, well, what’s the big deal? It’s easy to ask GPT to go and generate a document, right? We do it a lot of the time to write better emails or to do my homework, or whatever the use case is, right? But it turns out that writing clinical grade documentation that passes muster with regulatory groups like the FDA, where the stakes are really high, we’re talking about human lives and human safety, is incredibly challenging. And so we went through many iterations when we started designing this system, to try to figure out the best way to produce really high quality clinical output that meets all the needs and requirements of a clinical protocol document. This resulted in a lot of new architecture and new processes and all sorts of interesting things to make that happen.
[00:09:44] Tackling AI Hallucination for Clinical Documents
Henry Suryawirawan: Right. It’s very interesting that you mentioned applying generative AI, or LLMs, to solve something that is highly regulated. It’s also kind of mission critical, so to speak, because it involves people’s lives. People these days know about the danger of hallucination and all that. How do you actually tackle these kinds of things when applying this to such a highly regulated and mission critical kind of problem?
Patrick Leung: Yeah, I mean, at first, we sort of naively assumed that we could just adjust our prompts. Every time we saw a hallucination, we were like, oh, we need to go back and change the prompt. But it turned out that what we really needed was to evaluate the output comprehensively and systematically. And so when you generate, say, a PK sampling section or an ECG section, some highly technical part of the protocol doc, you need to actually evaluate the output according to a whole bunch of criteria. And do this per subsection, because each subsection is so different and has different technical details that are fed in through our Faro Study Designer tool. We have this whole SaaS platform that’s modeling the trial in great detail, and we pass that data into the LLM to generate the protocol document sections.
And so we found that we had to really rigorously query the results and say, well, did you include this? Did you have this tone? Did you phrase this in this way? We have this very specific checklist for every single subsection that we run to make sure that the quality is high. And we’ve also designed this in such a way that customers can actually make their own checks. So if they have additional requirements that are particular to them, they can augment or change the ones that we ship with our product. We’ve made it very highly configurable to match the customer’s needs.
So this is what we found was needed. And over time, that initial evaluation architecture has evolved into more of an agentic architecture where there’s all sorts of different agents that are examining the content of the generated clinical protocol section and looking at it for stylistic consistency with the rest of the document and tone and the level of detail and formatting and stuff like that, in addition to the factual content to prevent hallucinations.
And also, in addition to hallucinations, we found that sometimes the LLM just misses things. It doesn’t make things up, it just misses things. So we have to check for that as well. There’s a whole bunch of examination that occurs once you generate the output, and then you feed that feedback from the evaluation models back into another iteration of the generation cycle. So we developed this whole architecture that’s way more complex than just sending queries to ChatGPT, in order to get the system actually working.
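The loop Patrick describes, generate a subsection, run it through a per-subsection checklist, then feed the failed checks back into the next generation pass, can be sketched roughly as follows. This is a minimal illustration, not Faro’s actual system: the section names, checklist items, and the stubbed `generate_section` function are all hypothetical stand-ins for real LLM calls and real evaluator agents.

```python
def generate_section(section: str, feedback: list[str]) -> str:
    """Hypothetical stand-in for an LLM call that drafts one protocol subsection.

    In a real system this would prompt a model, including any evaluator
    feedback from the previous iteration. Here we just pretend the model
    addresses each piece of feedback by mentioning it.
    """
    draft = f"{section}: draft"
    for item in feedback:
        draft += f" [{item}]"
    return draft

# Per-subsection checklists: each check is a (label, predicate) pair.
# Real checks would be far richer (tone, formatting, factual content).
CHECKLISTS = {
    "PK sampling": [
        ("mentions timepoints", lambda text: "timepoints" in text),
        ("mentions sample volume", lambda text: "volume" in text),
    ],
}

def evaluate(section: str, draft: str) -> list[str]:
    """Return the labels of all failed checks for this subsection."""
    return [label for label, check in CHECKLISTS[section] if not check(draft)]

def write_section(section: str, max_iters: int = 5) -> str:
    """Iterate generation until every check passes, or give up."""
    feedback: list[str] = []
    for _ in range(max_iters):
        draft = generate_section(section, feedback)
        feedback = evaluate(section, draft)
        if not feedback:  # all checks passed
            return draft
    raise RuntimeError(f"{section}: checks still failing: {feedback}")

result = write_section("PK sampling")
print(result)
```

In the system Patrick describes, the evaluators are themselves LLM agents examining tone, formatting, level of detail, and factual content, and customers can add their own checks to the checklist rather than editing code.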
[00:12:25] Building the Evaluation Process on AI Results
Henry Suryawirawan: Yeah, you mentioned missing information, and also, again, the danger of hallucinating. So how do you actually do these kinds of checks? Is there still a human involved, like, at the end, someone who is capable enough or expert enough to review the output? Because I would imagine these kinds of documents are highly technical, right? Sometimes you probably need a little bit more time to research whether it’s factual or accurate or not. Or do you completely rely on AI agents to do that for you? And if so, how do you build this kind of evaluation using AI? Is it something like the eval models that people are using to solve certain kinds of programming challenges and things like that?
Patrick Leung: Yeah, I mean, in this respect, what we’ve built with clinical protocol writing is kind of similar to other AI systems I’ve worked on in the past where in the early stages when you’re first getting the AI model to kind of start producing interesting content, you need to be very, very hands-on and have a human examine everything with a fine tooth comb. And then after a while, you can codify that feedback into these checks, and into queries that you can then use to sort of automate the process. And after a while, you build enough confidence against a range of different inputs. You know, looking at different types of therapeutic areas to make sure that the content is right. Because looking at a cancer trial, like an oncology trial versus say an immunology trial or infectious disease trial, it’s just kind of really different. And so we wanna make sure that the models we build are applicable across a range of different types of clinical trials.
And so once we reach a level of confidence that, oh, it looks like, you know, in the 80% case, the content is pretty good. Then we’re confident enough to share this with our customers, our early adopter customers, and sort of get them to evaluate themselves. And they probably come up with some issues as well. And then we sort of gradually improve the quality of the model using that process. But we have quite an in-depth process involving engineers, you know, data scientists, clinical writers, QA, all sorts of different people involved in this to sort of bootstrap the process and get everything running. And then after a while, it becomes more automated as we refine the model.
[00:14:28] AI as a Research Assistant
Henry Suryawirawan: Thanks for sharing that. So apart from like content writing, right, is there anything else where you apply AI as part of the work?
Patrick Leung: Oh yeah. So we’re also looking at developing what we’re calling an AI research assistant. The idea is that, right now, actually designing the clinical trial is quite laborious: choosing what kind of activities you want to include in the schedule, designing the population schema, all these kinds of things, objectives and endpoints. These all require a lot of care and, in many cases, a lot of research. Going out there on the internet, checking out the latest clinical research that’s being published. Identifying and taking a look at comparable trials that are out there. Learning from them, reading through hundreds and hundreds of pages.
This is super laborious, and we can actually automate a lot of that. So we are applying AI to essentially help automatically identify trials that are similar to the one you are working on, and then aggregate metrics and stats and do comparisons. Look at what happened with these trials. Did they require amendments? Were they able to enroll patients? Were they able to stay on track? Did they result in a successful outcome or not?
And so clinical developers or clinical scientists who are working on these trials can really accelerate the process by which they come up with an optimal model. And of course, we can provide insights. We can say, oh, here’s the patient burden of this proposed trial. Here’s your site complexity and cost. Lots of different metrics that help guide what you’re doing and help you come to the right balance of patient experience, site feasibility, and overall cost and time to market.
So this is a really multi-variate kind of thing that people are trying to optimize when they put together these trials. And it just staggers me that many people, many groups, still do this in Microsoft Word. My catch cry is: not even Excel. People use Microsoft Word to essentially model, not just author, but model these trials.
And I think that, you know, people deserve a better tool than that. So that’s what we’re developing with the help of AI is like really modernizing and shifting the way that people develop these and design these trials in such a way that it makes it much, much quicker and with much better outcomes.
[00:16:40] The Need of Building Custom AI Model
Henry Suryawirawan: Yeah, so since this is a very niche, specific kind of problem domain, right? People know things like ChatGPT and Gemini and all that, which are more like general purpose AI models. Do you actually have to fine-tune, or even build your own model, to work on this specific niche? Because one of the most important things when using general purpose AI is what kind of data the models were trained on, right? For example, if it’s never seen this kind of problem before, I’m sure the AI will hallucinate even worse than if you trained it with specific data. So do you actually have to tweak or fine-tune the model, or even come up with your own model?
Patrick Leung: Yeah, we haven’t found the need to do that yet. The good news is that a lot of the major LLM vendors, like Google and Anthropic, are actually focusing quite a bit on the life sciences space and producing models that are specific to that domain. And we’ve designed our system in such a way that it’s agnostic to the model. We’re not fixed on GPT or Gemini or Anthropic or anything like that. And because we have this evaluation system, we can pretty easily evaluate new LLMs and figure out which ones are gonna produce the best clinical output. I would say it’s quite possible that in the future we might want to have our own model with respect to vectorization, which is a process that all the text goes through before it gets indexed. We built this retrieval augmented generation (RAG) system that allows customers to upload documents that we can then parse and use to generate certain pieces of the protocol document that require really technical information.
And so that’s an area where we might want to develop our own vectorization scheme that’s really oriented around clinical terminology. But so far, we haven’t seen the need to even fine-tune an existing model. Fine-tuning these days can be quite an expensive and laborious process, and we haven’t really seen the need to do that yet. What we’ve really seen is that this evaluation model, combined with the ability to fine-tune prompts, is enough to be able to produce clinical grade documentation. It might change in the future as we get into different documents and domains and so on. But so far that’s been really good for us.
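The RAG flow Patrick mentions, parse uploaded documents, vectorize the text, index it, then retrieve the closest chunks to ground generation, can be sketched with a toy index. The bag-of-words “embedding” below is a deliberately crude stand-in for a real embedding model; the class, documents, and query are all illustrative, not Faro’s implementation.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy vectorization: a lowercase bag of words.

    A real system would use a learned embedding model here; a custom
    clinical-terminology scheme would plug in at exactly this point.
    """
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class ToyIndex:
    """Chunk documents, vectorize each chunk, and retrieve by similarity."""

    def __init__(self) -> None:
        self.chunks: list[tuple[str, Counter]] = []

    def add_document(self, doc: str, chunk_size: int = 8) -> None:
        words = doc.split()
        for i in range(0, len(words), chunk_size):
            chunk = " ".join(words[i:i + chunk_size])
            self.chunks.append((chunk, embed(chunk)))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.chunks, key=lambda c: cosine(q, c[1]), reverse=True)
        return [chunk for chunk, _ in ranked[:k]]

index = ToyIndex()
index.add_document(
    "ECG monitoring is performed at screening and at each dosing visit. "
    "PK sampling occurs pre-dose and at 1, 2, and 4 hours post-dose."
)
context = index.retrieve("When does PK sampling occur?", k=1)
print(context)
```

The custom vectorization scheme Patrick floats would replace `embed` with a model tuned so that clinical synonyms (say, “ECG” and “electrocardiogram”) land near each other, which a plain bag of words cannot do.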
[00:18:50] The Huge Impact of AI in Clinical Trials
Henry Suryawirawan: Yeah, so from what I know, when you do clinical trials, like you mentioned, it could be years, right? It could be a decade or even more, not to mention having to do the testing and wait for the effects over multiple years. So using these kinds of technologies, what do you see as the potential time savings, or maybe even effort savings? You mentioned up to $2 billion just to finish the whole clinical trial. What are some of the things that you think can be reduced or optimized by having these kinds of AI tools?
Patrick Leung: Yeah, I mean, the interesting thing is that when you’re talking about a project that takes upwards of 10 years or more to execute, small changes in the design upfront can result in huge cost savings. We actually published a paper with Merck, maybe we can include a link to it; it’s somewhere on my LinkedIn profile, that analyzed the benefit of using the Faro Study Designer to optimize clinical trial design. And we found that the savings over the course of the lifetime of the trial can be north of a hundred million dollars. Obviously, Merck is a huge, top 10, top 20 pharma company, and they do really significant trials, but it gives you an idea of what’s possible, right?
And this was even before some of these AI features that we have coming up, like clinical writing and so on. So the true savings actually could be even more than that when you factor in the time savings as well of accelerating the clinical writing process and making the output better. Because getting to market faster is hugely, hugely valuable to a pharma company, right? Like every day is like hundreds and hundreds of thousands of dollars of additional revenue that you can realize sooner when you launch the drug faster. So it’s not only cost savings, it’s also getting to market faster with the new treatment.
So we’re really excited about that. ‘Cause there’s also the potential, over time as this technology becomes widespread, for smaller players to come in here and really try things out. And it’s gonna enable a wider range of different treatments and trials to become more feasible. And we’re really excited by that. ‘Cause there’s so much innovation going on in the very early stage drug discovery realm, but it gets sort of blocked or delayed when it comes to actually going through trials. And so we wanna really widen the pipeline there. And so hopefully there’ll be a lot more treatments for more conditions, and lives saved and suffering alleviated, and all the good things.
[00:21:15] The Regulations on Applying AI Technologies
Henry Suryawirawan: Yeah, and hopefully reduce the cost of the kind of medicine that is produced out of the clinical trials as well, so that people can get many more options in terms of availability. And also the cost is not as expensive. So you mentioned regulations. I think in some parts of the world, they are still kind of cautious about applying AI technologies, again, especially if it affects a lot of the population. So what do you see as the stance from regulators? Or do you even have to convince them hard to actually ensure that the process is robust enough?
Patrick Leung: I think that, by their very nature, regulatory bodies are very, very oriented around evaluating the quality of what’s presented to them. And so, at least in my mind, like if the quality of the output is good, at least as good as human generated protocols, and we strongly believe that it will be, then the regulatory bodies will be okay. I think there are various risks involved in using AI. There’s many, many different types of risks and so we can mitigate a lot of them. Like for instance, we are running our AI models in a sort of a private environment where data is not shared with any other groups and so on. So we don’t have any of those risks of leakage and people being able to kind of discover what other companies are doing or submitting. So there’s that risk that we can mitigate.
I think there’s a bunch of others involving potentially bad actors coming in and trying to sort of discover things they shouldn’t be discovering by manipulating queries and stuff like that, that’s mitigated by the fact this is an enterprise tool. It’s not a consumer tool. And so only people who have signed up as Faro, you know, customers will be able to use the system. So we just have to go case by case and look at all the different risks involved in AI and mitigate them.
And we have an AI governance policy that we’ve been working really hard on to sort of really alleviate our own customers’ concerns. Because however concerned the FDA is, you know, our customers are even more concerned about all these things, because it’s their crown jewels, it’s their intellectual property that they’re, you know, essentially developing using our platform. And so we take this very seriously and we want to actually at some point probably publish our AI governance policy so that other people can learn from it, and we can help raise the bar across the board in our industry. Because we just think this is really important for everybody to get right.
[00:23:28] AI Success Stories in the Life Science Industry
Henry Suryawirawan: Thanks for sharing that. So any cool success stories that you have so far, you know, with Faro? Or is there anything that is already applied in the, you know, real world or in the life science industry?
Patrick Leung: Yeah, I mean I think that, again, the Merck study is sort of what we have as far as publicly available case studies of just how impactful this technology can be. I would say just anecdotally like we have customers using this document generation system and they’re super impressed with how quickly it’s able to come up with documentation that normally takes potentially months and it can be generated in a matter of, in some cases, minutes. That’s a startling, extraordinary kind of multiplier in terms of productivity. We’re just really excited about that. You know, like I think that in some ways the industry is waiting for a really widespread and impactful application of AI. Cause oftentimes when you get past the surface level of like, oh, I played with GPT, and, you know, it’s really cool and interesting. Like once you really start to get serious about applying AI, then that can rapidly turn into disappointment or sort of like, oh, you know, it doesn’t go deep enough.
And there’s all these stories of people who start building with AI, but then they kind of get disillusioned, because it turns out to be way harder than they think. And so that’s where, I guess, you know, companies like Faro come in, is that we’ll take it to the next level. You know, we will figure out all the hurdles and the challenges and the risks involved in actually applying AI to do something really impactful, like automate clinical, you know, protocol documentation or optimize trial design. And so that our customers don’t have to do that themselves, because that requires a lot of data science expertise, software engineering expertise, clinical design expertise and so on that they may not wanna sort of invest in becoming a tech company themselves. And so we’re really excited about that.
[00:25:16] The Possibility of AGI
Henry Suryawirawan: Yeah, so I think these days some people also kind of envision the AGI thing, right, that could happen in some years. I dunno whether it’s short term, medium term, long term. So for someone who has worked in this AI field for quite some time now, how do you feel about this AGI? Is it gonna happen? Are you bullish about it or do you have some kind of skepticism? So maybe share a little bit of your thoughts here.
Patrick Leung: Yeah, it seems like, you know, in the past, everybody thought AGI was really, really far away. Like, the joke is that for the last 50 years, you know, AGI has been 20 years away. So, you know, it’s sort of like always in the future. But now there’s a lot of very prominent people in this field who believe that it’s actually only a matter of like three to five years away, maybe even shorter, who knows? And I am not in that camp. Like I just think that…
There’s a test out there called ARC-AGI, A-R-C A-G-I, and it sort of presents itself as being like, oh, this is a test, you know, for AGI. And you look at what this test actually involves, and many of the questions are the kind of thing a bright kid could answer, you know. Like moving blocks around on a grid in such a way that the blocks fill in a space. Stuff that a smart kid should be able to do. And, you know, there are systems that are gradually getting better at this, but it’s not yet human level. To me, that just means that we’re so far away from having something that’s genuinely creative and intelligent.
Now, I will say that there have been really brilliant moments in the history of AI. Like if you remember, you know, AlphaGo, which was this non-LLM based system, it was using reinforcement learning. It was using other methods of AI that are now completely sort of forgotten in the wake of the LLM revolution. But AlphaGo, there was a moment when it was playing against the human world champion and it made this move that nobody in the 3000 year history of this game had ever seen before, and all the experts were just gasping, like, oh my God, this is incredible! What a brilliant move! Like how did it come up with that?
And so, you know, AI can, under the right conditions, be capable of behaving like a genius, but that was such a specialized example. And so I’m sure, as time goes on, there’ll be more examples of that where, oh, there’s a flash of brilliance and AI came up with some mathematical proof that was beyond, you know, what any scientist has come up with. But I think in terms of just having an intelligent conversation and really being inspired, and looking at a piece of art and truly feeling like this belongs amongst the classics, I think we’re ways away from that. But I could be wrong. I mean, the progress even since these large language models were first introduced has been really nothing short of amazing in terms of the increasing capabilities and so on.
But I think AGI, in terms of like, this is a peer to humanity, this is a, you know, a system that we can have a really truly intelligent conversation with about life and philosophy and things like that, I’m not so sure. And I guess I come down on the Yann LeCun side of things, where, just knowing a little bit about how these systems work, it is sort of like a super glorified, super, super well read autocomplete. And I know that there are efforts out there to introduce more real world knowledge and factual information into these LLMs to give them some grounding in reality. But I think that’s gonna be a long path to make sure that that grounding is actually super accurate and meaningful. But we’ll see.
[00:28:36] The Path to AGI Using LLM
Henry Suryawirawan: Yeah. And I mean those kind of LLM GenAI leaders, thought leaders, right, think that maybe the path to AGI is using LLMs. I’m personally also not very convinced, simply because of the way it works, right, and so many probabilistic things that could, uh, go wrong as well. So do you think LLMs or generative AI kind of things are gonna be the path to AGI, or is it more like combining all the different, you know, AI, machine learning kind of capabilities that we have? And I haven’t really heard about people, you know, trying to combine different models, you know, LLMs or maybe other things, neural networks and all that, being combined to kind of solve a particular problem. So do you still see some room where we can apply, you know, more AI advancements on the path to AGI?
Patrick Leung: Yeah, I mean, I find it hard to believe that an LLM based architecture alone would be capable of producing AGI. However, as I said before, you know, there are groups that are looking to merge in knowledge systems that are aware of facts in the world and how they relate together. Kind of going back to some of the much older architectures for AI, like CYC, where they were attempting to actually model reality. Like create this big huge knowledge graph that links together all these different concepts that collectively model reality. This was sort of a path that AI took a long, long time ago that was kind of abandoned when deep learning came along, in favor of more statistical, probabilistic methods, including LLMs, ultimately.
So I think, you know, I can’t help thinking, and I’m pretty sure I’m not alone in this, that some kind of hybrid could work: where we have this fantastic language generation capability of the LLMs combined with an actual sort of factual system that helps guide the LLM to not say stupid things that ignore reality. And we’ll just have to see how that goes. You know, it’s one of those things where the more computing power and the more efficient these algorithms and architectures become, the more things we can try out. And of course, there’s the whole wild card of quantum computing. Once quantum computing reaches a level of maturity where it’s able to be practically applied to this, then all bets are off as far as what’s possible.
[00:30:43] Actions People Should Take in the AI Era
Henry Suryawirawan: Yeah. So at one time, I feel excited, but at the other time, I feel a little bit scared as well. You know, so many people are thinking about jobs being changed, you know, people’s roles are gonna be diminished and all that. And obviously, you also have dealt with these kinds of worries as well. What do you think people should do in this kind of era where things are moving so fast? Every other day, I think you might hear about cool things being applied, using AI to improve productivity, remove redundancies, and all that. So yeah, what do you think people should do in this era?
Patrick Leung: Yeah, I think, first of all, maybe a little bit of an inspirational story. Like I love to give this example of radiologists. So, you know, seven or eight years ago, Geoffrey Hinton, who’s a brilliant guy, one of the forefathers of deep learning, one of the people who really helped make the first AI revolution happen, he made this prediction that, you know, there would be no more radiologists within a few years, that we should just, you know, forget about radiologists, because they can be replaced by computer vision based systems. And fast forward to today, and there’s more radiologists than ever. So how could that be?
And the answer is, well, you know, radiology is a high stakes kind of field in the sense that there are human lives at risk here, and so there is a need for a human in the loop. And the fact that we automated so much using machine learning means that essentially there’s just more and more people who are out there looking to actually have radiology based procedures performed. Like these days, if you get an injury, it’s like, oh yeah, just get an MRI, you know. Just get a diagnostic MRI to find out if you might have some sort of problem. It’s become so commonplace, and that could not have happened, I don’t think, without these computer vision advances. And it’s resulted in actually more radiology jobs, because there still needs to be a human in the loop. And so it’s kind of a volume thing where, even though a lot of tasks were automated, the volume increased. So that’s point number one.
And I think point number two is there’s this quote by I think a former CEO of IBM saying, you know, AI won’t replace humans per se, but people who are able to effectively wield AI will definitely replace people who aren’t. And so if you’re listening to this, if you’re gonna take anything away from this whole conversation, it would be, really immerse yourself. Like, look into how you might be able to use AI to make your own job better, to make the way you do your job better.
There was a study actually, a very recent study published by the Harvard Business Review, called The Cybernetic Work Companion, something like that. If you google Harvard Business Review and cybernetic, you’ll find it. And it’s super interesting. Like they actually quantified this and said, oh, you know, somebody who’s suitably trained can, with the help of AI, actually perform the work of two people. And I think it might actually be even higher than that. But it just goes to show, like, wow, you could be at least twice as productive potentially using AI. So figure out how to do that. Because if you don’t, someone else, you know, might. And so that’s what I would say about that.
Henry Suryawirawan: Yeah. So definitely you have to give it a try, right? And know what are the possibilities. Because sometimes, like even out of our imagination, our current imagination, right? You wouldn’t think that these kind of things can be solved by AI or some kind of a tools that can actually shortcut, you know, the time and effort required, right? So knowing just that possibilities, I think is also really important. And especially if you have used the system, right?
Patrick Leung: I mean, as you can imagine, we’ve been really looking into the use of AI internally, not just in our product, but internally in the way we develop our technology and our software. And at first, I was a bit of a, I would say, a skeptic. You know, thinking like, hey, it’s one thing to create a proof of concept or an early stage prototype using AI, but actual production code, no way! You know, we’re still gonna need a whole bunch of software engineers doing things pretty much the way they’ve always done them to do that. But now, I’m not so sure. Like we’ve had engineers do extraordinary things using the latest version of these code generation tools. And I really am starting to fully believe that the Harvard Business Review article I mentioned before, about how people can be twice as productive, absolutely holds for software engineering. And so, yeah, I mean we’re practicing what we preach. We’re using AI to just develop software a lot more, you know, productively and faster than before. And it’s a pretty exciting time because the underlying tools also are evolving so fast. We’re finding that the latest versions of the code generation tools are just way more capable than the ones even just, you know, from a few months ago. So that’s also really exciting, to be surfing that wave of advancement in AI in that way as well.
Henry Suryawirawan: Yeah. Personally, as someone who has been around in the industry for quite some time, right? I found myself during this transition also a little bit of a skeptic at first, but then actually seeing maybe the junior people doing stuff that I couldn’t imagine. Sometimes I feel like, wow, amazing. Sometimes I have to, you know, switch my perspective to be more optimistic rather than skeptical, right? I think this is also maybe a challenge for those of you who have been around in the industry, because we used to think this is how we solve things. But now, I think the possibilities are there, right? So as long as you can imagine it, you give it a try, maybe AI could help you in some parts of the job, and I think your productivity can be improved as well.
[00:35:48] AI Engineers and AI-Enabled Engineers
Henry Suryawirawan: So you mentioned about engineers, right? So let’s switch a little bit to that side. So is there any difference managing or leading AI engineers versus typical software engineers? You know, like things like backend, frontend and those kind of things.
Patrick Leung: Yeah, I mean, even before the LLM revolution came along, it’s always been quite different managing data scientists or ML engineers, as they sort of used to be called, right? Simply because a lot of what you’re doing is exploration. Like a lot of what you’re doing is, you had this idea like, oh, you know, we can use data science to solve some problem. But it’s not like traditional software engineering where you have this idea, you design it, you specify it, and then you build it all according to spec, and it’s a very deterministic process. With data science, in general, you know, you’re just gonna find out, is this feasible? Okay, I have some idea I can predict, let’s just say I can look at a whole bunch of historical trials and I can start to predict things like amendments or outcomes of any kind. That’s a thesis that you need to go out there and just test. Kind of like science, right? That’s why they call it data science.
And so that inherently is quite different because you have to embrace more uncertainty. Now, I think maybe people also wanna sort of know, well, what about all this AI vibe coding or AI enabled coding? Like how does that change? And I think that what it does is essentially it’s a force multiplier for people. So maybe to build a prototype these days, you don’t necessarily have to be a software engineer anymore. You can get to a certain point without any software engineering skills whatsoever. So that’s kind of different.
What we’re observing out there actually is that there’s a lot of entrepreneurs, and even sort of sales groups out there, who are basically building prototypes where before they would’ve been presenting mocks or, you know, conceptual descriptions of what they intend to build. And now it’s like, no, we actually just went ahead and built it, because these automated coding systems make things so much easier.
So there’s this huge shift towards, essentially, what a lot of the people who are really behind the Agile process were advocating, which is you just gotta prototype stuff, try it out, and test it with real customers. And I think in the past, a lot of people didn’t do that because it was just too much work. It was like, well, you know, prototyping just takes so much work, trying to make all these ideas real and then potentially throwing them away. We don’t like throwing things away. Whereas now the bar, as far as the amount of effort it takes to produce a working prototype, is so much lower that it’s much more feasible to do what people like Marty Cagan, you know, out there say, which is to just prototype a whole bunch of stuff, you know, within two weeks or less, before you fall in love with the idea. Just test it out and throw it away if it doesn’t work. So that philosophy and that process will become way more feasible with AI, which is really exciting too.
[00:38:37] The Viability of Vibe Coding
Henry Suryawirawan: Yeah, definitely, makes sense, right? So the process, the iteration, now can be much faster. And even people who are not that skilled in development can now do software development in some sort of form, right?
So as a CTO yourself, you mentioned about vibe coding. What is your view about vibe coding? Do you see your engineers doing some kind of vibe coding? And if so, right, is there any danger that you foresee and how do you actually mitigate that?
Patrick Leung: I think vibe coding is great, meaning just using these AI systems to generate code that you don’t even necessarily look at. You just prompt it and prompt it and prompt it until something works. I think that’s great for doing proof of concept, initial zero-to-one kind of prototyping. I don’t think that’s a viable path to actually building enterprise software, simply because things will inevitably go wrong, you know, with AI generated code. Like I started playing with some of these vibe coding tools myself and, very quickly, I dug a hole. Once I passed a certain threshold of complexity, it just became really hard to add new features. They had this magical button called, like, try to fix it, and I was pressing that button a lot.
And so, I don’t know, like I think it’s got a place, and I think it’s wonderful that people who don’t have a computer science background, or who are not, you know, experienced software engineers, can now actually build technology. Like that’s magical, that’s incredible. But it only goes so far. And I think, in the future, sure, vibe coding will become more powerful, it will become better. But I do think, at the end of the day, if you are producing enterprise grade software that’s used by real paying customers, you’re gonna want to have people who actually understand the technology, even if they’re using AI to help them code, to actually fix things when they go wrong and to ensure that things are designed in a good way. At least, that’s my view.
Henry Suryawirawan: Yeah, so you mentioned building enterprise systems, right? So the things that are complex and maybe need to evolve over time, right? Because I still don’t know whether, you know, just simply doing vibe coding can actually evolve your system in a predictable manner, right? Rather than, you know, keep prompting AI until it gets fixed, right? I think the code that gets produced also can be quite terrible, if humans are looking at it, right? Simply because it maybe doesn’t adhere to certain design patterns. There’s no refactoring done in a concerted effort, I guess. So I think vibe coding definitely is wonderful for things like POCs, or things that you’re not looking at, or something that is more siloed, I guess, where you can just change the whole thing once you think the quality is not there.
[00:41:03] Hiring AI Engineers
Henry Suryawirawan: So I think also another thing when we hire engineers, right? Do you see any kind of difference these days in terms of, I don’t know, attitude, skillset, or non-technical skills that you see when you hire AI engineers?
Patrick Leung: Yeah, I mean, I think that we’re still at the early stages of this. And what I do observe, as a hiring manager, is there’s a lot of resumes out there with AI on them. Because everybody wants to be perceived as someone who’s conversant in this, because they know that there’s a lot of demand. But when we start interviewing, when we start testing people’s actual coding capabilities, because this is still relatively recent, it’s still kind of difficult to find people who really do have experience with this and who have demonstrable ability in this area. And so I think that’s gonna shift over time.
Just like in the old days, you know, way back in the nineties, it was kind of hard to find people who knew HTML, and then later, people who really knew how to build full stack applications. And before too long, within a few years, everybody did. It was the norm. And so I think this is just a natural technology adoption curve where we’re moving into more of a mainstream situation where everybody’s seeing the value. And, you know, the majority of software engineers out there are going to be conversant in this technology, and before too long it’ll be ubiquitous. But we haven’t quite seen that in the hiring pool yet. But I think, as I said, it’s sort of inevitable.
[00:42:26] Important Engineer Attributes in the AI Era
Henry Suryawirawan: Maybe from the existing engineers that you have in the company, do you see someone that stands out simply because, you know, they can combine all these possibilities, and also with a certain attitude? Do you see some kind of persona of engineers that stand out that you think you can share with us here? Like what makes a better engineer now that AI technologies are there?
Patrick Leung: Yeah, I mean, I talk a lot about career development and about the qualities that I’ve observed over the many years that I’ve been working in this industry that lead people to become successful and sort of more, quote, senior in some way. And one of the big ones is curiosity. It’s just being kind of inherently excited about learning new things and trying things out. And that’s a quality that I think really, really serves people well in this new world of AI. Because a lot of it is kind of unknown. It’s like just trying to apply AI in ways that haven’t been done before, like we’re doing here at Faro. And so I do think that that’s an example of a quality that is a really strong one.
And I think also people who can span multiple domains, like people who are able to think in terms of product, like what makes a good product, what makes a well-designed product, like usability wise, along with the ability to sort of decompose that into commands that you can give to the AI that it’s gonna understand. So this is a combination of different skill sets. It’s not just like you do a computer science degree and you’re good. It’s more like, hey, there’s a part of this, which is customer facing, like knowing what customers really would respond well to. There’s a part of this which is communication, being able to kind of express those requirements well like a product manager does. And there’s part of this, which is like UX designer, and, of course, part of it, which is straight up programmer. So I think that people who can think cross-functionally also will do really well in this new world, because they now have the force multiplier, you know, in the form of these automated coding assistants and so on, to really be able to do the work of multiple people, potentially.
[00:44:23] Important Leader Attributes in the AI Era
Henry Suryawirawan: Yeah, so definitely those skills are still kind of the most important things, right? Especially if you are an engineer listening to this and worrying about your job disappearing simply because it’s getting replaced by AI. So how about leaders themselves? I know some leaders maybe have decades of experience, and maybe there are up and coming leaders as well, like engineering managers and all that. Do you see a certain skill set or attitude that must change in this AI era as well?
Patrick Leung: Yeah, I do think that every time there’s a technology disruption, it can be a little bit scary. Simply because perhaps the processes and tools and behaviors that have served you really well in your career to date might be sort of under threat or, you know, about to be transformed. And that can be exciting. And it can also be kind of scary and threatening. Like, oh my God, what do I do? I can’t depend on the same set of skills and habits and behaviors that have got me to where I am in my career. And I think that the people who can take that uncertainty on board and step into the unknown are the ones who can thrive.
And I think that applies at a company level as well. Every single company out there at this point is wondering, like, what do we do about AI? And I think it’s the ones that are bold and are willing to disrupt themselves. Like I spoke to the company earlier this week and said we’re essentially disrupting ourselves. We started this company as a traditional SaaS provider. Now, we’re becoming an AI company. And that is a process of disruption. And I would look at Apple in the Steve Jobs years, if you wanna look at an example of a company that has the courage and the sort of fortitude to disrupt itself repeatedly. Apple did that. You remember the iPod, how that used to be the best thing ever? Apple disrupted their own product with the iPhone.
And I think that that is the kind of thinking that is needed here when this new technology wave comes along. At the company level, as well as at the individual leader level, it’s about, first of all, possessing that courage to step into the unknown yourself, but also inspiring your teams to do so and saying, hey, you know, if we don’t do this, someone else will, and we will get disrupted. And you can look at the history of this. Like if you look at The Innovator’s Dilemma, you know, Clayton Christensen’s book, it talks about how this is like a natural law. Companies get disrupted, because not all of them have the fortitude to actually jump on new technologies and disrupt themselves.
Henry Suryawirawan: Yeah. So thanks for pointing that out, right? So definitely, uh, it’s something that some of us leaders, right, have to think about: disrupting yourselves, disrupting your team, and disrupting your company, right? So I think those things can be healthy. Especially if it can also revolutionize the way things are done.
[00:46:59] The Room for Juniors in the AI Era
Henry Suryawirawan: So how about the juniors here? Juniors who probably are wondering whether software development is gonna be their career, or someone who is studying software engineering but thinking that they now have to switch. Do you still think there’s room for them to actually enter the industry? Because I’ve heard so many times that people say, simply because you can be more productive now, maybe we need fewer junior engineers. So what is your view here?
Patrick Leung: I think that there’s never a shortage of demand for people who are able to create technology. And AI makes that easier. And so, yes, you could argue the field of programming is becoming more commoditized, because of all this automation. But like I said before, AI can’t necessarily take the place of having good product sense, or having the combination of programming skills and design skills and so on. There’s different ways of framing this, but I would say think of it as being a force multiplier. Like you’re able to do more, whether it’s as an entrepreneur or as an engineer who’s part of a larger organization. If you’re able to wield these tools well, that’ll serve you very well.
And I think we're still in the early stage of this. So if you're able to, for instance, start a side project, a personal project, put that on GitHub or put that on your resume, and show that you have the ability to do this, even if it's some personal project that you just happen to be really excited about, definitely do that. Because that shows that you can do it, that you can adapt, and that you have some level of mastery or facility in this area, and that will serve you well as you look for jobs. Because there are employers looking for people who can do this right now, and those people are still kind of hard to come by. Maybe not for too many more months or years, but there's an opportunity to jump in if you're really proactive, and potentially stand out from other candidates because of that.
Henry Suryawirawan: Yeah. So I think definitely don't get disheartened, especially if you're really passionate about software engineering, right? Creating applications, products, and even enterprise software is still kind of the go-to thing if that's what you're passionate about. So don't get disheartened, and maybe try stuff out, just like Patrick mentioned.
[00:49:04] Inspirational Story of a Successful Junior
Henry Suryawirawan: So we have talked a lot about, you know, AI in clinical trials, AI in general, and also software engineering and engineering teams, is there anything else that you think, uh, you want to share with us today that we haven’t talked about?
Patrick Leung: Maybe just coming back to the theme of career development. Some of the most successful and interesting careers I've seen, of all the people I've worked with, have been people who really focus on just trying things out. There's maybe an inspirational story here.
There was this one really junior product manager I worked with when I first joined Google. And I encouraged him to become a user of our application, which at that time was an e-commerce product. So I said, hey, you should probably sign up as a merchant, start your own little fake business, and try out the product. And he ended up really getting into e-commerce. He built a company that ended up becoming successful, and he ended up becoming a VP at Shopify. I was giving a talk earlier this year at the company on career development, and I actually spoke to him and asked, hey, how would you sum up your success? Because it's such an incredible career path that he followed, as an entrepreneur and now a senior executive and all these things.
And he just said, you just gotta try things out. And he reflected back to me that AI just enables so many more people to do this now, to do the kind of things that he did, which at the time required software engineering expertise and machine learning expertise. Now, you can potentially vibe code some of that stuff really easily. So for people who have ideas, who observe problems in the world, the gap between observing that problem and trying to solve it is now much, much smaller than it was. And whether you're trying to get a job at some big company or you actually wanna start a new company, it's just much easier to do that. So learn from his experience and maybe be inspired by his story. Because he was really underscoring the fact that it's just much easier to try things out these days, and trying things out is often what leads to great inventions and innovations in the world.
Henry Suryawirawan: Thanks for sharing such an inspirational story. I think it's definitely worth thinking about, right? Especially having cross-functional skills, like you mentioned earlier. If you have cross-functional skills, the opportunities and the possibilities are there for us to give it a try, right? And vibe coding, yeah, it could be one way to actually build POCs and test the market early before you build the actual product.
[00:51:33] 3 Tech Lead Wisdom
Henry Suryawirawan: So Patrick, thank you so much for sharing today. So I learned a lot especially from those inspirational stories that you mentioned. Before I let you go, I have one last question that I always ask my guests. I call this the three technical leadership wisdom. You can think of it just like an advice that you want to give to the listeners. What are the top three things that you wanna share today?
Patrick Leung: Definitely, I'll return to curiosity. That's super important.
The second would be following your excitement in all things. Just don't settle for doing work that you don't find personally inspiring, where you don't feel like, oh, I'm excited to get to work in the morning. If you don't feel that way, then think of other things that you might be more excited about. It sounds obvious, but I think people get conditioned after a while to not doing that and just settling. And I think it's important to really be excited about your work, for a number of different reasons.
And the third thing would be: I always advise people to find a mentor. That also sounds kind of obvious, but the majority of people don't actually actively seek out a mentor. And it can be super, super rewarding. I keep coming back to some of the people in my own career who have really helped me. Even if it's just a sentence or a catchphrase or a certain way of doing one thing, it can change your life over time, really, by helping to reframe problems or deal with communication issues or whatever the case may be. Over the course of your career, that can have a really compounding effect. So seek out at least one mentor, ideally multiple, over the course of your career, and it's really gonna help you.
Henry Suryawirawan: Yeah, thanks for pointing that out. You mentioned that even a simple sentence or phrase could actually change someone's life, right? And it can change your life as well. So for people who have been around in the industry, I would say, be a mentor as well. Because you never know how much impact you can make on someone's life.
So Patrick, if people love this conversation and they wanna reach out to you or maybe learn more from you, is there a place where they can find you online?
Patrick Leung: Yeah, the best way is LinkedIn. You can find me on LinkedIn pretty easily.
Henry Suryawirawan: Right. Okay. So thank you so much for sharing today, Patrick. I really learned a lot. And again, thank you so much.
Patrick Leung: It was a real pleasure to be here, Henry. Thank you.
– End –