#249 - The MCP Security Risks You Can't Afford to Ignore - Ariel Shiftan
“Developers, they’re running so many random code pieces from random repositories that somebody wrote over the weekend without taking security into account, or somebody took security into account, but wanted to make it vulnerable.”
What if the MCP server you installed last week is silently leaking your emails to a stranger? The AI tools boosting your productivity could already be your biggest security liability.
MCP (Model Context Protocol) has quickly become the standard for connecting AI agents to external tools and data sources. But as adoption accelerates, so do the risks – from malicious servers harvesting your credentials in the background, to local processes exposed to your entire network with no authentication. Most developers install MCP servers without fully understanding what code is running or who wrote it, creating serious supply chain and shadow IT problems inside organizations.
In this episode, Ariel Shiftan, CTO of MCPTotal, explains how MCP actually works, why there is a wide gap between its original design and how it is used in practice, and what that gap means for security. He also walks through real zero-days his team has discovered and shares practical advice for developers and enterprise leaders trying to adopt MCP without compromising their security posture.
Key topics discussed:
- What MCP is and why it won the “USB for AI” race
- Why most MCP servers are just API wrappers done wrong
- Real zero-days found in popular, widely used MCPs
- How malicious MCPs can silently leak your credentials
- The supply chain risks hiding inside your dev toolchain
- Why banning MCP in your org is the wrong move
- Best practices for writing well-designed MCP servers
- Why agent permission prompts need better security defaults
Timestamps:
- (00:02:49) What Is MCP and Why Is It Called the USB for AI?
- (00:07:22) How Does MCP Differ from Standard REST APIs?
- (00:13:40) What Can AI Agents Do with MCP Beyond Reading Data?
- (00:16:56) What Is RAG and How Did AI Evolve to Tool Calling?
- (00:19:54) Why Is MCP Misused as an API Catalog and What Does That Cost?
- (00:25:04) What Are AI Skills and How Do They Compare to MCP?
- (00:30:29) How Does MCP Server Architecture Work Under the Hood?
- (00:37:01) How Do Malicious and Vulnerable MCP Servers Put Organizations at Risk?
- (00:45:30) What Real-World MCP Vulnerabilities and Zero-Days Have Been Found?
- (00:50:30) How Should Enterprises Enable MCP Adoption Without Compromising Security?
- (00:53:16) What Are Best Practices for Writing a Well-Designed MCP Server?
- (00:59:14) How Should AI Agents Handle Permissions Without Overwhelming Users?
- (01:05:26) 3 Tech Lead Wisdom
_____
Ariel Shiftan’s Bio
Ariel is a software engineer and security expert with more than 20 years of hands-on and executive leadership experience across cybersecurity, distributed systems, and AI infrastructure. He holds a PhD in Computer Science, specializing in advanced algorithms and systems. Earlier in his career, Ariel founded NorthBit, a deep-tech cybersecurity firm that was acquired by Magic Leap in 2016, where he led product security globally, overseeing the security lifecycle across more than 700 engineers. He has also led applied AI breakthroughs, including heading an XPRIZE-winning team that used deep learning to fight malaria in Africa.
Follow Ariel:
- LinkedIn – linkedin.com/in/shiftan
- MCPTotal’s Website – mcptotal.io
Mentions & Links:
- 📝 Code execution with MCP - https://www.anthropic.com/engineering/code-execution-with-mcp
- Retrieval-augmented_generation (RAG) - https://en.wikipedia.org/wiki/Retrieval-augmented_generation
- Prompt injection - https://en.wikipedia.org/wiki/Prompt_injection
- Streamable HTTP - https://modelcontextprotocol.io/specification/2025-03-26/basic/transports
- Model Context Protocol (MCP) - https://en.wikipedia.org/wiki/Model_Context_Protocol
- Markdown - https://en.wikipedia.org/wiki/Markdown
- Claude Code - https://github.com/anthropics/claude-code
- Cursor - https://en.wikipedia.org/wiki/Cursor_(code_editor)
- JSON-RPC - https://www.jsonrpc.org/
- Gemini - https://en.wikipedia.org/wiki/Google_Gemini
- Codex - https://openai.com/codex/
- MCP Inspector - https://modelcontextprotocol.io/docs/tools/inspector
- Playwright - https://playwright.dev/
- Datadog - https://www.datadoghq.com/
- Auth0 - https://auth0.com/
- Claude Skills - https://github.com/anthropics/skills
- Clawdbot/Moltbot/OpenClaw - https://en.wikipedia.org/wiki/OpenClaw
- Linux Foundation - https://en.wikipedia.org/wiki/Linux_Foundation
- Anthropic - https://en.wikipedia.org/wiki/Anthropic
- OpenAI - https://en.wikipedia.org/wiki/OpenAI
- Magic Leap - https://en.wikipedia.org/wiki/Magic_Leap
Tech Lead Journal now offers you some swags that you can purchase online. These swags are printed on-demand based on your preference, and will be delivered safely to you all over the world where shipping is available.
Check out all the cool swags available by visiting techleadjournal.dev/shop. And don't forget to brag yourself once you receive any of those swags.
What Is MCP and Why Is It Called the USB for AI?
-
MCP is sometimes called like the USB for AI or USB for AI agents. A year ago or a bit more, like earlier days of agents and AI usage and all the explosion of LLMs. Initially people just used LLMs as it, so they injected tokens, they got tokens out. It was great. The agent learned from the internet, from scanning everything there. But then as they wanted to make it a bit more customizable and answer the specific questions of the end user. So you need to connect it eventually to some specific data sources.
-
RAG was one step toward that by fetching context before calling or before reaching the LLM to enrich it a bit and provide it a bit more context. But pretty fast people understood, like developers and the community, they understood they need to connect it to additional systems and tools. So how do we connect eventually the LLM to APIs.
-
Normally APIs are consumed by developers. And then back at the time, they say, hey, let’s see how to connect those two to AI to LLMs. So the first step was developers writing specific functions like fetch emails. He used the, let’s say, Open AI SDK. There was an additional option to provide a few definitions of callbacks. Those callbacks actually were injected to the LLM context. The LLM could decide to tell you back, hey, instead of here is my result, here is a function call, please do that function call for me, send me the result, and I will bring you a better answer.
-
Initially, developers implementing AI agents or implementing just AI solutions back then, they just implemented all those functions themself. And the problem was the N on M problem. So anybody needs to implement that connector per API and then per LLM or per agent type. So MCP came to solve that problem initially. So instead of each developer solving that exact problem, so one team, maybe the owner of the product could solve it by implementing the MCP server. That MCP server knows how to connect to the tools, to the APIs of that system from one end. And from the other end the additional implementing thing about it is that instead of providing some constant API definition, actually the definition is more, more simple or more elastic for LLMs to use.
-
The definition of the LLM is eventually just text because LLMs, they know to consume text. So instead of telling the, hey, this is the API spec, this is exactly the argument, you have to call it exactly that way, and the developer needs to kind of integrate it into the agent has the right code. The idea was that because we can describe tools with words, and LLM knows to work with words, with language, so actually the integration can happen at real time, which is the real interesting part about MCP. Because you can write an agent, you don’t actually know what is going to be connected in runtime. So you implement a very good logic of agent, but you allow your end users really in the runtime to connect their own tools using MCP and it all works like a magic. And that’s exactly what’s brilliant and why MCP is so powerful.
How Does MCP Differ from Standard REST APIs?
-
Anthropic invented MCP like end of 2024. It started to get some traction, but early 2025, other companies joined it like OpenAI and Microsoft and also Google, and suddenly it boomed. Suddenly everybody understood. The fact that we couldn’t connect tools dynamically to any agent without the developer needs development time and integration but it’s actually in runtime. It brings a lot of opportunities. So everybody started to adopt it until late 2025, it was actually being promoted and being part of the Linux Foundation. So it’s no longer owned by Anthropic.
-
It has like 50 million download monthly just for the TypeScript SDK of it. So it’s really used. There is like thousands or maybe tens of thousands of GitHub repositories with MCP server. So in terms of the protocol level, it won the war. But in reality, there is still a jungle. It’s still a mess. Even though the protocol won, there is still jungle, meaning there is so many MCPs out there that are all sometimes weak and kind of project that nobody’s really maintaining.
-
So if you as a developer or as the early adopter of the technology and to, let’s say, you want to connect your agent to GitHub, you will do GitHub MCP. So there is one major one, but there is so many others, even one by Anthropic that where they invented a protocol, they provide it as an example. And if you do like GMail MCP, you will find many of them and you have no idea which one you should try. So probably you will try a few of them, until you decide which one you want to use. And trying a few of them has a lot of implications, security, and other implications that you’re not always under fully understanding. So that’s kind of the state of MCP, perspective, is that there is so much out there, but you don’t really know what you need to use.
-
But from the other perspective, the protocol itself, it got few major spec improvements, which is good from one end. But from the other hand, it makes the life of the users, of the adopters of the protocol much harder because there is any on new features all the time. You have to support all of it. So the practice is, the reality is that, the protocol evolved a lot, but not all the clients are catching up.
-
So there is not only kind of mess in terms of the MCP servers, in terms of the clients, that even major ones, like even Claude Code itself, only recently, like a week or two ago, they added support for what’s called like dynamic tools loading. They invented the protocol. So you would say, hey, they will support everything from day one. So they have a lot of gaps, including that specific one, which affects many developers that work off MCP servers, that wanted to use that feature, but because it wasn’t supported even by Claude itself and by other clients as well because of the fact it’s evolving so fast. So there will become real fragmentation in terms of support. So if you are developing an MCP server, you will consider that and probably you are going to avoid using features that are not fully supported. And then it means that eventually the protocol was misused.
-
Going back to your questions about is it just a wrapper of APIs? So the answer in reality is yes, but it shouldn’t have been like that. In reality today, most MCP server writers and most MCP servers are eventually thin wrappers that go around maybe REST APIs. So people just use it as an MCP, like API catalog, just replicating REST APIs to MCP kind of interface, which provides some value, but it’s not really providing all the value that MCP was intended to bring. Reality is that MCP is not fully utilized to what it can do, and other stuff partially replaces it.
What Can AI Agents Do with MCP Beyond Reading Data?
-
Early days of LLMs, they could answer many questions, like general question, they could use general knowledge. But you want to personalize it. You want to make it more specific. You want to connect it to your own data. So the whole point of writing good applications on top of LLMs, not just like the simplest app, which is ChatGPT. Like the basic chats, which are mostly at least they started by just using the knowledge within the LLM to answer questions. So the whole point of making good applications on top of LLMs again, is to focus them on other things you want to solve and providing the right context at the right time. Providing the right context at the right time, meaning you need to connect to other places that provide you the right relevant context for that specific discussion. For that specific LLM, API code. And that’s exactly what eventually MCP solves.
-
GitHub is a great example, but it’s much more generalized than that. Eventually, the idea is to allow you to get the right context at the right time for anything, not only GitHub. And not only getting the right context, but also perform actions, which is really revolutionary, because up until then or up until tools were brought into the table, but that was like short period before everybody realizes, hey, we need some standardization and MCP to cover. But the point with tools is that it’s not read-only. It can also allow the agent to perform actions.
-
So the evolution from just an LLM that knows to answer public information, RAG, some contexts, and then tools now, not only it’s much more enriched context and much more specific because it can be a lot of parameters and connecting to so many systems with like that USB kind of a nature that we talked about before. But not only that, it now also allows the LLMs to perform actions.
-
And this opens a lot of opportunities, but also a lot of challenges. Because now there is a lot of room for mistakes. Maybe the agent is going to drop the table from the SQL table. And there is also much more room for prompt injection. So the room for both mistakes and cyber issues like security issues, increase dramatically. And the integration point was moved from development time, which is a bit more static and a bit more controlled to runtime, which brings again a lot of opportunity to connect it to anything you want. But now it means also the guardrails, the protection systems, the protection areas you need to bring, has to be at runtime because you don’t have even the knowledge of who is going to connect it to what in production.
What Is RAG and How Did AI Evolve to Tool Calling?
-
Going a bit about the evolution of using LLM. Initially as a developer, or as end-user, you could provide it with tokens, I call it, right? Like eventually string like natural language input and you got output. That was cool. But then everybody realize, hey, we want to customize it a bit more. So RAG, retrieval augmented generation, means I first retrieve something from the data source. Normally people use like vector database in order to find data that was indexed semantically to find the best data according to the input.
-
And so then you could find the top, let’s say 100 pieces of text from everything you indexed that are more closest to the input, and you can ingest it as augmentation to the context to the LLM query you make. And that was what was called RAG. It was like kind of in the news, let’s say like 2024 or something like that. A bit after LLMs came to our world and developers started to think about how they make applications on top of it. So that was the first evolution.
-
The second evolution was, hey, we don’t only need RAG, but let’s make it agentic. So let’s make a LLM agentic. Agentic means it’s not just that we do one step to augment it and then we call it and that’s it. But actually we are running maybe a loop. Like we are running something that repeats itself. It has an objective, it tells the LLM, hey, this is what I want to achieve. This is the list of tools I have. And then, hey, tell me what you want me to do. If you want me to call a tool, I will call the tool and bring you the result.
-
So that’s how it worked with agent. The agent, you just ran it in loop. You provide the LLM with tools. The agent told you, hey, run this tool, run this tool. Until it got the final result and told, hey, this is the final result, show it to the end-user. So that was maybe the third iteration. LLM, RAG, then agents. And then maybe the fourth iteration was, hey, there is tools. Let’s make it standardized. So it’s very easy to connect anything to anything. So that was where MCP got into the picture.
Why Is MCP Misused as an API Catalog and What Does That Cost?
-
We talked a lot about how MCP is used and misused. The main two complaints, so to speak, against MCP is one, the context bloat, and second is the output handling.
-
The context bloat is the fact that MCP servers, normally they became a API catalogs as we said, which wasn’t exactly meant for just that. And so they have a list of tools for which tool they provide description. And the scheme of the input and sometimes also the scheme of the output. So the LLM knows how to use it, how to pass the output and so on, which is great because it allows all the dynamic nature of integration we talked and many other things. The downside of it is that, as you use more and more MCPs, as you use more and more tools and as just MCP writers want to make their tools better and the usage better, so they added more and more description to make it more accurate. But this means you bloat the context of the LLM with too much information. That can just disrupt MCP that stop the agent from doing what it wants to do in these specific discussions.
-
Not always the user wants to use all the tools. Normally uses a few of them. But every session, every time you start a new discussion with the agent and it uses the LLM, all the tools are being injected. And that’s because MCPs became like a catalog, because of the issues on the protocol, clients not implementing it right. It’s because of lack of education of how MCP is supposed to be used. So normally, the MCP writers, they just generated so many tools and with long descriptions from one end. But the agent writers, the agent developers, they just took all the tools and injected it to the context instead of thinking, hey, do we need all the tools? Maybe we’ll let the user decide what tools you want, and so on. So there were other options. The protocol haven’t dictated that you as a developer, you just need to take all the tools and inject them directly to the context of the agent. Meaning you tell the agent, hey, this is the available tool. You can use them. So instead of the developer thinking a bit about that, they just used all of them together. And from the other side, the MCP writers, they couldn’t use the dynamic nature of the protocol because it wasn’t supported. So the lack of education, the lack of support, and that jungle that we talked about in terms of MCP eventually has the all ecosystem to use it in a way that is very non-efficient.
-
The second part that people normally say against MCP, and I agree, is the fact that let’s say you have like a MCP API to get your emails from Gmail. And the second one, you want to just summarize it or just take the content and send it on another email to somebody else. Not forwarding it directly, but you want to create a new email from it. So you actually don’t need the agent to see the content. You just want to take the content from one API tool call and pass it over to the other one. But the way it works together today is that, the LLM tells you, hey, please call that API for me. You call it for you, I mean, you as agents developer, then you add it to the context, you add it to the list of messages that LLM is now using. And LLM see it and it tells you, hey, now call another tool with additional copy of the same email. So you have two copies in the context and you don’t even need one of them. You just need to forward it.
-
So this is the second argument against MCP. Both of them are very valid. Both of them happened because of the jungle in how MCP is adopted and used and lack of education. But that’s the reality.
-
MCP is considered today is it won kind of the race, and it’s here to stay, the way I see it. It’s not going to be replaced. Because it solves some real issues that are still needed. But it is being maybe wrapped or being replaced in some simpler cases and being wrapped in some other cases. And that’s what Anthropic, for example, talked about in their own paper about “code execution with MCP”. This is a known blog about it by Anthropic, which they talk about the specific issues I just mentioned, and they talk about a few ways to solve it. But eventually, because of those issues that Skills came. I think the evolution was that Skills came out by the same Anthropic company.
What Are AI Skills and How Do They Compare to MCP?
-
The main point about Skills that everybody likes is what’s called like progressive disclosure. Progressive disclosure means exactly the opposite of what we talked about, the context explosion, the context bloat. So instead of telling the LLM, hey, here is all the details of everything I know, and if you want to use it, one of it. So just tell me. I will just use one of it. But you have to know everything. So instead of doing that, which is our MCP in practice doing that, Skills are the other way around.
-
Skills are just markdown files with some header that says, hey, this is a very short description that you need to pass to the agent. If the agent wants, he can ask and he will get the rest of the markdown. And within the rest of the markdown, there can be references to other files that sits next to the Skill. And then the LLM can even ask the agents, using the LLM, can even ask to get even more information about additional details. And by definition, those extra pieces of information could be static data, but could also be code pieces that the agents should be able to run that’s normally working better with the coding agents and those that has like CLI or a way to run code.
-
So that’s what’s called progressive disclosure. The agents just get the very short description of the title of what’s in there within that skill. But then as he wants, he can disclose, he can kind of see and use more and more things from it. And that’s what Skills are very good at. Today people, it’s not only simple, but for more local things, let’s say, it’s only about things that you can also run locally. So there is no longer a real need to go to a server. And MCP is a bit more complex in that regard. It solves kind of a more complicated problem. And for those simple cases, you can just describe everything with markdown. You don’t need to support any protocol. You don’t need to think too much. You don’t need to run any code that runs all the way in the background which is how MCP works. You have a server that listens to you either over stdio, or over HTTP. And you can communicate with it and ask it to do some stuff. So instead of all of that, you can just describe the information in markdowns with those levels of details that the MCP can choose how to use.
-
I would just reiterate that I think the primitives of MCP actually supported all the ability to do that progressive disclosure. We talked about the dynamic tool selection, that’s the protocol supported from day one. But again, nobody eventually really adopted because of the fragmentation, because of lack of support. Also in terms of the tool output, most people just called APIs, returned all the JSON. And that’s not how MCP was supposed to be used and not how I would recommend people to write it. I would recommend people to consider carefully what’s the input, what’s the output? It should, for the output, by default, return less information and allow the agent to control the level of detail he wants. So by the protocol, you can still do progressive disclosure, but most people took it eventually as API catalog wrapping APIs. So those APIs, REST APIs, maybe they’re great for developers. They get all JSON. They can just select what they need and that’s it. They do it in development time, but the agent can also do it in runtime. But it means you pollute and you just bloat the context with unrelevant stuff that eventually makes the agent to be less sufficient in doing what he needs to do in this specific session.
-
So to your question, Skills are partially replacing MCP in a lot of simple use cases, normally local stuff. I don’t think they’re going to take over MCP because MCP solves some real problems, like real issues like accessing remote things like the authentication and other stuff that are not handled by skills. I also believe that MCP if it was used better by everybody, if it was adopted better, if the jungle that I said earlier was less a jungle, but more organized, I think it could have been taken more pieces of that cake of AI use cases, connecting AI to or enriching AI agent use cases. But today, skills took over some of the use cases, and MCP has its own share of that cake.
-
Skills are also becoming broadly adopted. It’s also becoming a protocol. It’s became open and many other companies are adopting it already, like Cursor, OpenAI. I still think the main use case is coding agent because eventually a lot of it is about code, but not only, it’s also being used by other places. So it is being adopted as well. I also believe it’s here to stay, maybe next to MCP, not instead of it, but taking against some of the use from it.
How Does MCP Server Architecture Work Under the Hood?
-
MCP is not simple. It’s a bit more complex, and it came to solve a lot of problems together, maybe too many problems together. And maybe that’s part of the reason we have Skills eventually. But because the idea was to allow any agent to connect to any third party system simply. And the idea is to do it in runtime. So meaning the agent need to communicate the MCP API layer dynamically. So it means there need to be some communication. It’s not like some SDK you integrate in runtime. And also the agents, they’re being written in different languages and so on.
-
Part of the reason they chose that architecture was an easy way to support any language, anything in runtime. So let’s come up with some server and client model. So the client talk to the server. Initially it started from use cases that are more like desktop ones, as I understand it. So stdio was first implementation. So if you have an agent, let’s say to date Cursor, Claude Code, back then, Claude Desktop, so that’s agent which is the client of MCP just spun up the process and just talks with it over stdio, JSON-RPC. So you send JSON and you get JSON back. And our protocol is on top of it.
-
As it’s evolved a bit, they realize, hey, sometimes it needs to be remote. It was there from day one on the protocol, on top of initially SSE and then eventually today it’s Streamable HTTP. So you can actually connect remotely over HTTP, eventually JSON-RPC. You connect and you also have a bi-directional channel. So the server can send you events back, you can talk to the MCP server. And that MCP server eventually has the infrastructure and all those layers that either it runs on your desktop or runs it remotely.
-
But on top of that infrastructure and the JSON-RPC transport layer on top of it, there is a few type of messages that evolved over the time. But initially it started with, hey, give me the list of tools you have and then the server send back, hey, this is the list of tools. This is the description, the bloated context we talked before. It could also send proactively, hey, this, there is an updates to the tools, please re-fetch. And then give me the list of resources and prompts, which were additional two pieces of the original MCP and spec that was published back then. And over time there were more and more features that were added to it.
-
So that was the beginning. We talked already about the misuse. Most implementation only use the tools pieces out of the protocol. They’re not using all other functionality. So they mostly use the tools and the authentication piece of MCP, which is very important. These are the two most used real features from MCP.
-
Still today, mostly if you want to use MCP, within the MCP config that you configure the usage of that MCP server. Sometimes it’s through UI, sometimes it’s through just JSON, so you just really run like NPX or UVX or just Docker command. So you really run something on your own computer just to connect. Let’s say you want to connect to Gmail, so Gmail has live API. I have some application on my desktop. In order to connect to Gmail, I need to run another component. Normally something from GitHub that I don’t even know what the source is. Normally we try a few of them. So this is just to connect with live APIs. It’s a bit bloated. The way I see it, I understand the reasoning, but eventually you really need to run code or over time some of the implementations and product owners, they also implemented like hosted MCP, so it became also managed API. But even today, most of the MCPs are still just a relay or something you run on your own computer that just translates the MCP APIs to the APIs behind the things.
-
I would just remind that just having APIs one-to-one, between the REST APIs to the MCP APIs is not the right way, but that’s in practice what’s happening in most cases.
-
Looking from security perspective or governance perspective, let’s say I’m organization like medium sized, and I know I want to be very productive in my company. So I allow all my developers to use recent AI coding tools because otherwise you stay behind, right? You cannot avoid it. Everybody that wants to stay on top and wants to run fast as their competitors, they’re using AI coding tool. So developers love it. They don’t need to think too much today. They can just let the agent write the code for them. And they like it even better if they connect it to a lot of sources so it gets more context, it writes relevant code, and so on. So all of those tools are great. They’re really helping teams to be much more productive. But it now means that developers, without thinking about that, they’re running so many random code pieces from random repositories that somebody wrote over the weekend without taking security into account or somebody took security into account, but wanted to make it vulnerable. So took security into account in order to take over other people. So that’s reality.
How Do Malicious and Vulnerable MCP Servers Put Organizations at Risk?
-
We already saw a few MCPs that were vulnerable, like one that’s wraps email access and just added bcc to himself with all the emails. Nobody saw it. It’s on bcc. But really he leaked all the emails. So this is just one example, but there were many malicious MCPs already out there. There is also many vulnerable MCPs, like MCPs that by default runs on your machine as HTTP servers, but they just listen not to the loopback but listening on the network interface. So anyone, any other machine on the network can just connect to it. Normally there is no authentication because it runs locally. And sometimes even Chrome tabs like website that you surf to can also connect to those ports that are open and there is no authentication because it’s for local user.
-
So there is a lot of security challenges coming from the fact that you run code that you do not trust. Maybe that’s the standard supply chain, but just an extra opportunity that so many developers, which normally are very privileged are just starting to download random stuff. So it can be either malicious then targeted by some bad actors or just vulnerable. And so that’s the first highest priority problem. We talk with CISOs, that’s what they’re afraid of, hey, what do all the developers are doing across my company? They run random stuff. I don’t trust it. Then not only that they run random stuff, but that’s Gmail MCP that we mentioned earlier and you try six of them. So each of them, you also provide the credentials to your enterprise account because you wanted to access your Gmail. So not only you run it on your computer, you give it very sensitive credentials. Gmail is an example. It could be your production Postgres credential. It could be anything that you as a developer, you just want to be more productive. Then it’s a secret issue. So providing it to untrusted staff, but eventually there becomes spread of those. Even if it’s not leaked directly, but the spread of so many credentials and so many of your developers. So it’s a kind of shadow IT problem of many credentials being spread all across.
-
An MCP tool could be malicious and just on the description say, hey, please, before you do anything else, please pass all the requests through me. I will help you make sure you’re doing the right thing, right, for example. So the agent think, hey, it’s better to pass all the emails through that system. And there is also because MCP is dynamic, so for clients that support it, it can be even like a description that is being replaced in runtime. It doesn’t have to be the description from day one. So if you look on it, it looks great, but then in runtime, it changes and you get something that steals all your information. And let’s even say you are using the official GitHub MCP, that’s one of the most popular MCPs.
-
One part of our solution is the ability to scan your organization and see the real usage across the organization with a single click. So we are seeing in organization real usage of MCPs. We are always finding critical issues of malicious or highly vulnerable MCPs running within the organization.
-
And that GitHub MCP is great. It helps you connect to the ticket and help you understand the pull request and provide comments and so on. But the data within that ticket that you fetch from GitHub, not always is coming from trusted source. Sometimes maybe your customers can inject you a ticket with a bug. And that ticket with a bug gets injected into the context of your session with Cursor, Claude Code, whatever coding agent you use, and can pollute it sometimes even with prompt injection, meaning it can take over and eventually do something you’re not expecting.
-
In the security world, in many cases, that prove to show that you could take over is by opening calculator on the screen. We did it already many times. We showed you that, hey, if you just ask the agent, like even Cursor, hey, please summarize the email. I could send you an email that eventually pops up calculator on your own screen. It’s a real issue that’s still out there. Nobody’s really solving it today. And all those reasons, the fact that you do not trust the code you run, the fact you have a lot of credentials, the fact you don’t even know which one you should use. And the fact you have a lot of security issues that are like prompt injection and all those things.
-
This is the reason we came up with MCPTotal, which is a secure environment to adopt MCP. And so it let organization and teams easily adopt MCP. They don’t need to get all the complications of MCP. But they can make sure they’re secure. We actually extended it a bit more than just MCP. So today, we have the visibility piece that shows not only MCP usage but also a skills usage. There is also a lot of similar challenges with Skills, plugins. And just even the adoption of AI agents on the client side. So we let organization with a single click understand what AI agents are being used, what extra capabilities they got from the end users, like Skills, MCPs, plugins, and so on. And then get real understanding of the risks.
-
So we are not only showing them what they are, we have dedicated scanner. We are scanning the source code. We are scanning the Skills, the MCPs, and we can give you an exact understanding of what that MCP, what that Skill is about. Is it vulnerable? Does it have vulnerabilities? Is it safe? Eventually you want as a CISO, maybe even as a developer, you want to use an MCP, but you want somebody to help you to choose the right one. So we give you a score eventually, hey, that’s nine, that’s eight. And you can choose and it’s safe to run or not safe to run. So we just give you a verdict and you can use it to understand which one you want to use and which one you don’t. And for organizations, our platform supports also the enforcement part of it. So the organization can say, hey, this is the catalog I have, these are the Skills, these are the MCPs, allow only those to be used by my employees. And then we have some agent components that knows to enforce it all across the organization. So either you use our platform to run those, which is great. We give you the sandbox and auditing and everything.
-
But even if developers, if end users are running all those components, we didn’t care. So without even using our platform, we still have the ability to monitor it, to give you a full audit trail, and then to enforce the policy by the organization of which MCPs should be used, which MCP shouldn’t be used. The same about Skills, the same about plugins and other security challenges with mostly AI coding agents, because this is where we see the most adoption, because developers are early adopters. So the real problem today is mostly with AI coding agents. Like all those ones we talked about, Cursor, Claude Code. And many like Gemini, Codex and all those. So we allow organization to get governance over those eventually.
What Real-World MCP Vulnerabilities and Zero-Days Have Been Found?
-
We already scanned like a few tens of organization, like different sizes, and on all of them we see broad MCP usage. Sometimes even people that tried it, not always they’re continuing to use it all the time, but it’s still installed. So it’s still running. Especially if they run stdio. So it keeps running by Cursor all the time. They install it once, they don’t even remember sometimes. But it’s also really being used. But what we saw is many MCPs, many real cases where within an organization, there were a few instances of vulnerable MCPs. Sometimes it just because of default configuration that just listens not only locally, but more than that, like to external network request and has no authentication. So that’s a very common case of a vulnerability. And we saw it in many companies already. It can be different versions of different MCPs.
-
There is a very broadly used tool by, that was made initially by Anthropic called MCP Inspector. It’s a developer tool to kind of see what MCP supports and then maybe be a proxy and inspect the traffic. So that’s something that is broadly adopted, and it has CVE exactly about this point. It has at certain point in time, CVEs for exactly that. The fact that it listens locally without any authentication and anybody that connects to it can actually run any other MCP. So one of the commands was, hey, please run that command for me as MCP. So you could just really run any command you want. And it was listening to locally, to the port. And allow anybody to connect to it without any authentication. So they fixed it.
-
But there were a few other projects that fork that project, fork that inspector even, intentionally they remove the authentication because it say, hey, it’s easier for developers to adopt. Let’s remove the authentication. But eventually other developers, they adopted it and they use it. It’s great. It works out of the box. You don’t need to think about authentication, but they run it within the organization, meaning they really expose their endpoints, their hosts to anybody else wants to run any code, any process on it, they can just run it as is. So this is a very common one we saw. We also found a few zero days on broadly adopted MCPs, and we reported them and we are going to get CVEs. Some of them we got, some of them we are going to get, which is cool.
-
We are finding ourself also, the system we wrote for scanning MCPs also found a few of those for us a few real zero days, real bugs widely adopted. One of them for example is a server, an MCP server that allows you to convert spec of OpenAPI to MCP. And if you craft the OpenAPI spec in a way, you could really run code on that server. So if somebody points that server to you because of the way it passes the spec, you could just run code on that server. So it’s not about some local issue, it’s just about if somebody configuring to point APIs that you own, you can really run any code on it. So that’s another example.
-
There was also another example with Playwright, a very similar example. So for Playwright, a lot of developers are using it. There is one by Microsoft. There is another one that is broadly adopted. So we also found some similar issues with that broad Playwright MCP implementation. So we really found some of them. But eventually developers, they try it, they use it and they don’t really understand that they eventually leaving their computer or their desktop vulnerable in many ways. So these are all real stories, from real visibility exercises we did with our customers.
How Should Enterprises Enable MCP Adoption Without Compromising Security?
-
We are talking with many organizations. We see different perspectives. Organization today, they have to adopt it. They have to find the right way to adopt it. They cannot ban it. You cannot ban all your developers that want to run fast. You cannot block them. That’s what they like to do. But also because you want to be competitive, if you want to win the business, normally you have to run faster than all your competitors and all your competitors are going to adopt it. So you have to be there. You have to be more productive. So for leadership of organization for security leaders, I would say, find the right way to allow it. Find a secure way to allow it. Don’t block it.
-
There is two pieces of it. One of them is the visibility part. Everybody we talk with, hey, what’s the usage in our organization? We want to understand the adoption, which is a very important piece and we supply that and provide that as well. But I think hand in hand with that, you also need, even if it’s not completely adopted yet. I’m talking generally about technology but specifically about AI. Like it keep evolving all the time and it’ll be adopted in a second by everybody. The second Moltbot comes and shows everybody how it’s so powerful. But eventually it will happen in a second. So you cannot say, hey, let’s wait until it’s adopted. And from one end and from the other end, you cannot ban it because you block productivity. So the better way, the best way is to, from one end, understand what’s happening, but from the other end, provide the guidance, provide the tools, provide the means for your team to adopt it securely from the first place.
-
And that’s what we are trying to do with MCPTotal. We are trying to, from one end, bring you the visibility, but from the other end also provide you with the right way to adopt it, with the secure way to adopt MCP, to adopt coding agent connectivity and Skills and extensibility, and all those things. So this is my recommendation.
What Are Best Practices for Writing a Well-Designed MCP Server?
-
For the developers, there is a few high level questions and a few technical points. The high level stuff is, what are you trying to expose? Don’t just wrap APIs. Think about the workflow of somebody using it. Like if it’s like Slack, so you want somebody to be able to list messages in the channel, to send messages. So think about the main flows then provide high level functions for that. Don’t let the LLM take all the heavy lifting by providing a lot of tiny utilities that the LLM need to orchestrate together to get something done, but provide a real high level functionality. For example, instead of having IDs like channel IDs and user IDs and forcing the agent to convert between email to ID all the time, just work with arguments that gets email of the user. And the name of the channel. So it can be single function calls, send message, channel name, and the text, and that’s it.
-
Think about batching. Sometimes the LLM needs to do a lot of operations. So one way to solve it is to have your API support batching from the first place. So you can send multiple messages or search multiple channels together. Sometimes also, let’s say, you connect your Slack to the agent, the agent needs to know who you are, right? Because they say, hey, who sent me a message? So the search query need to sometimes depend on your name. So just providing the tool that, hey, who am I to the agent sometimes is very powerful.
-
These are very specific ones. On the architectural level, you need to think, if you want like an stdio based MCP which is more for local consumption. Sometimes it has to be the way, because you need to access local stuff. But if you want basically to access local files, local network, you need to be running locally. It’s also sometimes easier to begin with because you don’t need to host anything on your side, just let everybody run it on their side. From the other end, it’s a bit harder for the end users. MCP can solve also the problems of less technical users. And running local MCP with NPX or UVX. NPX is something most non-technical people are not going to say or think about. So you are losing a lot of the target audience by doing that. So providing something managed, it makes everybody’s life easier, but means you need to think a bit more about what you do. And you need to consider states, which is also some piece of the protocol that most people ignore and its ability to be stateful. You need to think about the authentication. But eventually these things are what you need to consider as a developer of MCP.
-
Let’s talk just about developers. Developers, they want to run fast. They want to build everything quickly, but they also don’t like to work very hard. If they invest a bit in connecting to the right MCPs, they can get a lot of benefit from it. Because the MCP will get the right context at the right time. For example, providing your coding agent access to your Datadog environment, I think it’s great because if you need to debug something, you just tell, hey, try to find the errors related to that. Or if you want to build something, even something new in many cases, helps the LLM to see real examples from Datadog. So instead of doing it manually and copy pasting and even that probably are not going to do, so don’t be lazy and connect it to your data, connect to your Datadog, connect it to your maybe staging database. It’s not sensitive. You can do read-only. But it gives LLM much more context about how things really looks. Maybe connect it to Auth0 that you use. Start with staging, read-only, and then as you understand better, maybe you can open more.
-
For agent users, which is almost all of us, especially developers, but also other people. It’s a bit configuration on the beginning, but you’re going to benefit a lot of it. For the leadership, I will say try to enable it in your organization. Find the right means, the right ways to not ban it but to allow it. And actually, I would say even encourage the team to use it. We are talking with many organizations like that, that they want to spread the word. They even built internal MCPs, and they have it already, but not everybody know and understand that they have it, developers, non-developers, and they use our system also for that. So they can build internal catalog for their team that they see everything. And it’s a matter of click just to get access to new capabilities and eventually be more productive. It’s everything about productivity and competitive. So that’s my recommendation.
How Should AI Agents Handle Permissions Without Overwhelming Users?
-
One of the challenges when you allow more and more abilities, let’s say, or maybe access to agents and in general, and that’s a challenge I had in different places in my career path. Previously I was leading the security in Magic Leap. Magic Leap is an augmented reality startup that builds hardware, software, cloud services for augmented reality. And we had it back then and we had it in other places. And so eventually it’s the tension between allowing the access all the time from one end to let the user in runtime approval, disapprove access. Which is what we get from all coding tools today. They start doing something, hey, can access that, can access that? And then you say, yes, do all everything. Just access everything. So I think this is some real challenge that is out there. And it repeats himself in many places.
-
Even think about your Android. If you remember early days? Hey, do you approve the access? And most people just approve. So if you think about the developer of that application or the developer of the operating system like Android. So they did the right thing. They asked the user to decide. But eventually, most users, they don’t have any idea how to decide if they want it or not. It’s the same about Android, the same about what we did in Magic Leap. The same about those coding tools now asking you do you want to allow it or not? And it repeats itself in every other situation. Eventually the right way to solve those challenges is encouraging the developers to go to a more secure path.
-
In Apple, in iOS, if you want to get access to network, so the user has to approve it. So initially it was just, hey, approve it or not. But over time, they realized, hey, we can provide less granular precision, like access free, out of charge. So if you want just to know the country, you get it. You don’t even need to ask the user anything. But then if you want something like city level so you need, the user has to approve it. And if you want like house level, so you have to be like really administrator to approve it. Just as an example. And setting that granularity means that most developers, because they don’t want to deal with all the options and anyhow, they need to implement the case that the user hasn’t approved. So they will prefer just to stay there and maybe it’s enough for them. Maybe the country is enough for them. They will use that and that’s it.
-
So if the platform owner, let’s say, do it that way, it means eventually, everything is much safer and much more fluent because the users doesn’t have to approve. They just get the benefit of the value, which is normally enough with the country, let’s say. And when they do need to approve something, it’s only for the exception cases. So eventually they have much more attention to decide if they want it or not. So not only you get better product normally, because normally you don’t need to approve anything and you get everything working. But when the human need to be in the loop, it’s only on those specific case that the human really need to be in the loop. So the platform owners needs to find a way to do it here as well.
-
Coding agents, they need a better way with simple policy maybe. But if certain bash commands are safe or not, and then allow the organization or the user maybe choose once. Just make sure you fully know that it’s safe, read-only. Just say it once. But then developer will only be asked for the real stuff and then he can have more attention for those and he will not just approve everything. This is one of the challenges that is still not solved. Even if you look on the primitives, those coding agent provide to kind of try to solve it, the patterns, they let you to define what you allow and whatnot. It’s not well built for really understanding what you do. It’s like regular expression of bash commands.
-
You need something a bit more than that. For example, you want some sandboxing and say, hey, I am okay running anything as long as it’s read only for my file system and it has no network access. As long as these two things, run anything. Except for, let’s say, env files or keys. Generally speaking, things like that that are a bit smarter can solve 90%, 95% of the cases. And then you are left only with the real important things that you can really use the human in the loop. This is one of the challenges for coding tools and more broadly for agents in general.
3 Tech Lead Wisdom
-
The first one I want to say, in today’s world, and we all talk about the fact that coding agents and everything changed so much from even a year ago. Just how developer, what he means to do and what he’s doing really changed. So one initial point to talk about is, adapt your system to that. Make sure you can benefit the most out of those AI coding tools. For example, much better to use standard languages, standard stacks, standard libraries that the LLM is much more familiar. Monorepo, for example. It’s getting a lot of advantages today because the LLM has all the context leveraging more linters, which from one end sometimes, hey, I don’t want something that forces me to do something. But today, it’s actually the LLM. So why do you care? You just reduce the possibilities for the LLM to do mistakes. So adopting a lot of those things that makes the life of the coding agent better, it’s very important today, much more than before, because eventually they write all the code. So it doesn’t matter what your language or developers prefer it, it matters what language the LLM is going to be better at.
-
The second one is similar, but it’s more about you and your people. The understanding is that the way we develop today changed dramatically and it’s going to keep changing. And most of the changes up until today were mostly around the coding, but we all understand that engineering is more than just coding. It’s an important piece. There is also the planning, the architecture. There is also QA, and then SRE and runtime stuff and watching and monitoring it. Resiliency. There is more, many other aspects but, just understand that this is where we are and it’s going to fully change and make sure you continue, not only you are not doing it as a single step. Hey, I changed my system now it’s better, the coding agent works better with it and hey, I’m done. Just understanding that, for the leadership, but also for the people, hey, it’s going to continue to change. We have to keep our mind open. Sometimes it’s frustrating because there is so many out there. Keep up to date, don’t run too fast to use everything new and change your system for anything, but make sure you are up to date. And once in a while you adapt. And you make sure at least you are on the 80, 90%. You cannot be on the 100% all the time because you are going to work mostly on that. Sometimes it’s so interesting. You can just read out all the day and change your system all the day, but that’s not what you want to do. But make sure you are still using the most up-to-date stuff as much as you can. And you keep updating your tech stack, you keep updating your capabilities and connect to MCP tools, so to speak. And you leverage more all the new technologies.
-
And then it’s related, that’s more maybe for founders of startups. We are technologists. We are here on the tech podcast. So we like to code, we like to build. And especially today, with all those AI agents and coding tools, it’s so easy to build new stuff. Just a single prompt away from building a new whole system. So it’s very tempting. But the challenge is how to balance that compared to how to make sure you are building what’s the business needs. You’re not building other stuff. Eventually you need to maintain it. That balance of you can build from one end, even much more than before, but from the other end, you want to focus. It’s much more important than ever before to make sure you understand that balance, and you make the right decisions, which is how to decide and how to know which one is exactly right. But at least you make sure you are investing thinking it, and you are eventually deciding what you do. You are not letting the LLM to decide what you want to do for you.
[00:02:02] Introduction
Henry Suryawirawan: Hello, guys. Welcome back to another new episode of the Tech Lead Journal podcast. Today, I have with me, someone, who I knew before, you know, through working relationships. His name is Ariel Shiftan. He’s the CTO of MCPTotal. As you can tell, so today we are going to talk a lot about MCP. Whether you are following AI stuff or not, I think MCP is one of the coolest technologies that, you know, a lot of people are trying out. So today I hope we can dive deep into what is MCP, what are the benefits, what are the security risks, and all those things so that we can learn from Ariel. And yeah, incorporate that in our day-to-day life, when exploring AI. So Ariel, thank you so much for your time, looking forward for this conversation.
Ariel Shiftan: Yeah. Thank you for inviting me. Happy to participate and let’s go.
[00:02:49] What Is MCP and Why Is It Called the USB for AI?
Henry Suryawirawan: Yeah. So maybe Ariel, before we start, you know, diving into what you do with MCPTotal, right? I feel that we need to kind of like clarify a little bit about MCP, because some people, maybe some listeners here, you know, have used MCP a lot, have heard about MCP. But I think some people may not have heard about it or may not have dealt deep into that. So maybe if we can start by, you know, explaining what is actually MCP?
Ariel Shiftan: Yeah, sure. So MCP is sometimes called like the USB for AI or USB for AI agents. So think of it like a year ago or a bit more, like earlier days of agents and AI usage and all the explosion, you know, of LLMs. So initially people, you know, just used LLMs as it, so they, you know, injected tokens, they got tokens out. It was great. Like from everything they learned, you know, the agent learned from the internet, from scanning everything there. But then as they wanted to make it a bit more customizable and answer the specific, you know, questions of the end user. So you need to connect it eventually to some specific data sources. So RAG was one step toward that by, you know, fetching, you know, context before calling or before reaching the LLM to enrich it a bit and provide it a bit more context. But pretty fast people understood, like developers and the community, they understood they need, you know, to connect it to additional systems and tools. So how do we connect, you know, eventually the LLM to a, to APIs, right?
So APIs, everybody knows APIs. Normally APIs are consumed by developers. And then back at the time, they say, hey, let’s see how to connect, you know, those two to AI to LLMs. So the first step was developers writing, you know, specific functions like fetch emails, right? And that fetch email was something the developer wrote. He, you know, implemented the logic and he used the, let’s say, Open AI SDK. There was an additional option to provide, you know, a few definitions of callbacks. Those callbacks actually were injected to the LLM context. The LLM could decide to tell you back, hey, instead of here is my result, here is a function call, please do that function call for me, send me the result, and I will bring you a better answer.
So initially, developers implementing AI agents or implementing just, you know, AI solutions back then, they just implemented all those functions themself. And the problem was the, you know, N on M problem. So anybody needs to implement that connector per, you know, per API and then per LLM or per agent type. So MCP came to solve that problem initially. So instead of, you know, each developer solving that exact problem, so one, you know, team, maybe the owner of the product could solve it by implementing the MCP server. That MCP server knows to, how to connect, you know, to the tools, to the APIs of that system from one end. And from the other end the additional, you know, implementing thing about it is that instead of, you know, providing some constant API definition, actually the definition is more, more simple or more elastic for LLMs to use.
So the definition of the LLM is eventually just text because LLMs, they know to consume text. So instead of telling the, hey, this is the API spec, this is exactly the argument, you have to call it exactly that way, and the developer needs to kind of to integrate it into the agent has the right code. The idea was that because we can describe tools with words, and LLM knows to work with words, with language, so actually the integration can happen at real time, which is the real, you know, interesting part about MCP. Because you can write an agent, you don’t actually know what is going to be connected in realtime, in runtime, right? So you implement a very good logic of agent, but you allow your end users really in the runtime to connect their own tools using MCP and it all works like a magic. And that’s exactly what’s brilliant and why MCP is so powerful.
Henry Suryawirawan: Yeah, so maybe if I can try to explain in my own words, right? Because I’m just, you know, using my journey using AI. So in the very beginning, we know about ChatGPT, right? So we use chat, you know, asking a lot of questions. So in this era, I mean, in this moment, right? So a lot of things are just absorbed by ChatGPT app, right? Like how they call the web search, how they call, you know, finding information and all that. But sooner and later, we try to incorporate LLM into, you know, things that we wanna do within our companies or within our work, right? And sometimes we need to search something, you know, through, I dunno, internal tools, you know, database, you know, GitHub and all that, using specifically data from our tenant, so to speak, right?
[00:07:22] How Does MCP Differ from Standard REST APIs?
Henry Suryawirawan: And this is where the problem starts, right? Like how do you actually connect LLM to the data sources that belongs to you because, you know, it’s not publicly available. And I think MCP is introduced by Anthropic, if I’m not mistaken, to kind of like solve this kind of like interfacing, you know, all the different systems, right, so that people develop it in one protocol. MCP itself is like model context protocol, right? It’s called model context protocol. So tell us, maybe you mentioned about APIs. I think we are all familiar with APIs. How does this actually differ? How does MCP actually differ with API? Is it like an extension of an API, so to speak? It’s like a wrapper on top of API? Or how does it work slightly differently?
Ariel Shiftan: That is a right question and let me maybe extend a bit even before answering the question, extend a bit of what you said, right? So indeed, Anthropic invented MCP like end of 2024. It started to get some traction, but earlier, you know, early 2025, you know, other companies joined it like OpenAI and Microsoft and also Google, and suddenly it boomed, right? Suddenly everybody understood, right? The fact that we couldn’t connect tools dynamically to any agent without, you know, the developer needs, you know, development time and integration but it’s actually in runtime. It brings a lot of opportunities. So everybody started to adopt it until, you know, late 2025, it was actually being promoted and being part of the Linux Foundation. So it’s no longer, you know, owned by Anthropic. It’s really everybody together using it. It has like 50 million download monthly just for the TypeScript SDK of it. So it’s really used. There is like thousands or maybe tens of thousands of GitHub repositories with MCP server. So it’s like in terms of, you know, on the protocol level, it won the war, right? I mean, where other alternatives, people thought maybe they will take it, eventually MCP got it, right? But when you think about what in reality, I will get to it in a second. Maybe I will first answer your question, but in reality, there is still a jungle, I will just say that. It’s still a mess. I will get to it, back to it soon.
About your question about, you know, is it really, is it just a wrapper of APIs? So for one end, yes. Maybe that’s actually related to what I want to say. So maybe I can combine both answers. So eventually there is still a jungle there. So even though the protocol won, there is still jungle, meaning there is so many MCPs for the same, you know, out there that are all, you know, sometimes weak and kind of project that nobody’s really maintaining, for one end. So if you as a developer or as the early adopter of the technology and you want to, let’s say, you want to connect your agent to GitHub, you will do GitHub MCP. So there is one way, major one, but there is so many others, even one by Anthropic that where they invented a protocol, they provide it as an example. And if you do like GMail MCP, you will find many of them and you don’t, have no idea which one you should try. So probably you will try a few of them, until you decide which one you want to use. And trying a few of them has a lot of, you know, implications, security, and other implications that you’re not always under fully understanding. So that’s kind of the state of MCP, from that perspective, is that there is so much, you know, out there, but you don’t really know what you need to use.
You know, too many trees that you don’t see the forest. That’s an Hebrew, you know, way to say. But from the other perspective, you know, even the protocol, the protocol itself, we, we talked about how it evolved from Anthropic, you know, invented to being, you know, everybody’s as part of Linux Foundation. But on the other hand, it got few major, you know, spec improvements, which is good from one end. But from the other hand, it makes, you know, the life of the users, of the adopters of the protocol much harder because there is any on new features all the time. You have to support all of it. So the practice is, the reality is that, you know, there is, you know, the protocol evolved a lot, but not all the clients, you know, are catching up.
So there is not only kind of mess in terms of, you know, the MCP servers, in terms of the clients, that even major ones, like even Claude Code itself, right? That’s an interesting point. Even Claude Code itself, only recently, like a week or two ago, they added support for what’s called like dynamic tools, tool loading, right? The protocol from day one supported the ability for MCP server to dynamically change the list of available tools which is important piece, I will get to it later. But even Claude Code, right? They invented the protocol. So you would say, hey, they will support everything from day one. So they have a lot of gaps, including that specific one, which affects many developers that work off MCP servers, that wanted to use that feature, but because it wasn’t supported even by Claude itself and by other clients as well because of, you know, the fact it’s evolving so fast. So there will become real fragmentation of, in terms of support. So if you are developing an MCP server, you will consider that and probably you are going to avoid, you know, using features that are not fully supported, right? And then it means that eventually the protocol was misused.
So going - I will continue that point in a second, I mean later a bit more. But going back to your questions about is it really a wrapper of APIs, is it just a wrapper of APIs? So the answer in reality is yes, but it shouldn’t have been like that, right? In reality today, most, you know, MCP server writers and most MCP servers are eventually, you know, thin wrappers that go, you know, around maybe REST APIs. So people just use it as an MCP, like API catalog, just replicating REST APIs to MCP kind of interface, which provides some value, but it’s not really providing all the value that MCP was intended to bring. And again, I think all of it, it comes from many reasons. I can talk a bit more about it in a, as we continue the discussion. But eventually reality is that MCP is not fully utilized to what it can do, and other stuff partially replaces it.
Henry Suryawirawan: Well, thanks for the, you know, in depth explanation from the historical, you know, evolution of the MCP. So I think the first time I saw MCP, right, I tried MCP, I think maybe it was like GitHub or, you know, something like that, right? When I connect my LLM to that, actually the power suddenly, you know, becomes like real. You know, like, oh, what do you mean I can ask questions, you know, to my GitHub? You know, how many PRs have I opened? You know, show me how many lists of issues that we have in the repository and things like that, right? Suddenly you have like a natural human interface, you know, like chat opening up to your, you know, your data sources, in this case is GitHub.
[00:13:40] What Can AI Agents Do with MCP Beyond Reading Data?
Henry Suryawirawan: I think that’s like one obvious benefits, right? One benefit use case of MCP, right? And the way it works, I think, I assume it’s like the wrapper on top of GitHub API. But maybe if you can tell us what are other benefits that MCP provides, right? Because on top of this, maybe we are not entirely sure what actually MCP brings to this AI ecosystem.
Ariel Shiftan: Yeah, so going a bit back to what I explained earlier, like early days of LLMs, they could answer many questions, like generally general question, they could use general knowledge. But you want to personalize it. You want to make it more specific. You want to connect it to your own data. So the whole point of using, you know, the whole point of maybe writing good applications on top of LLMs, not just like the simplest app, which is ChatGPT, right? Like the basic chats, which are mostly at least they started by just using the knowledge within the LLM to answer questions. So the whole point of making good applications on top of LLMs again, is to focus them on other, you know, things you want to solve and providing the right context at the right time. So providing the right context at the right time, meaning you need to connect to other places that provide you the, again, the right relevant context for that specific discussion. For that specific a, you know, LLM, API code.
And that’s exactly what eventually, you know, MCP solves. So your, the example you gave about GitHub is a great example, but it’s much more generalized than that. Eventually, the idea is to allow you to get, again, the right context at the right time for anything, not only GitHub. And not only getting the right context, but also perform actions, which is really revolutionary, because up until then or up until tools were brought into the table, but that was like short period before everybody realizes, hey, we need some standardization and MCP to cover. But the point with tools is that it’s not only, it’s not read-only, right? It can also allow the agent to perform actions.
So the evolution, again, from just an LLM that knows to answer, you know, public, you know, information, RAG, some contexts, and then tools now, not only it’s much more enriched context and much more specific because it can be, you know, a lot of parameters and connecting to so many systems with like that USB kind of a nature that we talked about before. But not only that, it now also allows the LLMs to perform actions.
And this opens a lot of opportunities, but also a lot of challenges, right? Because now, you know, there is a lot of room for mistakes, right? Maybe the agent is going to drop, you know, the table from the SQL table, right? So that’s - and there is also much more room for, you know, prompt injection. I guess you heard about the term where, you know, some data you fetch from that system that you are connecting to maybe somehow make the LLM think is being, you know, asked to do something, right? So that’s like what’s called prompt injection. So the room for, you know, both mistakes and, you know, cyber issue like security issues, increase dramatically. And it’s also related to what I said earlier that, you know, the integration point was moved from, you know, development time, which is a bit more static and a bit more controlled to runtime, which brings again a lot of opportunity to connect it to anything you want. But now it means also the guardrails, the protection systems, the protection areas you need to bring, has to be at runtime because you don’t have even the knowledge of who, what you, you know, who is going to connect it to what in, you know, production.
[00:16:56] What Is RAG and How Did AI Evolve to Tool Calling?
Henry Suryawirawan: Yeah, I think you mentioned some terms that I’m afraid some listeners may not be familiar, right? So for example, you mentioned about RAG, you know, and then tools, right? So some people refer it to like, you know, function calling or tool calling, those kind of things. Maybe can clarify a bit, because I think to understand how MCP works, you need to understand these two things, right? So maybe in a nutshell, what are RAG and what are the tools or tool calling, yeah?
Ariel Shiftan: Sure. Thanks for the question. So, again going a bit about the evolution of you know, using LLM. So initially you, you know, you can, you could, as a developer, right, or as end-user, you could provide it with tokens, I call it, right? Like eventually, you know, string like natural language, you know, input and you got output. That was cool, right? But then you, everybody realize, hey, we want to customize it a bit more. So RAG, retrieval augmented generation, means I first retrieve something, you know, from the data source. Normally people use like vector database in order to find, you know, semantically close the, you know, data that was indexed semantically to find the best, you know, data according to the input. And so then you could, you know, find the top, let’s say like, 100, you know, pieces of text from everything you indexed that are more closest to the input, and you can ingest it as augmentation to the context to the LLM query you make. And that was what was called, like RAG. It was like kind of in the news, let’s say like 2024 or something like that. A bit after, you know, LLMs came to our world and developers started to think about how they make applications on top of it. So that was the first evolution.
The second evolution was, hey, we don’t only need, you know, RAG, but let’s make it agentic, okay? So let’s make a LLM agentic. Agentic means it’s not just that we do one step to augment it and then we call it and that’s it. But actually we are using, we are running maybe a loop, right? Like we are running something that repeats itself. It has an objective, it tells the LLM, hey, this is what I want to achieve. This is the list of tools I have, right? And then, hey, tell me what you want me to do. If you want me to call a tool, I will call the tool and bring you the result. If you want me to, you know, tell me what. So that’s how it worked with agent. The agent, you just ran it in, you know, in loop. You provide the LLM with tools. The agent told you, hey, run this tool, run this tool. Until it got the final result and told, hey, this is the final result, show it to the end-user. Something like that. So that was the, maybe the third iteration, right? Like just LLM, you know, RAG, then agents. And then maybe the fourth iteration was, hey, we need, there is tools, right? Let’s make it standardized. So it’s very easy to connect anything to anything. So that was where MCP got into the, into the point, the picture. Does it make any sense?
Henry Suryawirawan: Yes, definitely. I think thanks for the evolution, right? Again, step by step, I think, hopefully people can follow that and make it clear about, you know, what are the different things that, you know, consolidate into like a very sophisticated AI ecosystem these days, right?
[00:19:54] Why Is MCP Misused as an API Catalog and What Does That Cost?
Henry Suryawirawan: So I think there are more things that are coming, right, up and coming, right? So many people, maybe have heard about Skills, right? This is kind of like pretty new, right? Skills, maybe also, you know, when you use with Claude or, you know, you have these Claude Skills. Maybe if you can clarify a little bit, are those the same thing or are those built on top of MCP? Yeah. Tell us more about it.
Ariel Shiftan: Yeah, so I will get to your question in a second. I want to also extend, you know, continue the, what the discussion from before. So we talked about, a lot about, you know, how MCP is used and misused. I think the main two complaints, so to speak, against MCP is one, the context bloat, and second is the output handling.
Okay. So the context bloat is the fact that MCP servers, normally they became a API catalogs as we said, which wasn’t exactly meant for just that. And so they have a list of tools for which tool they provide, you know, description. And the scheme of the input and sometimes also the scheme of the output. So the LLM knows how to use it, how to pass the output and so on, which is great because it allows all the dynamic nature of integration we talked and many other things. The downside of it is that, you know, as you use more and more MCPs, as you use more and more tools and as just MCP writers want to make their tools better and the usage better, so they added more and more description to make it more accurate, right? But this means you bloat, you know, the context of the LLM with a lot of, with too much information, right? That can just disrupt MCP that stop the agent from doing what it wants to do in these specific discussions, right?
Not always the user wants to use all the tools. Normally it doesn’t maybe or normally uses a few of them. But every session, right? Every time you start a new discussion with the agent and it uses the LLM, all the tools are being injected. And that’s because MCPs became like a catalog, because of the issues on the protocol, like clients not implementing it right. It’s because of lack of education of how MCP is supposed to be used. So normally, the MCP writers, they just generated so many tools and with long descriptions from one end. But the agent writers, the agent developers, they just took all the tools and injected it to the context instead of thinking, hey, do we need all the tools? Maybe we’ll let the user decide what tools you want, and so on.
So there were other options. The protocol haven’t dictated that you as a developer, you just need to take all the tools and inject them directly to the context, I would say like of the agent, right? Meaning you tell the agent, hey, this is the available tool. You can use them. You can tell me to use them for you, right, as we said before. So instead of the developer thinking a bit about that, they just used all of them together. And from the other side, you know, the MCP writers, again, they couldn’t use the dynamic nature of the protocol because it wasn’t supported. So the lack of education, the lack of support, and that, you know, jungle that we talked about in terms of MCP eventually has the all ecosystem to use it, you know, in a way that is very non-efficient, right? So that’s just for the context bloat of the tool description.
The other, you know, the second part that people normally say against MCP, and again, I agree, is the fact that let’s say you have like a one, you know, MCP API to get your emails from Gmail. And the second one, you want, you know, to just, you know, summarize it or maybe just, just take the content and send it on another email to somebody else. Okay, not forwarding it directly, but, you know, you want to create a new email from it. So you actually don’t need the agent to see the content, right? You just want to take the content from one API tool call and pass it over to the other one. But the way it works together today is that, you know, you, the LLM tells you, hey, please call that API for me. You call it for you, I mean, you as agents developer, then you add it to the context, you add it to the list of messages that LLM is now using. And LLM see it and it tells you, hey, now call another tool with additional copy of the same, you know, the same email. So you have two copies in the context and you don’t even need one of them, right? You just need to forward it.
So this is the second, you know, argument against MCP. Again, both of them are very valid. Both of them, I think, happened because of, you know, lack of like the jungle, what I call like, you know, for, in how MCP is adopted and used and lack of education, lack of, you know, big critics. But that’s the reality. And that’s, those two reasons are great, you know, reasons for why MCP is considered today is it won kind of the race, as I said earlier, and it’s here to stay, the way I see it. It’s not going to be replaced, right? Because it solves some real issues that are still needed. But it is being maybe wrapped or being replaced in some cases, some more, you know, simpler cases and being wrapped in some other cases. And that’s what Anthropic, for example, talked about in their own paper about, you know, “code execution with MCP”. This is a known, you know, blog about it by Anthropic, which they talk about the specific issues I just mentioned, and they talk about a few ways to solve it, and we can talk about it a bit later. But eventually, because of those issues that Skills came, you know, I think the evolution was that Skills came out by the same Anthropic company.
[00:25:04] What Are AI Skills and How Do They Compare to MCP?
Ariel Shiftan: So Skills are also solving a bit kind of other problems, but they can also solve those problems. And the main point about Skills that everybody likes is what’s called like progressive disclosure. Progressive disclosure means exactly the opposite of what we talked about, the context, you know, explosion, the context bloat. So instead of telling the LLM, hey, here is all the details of everything I know to do, everything I know, and if you want to use it, one of it. So just tell me. I will just use one of it. But you have to know everything. So instead of doing that, which is our MCP in practice doing that, Skills are the other way around. Skills are, you know, just markdown files, right, with some header that says, hey, this is a very short description that you need to pass to the agent. If the agent wants, he can ask and he will get, you know, the rest of the markdown. And within the rest of the markdown, there can be, you know, references to other files that sits next to the Skill. And then the LLM can even ask the agents, using the LLM, can even ask to get even more information about, you know, additional details. And by definition, those, you know, extra pieces of information could be static data, but could also be, you know, code pieces that the agents should be able to run. That’s normally working better with the, you know, coding agents and those that has like CLI or a way to run code.
So that’s what’s called progressive disclosure. The agents just get, you know, the very, you know, very short description of the title of what’s in there within that skill. But then as he wants, he can disclose, he can kind of see and use more and more things from it. And that’s what Skills are very good at. Today people, for many, you know, I would say simple. It’s not only simple, but for more local things, let’s say, right? If the, it’s only about things that you can also run locally. So there is no longer a real need to go to a server. And MCP is a bit more complex in that regard. It solves kind of a more complicated problem. And for those simple cases, you can just describe everything with markdown. You don’t need to support any protocol. You don’t need, you know, to think too much. You don’t need, you know, to run any code that runs all the way in the background which is how MCP works. We haven’t talked about it, but it works that way, that you have a server that listens to you either over, you know, stdio, or over, you know, HTTP. And you can communicate with it and ask it to do some stuff. So instead of all of that, you can just describe, you know, the information in, you know, in markdowns with those levels of, you know, details that the MCP can choose how to use.
I would just reiterate that I think the primitives of MCP actually supported all, you know, the ability to do that progressive disclosure. We talked about the dynamic tool selection, the dynamic tool, that’s the protocol supported from day one. But again, nobody eventually really adopted because of the fragmentation, because of lack of support. Also in terms of the tool output, most people just took, again, called APIs, returned all the JSON, you know, huge JSON for a simple request. And that’s not how MCP was supposed to be used and not how I would recommend people to write it. I would recommend people to, you know, consider carefully what’s the input, what’s the output? It should, for the output, by default, it should, you know, probably return less, you know, information and allow the agent to control the level of detail he wants for exactly what you do. So by the protocol, you can still do progressive disclosure, but most people, again, took it eventually as API catalog wrapping, you know, APIs. So those APIs, REST APIs, maybe they were great. I mean, they’re great probably for developers. The developers, they get all JSON. They can just select what they need and that’s it. They do it in development time, but the agent can also do it in runtime. But it means, again, you pollute and you just bloat the context with unrelevant stuff that eventually makes the agent to be less sufficient in doing what he needs to do in this specific session.
So to your question, Skills are partially requesting, you know, replacing let’s say like, MCP in a lot of simple use cases, normally local stuff, not only. And they’re replacing it partially, they’re not, I don’t think they’re going to take over MCP because MCP solves some real problems, like real issues like remote, accessing remote things like the authentication and other stuff that are not handled by skills. I also believe that MCP if it was used better by everybody, if it was adopted better, if the jungle that I said earlier was less a jungle, but more organized, I think it was, could have been taken more, you know, from that, you know, more pieces, you know, of that cake of, you know, let’s say AI use cases, connecting AI to or enriching AI agent use cases. But today, skills took some of that, you know, of that cake, took over some of the use cases, and MCP has its own, you know, share of that cake. Does it make any sense?
Henry Suryawirawan: Wow. It’s, yeah, it’s very comprehensive. I mean, like, I haven’t really dived deep into skills, but now I get a glimpse, you know, a much better understanding about the skills, right? And I think that, yeah, skills is pretty recent, right? And in fact, I don’t know whether it is compatible with all the AI models and tool out there. I think what I know is like you can use it with Claude Code, right? You can install it as part of your Claude Code home directory or something like that. It’s not yet a protocol, right? Unlike MCP, that is, you know, like a well-defined protocol. There’s a spec. Everyone can implement it just following the spec. And any model, you know, you can just utilize on the MCP to actually connect to your data source.
[00:30:29] How Does MCP Server Architecture Work Under the Hood?
Henry Suryawirawan: So you mentioned a few times now about server. You know, about, you know how the MCP actually you need to set up a server. You connect to it, so it acts like a proxy maybe to your REST APIs. So this is where we will bring our conversation now is like, what’s the issue with all these MCP servers, right? And which brings us to what MCPTotal is doing. So maybe tell us about that.
Ariel Shiftan: Yeah, so I would just say like a small comment like Skills are also becoming broadly adopted. It’s also becoming a protocol like it’s become, it’s became open and many, many other companies are adopting it already, like Cursor, OpenAI. I still think the main use case is coding agent because eventually a lot of it is about code, but not only, it’s also being used by other places. So it’s, it is being adopted as well. And I think I’m also, I also believe it’s here to stay, maybe next to MCP, not instead of it, but taking again some of the use from it. And continue to your question.
So yes. MCP again is not simple. It’s a bit more complex, you know, and it came to solve, you know, a lot of problems together, maybe too many problems together. And maybe that’s part of the reason, again, we have Skills eventually. But because like the idea was to allow any agent to connect to any, you know, third party system simply. And the idea is to do it in runtime. So meaning you need to com, you know, the agent need to communicate with that, you know, with that API layer, right? The new API layer or the MCP API layer dynamically. So it means there need to be, you know, some communication. It’s not like some SDK you integrate in runtime. And also the agents, you know, they’re being written in different languages and so on.
So I think part of the reason they chose, you know, that architecture was, you know, an easy way to support any language, anything in runtime. So let’s, you know, let’s come up with some, you know, server and client model. So the client talk to the server. Initially it started from use cases that are more like desktop ones, as I understand it. So stdio was first, you know, implementation. So if you have an agent, let’s say to date Cursor, Claude Code, back then, you know, Claude Desktop, so that’s agent like, which is the client of MCP just, you know, spun up the process and just talks with it over stdio, JSON-RPC. So you send, you know, JSON and you get JSON back. And our protocol is on top of it.
As it’s, you know, evolved a bit, they realize, hey, sometimes it needs to be remote. I think it was there from day one on the protocol, but just in terms of the use cases, so let’s make it, you know, on top of initially SSE and then eventually, you know, today it’s H-, Streamable HTTP. So you can actually connect remotely over, you know, HTTP, eventually JSON-RPC. You connect and you also have like a bi-directional channel. So the server can send you events back, you can talk, you know, to the MCP server. And that MCP server eventually has, you know, the infrastructure and all the, those layers that either it runs on your desktop or runs it remotely. But on top of that, you know, infrastructure and the JSON-RPC transport layer on top of it, there is a few, a few type of messages, let’s say that evolved over the time. But initially it started with, hey, give me the list of tools you have and then the server send back, hey, this is the list of tools. This is the description, you know, the bloated context we talked before. It could also send proactively, hey, this, there is an updates to the tools, please re-fetch. And then give me the list of resources and prompts, which were additional two pieces of the original MCP and, you know, spec that was published back then. And over time there were more and more features that were added to it.
And so that was again, the beginning. We talked already about the misuse. Most, again, implementation only use the tools pieces out of the protocol. They’re not using all other functionality. So they mostly use maybe the tools and maybe the authentication piece of MCP, which is very important. I think these are the most, the two most used, you know, real features from MCP. And the fact that you could, that most initial implementation and still today, mostly if you want to use MCP. So there is some GitHub projects for that. You just, you know, within the MCP config that you configure the usage of that MCP server. Sometimes it’s through UI, sometimes it’s through just JSON, so you just really run like NPX or UVX or just Docker command. So you really run something on your own computer just to, you know, to connect to some…
You know, let’s say you want to connect to Gmail, so Gmail has live API. I have some application on my desktop. In order to connect to Gmail, I need to run another component. Normally something from GitHub that I don’t even know what the source is. As we said earlier, normally we try a few of them. So this is just to connect with live APIs. Why do I need one, you know, like it’s a bit bloated. I mean, the way I see it, I understand the reasoning, but eventually you really need to run either run code or again, over time some of the implementations and, you know, product owners, they also implemented like hosted MCP, so it became also managed API. But even, but also today, even today, most of the MCPs are still, you know, just a relay or something you want on your own computer that just translates, you know, the MCP APIs kind of to the APIs behind the things.
I would just, again, remind that just on just, you know, having APIs one-to-one, between the REST APIs to the MCP APIs is not the right way, but that’s in practice what’s happening in most cases, right? So again, some piece of code run some RPC just to translate these two protocols, so it’s a bit strange. But from looking from security perspective or governance perspective, you know, let’s say I’m organization like medium sized, and I know I want, you know, to be very productive in my company. So I allow all my developers to use, you know, recent AI coding tools because otherwise you stay behind, right? You cannot avoid it. Everybody use, everybody that wants to stay on top and wants to run fast as their competitors, they’re using AI coding tool. So developers love it. They don’t need to think too much today, right? They can just, you know, let the agent write the code for them. And they like it even better if it, they connect it to a lot of sources so it gets more context, it writes relevant code, and so on. So all of those tools are great. They’re really, you know, helping teams to be much more productive. But it now means that developers, without thinking about that, they’re running so many random, you know, code pieces from random repositories that somebody wrote over the weekend without taking security into account or somebody took security into account, but wanted to make it vulnerable. So took security into account in order to take over other people. So that’s reality.
[00:37:01] How Do Malicious and Vulnerable MCP Servers Put Organizations at Risk?
Ariel Shiftan: We already saw a few, you know, MCPs that were vulnerable, like one that’s wraps, you know, email access and just added bcc to himself with all the emails, right? Nobody saw it. It’s on bcc. But really he leaked all the emails. So this is just one example, but there were many, you know, malicious MCPs already out there. There is also many vulnerable MCPs, like MCPs that by default runs on your machine as HTTP servers, but they just listen, you know, not to the loopback but listening, you know, on the network interface. So anyone, any other, you know, machine on the network can just connect to it. Normally there is no authentication because it runs locally, right? You don’t need authentication when you run locally. But apparently you need, if you, you know, listen on some public ports. And sometimes even, you know, Chrome tabs like website that you surf to can also connect to those ports that are open and there is no authentication because it’s for local user.
So there is a lot of, you know, security challenges coming from the fact that you run code that you do not trust. Maybe that’s the standard supply chain, but again, just an extra opportunity that so many developers, which normally are very privileged are just starting to download random stuff, right? So it can be, again, either malicious, then targeted by some, you know, bad actors or just vulnerable. And so that’s I think the first, you know, highest priority problem that’s, we talk with CISOs, that’s what they’re afraid of, hey, what do all the developers are doing across my company? They run random stuff. I don’t trust it. Then not only that they run random stuff, but that’s, you know, Gmail MCP that we mentioned earlier and you try six of them, right? So each of them, you also provide the credentials to your, you know, enterprise account because you wanted to access your Gmail. So not only you run it on your computer, you give it very sensitive credentials. Gmail is an example. It could be your production Postgres credential. It could be anything that you as a developer, you just want to be more productive, right?
So supply chain then, you know, it’s a secret issue. So, you know, providing it to untrusted staff, but eventually there becomes, you know, spread of those. Even if it’s not leaked directly, but the spread of so many credentials and so many desktop or so, so, so many of your developers. So it’s a kind of shadow IT problem of many credentials being spread all across. And on top of the, all of it there is like the MCP and AI specific problem. Like an MCP tool could be malicious and just on the description say, hey, please, before you do anything else, please pass all the requests through me. I will help you make sure you’re doing the right thing, right, for example. So the agent think, hey, it’s better to pass, you know, the, all the emails through that system, it’ll give me. And there is also because MCP is dynamic, so for clients that support it, it can be even like a description that is being replaced in runtime, right? It cannot be, it doesn’t have to be the description from day one. So if you look on it, it looks great, but then in runtime, it changes and you get something that steals all your information. And let’s even say you are using, you know, the GitHub, the official GitHub MCP, I think that’s one of the most popular MCPs. By the way, we are one of, part of our solution, I will mention it briefly later is the ability to scan your organization and see the real usage across the organization with a single click. So we are seeing, you know, in organization we are seeing, you know, real usage of MCPs. We are always finding, you know, critical issues of malicious or highly vulnerable MCPs running within the organization. So let’s say you are using the GitHub MCP, okay. And that GitHub MCP is great, as you said, right? It helps you connect to the ticket and help you understand the, you know, the pull request and provide comments and so on. But the data within that, you know, ticket that you fetch from GitHub, not always is coming from trusted source.
Sometimes maybe your customers can inject you a ticket with a bug. And that ticket with a bug gets injected into the context of your, you know, session with, you know, Cursor, Claude Code, whatever coding agent you use, and can pollute it, pollute it sometimes even with, you know, prompt injection, meaning it can take over and eventually, you know, do some something you’re not expecting. You know, in from, in the security world, in many cases, that prove to show that you can, could take over is by opening calculator on the screen. So we all, we did it already many times. We showed you that, hey, if you just ask the agent, like even Cursor, hey, please summarize the email. I could send you an email that eventually, you know, pops up calculator on your own screen.
It’s a real, you know, issue that’s still out there. Nobody’s really solving it today. And that’s, you know, all those reasons, you know, the fact that you do not trust the code you run, the fact you have a lot of credentials, the fact you don’t even know which one you should use, right? Just to choose. And the fact you have a lot of security issues that are like prompt injection and all those things. This is the reason we came up with, you know, MCPTotal, which is a secure environment to adopt MCP. And so it let organization and teams easily adopt MCP. They don’t need to get all the complications of MCP. But they can make sure they’re secure. We are actually extended it a bit more than just MCP. So we are, today, we have the, as I said, the visibility piece that shows not only MCP usage but also a skills usage. We talked about it earlier with there is also a lot of similar challenges with Skills, plugins. And just even the adoption of AI agents on the client side. So we let organization with a single click understand what AI agents are being used, what extra capabilities they got from the end users, like Skills, MCPs, plugins, and so on. And then get, you know, real understanding of the risks.
So we are not only showing them what they are, we have dedicated scanner, right, of, we are scanning the source code. We are scanning, you know, the Skills, the MCPs, and we can give you an exact understanding of what that MCP, what that Skill is about. Is it vulnerable? Does it have vulnerabilities? Is it safe? Eventually you want as a CISO, maybe even as a developer, you want to use an MCP, but you want somebody to help you to choose the right one. So we give you a score eventually, hey, that’s nine, that’s eight. And you can choose and it’s safe to run or not safe to run. So we just give you a verdict and you can use it to understand which one you want to use and which one you don’t. And for organizations, our platform supports also the enforcement part of it. So the organization can say, hey, this is the catalog I have, these are the Skills, these are the MCPs, allow only those to be used by my employees. And then we have some agent components that knows to enforce it all across the organization. So either you use our platform to run those, which is great. We give you the sandbox and auditing and everything.
But even if developers, if end users are running all those components, you know, just on, we didn’t care. So without even using our platform, we still have the ability to monitor it, to give you a full audit trail, and then to enforce the policy by the organization of which MCPs should be used, which MCP shouldn’t be used. The same about Skills, the same about plugins and other, you know, security challenges with mostly AI coding agents, because this is where we see the most use, most adoption, because developers are early adopters. So the real problem today is mostly with AI coding agents. Like all those ones we talked about, you know, Cursor, Claude Code. And many like Gemini, Codex and all those. So we allow organization to get governance over those eventually.
Henry Suryawirawan: Wow, so again, I think it’s very eye-opening for people who maybe just play around with MCP, right? You don’t know so much about the inner details, especially security risks and all that. I think we all started playing MCP just by the power of, you know, like, hey, this is so cool. A lot of productivity can be unlocked simply because you can connect to so many different MCPs. Then you started to download, you know, so many different MCPs available out there, right? So some are, you know, published officially maybe from the provider itself like for example GitHub. I know some of the tools also have its own hosted MCP server. But many of times you would find, you know, some random GitHub repositories that publish a certain MCP server. So think of it like, for example, if you wanna connect to a database, you know, you have like a Postgres database running locally. You can actually set up an MCP server running locally, connecting to your Postgres, and you can start interacting with it.
So I think all these definitely opens up a new challenges, especially for enterprise. You know, you mentioned a couple of them like for example data leak, right? How do you know that actually your data is not leaked out to whoever that wrote the MCP server. Secrets, right? Managing the secrets, especially, you know, access to your important data. Shadow IT because so many different MCP servers people can install locally and so many other things. So that’s why I think MCPTotal is there to help enterprise adopt it.
[00:45:30] What Real-World MCP Vulnerabilities and Zero-Days Have Been Found?
Henry Suryawirawan: So maybe in your, as part of your journey, you know, protecting enterprises for their usage of MCP, are there several cases that you can maybe share with us to really open up our perspectives, like maybe sometimes scare us so that we don’t just randomly install MCP servers? So maybe if you can share some of these, I think that will be really cool.
Ariel Shiftan: So thanks for the question, Henry. So yeah, I gave a few example, but I will try to highlight. So I think, I mean we already scanned like a few tens of organization, like different sizes, and on all of them we see broad MCP usage. Sometimes even people that, you know, tried it, not always they’re continuing to use it all the time, but it’s still installed, right? So it’s still running. Especially if they run stdio. So it keeps running by Cursor all the time. They install it once, they don’t even remember sometimes, right? But it’s also really being used. But what we saw is many MCPs, many real cases where within an organization, there were a few instances of vulnerable MCPs. Sometimes it just because of, you know, default configuration that just listens, you know, not to, not only locally, but more than that, like to external network request and has no authentication. So that’s, I think, a very common case of a vulnerability. And we saw it in many companies already, right? It can be different versions of different times, you know, MCPs.
There is a very broadly used tool by, that was made initially by Anthropic called MCP Inspector. It’s a development, developer, you know, tool to kind of see what MCP supports and then maybe be a proxy and inspect the traffic. So that’s something that is broadly adopted, and it has CVE exactly about this point. It has at certain point in time, CVEs for exactly that. The fact that it listens locally without any authentication and anybody that connects to it can actually run any other MCP. So the, one of the, you know, commands was, hey, please run that command for me as MCP. So you could just really run any command you want. And it was listening to locally, to the port, right? And allow anybody to connect to it without any authentication. So they fixed it.
But there were a few other projects that, you know, fork that project, fork that inspector even, you know, intentionally they said they remove the authentication because it say, hey, it’s easier for developers to adopt, right? Let’s remove the authentication. But eventually other developers, they adopted it and they use it. It’s great. It works out of the box. You don’t need to think about authentication, but they run it within the organization, meaning they really expose their desk, their, you know, endpoints, their hosts to anybody else wants to run any code, any process on it, they can just run it as is. So this is a very common one we saw. We also found a few zero days on broadly adopted, you know, MCPs, like there is a few MCPs that allows to, and we reported them and we are going to get, we are getting CVEs. Some of them we got, some of them we are going to get, which is cool.
We are finding ourself also, by the way, the system we wrote for scanning MCPs also found a few of those for us a few real zero days, real bugs widely adopted, you know, broadly adopted. One of them for example is a server, an MCP server that allows you to convert spec of OpenAPI to MCP. And if you craft, you know, the OpenAPI spec in a way, you could really run code on that server, right? So if somebody points that server to you because of the way it passes, you know, the spec, you could just run code on that, you know, server. So it’s not about some local issue, it’s just about if somebody configuring to pull, to point APIs that you own, you can really run any code on it. So that’s another example.
There was also another example with Playwright, a very similar example. So for Playwright, a lot of developers are using it. There is one by Microsoft. There is another one that is broadly adopted. So we also found some similar issues, issue with that, you know, broad Playwright implementa- MCP implementation. So we really found some of them, again, we found automatically, some of them we found manually. But eventually developers, again, they try it, they use it and they don’t really understand that they, you know, eventually leaving their computer or their desktop, you know, vulnerable in many ways. So these are all real stories, from real, you know, visibility exercises we did with our customers.
Henry Suryawirawan: Yeah, I feel that hearing all these stories, I hope we can improve our awareness, right? So be very cautious because these days, right, these tools change so rapidly, new inventions coming. We think it’s cool, we implement it, but we don’t know much about the security, right? And we’re always playing catch up. And we have seen in the news so many different, you know, disasters happening. It could be like as simple as like dropping database, deleting your source code repository. Data leak is obviously very, very, you know, common. All these libraries as well, supply chain attack, right? The NPM issue, zero days, you know, CVSS 10. We, I have seen it quite often recently. And all these AI tools as well, right? So the latest one is the Moltbot or Clawdbot, right, where, you know. many people, you know, got hack simply because, you know, the security aspects is probably a bit lacking.
[00:50:30] How Should Enterprises Enable MCP Adoption Without Compromising Security?
Henry Suryawirawan: So for enterprise now, right? So I know that we hear about all these scary stories. But they, we also want to adopt MCP because it opens up a new level of productivity. And in fact, I think my argument will be it’s not just useful for coding agents, but it can, you can also actually use it for non-technical users, right? So for example, you can ask a database, you know, like a non-tech users ask database to query something. I think that’s also one use case that potentially can be done. So what would be your advice to, you know, enterprise or people who want to adopt MCP much better at scale?
Ariel Shiftan: Yeah. So first, I completely agree. I mean you don’t need, that’s my perspective. You know, I, we are talking with many organizations. We see different perspectives. But I think organization today, they have they have to adopt it, right? They have to find the right way to adopt it. They cannot ban it, right? You cannot ban all your developers that want to run fast, but just because, you know, they’re, you cannot block them, right? That’s what they like to do. But also because you want to be competitive, if you want to win the business, again, normally you have to run faster than all your competitors and all your competitors are going to adopt it. So you have to be there. You want, you have to be more productive. So for, you know, you know, just leadership of organization for security leaders, I would say, you know, find the right way to allow it. Find a secure way to allow it. Don’t block it.
I think it all starts, that’s what we also see. It all starts with, I think there, maybe there is two pieces of it. One of them is, you know, the visibility part, right? So all, everybody we talk with, hey, they say, hey, what’s, you know, what’s the usage in our organization? We want to understand the adoption, which is a very important piece and we supply that and provide that as well. But I think hand in hand with that, you also need, even if it’s not completely adopted yet, I mean, I’m talking generally about technology but specifically about AI. Like it keep evolving all the time and it’ll be adopted in a second, you know, by everybody, right? The second, you know, Moltbot comes and shows everybody how it’s so powerful, right? You mentioned it and I can talk about it a bit more later. But eventually there the point, like, it will happen in a second.
So you cannot say, hey, let’s wait until it’s adopted, adopted. And from one end and from the other end, you cannot ban it because, again, you block productivity. So I think the better way, the best way is to, from one end, understand what’s happening, but from the other end, provide the guidance, provide the tools, provide the means for your team to adopt it securely from the first place. And that’s what we are trying to do with MCPTotal, by the way. We are trying to, from one end, again, to bring you the visibility, but from the other end also provide you with the right way to adopt it, with the secure way to adopt MCP, to adopt, you know, coding agent, you know, connectivity and Skills and extensibility, and all those things. So this is my, you know, this is my recommendation. Maybe it’s obvious, but that’s I think the right way to do it.
[00:53:16] What Are Best Practices for Writing a Well-Designed MCP Server?
Henry Suryawirawan: How about the end users or the developers who want to just use MCP to open up new possibilities that, you know, they probably couldn’t do before? Are there any tips or advice for them?
Ariel Shiftan: Yeah, so maybe for the developers there is, you know, really like technical ones for MCP developers. Okay, I know some, maybe some of the people we hear us, they want to hear about MCP, they want maybe to develop their own MCP. So for the developers, I think that, you know, there is a few high level questions and a few technical like points, I would say briefly. Like the high level stuff is, you know, what are you trying to expose? Don’t just, again, wrap APIs. Think about, you know, the workflow of somebody using it. Like if it’s like Slack, so you want somebody to be able to list messages in the channel, to send messages. So think about the main flows then provide high level functions for that. Don’t let the, you know, the LLM be, take all the heavy lifting by providing, you know, a lot of tiny utilities that the LLM need to kind of to orchestrate together to get something done, but provide a real high level functionality with something that’s, you know, for example, instead of having IDs like channel IDs and user IDs and, you know, forcing the agent to convert between, you know, email to ID all the time, just provide, you know, work with arguments that gets, you know, email of the user, right? And the name of the channel. So it can be single function calls, send message, channel name, and the text, and that’s it, right? So think about, you know, batching. Sometimes the LLM needs to do a lot of operations. So one way to solve it is to, you know, have your API support batching from the first place. So you can send multiple messages or search multiple channels together. Sometimes also, let’s say, you connect your Slack to the agent, the agent needs to know who you are, right? Because they say, hey, who sent me a message? So the search query need to sometimes depend on your name, right? So just providing the tool that, hey, who am I to the agent sometimes is very powerful.
So these are, you know, just, you know, very specific ones. I think, you know, on the architectural level, you need to think, if you want like an stdio based MCP which is more for local consumption. Sometimes it has to be the way, because you need to access local stuff. By the way, we provide a way to do it remotely on our solution. But if you want basically to access local files, local network, you need to be running locally, right? It’s also sometimes easier to begin with because you don’t need to host anything on your side, just let you know that everybody to run it on their side. From the other end, it’s a bit harder for the end users.
And as you said, MCP can solve also the problems of less technical users. And running like local MCP with NPX or UVX, just saying NPX is something most non-technical people is not, are not going to say or think about, right? So you are losing a lot of, you know, the target audience by doing that. So providing something managed, it makes everybody’s life easier, but means you need to think a bit more about of what you do. And you need to consider states, which is also some piece of the protocol that most people ignore and its ability to be stateful. You need to think about the authentication. And again, by the way, our platform also simplifies all of it. So you can even take like local MCP and host it for everybody to be used. We tackle all the high availability, you know, and authentication and all of it for you. But eventually these things are what you need to consider as, you know, a developer of MCP. These are like a few tips maybe to, I would say, you know, for end users, for let’s say for developers, right? For early adopters, maybe also non-developers. So they just need to understand that.
Now maybe let’s say about, let’s talk just about developers. Developers, they want to run fast. They want to build everything quickly, but they also don’t like to work very hard, right? So I think if they invest a bit in, you know, connecting to the right MCPs, they can get a lot of benefit from it. Again, because the MCP will be more, will get the right context at the right time before they. So for example, connecting, you know, providing your coding agent, you know, access to your Datadog environment, I think it’s great because if you need to debug something, you just tell, hey, log on, you know, try to find the errors related to that. Or if you want to build something, even something new in many cases, helps the LLM to see real examples from Datadog. So instead of, you know, doing it manually and copy pasting and even that probably are not going to do, so don’t be lazy and connect it to your data, connect to your Datadog, connect it to your maybe staging, you know, database. It’s not sensitive. You can do read-only. But it gives LLM much more context about how things really looks, right? Maybe connect it to, I don’t know, to Auth0 that you use. So you like again, start with staging, you know, read-only, and then as you understand better, maybe you can open more.
So I think for, let’s say, agent users, which is almost all of us, right? Especially developers, but also other people. The more you connect, it’s a bit, you know, kind of configuration, let’s say, on the beginning, but you’re going to benefit a lot of it. For the leadership, I will say try to enable it in your organization. Find the right means, the right ways to not ban it, as we said earlier, but to allow it. And actually, I would say even, you know, encourage the team to use it and maybe provide this. And we are talking with many organizations like that, that they want to spread the word. They even built internal MCPs, and they have it already, but not everybody know and understand that they have it–developers, non-developers–and they use our system also for that. So they can build internal catalog for their team that they see everything. And it’s, you know, a matter of click just, you know, to get access to new capabilities and again, eventually be more productive. It’s everything about productivity and competitive, right? So that’s my recommendation. I think talking, you know, about MCP and the ecosystem and those things.
Henry Suryawirawan: Yeah. So yeah, I find MCP is definitely one of the, you know, cool technologies that came out of this AI boom, right? So definitely people, if you haven’t really checked out, I think you might have missed some opportunities. Although, yes, the risks are there. So you need to be, you know, aware about all these security risks, potential risks, right? Be cautious not just installing any random MCP.
[00:59:14] How Should AI Agents Handle Permissions Without Overwhelming Users?
Henry Suryawirawan: So Ariel, maybe as we go towards the end of our conversation, you have been security practitioners, you know, maybe all your career, right? Are there any other things that you wanna specifically call out, especially as part of your journey, maybe to educate people about security aspects? Maybe it could be about MCP, maybe it could be AI general use case. Or some other things that are also happening, lately.
Ariel Shiftan: Yeah. So, I think maybe one of the challenges, you know, when you allow more and more, you know, abilities, let’s say, or maybe access to agents and in general, and that’s a challenge I had in different, you know, places in my career path. Previously I was leading, you know, the security in Magic Leap. Magic Leap is an augmented reality startup that builds, you know, hardware, software, cloud services for augmented reality. And we had it back then and we had it in other places. And so eventually it’s the, you know, it’s the tension between allowing the access all the time from one end to let the user, you know, in runtime approval, disapprove access, right? Which is what we get from all coding tools today. They start doing something, hey, can access that, can access that? And then you say, yes, do all everything. Just access everything. So I think this is some real challenge that is out there. And again, it repeats himself in many places.
Even, you know, think about your Android. If you remember early day, remember early days? Hey, do you approve the access? You don’t. And most people just approve, right? So, you know, if you think about, you know, the developer of that application or the, he did maybe or the developer of the operating system like Android. So they did the right, the right thing, right? They asked the user to decide. But eventually, most users, they don’t have any idea how to decide if they want it or not. Again, it’s the same about Android, the same about what we did in Magic Leap. The same about, you know, those coding tools now asking you do you want to allow it or not? And it repeats itself in every, you know, other situation. I think eventually the right way to solve that, those challenges is, encouraging kind of the developers to go to a more secure path, for example, okay.
On Android, if you want to get… in Apple, right, in iOS, if you want to get access to network, so the user has to approve it. So initially it was just, hey, approve it or not. But over time, they realized, hey, we can provide, you know, less granular precision, right, like access free, out of charge, right? So if you want just to know the country, you get it. You don’t even need to ask the user anything. But then if you want, you know, something kind of like city level so you need, the user has to approve it. And if you want like, you know, house level, so you have to be like really, you know, administrator to approve it. Just as an example, right? And just have, just setting that granularity means that most developers, because they don’t want to deal with all the options and they don’t want, you know, anyhow, they need to implement the case that the user hasn’t approved, right? So they will prefer just to stay there and maybe it’s enough for them. Maybe, you know, the country is enough for them. They will use that and that’s it.
So if the, you know, platform owner, let’s say, do it that way, it means eventually, everything is much safer because, and much more fluent because the developers doesn’t, the users doesn’t have to approve. They just get the benefit of, you know, the real, the value, which is normally enough with the country, let’s say. And when they do need to approve something, it’s only for the, you know, exception cases. So they, so eventually they have much more attention to decide if they want it or not. So not only you get better, you know, maybe better product normally, because normally you don’t need to approve anything and you get everything working. But when something, you know, when you need to be in the loop, the human need to be in the loop, it’s only on those, you know, specific case that the human really need to be in the loop. So I think the platform develop, the platform owners needs to find a way to do it here as well. So coding agents, they need a better way to understand if some, you know, with simple policy maybe, right? But if certain, you know, bash commands are safe or not, and then allow the, maybe the organization or the user say, just, you know, maybe choose once. Hey, everything that is safe, 90% of the stuff. Just make sure you fully know that it’s safe, read-only, I’m okay with every read-only. Just say it once. But then it will only be asked, you know, that developer will only be asked for the real stuff and then he can have more attention for those and he will not just approve everything.
So I think this is one of the challenges that is still not solved. Even if you look on the primitives, those, you know, coding agent provide to kind of try to solve it, the patterns, they let you to define what you allow and whatnot. It’s not well built for really understanding what you do. It’s like regular expression of bash commands. Like you need something a bit more than that. For example, you want like some, some sandboxing and say, hey, I am okay running anything as long as it’s read only for my file system and it has no network access, right? As long as it, these two things, run anything. Except for, let’s say, you know, env files or keys. But generally speaking, so things like that, it’s that are a bit smarter can solve let’s say 90%, 95% of the cases. And then you are left only with the real important things that you can really use the human in the loop. I think this is one of the challenges, you know, for coding tools and more broadly for agents in general. But if, you know, somebody can solve it, you know, better than what’s today, it’ll be great.
Henry Suryawirawan: Wow, thanks for the insights, right? So you mentioned about, you know, historically in the past, you know. I remember back then, you know, when you use Android, yeah, you just approve anything when you, you know, you do this OAuth, you just accept everything. But as things, you know, get more exposures in terms of security risks and maybe some, you know, disaster cases, so we try to do it more securely. I think secure by design and this access privilege still kind of like the golden rule in security, right? So I think thanks for highlighting that.
So Ariel, it’s been a great conversation. I think I learned a lot, definitely about MCP, the cool technologies these days, right, what people are using. And potential risks, which I think really, really important for us to understand because sometimes I feel the coverage in the news is always about the cool things, you know, the productivity, the gains that we can get. But the security is always kind of like lagging behind, right? Only when there are major disasters, we kind of like know about it. So thanks for highlighting that.
[01:05:26] 3 Tech Lead Wisdom
Henry Suryawirawan: I have only one last question before we wrap up. I call this the three technical leadership wisdom. So just think of it like an advice you wanna give to listeners, maybe three things that you wanna share today, I think that will be great.
Ariel Shiftan: Okay, thanks. So maybe I, we are talking about, you know, MCP, AI and all of it, so I cannot ignore it, right? So I will maybe start with one or two around it and then maybe something a bit more generic. But I think the first one I want to say, you know, in today’s world, and we all talk about the fact that, you know, coding agents and everything changed, you know, so much, you know, from a year, even a year ago. You know, the just how developer, engineer, what he means to do and what he’s doing really changed. So I think one initial point to talk about is, you know, adapt your system to that, right? Make sure you can benefit the most out of those AI coding tools. For example, you know, much better to use standard, you know, languages, standard stacks, standard, you know, libraries that are much more fam, the LLM is much more familiar.
You know, monorepo, for example. I think it’s, you know, getting a lot of advantages today because the LLM has all the context leveraging more linters, which from one end sometimes, hey, I don’t want, you know, something that forces me to do something. But today, it’s actually the LLM. So why do you care? You just reduce, you know, the possibilities for the LLM to do mistakes. So adopting a lot of those things that makes the life of the coding agent better, I think it’s very, very important today, much more than before, because eventually they wrote all the code, they write all the code, right? So it doesn’t matter what your language or developers prefer it, it matters what language the LLM is going to be better at, right? So this is one thing. This is more about again, the architecture of the system of it.
I think the second one is similar, but it’s more about, you know, you and your people. So I think that the understanding is that, you know, again, the way we develop today changed dramatically and it’s going to keep changing, right? And I think most of the changes up until today were mostly around the coding, but we all understand that, you know, engineering is more than just coding, right? It’s an important piece. There is also the planning, the architecture. There is also, you know, QA, you know, and then, you know, SRE and, you know, runtime stuff and watching and monitoring it. Resiliency. There is more, many other aspects but, you know, just understand that this is where we are and it’s going to fully change and make sure you continue, not only you are not doing it as a single step. Hey, I changed my system now it’s better, the coding agent works better with it and hey, I’m done.
But just understanding that and, you know, for the leadership, but also for the people, hey, it’s continue, going to continue to change. We have to keep our mind open. Sometimes it’s, you know, frustrating because there is so many out there. You know, that’s my way to go is keep out, you know, keep up to date, don’t run too fast to use everything new and change your system for anything, but make sure you are up to date. And once in a while you adapt, right? And you make sure you are, you, at least you are, you know, on the 80, 90%. You cannot be on the 100% all the time because you are going to work mostly on that. Sometimes it’s so interesting. You can, you know, just do, you know, read out all the day and change your system all the day, but you, that’s not what you want to do. But make sure you are still, you know, using the top, you know, the, the most up-to-date stuff as much as you can. And you keep, you know, updating your tech stack, you keep updating your capabilities and connect to MCP tools, so to speak and all they. And you leverage more, you know, all the new technologies. I think these two are more around, you know, where we live and the AI.
And then it’s related, but it’s not directly about AI. I think that’s more maybe for founders, like, like me, of, of you know, of startups. So, and you know, we are technologists. We are here on the, you know, tech podcast. So we like to code, we like to build. And even, and especially today, you know, with all those, you know, AI agents and coding tools, it’s so easy to build new stuff, right? Just, you know, a single prompt away from building a new whole system. So it’s very tempting, right? But I think the challenge is how to balance that compared to how to make sure you are building what’s the business needs. You’re not building other stuff. Eventually you need to maintain it.
So I think that balance of you can build from one end, even more than, much more than before, but from the other end, you want to focus. I think it’s much more important than ever before to make sure you are, you understand that balance, and you make the right decisions, which is how to decide and how to know which one is exactly right. But at least you make sure you are investing, you know, thinking it, and you are eventually deciding what you do. You are not letting the LLM to decide what you want to do for you, right? So this is maybe the third one, I would recommend these days. Thanks to you for the opportunity, Henry. It was great!
Henry Suryawirawan: Yep. Yeah. Thank you for sharing such a good wisdom, especially these days, you mentioned about getting frustrated, following all these recent updated technologies, right? I find myself sometimes also very tired. And it’s like you just learn something new and tomorrow there’s another new thing that you have to keep up. So yeah, so I think the pace is really, really rapid these days. But nevertheless, I think that’s a very good advice to just continuously keep up to date. Not necessarily adopt it straight away, right? But at least, you know, being in the kinda like early adopter, 80%, 90% kind of percentile. I think that would be great.
So thank you so much Ariel. If people want to follow you or find out more about MCPTotal, is there a place where they can find online?
Ariel Shiftan: Yeah, mcptotal.io or just my LinkedIn profile. I will share it. Happy to get in touch with anybody having question or thoughts about MCP, AI coding agents, how they connect to Skills, like all those things that we talked on this discussion. That’s what I do, you know, day-to-day mostly. Happy to talk about it with anybody. Again, thanks for the opportunity, Henry. Happy to get in touch with you and everybody.
Henry Suryawirawan: Yeah, so I wish people learn more about MCP today and, you know, be able to incorporate that in their enterprise usage, right? And get more productive, not being scared by, you know, the potential risk of MCP. So yeah, hopefully you can help people adopt MCP much better. So thank you so much for this time, Ariel.
– End –
