What is the MCP Apps Extension? And why is it a big deal?
A deep dive into how the MCP Apps proposal brings real interactive UIs into LLM chat interfaces.

The MCP Apps Extension is a proposed standard that allows Model Context Protocol (MCP) servers to ship full interactive user interfaces, not just text and structured data, into chat-based AI clients like ChatGPT or Claude.
The Model Context Protocol itself is an open standard that connects AI models to external tools and data, so that one integration can work across many assistants and products.
Until now, MCP has mainly defined how tools, prompts, and resources are exposed to models. MCP Apps adds a missing piece: a shared way to describe and deliver actual user interfaces as part of those tools.
In other words, this proposal turns “tool calls” into full applications that live inside the chat window, with custom frontends, layouts, and controls, while still following a common standard.

How do “apps inside chat” work today, and what feels limited about them?
If you have used ChatGPT apps, Claude Projects, or custom MCP servers, you have probably noticed a pattern:
- You ask the model to run a tool.
- The tool returns text or data.
- The chat client does its best to render that result as markdown, maybe a table, maybe a file link.
This works, but it breaks down when:
- You need rich visualization, like interactive charts, dashboards, or maps.
- You need complex input, like multi-step configuration, filters, or wizard-like flows.
- You want something that actually feels like a small app, not just a long answer.
So why can’t my AI tool just show me a proper UI instead of walls of text?
With the current MCP baseline, the server can send back data, but the client is responsible for deciding how to render it. Different clients solve this differently, or not at all, which makes experiences inconsistent and often awkward.
Developers have already built creative workarounds. The MCP-UI project, for example, has been shipping rich UI patterns for MCP servers and has seen adoption at companies like Postman, Shopify, Hugging Face, Goose, and ElevenLabs.
But each implementation introduces its own conventions, which makes portability across clients harder.
What problem does the MCP Apps proposal actually solve?
The MCP Apps proposal standardizes three things that have been missing so far:
- How a server declares that it has a UI.
- How that UI is attached to a tool.
- How the UI and the host chat application talk to each other.
Right now, MCP servers are limited to “text plus structured data” responses. That is fine for simple tools, but it creates friction when:
- A visualization service wants to render a chart, not just send JSON.
- A configuration tool wants to show a form, not ask users to type parameters.
- A workflow tool wants to present a step-by-step UI, not a long chain of prompts.
Without a standard, each host has to invent its own UI protocol and each server has to fork its design to match. MCP Apps tries to remove that duplication by defining a common pattern that any host can implement and any server can target.
How does MCP Apps introduce interactive interfaces, in plain terms?
Pre-declared UI resources. Servers can say “I provide this UI screen” in advance. You can think of it as registering a reusable template, for example a “Bar chart viewer” or “Conversation review panel.” Tools then reference those templates.
For hosts, this means:
- They can discover all possible screens upfront.
- They can prefetch and review UI before it runs, which helps performance and security.
- They can cache static UI independently from dynamic tool data.
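To make this concrete, here is a minimal sketch of what a pre-declared UI template and a tool that references it might look like. The `ui://` URI scheme and HTML-first templates follow the proposal's description, but the exact field names (such as the `_meta` key used for the reference) are illustrative assumptions, not the final schema:

```python
# Illustrative sketch only: field names may differ in the final MCP Apps
# schema. The idea is that UI templates are declared upfront as resources,
# and tools reference them instead of embedding UI in their results.

# A server's resource listing might include a reusable UI template:
ui_resource = {
    "uri": "ui://charts/bar-viewer",   # hypothetical template identifier
    "name": "Bar chart viewer",
    "mimeType": "text/html",           # HTML-first in the initial version
}

# A tool declaration then points at that template:
tool = {
    "name": "render_sales_chart",
    "description": "Render sales data as an interactive bar chart",
    "_meta": {"ui/template": "ui://charts/bar-viewer"},  # hypothetical key
}

# Because templates are declared upfront, a host can index them before any
# tool runs, which is what enables prefetching, review, and caching:
templates = {ui_resource["uri"]: ui_resource}
referenced = tool["_meta"]["ui/template"]
print(templates[referenced]["name"])  # -> Bar chart viewer
```

Note the separation: the template is static and cacheable, while the tool call only supplies the dynamic data that fills it.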
Communication over the existing MCP channel. Instead of inventing a new side-channel, the UI talks to the host using the same structured communication that MCP already uses. In simple terms, the UI can ask the host to call tools, update state, or send messages, and the host can respond in a traceable way.
For developers, this means:
- They can use the usual MCP SDKs to handle messages.
- Logging and auditing stay consistent.
- Future MCP features can apply automatically to UIs.
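Since MCP is built on JSON-RPC 2.0, a UI-to-host request can reuse the same envelope that tool calls already use. The sketch below shows a toy host dispatching a `tools/call` request from an embedded UI; `tools/call` is a real MCP method, but the host logic here is a simplification of what a compliant implementation would do:

```python
import json

# Hedged sketch: the UI sends a standard JSON-RPC 2.0 request to the host,
# and the host answers over the same channel. A real host would enforce
# policy, possibly prompt the user, forward the call to the MCP server,
# and log the exchange for auditing.

# The embedded UI asks the host to call a tool on its behalf:
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {"name": "get_sales_data", "arguments": {"region": "EU"}},
}

def host_dispatch(msg: dict) -> dict:
    """Toy host: route a UI request and answer with a JSON-RPC response."""
    if msg["method"] == "tools/call":
        result = {"content": [{"type": "text", "text": "42 units sold"}]}
        return {"jsonrpc": "2.0", "id": msg["id"], "result": result}
    return {"jsonrpc": "2.0", "id": msg["id"],
            "error": {"code": -32601, "message": "Method not found"}}

# Round-trip through JSON to mimic serialization across the iframe boundary:
response = host_dispatch(json.loads(json.dumps(request)))
print(response["result"]["content"][0]["text"])  # -> 42 units sold
```

Because every UI action is an ordinary MCP message with an `id`, hosts get request/response pairing, logging, and auditing for free.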
HTML-first with sandboxed iframes. The first version focuses on HTML. UI templates are HTML content rendered in sandboxed iframes, which gives:
- Very broad compatibility across browsers.
- A well understood isolation model for security.
- A practical base for screenshots, previews, and other UX features.
Other content types like native widgets or remote DOM are explicitly left for later iterations. The intention is to start small and extend over time.
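On the host side, embedding a template in a sandboxed iframe can be as simple as the sketch below. The `sandbox` attribute values are standard HTML; which combination a compliant host must use is up to the final spec, so treat this as one plausible configuration:

```python
# Sketch of the host side: a cached HTML template rendered inside a
# sandboxed iframe via srcdoc. Which sandbox flags the spec will mandate
# is an open detail; this shows a conservative choice.

def embed_template(template_html: str) -> str:
    """Wrap UI template HTML in a sandboxed iframe."""
    # "allow-scripts" lets the UI run its JavaScript; deliberately NOT
    # adding "allow-same-origin" keeps the template isolated from the
    # host page's cookies, storage, and DOM.
    escaped = template_html.replace('"', "&quot;")
    return f'<iframe sandbox="allow-scripts" srcdoc="{escaped}"></iframe>'

frame = embed_template("<h1>Bar chart viewer</h1>")
print(frame)
```

The key point is that isolation is the default: the template only gains capabilities the host explicitly grants, and everything else has to go through the auditable MCP channel.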
Security by design. Running interactive content from third-party servers is sensitive. The proposal leans on multiple layers: sandboxing, pre-declared templates, auditable message flows, and explicit user consent for certain actions.
Backwards compatibility. MCP Apps is an optional extension. Existing MCP tools continue to work, and servers are expected to provide a text-only fallback for UI-enabled tools. That means the ecosystem can move at its own pace, without breaking current integrations.
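A fallback-friendly tool result might look like the sketch below: the text content block is standard MCP and is all that an older client sees, while the UI reference (again, a hypothetical key) is only picked up by hosts that implement the extension:

```python
# Sketch: a UI-enabled tool result that degrades gracefully. The "content"
# list with a text block is standard MCP; the "_meta" UI key is an
# illustrative assumption about how the template reference travels.

result = {
    "content": [
        # Older clients render only this part:
        {"type": "text", "text": "Q3 sales: EU 42, US 37, APAC 19."},
    ],
    "_meta": {"ui/template": "ui://charts/bar-viewer"},  # hypothetical key
}

def render(result: dict, supports_ui: bool) -> str:
    """A client either opens the declared UI or falls back to text."""
    template = result.get("_meta", {}).get("ui/template")
    if supports_ui and template:
        return f"open {template}"
    return result["content"][0]["text"]

print(render(result, supports_ui=False))  # -> Q3 sales: EU 42, US 37, APAC 19.
```

Because the text path always works, servers can ship UI-enabled tools today without waiting for every client to adopt the extension.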
What are the key takeaways from the MCP Apps proposal?
- MCP Apps defines a standard way for MCP servers to provide full interactive UIs, not just data.
- Apps are described as reusable UI resources that tools can reference, so hosts can discover and review them upfront.
- HTML in sandboxed iframes is used as the initial delivery mechanism, which makes security and compatibility manageable.
- Communication between UI and host uses the existing MCP protocol, which keeps things auditable and consistent.
- The extension is optional and backwards compatible, so servers and clients can adopt it gradually.
Why is this such a strong signal for the future of AI apps?
And how will MCP Apps change the way I use ChatGPT or Claude in practice?
If MCP Apps is widely adopted, the AI chat window becomes less like a messaging client and more like a universal application runtime:
Your “apps” are portable. The same MCP server could present the same UI inside ChatGPT, Claude, or any other MCP-capable client that implements the extension. You are no longer locked into one vendor’s plugin system.
Workflows become visual and interactive. Instead of describing a dashboard in text, the model can open an actual dashboard, with filters, drill-down, and controls, all inside the chat. The assistant can still explain what is happening, but the work itself happens in a UI optimized for the task.
Tools feel like first-class applications. A “tool” stops being just a function call hidden behind a prompt. It becomes something closer to a mini-app: it has screens, state, and interactions, but still lives inside a conversational context.
Ecosystem effects get stronger. MCP was already positioned as a universal tool and data integration layer. With MCP Apps, that same layer can standardize UX patterns as well. This could make it easier for organizations to share, reuse, and govern AI apps across teams and products.
Is this basically turning MCP into an app runtime? The proposal itself hints at this. The authors describe MCP Apps as starting to look like an “agentic app runtime,” a foundation for new kinds of interactions between AI models, users, and applications. That is the bigger story: MCP is gradually evolving from “just” a protocol into the backbone of a whole application ecosystem.
How does collaboration between Anthropic, OpenAI, and the MCP community change the picture?
This proposal is not coming from a single company. It is co-authored by MCP core maintainers from OpenAI and Anthropic, together with the MCP-UI creators and the UI Community Working Group.
This signals a few important shifts:
Interoperability over fragmentation. Instead of separate, incompatible UI systems for each assistant, there is an explicit push to align on one extension that multiple vendors can share.
Community patterns becoming standards. The MCP-UI project and its community have already proven many of these ideas in practice. The extension formalizes those patterns so they can be relied on by the broader ecosystem.
More durable investments for builders. If you build on a neutral standard, your app is less dependent on one vendor’s roadmap. For internal platforms, that makes long-term planning easier.
What kinds of experiences will MCP Apps enable for users?
Here are a few concrete types of “apps inside chat” that become more natural with a shared UI standard, inspired by community examples and the proposal’s motivation:
Analytics dashboards. Instead of a static table, you get an interactive dashboard with charts, filters, and drill-downs. The model can still describe the insights, but you drive.
Multi-step configurators. Complex tools, such as deployment pipelines or pricing calculators, can present proper forms with validation and dynamic fields. The assistant can help you fill them in, not just ask you to type all parameters into one prompt.
Visual editors. Workflow diagrams, content layouts, or automation rules can be represented with drag and drop or graphical controls, directly inside the chat interface.
Domain specific consoles. For example, an AI-assisted CRM console, a log viewer, or a data-quality checker that combines conversational guidance with a purpose-built UI panel.
Will I still be able to just talk to the model? Yes. MCP Apps enhances tools, it does not replace the core chat paradigm. You can still rely on plain conversation. The difference is that when you need structure, your assistant can seamlessly open an app-like surface rather than only responding in text.
What does MCP Apps mean for builders?
If you are building AI apps on top of LLMs, MCP Apps offers a way to:
Design once, distribute widely. You can implement your UI as an MCP App and, over time, expect it to work in any compliant client. That includes first-party clients like ChatGPT or Claude, as well as independent tools that support MCP.
Keep logic and presentation cleanly separated. Your server owns the logic and declares the UIs it can render. Hosts can cache and review those UIs, and models interact with them through the protocol. This separation can make security reviews and iterative development easier.
Offer richer experiences without building full standalone frontends. Instead of building an entire separate web app, you can embed focused, task-specific interfaces inside the chat experiences your users already rely on.
If you want to experiment with your own MCP app, you can try Yavio to build an app that runs inside ChatGPT, Claude, and Gemini in minutes.
How can interested teams start exploring MCP Apps today?
And what should my team do if we want to try this without diving too deep into protocol details? A pragmatic path looks like this:
Read the proposal at a conceptual level. Focus on what UI resources are, how they are linked to tools, and how communication flows, rather than every detail of the schema.
Look at existing SDKs and community implementations. The MCP-UI SDKs and the early-access MCP Apps SDK showcase real patterns and code for delivering UIs from MCP servers.
Prototype a single, high-value app. Choose one scenario where text-only interactions are clearly not ideal: for example, an internal analytics dashboard or a configuration wizard. Build that as an MCP App and observe how it changes user behavior.
Plan for fallbacks. Ensure your tools still return meaningful text-only results so that they remain usable in clients that do not yet implement the extension.
We at Yavio are already working closely with the MCP community. We will implement MCP Apps as soon as possible so that you can develop your own MCP app without having to worry about hosting, security, and other overhead.
