With the new ChatGPT Apps SDK and Model Context Protocol (MCP), chat interfaces are starting to look less like Q&A tools and more like places where work actually happens. To explore what that means for creative workflows, we built a small technical demo: CE.SDK running directly inside ChatGPT.
From a user’s perspective, the flow is almost trivial. You ask ChatGPT for something like an ecommerce template. ChatGPT searches our template catalog, selects a matching design, and opens it instantly in a fully interactive CE.SDK editor — right inside the chat interface. What looks like a preview is, in fact, a real editor loaded with a real template scene.
This isn’t meant as a product announcement. It’s a technical proof of concept showing how creative SDKs can plug directly into AI-native interfaces.
CE.SDK as a ChatGPT App
The integration is built around a custom MCP server that exposes CE.SDK to ChatGPT as a tool. The server speaks OpenAI’s JSON-RPC–style MCP and implements the standard lifecycle methods (initialize, tools/list, tools/call, resources/read). It knows about our premium template catalog and emits structured payloads that the frontend understands.
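For illustration, here is a trimmed sketch of what such a handler can look like as a Next.js App Router route. The tool name, the catalog lookup, and the payload fields are placeholders rather than the demo’s actual implementation:

```ts
// app/mcp/route.ts: illustrative MCP endpoint (tool name, catalog, payload are assumptions)
import { NextRequest, NextResponse } from "next/server";

// Hypothetical catalog lookup; the real demo queries the premium template catalog.
async function findTemplate(query: string) {
  return {
    id: "ecommerce-sale-banner",
    name: "Ecommerce Sale Banner",
    sceneUrl: "https://example.com/templates/ecommerce-sale-banner.scene",
  };
}

export async function POST(req: NextRequest) {
  const { id, method, params } = await req.json();
  const reply = (result: unknown) => NextResponse.json({ jsonrpc: "2.0", id, result });

  switch (method) {
    case "initialize":
      return reply({
        protocolVersion: "2025-03-26", // placeholder revision string
        capabilities: { tools: {}, resources: {} },
        serverInfo: { name: "cesdk-demo", version: "0.1.0" },
      });

    case "tools/list":
      return reply({
        tools: [
          {
            name: "open_template_editor",
            description: "Find a template and open it in an embedded CE.SDK editor.",
            inputSchema: {
              type: "object",
              properties: { query: { type: "string" } },
              required: ["query"],
            },
          },
        ],
      });

    case "tools/call": {
      const template = await findTemplate(params.arguments.query);
      return reply({
        content: [{ type: "text", text: `Opening template: ${template.name}` }],
        // Structured payload the frontend widget hydrates from.
        structuredContent: { sceneUrl: template.sceneUrl },
      });
    }

    default:
      return NextResponse.json({
        jsonrpc: "2.0",
        id,
        error: { code: -32601, message: `Method not found: ${method}` },
      });
  }
}
```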
On the client side, a Next.js app listens to tool output events streamed from ChatGPT, renders CE.SDK widgets, and hydrates them with the payloads returned by the tool — such as a scene URL, placeholder values, or export permissions. Templates are loaded via CE.SDK’s Template API, either from a URL or from a serialized scene string.
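As a sketch of that hydration step, assuming the payload carries either a scene URL or a serialized scene string (the config, license lookup, and field names are simplified):

```ts
// mountEditor.ts: illustrative hydration helper; payload fields and config are assumptions.
import CreativeEditorSDK from "@cesdk/cesdk-js";

export interface ToolPayload {
  sceneUrl?: string;     // remote scene file from the template catalog
  sceneString?: string;  // serialized scene, if the tool inlines it
}

export async function mountEditor(container: HTMLElement, payload: ToolPayload) {
  const cesdk = await CreativeEditorSDK.create(container, {
    license: process.env.NEXT_PUBLIC_CESDK_LICENSE!,
  });
  await cesdk.addDefaultAssetSources();

  // Serialized state takes priority; a remote template URL is the fallback.
  if (payload.sceneString) {
    await cesdk.engine.scene.loadFromString(payload.sceneString);
  } else if (payload.sceneUrl) {
    await cesdk.engine.scene.loadFromURL(payload.sceneUrl);
  }
  return cesdk;
}
```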
Under the hood, the stack is fairly conventional:
- Next.js 15 (App Router)
- CE.SDK Web / CreativeEngine
- A custom MCP handler to normalize JSON-RPC
- Vercel for hosting
What’s new is not the technology itself, but the context in which it runs.
Working with MCP in Practice
The hardest part of the demo wasn’t CE.SDK — it was MCP.
OpenAI’s MCP implementation is extremely strict. Even the smallest schema mismatch can trigger the infamous “TaskGroup 424” error, usually without any hint as to what went wrong. In many cases, the HTTP response is technically successful, but the JSON structure doesn’t match the expected schema closely enough.
The key lesson here is to treat MCP responses as hard contracts:
- Validate every response against a schema (for example with zod; see the sketch after this list).
- Mirror OpenAI’s field names exactly, even for empty or optional capabilities.
- Assume that a 424 almost always means “your JSON shape is wrong”.
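A minimal version of that validation with zod might look like the following. The structuredContent field names mirror our own payload convention, not anything mandated by the spec:

```ts
// Validate tool results before they ever reach ChatGPT; log the exact mismatch on failure.
import { z } from "zod";

const ToolCallResult = z.object({
  content: z.array(
    z.object({
      type: z.literal("text"),
      text: z.string(),
    })
  ),
  structuredContent: z
    .object({
      sceneUrl: z.string().url().optional(),
      thumbnailUrl: z.string().url().optional(),
    })
    .optional(),
});

export function validateToolResult(result: unknown) {
  const parsed = ToolCallResult.safeParse(result);
  if (!parsed.success) {
    console.error(parsed.error.format());
    throw new Error("MCP tool result does not match the expected schema");
  }
  return parsed.data;
}
```

Catching the mismatch on our side turned an opaque 424 into a readable validation error.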
Another important insight was how critical visual context is in chat-based tools. If your MCP responses don’t include thumbnails or preview images, ChatGPT will often fall back to rendering links. For creative tools, that immediately breaks the experience. In a chat UI, visuals aren’t an enhancement — they are the interface.
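In practice that means every tool result ships its preview inline. A rough sketch, where the helper name and the structuredContent fields are our own convention and the image entry follows MCP’s inline base64 content model:

```ts
// Sketch: always attach preview imagery to the tool result (helper name is hypothetical).
export function buildTemplateResult(template: {
  name: string;
  sceneUrl: string;
  thumbnailUrl: string;
  thumbnailBase64: string;
}) {
  return {
    content: [
      { type: "text", text: `Found template: ${template.name}` },
      // Inline image content so the chat surface can render a card instead of a bare link.
      { type: "image", data: template.thumbnailBase64, mimeType: "image/png" },
    ],
    structuredContent: {
      sceneUrl: template.sceneUrl,
      thumbnailUrl: template.thumbnailUrl,
    },
  };
}
```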
State handling also requires a shift in mindset. ChatGPT can replay tool calls, and each prompt effectively creates a new widget instance. You can’t rely on mutating an existing editor. The frontend needs to be idempotent: load scenes from serialized state first, then apply changes. Every tool call should be treated as a fresh render.
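Building on the hypothetical mountEditor helper above, an idempotent widget wrapper could look roughly like this, disposing the previous instance and recreating the editor whenever a new payload arrives:

```tsx
"use client";
// EditorWidget.tsx: every tool payload is treated as a fresh render (sketch, not the demo code).
import { useEffect, useRef } from "react";
import { mountEditor, type ToolPayload } from "./mountEditor";

export function EditorWidget({ payload }: { payload: ToolPayload }) {
  const containerRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    if (!containerRef.current) return;
    let disposed = false;
    let instance: Awaited<ReturnType<typeof mountEditor>> | undefined;

    mountEditor(containerRef.current, payload).then((cesdk) => {
      if (disposed) cesdk.dispose();
      else instance = cesdk;
    });

    // A replayed tool call is a new widget instance: tear down, never mutate in place.
    return () => {
      disposed = true;
      instance?.dispose();
    };
  }, [payload]);

  return <div ref={containerRef} style={{ width: "100%", height: 520 }} />;
}
```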
Why This Pattern Matters
This demo points to a broader change in how creative software may be accessed. Chat becomes a coordination layer, not just a conversational one. Instead of explaining how something could be designed, the AI opens the actual editor and lets the user continue from there.
For CE.SDK, this fits naturally. Editors become embeddable capabilities rather than standalone applications, and AI systems become the entry point into creative workflows. Prompting turns into doing.
Beyond OpenAI: The MCP UI Standard
Although this demo uses OpenAI’s MCP, the architecture maps cleanly to the new MCP UI standard recently introduced by Anthropic. That standard aims to make tool definitions and UI rendering more consistent across models and platforms.
Because this integration already separates tool logic from UI rendering and relies on structured, explicit payloads, transferring it to Anthropic’s MCP UI model is conceptually straightforward. CE.SDK can act as a reusable creative surface across ChatGPT, Claude, and future AI app ecosystems.
You can read more about Anthropic’s MCP UI direction here:
https://blog.modelcontextprotocol.io/posts/2025-11-21-mcp-apps/
This demo is intentionally small and technical, but it highlights a meaningful shift: AI systems that don’t just describe creative outcomes, but open the tools to actually create them.