An introduction to generative UI concepts, the component vocabulary model, and how WireAI aligns with the emerging A2UI standard.
Generative UI is an interface paradigm where AI agents dynamically select and render interactive UI components instead of static text. In React Native, the WireAI SDK powers this by translating LLM JSON outputs into native components via a Zod schema registry. The agent picks what to show. WireAI makes sure it's safe to render.
Traditional AI chat apps follow the same broken pattern: user sends a message, LLM returns a wall of text, developer displays it in a Text component. The user reads a paragraph to find the one button they need to tap. On a 6-inch screen, this is genuinely bad UX. Generative UI replaces the paragraph with the button itself.
What is generative UI?
Generative UI is an architecture where the AI agent controls what appears on screen, not just what text is displayed. You pre-build a library of UI components: forms, cards, selectors, confirmation dialogs. You describe each one in plain English. The LLM reads those descriptions and, based on the user's intent, picks the most appropriate component and fills in its props as JSON.
The key difference from traditional AI chat: the interface changes based on agent reasoning, not static navigation logic. A health coach agent that detects the user seems stressed doesn't write "I think you might be stressed." It renders a breathing exercise card with a timer. The user taps Start. The agent sees the completion and renders a reflection prompt. Every step of the session is driven by agent reasoning, not a pre-written decision tree.
How does generative UI work technically?
The mechanism has three parts: a component registry, a system prompt generator, and a JSON validator. You register each component with a name, a description, and a Zod schema. WireAI generates a system prompt block for each component automatically. When the LLM responds, it picks one component by name and provides its props as JSON. WireAI validates the JSON against the Zod schema before rendering; if validation fails, the app shows a fallback text message and logs the error.
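For concreteness, here is a sketch of what registration could look like. The registerComponent call and its exact signature are assumptions for illustration, not the confirmed WireAI API; only the name + description + Zod schema shape is taken from the description above.

```tsx
// Registration sketch: `registerComponent` and its signature are
// assumptions for illustration, not the confirmed WireAI API.
import React from "react";
import { z } from "zod";
import { registerComponent } from "wireai-rn"; // hypothetical import
import { BreathingCard } from "./components/BreathingCard";

// The Zod schema doubles as the validation contract for LLM-supplied props.
const BreathingCardProps = z.object({
  technique: z.string().describe("Name of the breathing technique"),
  durationSeconds: z.number().int().positive(),
  promptText: z.string(),
});

registerComponent({
  name: "BreathingCard",
  // The plain-English description is what the LLM reads when deciding
  // which component fits the user's intent.
  description:
    "A guided breathing exercise card with a start button and countdown timer. " +
    "Use when the user seems stressed or asks for a calming exercise.",
  schema: BreathingCardProps,
  render: (props: z.infer<typeof BreathingCardProps>) => <BreathingCard {...props} />,
});
```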
At runtime, the LLM output that drives a UI render looks like this:

```json
{
  "component": "BreathingCard",
  "props": {
    "technique": "box breathing",
    "durationSeconds": 240,
    "promptText": "Let's do 4 minutes of box breathing together."
  }
}
```

WireAI parses this output, runs it against your Zod schema for BreathingCard, and renders the native component if everything is valid. The entire validation-to-render pipeline is handled by the WireAIMessageRenderer component; you don't write any parsing logic.
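A sketch of consuming that pipeline in a chat screen follows. WireAIMessageRenderer is named in the docs above, but the message and fallback prop names here are assumptions for illustration:

```tsx
// Usage sketch: the `message` and `fallback` prop names are assumptions.
import React from "react";
import { Text } from "react-native";
import { WireAIMessageRenderer } from "wireai-rn";

export function ChatTurn({ llmOutput }: { llmOutput: string }) {
  return (
    <WireAIMessageRenderer
      message={llmOutput} // raw LLM JSON string
      fallback={(text: string) => <Text>{text}</Text>} // shown if validation fails
    />
  );
}
```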
Why does mobile need generative UI more than the web?
On a 27-inch desktop monitor, reading a 300-word AI response is annoying but manageable. On a 6-inch phone screen held one-handed while commuting, it is actively hostile UX. Mobile users expect tap interactions, not reading assignments.
Mobile also has constraints that make generative UI more valuable, not less. Touch targets must be at least 44×44 points. Keyboard-aware layouts must not cover the input field. Navigation must respect the platform's back-gesture model. A generative UI runtime built for mobile (like WireAI) handles all of these natively. A web-first generative UI SDK ported to React Native handles none of them.
Generative UI vs. traditional navigation logic
Traditional mobile apps use navigation graphs. Every possible flow is pre-written. The developer decides: after screen A, show screen B or C depending on condition X. This works perfectly for predictable flows like checkout funnels. It breaks completely for contextual AI interactions, where the right next step depends on dozens of signals the agent has access to: conversation history, user goals, past behavior, current context.
Generative UI handles the cases that navigation graphs can't. Use static screens for predictable flows. Use generative UI for contextual, agent-driven sequences where the "right next screen" isn't knowable at development time.
The flat component model: why one component per turn
Web generative UI frameworks often ask the LLM to generate a full component tree per turn. This requires GPT-4o-level output reliability. For mobile apps using smaller local models (Llama 3 8B, Phi-3 Mini), this approach fails ~30% of the time.
WireAI uses a flat model: the LLM picks exactly one component per turn. Not a tree. Not a layout. One component and its props. This simple constraint drops the failure rate from ~30% to under 5% even with 7B-parameter models. It also makes the conversation flow feel more natural: one step at a time, with the agent adapting based on each tap.
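The flat contract also keeps validation trivial. Here is a minimal sketch of the per-turn envelope check, assuming a registry keyed by component name; the validateTurn helper is illustrative, not WireAI internals:

```ts
import { z } from "zod";

// One component per turn: a flat envelope, never a tree or a layout.
const TurnEnvelope = z.object({
  component: z.string(),          // must match a registered component name
  props: z.record(z.unknown()),   // validated against that component's schema
});

// Illustrative helper, not WireAI internals.
function validateTurn(raw: string, registry: Map<string, z.ZodTypeAny>) {
  const envelope = TurnEnvelope.parse(JSON.parse(raw));
  const schema = registry.get(envelope.component);
  if (!schema) throw new Error(`Unknown component: ${envelope.component}`);
  return { component: envelope.component, props: schema.parse(envelope.props) };
}
```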
The A2UI protocol and WireAI
A2UI (Agent-to-UI) is an emerging framework-agnostic standard for how AI agents communicate component selection and props to UI runtimes. WireAI's registry format (component name + description + Zod schema → JSON output) is architecturally aligned with A2UI v0.9. As the standard matures, components registered with WireAI today will transfer to A2UI-compatible tools without changes to your component code.
Full A2UI protocol support is targeted for WireAI v0.2. The WireAI Cloud tier will add managed A2UI routing and analytics on top of the open-source runtime.
Getting started with generative UI in React Native
The fastest path is to run npm install wireai-rn zod and follow the 30-minute build tutorial. You get 11 built-in components out of the box and can register your own with a single function call. For local development with no API key, the Ollama setup guide has you running Llama 3 locally in under 10 minutes.
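As a rough sketch of the zero-API-key path, assuming a provider component and Ollama's default local endpoint; WireAIProvider and its props are illustrative assumptions, and the setup guide has the real steps:

```tsx
// Getting-started sketch: `WireAIProvider` and its props are assumptions
// for illustration; see the Ollama setup guide for the actual configuration.
import React from "react";
import { WireAIProvider } from "wireai-rn";

export default function App() {
  return (
    <WireAIProvider endpoint="http://localhost:11434" model="llama3">
      {/* chat screen using WireAIMessageRenderer goes here */}
    </WireAIProvider>
  );
}
```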
Stop shipping AI text bubbles. Run npm install wireai-rn zod.