
The Future of Agentic UI: Why We Built WireAI

Malik Chohra

April 20, 2026 · 5 min read

Static mobile UIs are dead. Learn how WireAI provides a dynamic native runtime for AI agents to render high-performance interfaces on the fly.

WireAI is an open-source React Native SDK that lets your AI agent respond with interactive native components instead of streaming markdown text. You register your components with a name, description, and Zod schema. The LLM picks which one to render, WireAI validates the props, and the component mounts natively without dropping frames.

We have spent years hard-coding every screen in our mobile apps. You design the flows, you build the components, you write the navigation logic. When you start building around AI agents, this model breaks down. The agent has context about the user that no static flow can anticipate. You have to let the model decide what the user needs to see next, and you need a runtime that handles that safely.

Why static screens fail with AI agents

Anyone who has shipped a "chat-first" mobile app hits the same wall: text is a poor interface. If a user says "log 8oz of water," they don't want the AI to reply with a polite confirmation paragraph. They want a visual confirmation card they can tap. If they say "I'm feeling anxious today," they don't want a wall of coping strategies; they want a structured mood check-in that takes 10 seconds and feeds their history to the agent.

Forcing mobile users to read AI-generated essays is the failure mode of the first generation of AI apps. The second generation renders UI. This is generative UI, and it requires a runtime designed for mobile from the ground up.

How the WireAI runtime works

WireAI is a runtime, not a template. It sits between your LLM and your React Native components. You register your existing components with a name, a plain-English description, and a Zod schema. WireAI generates a system prompt from that registry automatically. When the agent responds with JSON naming a component and its props, WireAI validates the props against your schema and renders the component. If validation fails, it falls back to a text message, no crashes, no red screens.
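The registry-to-prompt step can be pictured like this. This is a sketch of the idea, not WireAI's internals: `buildSystemPrompt` and the `RegistryEntry` shape are illustrative assumptions.

```typescript
// Hypothetical sketch: deriving a system prompt from a component
// registry of names, descriptions, and prop summaries.
type RegistryEntry = {
  name: string;
  description: string;
  // Plain-text summary of each prop, derived from the Zod schema
  propsSummary: Record<string, string>;
};

function buildSystemPrompt(registry: RegistryEntry[]): string {
  const lines = registry.map((entry) => {
    const props = Object.entries(entry.propsSummary)
      .map(([key, desc]) => `    ${key}: ${desc}`)
      .join("\n");
    return `- ${entry.name}: ${entry.description}\n${props}`;
  });
  return [
    "You may answer with plain text, or render one of these components",
    'by replying with JSON: {"component": <name>, "props": {...}}.',
    "Available components:",
    ...lines,
  ].join("\n");
}

const prompt = buildSystemPrompt([
  {
    name: "WaterTracker",
    description: "Use when the user wants to log water intake.",
    propsSummary: { targetGoal: "Daily water goal in ounces" },
  },
]);
```

The key point is that the prompt is regenerated from the registry, so adding a component automatically teaches the model about it.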

The whole cycle (user message → LLM → JSON → Zod validation → native render) takes one hook call in your screen component. You do not manage conversation history, parse JSON, catch errors, or write system prompts. The runtime handles all of it.
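In spirit, the validate-or-fall-back step looks like the sketch below. The names are illustrative, and a hand-rolled predicate stands in for the registered Zod schema's `safeParse`:

```typescript
// Illustrative sketch of the validate-or-fall-back step.
// A real runtime would call schema.safeParse(); here a simple
// boolean predicate per component stands in for it.
type AgentReply =
  | { kind: "component"; name: string; props: unknown }
  | { kind: "text"; text: string };

function interpretReply(
  raw: string,
  validators: Record<string, (props: unknown) => boolean>
): AgentReply {
  try {
    const parsed = JSON.parse(raw) as { component?: string; props?: unknown };
    const validate = parsed.component ? validators[parsed.component] : undefined;
    if (validate && validate(parsed.props)) {
      return { kind: "component", name: parsed.component!, props: parsed.props };
    }
  } catch {
    // Not JSON at all: treat as plain text below.
  }
  // Validation failed or the reply was prose: fall back to a text message.
  return { kind: "text", text: raw };
}

// Example stand-in validator for a WaterTracker-style schema.
const waterTrackerValid = (props: unknown): boolean => {
  if (typeof props !== "object" || props === null) return false;
  const p = props as { targetGoal?: unknown; logged?: unknown };
  return typeof p.targetGoal === "number" && typeof p.logged === "number";
};
```

Either branch produces something renderable, which is why a malformed model reply degrades to a chat bubble instead of a crash.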

Registering your first component

Any React Native component can be registered. You add three things: a name the LLM will use to refer to it, a plain-English description that tells the LLM when to use it, and a Zod schema defining the props. WireAI handles the rest.

import { registerComponent } from 'wireai-rn';
import { z } from 'zod';
import { WaterTrackerUI } from './WaterTrackerUI'; // your own presentational component

export const WaterTracker = registerComponent({
  name: "WaterTracker",
  description: "Use when the user wants to log water intake or check their hydration goal.",
  schema: z.object({
    targetGoal: z.number().describe("Daily water goal in ounces"),
    logged: z.number().describe("Amount already logged today in ounces"),
  }),
  render: ({ props, onSubmit }) => (
    <WaterTrackerUI
      goal={props.targetGoal}
      logged={props.logged}
      onLog={(amount) => onSubmit(amount)}
    />
  ),
});
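For this registration, the agent's reply is a JSON object naming the component and supplying props that match the schema. The exact envelope below is an assumption about the wire format, and the inline check is only the moral equivalent of `schema.safeParse`:

```typescript
// Hypothetical wire format: the agent names a registered component
// and supplies props matching its Zod schema.
const agentReply = {
  component: "WaterTracker",
  props: {
    targetGoal: 64, // ounces, per the schema's .describe()
    logged: 16,
  },
};

// WireAI would validate agentReply.props against the registered
// Zod schema before mounting; a shape check like this is the idea:
const propsValid =
  typeof agentReply.props.targetGoal === "number" &&
  typeof agentReply.props.logged === "number";
```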

The 11 built-in components

WireAI ships with 11 components that cover the most common agentic interaction patterns. Every component has a Zod schema, an LLM routing description, and a built-in submitted-state pattern. They use only React Native primitives, so no third-party UI library is required:

  • ActionCard: Single CTA with title, description, and confirm button.
  • ChipSelectCard: Multi-select chips for categorical choices.
  • ConfirmPrompt: Two-button confirm/cancel dialog.
  • ContentSelectCard: List of options with descriptions to pick from.
  • InfoList: Bulleted information display with optional icons.
  • MessageBubble: Styled agent message with rich text support.
  • NumberStepperCard: Increment/decrement stepper for numeric input.
  • SelectionCard: Single-select from a list of labeled options.
  • StatusCard: Progress or status display with icon and label.
  • StepList: Ordered list of steps with checkable state.
  • TextInputCard: Free-text input with prompt and submit button.

Local LLMs first: why no API key is required

The WireAI free tier connects to locally running LLMs via Ollama or LM Studio. Install Ollama, run ollama pull llama3, and WireAI works immediately: no account, no API key, and no data leaving your machine. This is how prototyping should work.
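Under the hood, talking to a local model is an HTTP call to Ollama's API. This helper is not WireAI source; it sketches the request a runtime might send to Ollama's real /api/chat endpoint, where format: "json" nudges the model toward valid JSON output:

```typescript
// Sketch (not WireAI internals): building a request for Ollama's
// local /api/chat endpoint. No network call is made here.
function buildOllamaChatRequest(
  systemPrompt: string,
  userMessage: string,
  model = "llama3"
) {
  return {
    url: "http://localhost:11434/api/chat",
    body: {
      model,
      stream: false,
      format: "json", // ask the model to emit valid JSON
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: userMessage },
      ],
    },
  };
}

const req = buildOllamaChatRequest("You are a UI agent.", "log 8oz of water");
```

A runtime would POST req.body to req.url and hand the reply to its JSON-validation step.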

For production apps, @wireai/cloud (coming in v0.2) adds cloud LLM adapters: OpenAI, Anthropic, and Gemini. The adapter interface is identical; swap one line of configuration and your components work with any model. See the model comparison guide to choose the right LLM for your use case.
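The "identical interface" claim can be pictured as a shared adapter contract. The interface and names below are assumptions about the eventual API shape, not the confirmed @wireai/cloud surface:

```typescript
// Hypothetical adapter contract: local and cloud backends expose
// the same chat() signature, so swapping providers changes only
// which adapter you construct.
interface LLMAdapter {
  chat(systemPrompt: string, userMessage: string): Promise<string>;
}

// Stubbed local adapter; a real one would call Ollama/LM Studio.
const localAdapter: LLMAdapter = {
  async chat(_system, _user) {
    return '{"component":"MessageBubble","props":{"text":"stub"}}';
  },
};

// App code depends only on the interface, never on the provider.
async function respond(adapter: LLMAdapter, msg: string): Promise<string> {
  return adapter.chat("You are a UI agent.", msg);
}
```

Swapping to a cloud model would mean passing a different `LLMAdapter` implementation; everything downstream (validation, rendering) is unchanged.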

The A2UI protocol alignment

A2UI (Agent-to-UI) is an emerging standard for how AI agents communicate with UI runtimes. WireAI's component registry format (name, description, Zod schema, JSON output) is architecturally aligned with A2UI v0.9. As the standard matures, components registered with WireAI today will be compatible with A2UI-aware tools and orchestration platforms. Full A2UI compatibility is targeted for WireAI v0.2.


Stop making your users read text bubbles. Check out the open-source repo or run npm install wireai-rn zod.