
Generative UI in React Native: How It Actually Works

Malik Chohra

April 28, 2026 · 5 min read

Learn the mechanics of Generative UI on mobile, and how the LLM picks native iOS and Android components instead of sending plain text.

Generative UI in React Native allows AI agents to render interactive native components instead of markdown text. You register your components with WireAI using Zod schemas. The LLM outputs validated JSON props, and WireAI mounts the correct iOS or Android component: no custom parsers, no brittle regex, no crashes from invalid output.

Most AI features in mobile apps follow a boring pattern: you send a message, you get a wall of text. Generative UI flips this. You ask the LLM to pick a UI component and fill in its data. The user sees a native card or form. They tap instead of type. The result feels like a purposefully designed app interaction, not a chat window with extra steps.

What is the core idea?

Think of your UI components as a vocabulary. You register each component with WireAI, providing three things: a name, a plain-English description that tells the LLM when to use it, and a Zod schema defining the exact props it needs. WireAI automatically constructs a system prompt from this registry and injects it into every LLM call. No manual prompt engineering. No maintaining a separate prompt file.

Each conversation turn follows the same cycle: user sends a message → LLM reads the component registry in the system prompt → LLM outputs JSON naming one component and its props → WireAI validates against your Zod schema → component renders natively. If validation fails at any step, WireAI falls back to a text message; the app never crashes.
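The cycle above can be sketched in plain TypeScript. The registry shape and `resolveTurn` function below are simplified stand-ins for illustration, not WireAI's actual internals:

```typescript
// Simplified stand-ins for illustration; WireAI's real types differ.
type RegistryEntry = {
  name: string;
  // Stands in for the Zod schema's parse step
  validate: (props: unknown) => boolean;
};

type TurnResult =
  | { kind: 'component'; name: string; props: unknown }
  | { kind: 'text'; text: string };

// One turn: parse the LLM output, look up the named component,
// validate its props, and fall back to plain text on any failure.
function resolveTurn(raw: string, registry: RegistryEntry[]): TurnResult {
  try {
    const parsed = JSON.parse(raw) as { component?: string; props?: unknown };
    const entry = registry.find((e) => e.name === parsed.component);
    if (entry && entry.validate(parsed.props)) {
      return { kind: 'component', name: entry.name, props: parsed.props };
    }
  } catch {
    // Invalid JSON: fall through to the text fallback below
  }
  return { kind: 'text', text: raw };
}
```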

Setting up WireAI in under 3 minutes

npm install wireai-rn zod

Wrap your root component in WireAIProvider. Pass an adapter (which LLM to use) and a components array (what the agent is allowed to render). For local development, the OllamaAdapter connects to a locally running Llama 3 instance with no API key required.

import { WireAIProvider, OllamaAdapter } from 'wireai-rn';
import { builtInComponents } from 'wireai-rn/components';

export default function App() {
  return (
    <WireAIProvider
      adapter={new OllamaAdapter({ baseUrl: 'http://localhost:11434', model: 'llama3' })}
      components={builtInComponents}
    >
      <ChatScreen />
    </WireAIProvider>
  );
}

Writing your first component registration

WireAI ships with 11 built-in components (ActionCard, ChipSelectCard, TextInputCard, StepList, and more). When you need something domain-specific, register it with registerComponent. The description field is the most important part: write it as a clear instruction to the LLM, not a technical specification.

import { registerComponent } from 'wireai-rn';
import { z } from 'zod';
import { View, Text, TouchableOpacity } from 'react-native';

const MoodCheckIn = registerComponent({
  name: "MoodCheckIn",
  // The LLM reads this to decide when to render this component
  description: "Use at the start of each session to ask the user how they are feeling. Do NOT use for other question types.",
  schema: z.object({
    question: z.string().describe("A short, empathetic question"),
    options: z.array(z.string()).min(2).max(5).describe("2-5 mood options as tappable buttons"),
  }),
  render: ({ props, onSubmit }) => (
    <View>
      <Text>{props.question}</Text>
      {props.options.map(opt => (
        <TouchableOpacity key={opt} onPress={() => onSubmit(opt)}>
          <Text>{opt}</Text>
        </TouchableOpacity>
      ))}
    </View>
  ),
});

How does the agent pick the right component?

WireAI constructs a system prompt block for each registered component automatically. The LLM sees something like: "Component: MoodCheckIn. Use at the start of each session to ask the user how they are feeling. Props schema: question (string), options (array of 2–5 strings)." After reading the full registry, the LLM responds:

{
  "component": "MoodCheckIn",
  "props": {
    "question": "How are you feeling today?",
    "options": ["Great", "Okay", "Tired", "Anxious"]
  }
}

WireAI parses this response, validates props against your Zod schema, and renders the component. If validation passes, the user sees native UI. If it fails, WireAI renders a fallback text message and logs the specific prop that failed validation: you see exactly what the model returned and which constraint it broke.
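To make that failure mode concrete, here is a hand-rolled checker for the MoodCheckIn schema above that reports which prop broke. It stands in for a Zod safeParse result; this is a sketch, not WireAI's implementation:

```typescript
// Hand-rolled stand-in for a Zod safeParse-style result, for illustration.
type Check =
  | { success: true }
  | { success: false; path: string; message: string };

// Mirrors the MoodCheckIn schema: question is a string,
// options is an array of 2-5 strings.
function checkMoodProps(props: any): Check {
  if (typeof props?.question !== 'string') {
    return { success: false, path: 'question', message: 'expected a string' };
  }
  if (
    !Array.isArray(props?.options) ||
    props.options.length < 2 ||
    props.options.length > 5 ||
    !props.options.every((o: unknown) => typeof o === 'string')
  ) {
    return { success: false, path: 'options', message: 'expected 2-5 strings' };
  }
  return { success: true };
}
```

A failing check carries the exact prop path, which is what makes the logged error actionable.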

Why is the flat component model better for mobile?

Web-first generative UI frameworks like the Vercel AI SDK ask the LLM to output an entire component tree: the model decides the nesting, layout, and structure of the whole screen. This requires GPT-4o-level reliability to work consistently. Smaller models like Llama 3 8B have a ~30% error rate on deeply nested schemas.

WireAI uses a flat model: the LLM picks one component per turn. Not a tree. Not a layout. One component with its typed props. This drops the error rate from ~30% to under 5% even with small local models. That is what makes WireAI work offline with Ollama, where no other generative UI framework is reliable at all.

How does the action feedback loop work?

Each registered component receives an onSubmit callback. When the user taps a button or fills in a form and confirms, the component calls onSubmit(value). WireAI automatically sends that value back to the LLM as the next user message: "User selected: Tired." The agent sees the selection and renders the next appropriate component: a follow-up question, a summary card, or a recommended action step.
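The round trip can be sketched as a tiny helper. The `Thread` shape and `makeOnSubmit` function here are illustrative stand-ins; WireAI's hook wires this loop up internally:

```typescript
// Illustrative only: useWireAIThread does this wiring for you.
type Thread = { messages: string[] };

function makeOnSubmit(thread: Thread) {
  // When the user taps an option, the selection goes back to the
  // agent as the next user message, driving the next turn.
  return (value: string) => {
    thread.messages.push(`User selected: ${value}`);
  };
}
```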

import { FlatList } from 'react-native';
import { useWireAIThread } from 'wireai-rn';
import { WireAIMessageRenderer } from 'wireai-rn/components';

export function AgentScreen() {
  const { messages, sendMessage, isThinking } = useWireAIThread();

  return (
    <FlatList
      data={messages}
      keyExtractor={(_, index) => String(index)}
      renderItem={({ item }) => (
        // Renders text OR a native component depending on agent output
        <WireAIMessageRenderer message={item} />
      )}
    />
  );
}

This loop runs entirely inside the useWireAIThread hook. Your screen component stays clean: no action dispatchers, no conversation state management, no middleware. The hook handles conversation history, loading state, fallback rendering, and the full action feedback cycle.

What happens when the LLM returns invalid JSON?

WireAI has a three-layer error recovery system. First, it tries to extract JSON from prose output; some models wrap the JSON in markdown code fences or add explanatory text before the object. Second, if JSON extraction fails entirely, WireAI displays the raw text as a plain message bubble. Third, if the JSON is valid but fails Zod validation, it renders a fallback text message and logs the specific prop name and the value the model returned. The app never throws or crashes.
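Layer one can be sketched as a small extractor that handles code fences and surrounding prose. This is a simplified illustration, not WireAI's actual parser:

```typescript
// Simplified sketch of layer-one recovery: find JSON inside a markdown
// code fence, or between the outermost braces, and try to parse it.
function extractJson(raw: string): unknown | null {
  const fenced = raw.match(/`{3}(?:json)?\s*([\s\S]*?)`{3}/);
  const candidate = fenced
    ? fenced[1]
    : raw.slice(raw.indexOf('{'), raw.lastIndexOf('}') + 1);
  try {
    return JSON.parse(candidate);
  } catch {
    return null; // layer two: the caller shows the raw text instead
  }
}
```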

In development mode, the console output includes the full LLM response alongside the Zod error. This makes debugging fast: you see exactly what the model said and which constraint failed, so you can add a .describe() hint to guide future outputs.

When should you use generative UI vs. static screens?

Generative UI is not a replacement for every screen. Use it when the right next step depends on context the user has communicated through conversation: their goals, their mood, their previous actions. Use static screens for navigation, settings, and any flow with a fixed, predictable sequence.

The sweet spot is multi-step workflows where the order and content of steps vary by user input: coaching sessions, onboarding flows, customer support, health logging, financial goal-setting. Anywhere the agent must guide the user through a dynamic sequence that a static decision tree cannot anticipate, generative UI produces interactions that no amount of if/else logic could replicate.


Build real native AI interfaces. Run npm install wireai-rn zod.