
Quick Start

Build a working generative UI chat screen in under 5 minutes.

Prerequisites
You need a running LLM. The quickest option is Ollama with llama3 pulled locally; alternatively, set provider: "openai" with your API key.
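
If you go the Ollama route, pull the model before starting:

$ ollama pull llama3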
1. Install wireai-rn

$ yarn add wireai-rn zod
2. Wrap your screen in WireAIProvider

WireAIProvider initializes the component registry and LLM adapter. Place it once, at or above the screen level.

App.tsx
import { WireAIProvider, defaultComponents } from "wireai-rn";
import { ChatScreen } from "./ChatScreen"; // defined in step 3

const llm = {
  provider: "ollama" as const,
  baseUrl: "http://localhost:11434",
  model: "llama3",
};

export default function App() {
  return (
    <WireAIProvider llm={llm} components={defaultComponents}>
      <ChatScreen />
    </WireAIProvider>
  );
}
3. Build the chat UI with hooks

Three hooks handle everything: useWireAIThread for conversation state, useWireAIInput for the input field, and useWireAIAction to convert component interactions into LLM messages.

ChatScreen.tsx
import React from "react";
import { FlatList, TextInput, TouchableOpacity, Text, View, StyleSheet } from "react-native";
import {
  useWireAIThread,
  useWireAIInput,
  useWireAIAction,
  ComponentRenderer,
} from "wireai-rn";

export function ChatScreen() {
  const { messages, isLoading, error, sendMessage } = useWireAIThread();
  const { inputText, setInputText, handleSubmit } = useWireAIInput(sendMessage);
  const createCallbacks = useWireAIAction(sendMessage);

  return (
    <View style={styles.container}>
      <FlatList
        data={messages}
        keyExtractor={(m) => m.id}
        contentContainerStyle={{ padding: 16, gap: 12 }}
        renderItem={({ item }) => (
          <ComponentRenderer
            message={item}
            callbacks={createCallbacks(item.id)}
          />
        )}
      />

      {isLoading && (
        <Text style={styles.loading}>Thinking…</Text>
      )}
      {error && (
        <Text style={styles.error}>{error}</Text>
      )}

      <View style={styles.inputRow}>
        <TextInput
          style={styles.input}
          value={inputText}
          onChangeText={setInputText}
          onSubmitEditing={handleSubmit}
          placeholder="Type a message…"
          returnKeyType="send"
        />
        <TouchableOpacity style={styles.sendBtn} onPress={handleSubmit}>
          <Text style={styles.sendText}>Send</Text>
        </TouchableOpacity>
      </View>
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1, backgroundColor: "#fff" },
  loading: { textAlign: "center", padding: 8, color: "#6b7280", fontSize: 13 },
  error: { textAlign: "center", padding: 8, color: "#ef4444", fontSize: 13 },
  inputRow: {
    flexDirection: "row",
    padding: 12,
    gap: 8,
    borderTopWidth: 1,
    borderTopColor: "#e5e7eb",
  },
  input: {
    flex: 1,
    borderWidth: 1,
    borderColor: "#d1d5db",
    borderRadius: 10,
    paddingHorizontal: 14,
    paddingVertical: 10,
    fontSize: 15,
  },
  sendBtn: {
    backgroundColor: "#7c3aed",
    borderRadius: 10,
    paddingHorizontal: 18,
    justifyContent: "center",
  },
  sendText: { color: "#fff", fontWeight: "600", fontSize: 14 },
});
4. Send your first message

Start the app and type: “Plan a trip to Tokyo”. The LLM responds with a sequence of components: a destination input, a duration picker, an activity selector, and a summary InfoList.

What to expect
The SDK auto-generates a system prompt from your registered components. The LLM learns which components exist and how to use them without any manual prompt engineering.
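
Step 1 installs zod alongside the SDK, so a reasonable guess is that component prop schemas are zod-based. Below is a rough, hypothetical sketch of what registering a custom component could look like; the field names and the shape of defaultComponents are assumptions for illustration, not the documented API.

components.tsx · sketch (hypothetical API)
import React from "react";
import { Text } from "react-native";
import { z } from "zod";
import { defaultComponents } from "wireai-rn";

// Hypothetical component entry: key names (description, propsSchema, render) are assumed, not documented.
const RatingBadge = {
  // A short description like this would feed the auto-generated system prompt (assumption).
  description: "Displays a 1 to 5 star rating",
  propsSchema: z.object({ stars: z.number().int().min(1).max(5) }),
  render: ({ stars }: { stars: number }) => <Text>{"★".repeat(stars)}</Text>,
};

// Assuming defaultComponents is a name-keyed map, extend it and pass the result to WireAIProvider.
const components = { ...defaultComponents, RatingBadge };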

Complete single-file example

App.tsx · complete
import React from "react";
import { FlatList, TextInput, TouchableOpacity, Text, View, StyleSheet, SafeAreaView } from "react-native";
import {
  WireAIProvider,
  useWireAIThread,
  useWireAIInput,
  useWireAIAction,
  ComponentRenderer,
  defaultComponents,
} from "wireai-rn";

const llmConfig = {
  provider: "openai" as const,
  baseUrl: "https://api.openai.com",
  model: "gpt-4o-mini",
  apiKey: "sk-...",  // ⚠️ use webhook adapter for production
};

function ChatScreen() {
  const { messages, isLoading, sendMessage } = useWireAIThread();
  const { inputText, setInputText, handleSubmit } = useWireAIInput(sendMessage);
  const createCallbacks = useWireAIAction(sendMessage);

  return (
    <SafeAreaView style={{ flex: 1 }}>
      <FlatList
        data={messages}
        keyExtractor={(m) => m.id}
        contentContainerStyle={{ padding: 16, gap: 12 }}
        renderItem={({ item }) => (
          <ComponentRenderer
            message={item}
            callbacks={createCallbacks(item.id)}
          />
        )}
      />
      {isLoading && <Text style={{ textAlign: "center", padding: 8, color: "#6b7280" }}>Thinking…</Text>}
      <View style={{ flexDirection: "row", padding: 12, gap: 8 }}>
        <TextInput
          value={inputText}
          onChangeText={setInputText}
          onSubmitEditing={handleSubmit}
          placeholder="Ask anything…"
          style={{ flex: 1, borderWidth: 1, borderColor: "#d1d5db", borderRadius: 10, padding: 12 }}
        />
        <TouchableOpacity onPress={handleSubmit}
          style={{ backgroundColor: "#7c3aed", borderRadius: 10, paddingHorizontal: 16, justifyContent: "center" }}>
          <Text style={{ color: "#fff", fontWeight: "600" }}>Send</Text>
        </TouchableOpacity>
      </View>
    </SafeAreaView>
  );
}

export default function App() {
  return (
    <WireAIProvider llm={llmConfig} components={defaultComponents}>
      <ChatScreen />
    </WireAIProvider>
  );
}

API key security
Never ship API keys in your mobile app bundle. Use the Webhook adapter for production: it proxies all LLM calls through your backend server.
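
As a sketch of what that could look like, a config along these lines would keep the key on your server; the "webhook" provider name and field names here are illustrative assumptions, not the documented config.

// Hypothetical webhook-style config: the app never holds the provider API key.
const llmConfig = {
  provider: "webhook" as const,                     // assumed adapter name
  baseUrl: "https://your-backend.example.com/llm",  // your server forwards requests to the actual LLM provider
  model: "gpt-4o-mini",
};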

Next steps