Production Guide

Best practices for shipping a wireai-rn app to production: API key security, App Store compliance, performance tuning, and error handling.

API key security

Never bundle API keys in your app
API keys embedded in your mobile app binary can be extracted with standard reverse engineering tools. Anyone who downloads your app can find and abuse your key.

The correct production pattern is a backend proxy:

Production architecture

Mobile App
  │  (no API keys, just a short-lived auth token)
  ▼
Your Backend Server  ← API keys live here
  │  (auth + rate limiting + usage tracking)
  ▼
OpenAI / Ollama / Claude API
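On the server, the proxy can be a single route that validates the caller's token and forwards the request with the real key attached. A minimal, non-streaming sketch, assuming an Express server and the OpenAI chat completions endpoint; verifyJwt and the /chat route are illustrative placeholders:

Ts
import express from "express";

const app = express();
app.use(express.json());

// Placeholder auth check: swap in real JWT verification (e.g. jsonwebtoken)
function verifyJwt(header?: string): boolean {
  return typeof header === "string" && header.startsWith("Bearer ");
}

app.post("/chat", async (req, res) => {
  if (!verifyJwt(req.headers.authorization)) {
    return res.status(401).json({ error: "unauthorized" });
  }

  // The real OpenAI key lives only in the server's environment
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(req.body), // forward the chat payload as-is
  });

  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);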
Production config
// Development (local testing only)
const devConfig = {
  provider: "openai" as const,
  baseUrl: "https://api.openai.com",
  model: "gpt-4o-mini",
  apiKey: "sk-...",  // ⚠️ development only
};

// Production (use the Webhook adapter)
const prodConfig = {
  provider: "webhook" as const,
  baseUrl: "https://api.myapp.com/chat",
  model: "gpt-4o-mini",
  // apiKey here is YOUR app's auth token (short-lived JWT, not the OpenAI key)
  apiKey: userJwtToken,
};
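
In React Native you can pick between the two at build time with the global __DEV__ flag (true in development builds, false in release builds):

Tsx
// __DEV__ is a React Native global: true in dev builds, false in release builds
const config = __DEV__ ? devConfig : prodConfig;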

App Store compliance

Apple App Store Review Guidelines (sections 2.3.1 and 4.2) require that AI-driven UI changes do not bypass review: an app must not remotely alter functionality in ways not described in its listing.

✓ Safe patterns

  • LLM controls which registered component to show
  • LLM fills in props (text, options, configuration)
  • All components are statically bundled in the binary
  • No code is downloaded at runtime
✗ Avoid these

  • Registering components that perform payments
  • Components that bypass auth or system permissions
  • Dynamic code loading via eval/JSI
  • Undocumented AI-driven UI changes

App Store description
Include language like “The app uses AI to personalize its interface, selecting from a fixed set of built-in interactive components.” This covers the generative UI behavior in your listing.

Error handling

The error field from useWireAIThread contains user-friendly error messages. Always render it:

Tsx
const { messages, isLoading, error, sendMessage } = useWireAIThread();

// Render the error state; lastUserMessage is the last prompt you kept in
// state so "Try again" can resend it
{error && (
  <View style={{ padding: 12, backgroundColor: "#fef2f2", borderRadius: 8 }}>
    <Text style={{ color: "#dc2626", fontSize: 13 }}>{error}</Text>
    <TouchableOpacity onPress={() => sendMessage(lastUserMessage)}>
      <Text style={{ color: "#7c3aed", fontSize: 13, marginTop: 4 }}>Try again →</Text>
    </TouchableOpacity>
  </View>
)}

Component render errors are caught by the built-in ComponentErrorBoundary: invalid props or component crashes show a FallbackMessage instead of crashing the app.
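
The boundary is a safety net, not a substitute for validation: registered components can also sanitize LLM-supplied props themselves so minor mistakes degrade gracefully instead of falling through to the FallbackMessage. A sketch (OptionList and its prop shape are illustrative):

Tsx
import { View, Text } from "react-native";

// Illustrative registered component: validate LLM-supplied props defensively
type OptionListProps = { title?: string; options?: string[] };

function OptionList({ title, options }: OptionListProps) {
  // The LLM may send malformed props; filter instead of crashing
  const safeOptions = Array.isArray(options)
    ? options.filter((o) => typeof o === "string")
    : [];

  return (
    <View>
      {title ? <Text>{title}</Text> : null}
      {safeOptions.map((option) => (
        <Text key={option}>{option}</Text>
      ))}
    </View>
  );
}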

Performance

Tune the context budget for your model

Tsx
<WireAIProvider
  maxContextMessages={15}    // ~15 turns: balance between context and token cost
  maxContextChars={8000}     // ~2000 tokens for most tokenizers
  llm={config}
  components={components}
/>

Keep the component registry small for local models

Local models (Llama 3, Phi-3) work best with fewer than 10 registered components. Beyond that, JSON output quality degrades. Use the full set only with GPT-4o / Claude.
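
One way to honor this is to choose the registry by provider at startup. A sketch, assuming an "ollama" provider value for local models; the component names and import path are illustrative:

Tsx
import { OptionList, ConfirmCard, Chart, Form, Carousel } from "./components"; // your own components

// Give local models a smaller menu to pick from
const coreComponents = { OptionList, ConfirmCard };
const fullComponents = { ...coreComponents, Chart, Form, Carousel };

const isLocalModel = config.provider === "ollama"; // illustrative check

<WireAIProvider
  llm={config}
  components={isLocalModel ? coreComponents : fullComponents}
/>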

Persist conversation history

Tsx
<WireAIProvider
  initialMessages={restoredHistory}
  onThreadUpdate={(msgs) => {
    // Persist on every update (debounce if needed)
    storage.setItem("history", JSON.stringify(msgs));
  }}
/>
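
restoredHistory is whatever you loaded before mounting the provider. A minimal restore hook, assuming AsyncStorage as the storage backend (swap in wireai-rn's message type for any[]):

Tsx
import AsyncStorage from "@react-native-async-storage/async-storage";
import { useEffect, useState } from "react";

function useRestoredHistory() {
  // null means "still loading"; render the provider only once this is non-null
  const [history, setHistory] = useState<any[] | null>(null);

  useEffect(() => {
    AsyncStorage.getItem("history").then((raw) => {
      setHistory(raw ? JSON.parse(raw) : []);
    });
  }, []);

  return history;
}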

Add a loading indicator and abort button

Tsx
const { isLoading, abort } = useWireAIThread();

{isLoading && (
  <TouchableOpacity onPress={abort} style={{ padding: 8 }}>
    <Text style={{ color: "#6b7280", fontSize: 13 }}>Stop generating ✕</Text>
  </TouchableOpacity>
)}