WireAIProvider

The root context provider. Place it once above any component that uses wireai-rn hooks. It initializes the component registry, LLM adapter, and shared context.

Usage

App.tsx
import { WireAIProvider, defaultComponents } from "wireai-rn";

<WireAIProvider
  llm={{ provider: "ollama", baseUrl: "http://localhost:11434", model: "llama3" }}
  components={defaultComponents}
  systemPromptSuffix={appInstructions}
  initialMessages={restoredHistory}
  maxContextMessages={20}
  maxContextChars={12000}
  onThreadUpdate={(msgs) => saveToStorage(msgs)}
>
  {children}
</WireAIProvider>

Props

| Prop | Type | Default | Description |
| --- | --- | --- | --- |
| `llm` (required) | `LocalLLMConfig` | — | LLM provider configuration. See LLM Adapters. |
| `components` (required) | `WireAIComponent[]` | — | Components the LLM can render. Use `defaultComponents` or mix with your custom ones. |
| `systemPromptSuffix` | `string` | `undefined` | Appended verbatim to the auto-generated system prompt. Use for app persona, conversation flows, or domain-specific rules. |
| `initialMessages` | `Message[]` | `[]` | Pre-populate the conversation (e.g., restored from storage on app launch). |
| `maxContextMessages` | `number` | `20` | Maximum number of messages sent to the LLM per request. Oldest messages are trimmed first. |
| `maxContextChars` | `number` | `12000` | Maximum total characters sent to the LLM. Oldest messages are trimmed first. ~12,000 chars ≈ 3,000 tokens. |
| `onMessage` | `(msg: Message) => void` | `undefined` | Called each time a user or assistant message is added to the thread. |
| `onThreadUpdate` | `(msgs: Message[]) => void` | `undefined` | Called with the full history whenever it changes. Ideal for persistence. |
| `licenseKey` | `string` | `undefined` | Reserved for future use. Has no effect in v0.x. |
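
For example, pairing onThreadUpdate with initialMessages gives you persistence across launches. A minimal sketch, assuming @react-native-async-storage/async-storage and that Message is exported from wireai-rn; the storage key and helper names are hypothetical:

persistence.ts
import AsyncStorage from "@react-native-async-storage/async-storage";
import type { Message } from "wireai-rn";

const STORAGE_KEY = "wireai:thread"; // hypothetical key; pick your own

// Persist the full thread whenever it changes (pass to onThreadUpdate).
export async function saveToStorage(msgs: Message[]): Promise<void> {
  await AsyncStorage.setItem(STORAGE_KEY, JSON.stringify(msgs));
}

// Restore on app launch (pass the result to initialMessages).
export async function loadFromStorage(): Promise<Message[]> {
  const raw = await AsyncStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as Message[]) : [];
}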

LocalLLMConfig

types
type LocalLLMConfig = {
  provider: "ollama" | "lmstudio" | "openai" | "webhook" | "custom" | "a2a";
  baseUrl: string;    // e.g. "http://localhost:11434" or "https://api.openai.com"
  model: string;      // e.g. "llama3" or "gpt-4o-mini"
  apiKey?: string;    // required for OpenAI; optional for Webhook and A2A
  temperature?: number;
  maxTokens?: number;
  timeoutMs?: number; // default: 60000 (60 s)
};
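
Two example configurations that satisfy this type — a local Ollama setup for development, and a webhook proxy for production so no API key ships in the app bundle. This assumes LocalLLMConfig is exported from wireai-rn; the webhook URL is a placeholder:

configs.ts
import type { LocalLLMConfig } from "wireai-rn";

// Development: talk to a local Ollama server directly.
export const devConfig: LocalLLMConfig = {
  provider: "ollama",
  baseUrl: "http://localhost:11434",
  model: "llama3",
  temperature: 0.7,
};

// Production: route through your own backend; the key stays server-side.
export const prodConfig: LocalLLMConfig = {
  provider: "webhook",
  baseUrl: "https://api.example.com/llm", // placeholder; your proxy endpoint
  model: "gpt-4o-mini",
};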

Provider remounting

When the user switches LLM providers (e.g., in a settings panel), you want the adapter, system prompt, and message history to reset completely. Use a key prop derived from the config:

tsx
<WireAIProvider
  key={`${config.provider}:${config.baseUrl}:${config.model}`}
  llm={config}
  components={components}
>
  {children}
</WireAIProvider>
This pattern is used in the mental-coach example to reset the full conversation when switching between Ollama, OpenAI, and A2A providers.
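
Put together, a minimal switcher sketch with the config in component state — assuming LocalLLMConfig is exported from wireai-rn; where setConfig is called (e.g., a settings panel) is up to your app:

tsx
import { useState, type ReactNode } from "react";
import { WireAIProvider, defaultComponents, type LocalLLMConfig } from "wireai-rn";

export function Root({ children }: { children: ReactNode }) {
  const [config, setConfig] = useState<LocalLLMConfig>({
    provider: "ollama",
    baseUrl: "http://localhost:11434",
    model: "llama3",
  });

  return (
    <WireAIProvider
      // Changing any part of the key remounts the provider, resetting
      // the adapter, system prompt, and message history.
      key={`${config.provider}:${config.baseUrl}:${config.model}`}
      llm={config}
      components={defaultComponents}
    >
      {children}
    </WireAIProvider>
  );
}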

Context budget

wireai-rn trims older messages when the conversation grows beyond maxContextMessages or maxContextChars. The system prompt is always sent regardless of budget. Tune based on your model's context window (a sketch of the trimming behavior follows the table):

| Model | Recommended maxContextMessages | Recommended maxContextChars |
| --- | --- | --- |
| GPT-4o / Claude 3 | 30+ | 40,000+ |
| GPT-4o-mini | 20 | 12,000 |
| Llama 3 8B (Ollama) | 10 | 6,000 |
| Phi-3 mini (Ollama) | 6 | 3,000 |
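
For intuition, the documented trimming is equivalent to the sketch below — not the library's actual implementation, and the Msg shape is a simplifying assumption:

ts
// Drop the oldest messages until both budgets are satisfied.
// Assumes each message has a string `content` field.
type Msg = { role: "user" | "assistant"; content: string };

function trimContext(messages: Msg[], maxMessages: number, maxChars: number): Msg[] {
  let kept = messages.slice(-maxMessages); // keep only the newest maxMessages
  let total = kept.reduce((sum, m) => sum + m.content.length, 0);
  while (kept.length > 1 && total > maxChars) {
    total -= kept[0].content.length; // oldest message goes first
    kept = kept.slice(1);
  }
  return kept; // the system prompt is sent separately, outside this budget
}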

Dev-mode guardrails

In __DEV__ mode, the provider automatically logs warnings for common misconfigurations:

  • API keys in config: API keys should never be bundled in a mobile app. Use the Webhook adapter for production.
  • Large registries (more than 10 components): local models (Llama 3, Phi-3) work best with fewer components; JSON quality degrades as the registry grows.
  • Unreachable LLM: the provider sends a HEAD request to baseUrl on mount and warns if the server doesn't respond.