The AI-First SDK for Expo
Add generative UI to your Expo app in about 3 minutes. AI agents drive native components directly with Zod-validated props. No prompt engineering.
Native Experience
Avoid slow WebViews and brittle parsing of AI-generated text. WireAI uses standard React Native primitives to render high-performance UI chosen by your AI agent.
Zero Cost Inference
Pair WireAI with Ollama or LM Studio to run your agent and UI generation entirely locally. No API keys, no network round-trips, no per-token cost.
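For example, getting Ollama serving a local model takes two commands (the model name here is just an example; any Ollama-hosted model works). How WireAI is pointed at that endpoint depends on your adapter configuration:

```shell
# Pull a small local model.
ollama pull llama3.2

# Start the Ollama server; it listens on http://localhost:11434 by default.
ollama serve
```

With the server running, a local-model adapter only needs the base URL `http://localhost:11434` and the model name.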
Questions?
Does this work with Expo Go?
Yes. The core WireAI SDK works perfectly in Expo Go using the Ollama or Webhook adapters. You only need a development build if you want to use custom native modules or on-device LLM inference.
How fast is the integration?
Integration takes about 3 minutes. Install `wireai-rn`, wrap your app in `WireAIProvider`, and register your first component; your AI agent can then render it immediately.
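The three steps above might look like the following sketch. Only `wireai-rn` and `WireAIProvider` are named in this page; the `registerComponent` function, its signature, and the provider usage are assumptions for illustration:

```typescript
import React from 'react';
import { Text, View } from 'react-native';
import { z } from 'zod';
// Hypothetical exports: the exact SDK API is assumed for illustration.
import { WireAIProvider, registerComponent } from 'wireai-rn';

// A Zod schema describing the props the agent is allowed to pass.
const WeatherCardProps = z.object({
  city: z.string(),
  tempC: z.number(),
});

// A plain React Native component, typed from the schema.
function WeatherCard({ city, tempC }: z.infer<typeof WeatherCardProps>) {
  return (
    <View>
      <Text>{city}: {tempC}°C</Text>
    </View>
  );
}

// Register the component so the agent can render it with validated props.
registerComponent('WeatherCard', WeatherCard, WeatherCardProps);

export default function App() {
  return (
    <WireAIProvider>
      {/* your existing app */}
    </WireAIProvider>
  );
}
```

The key design point is that the Zod schema, not a prompt, defines the contract: props the agent emits are validated before the component ever renders.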
Can I use it with OpenAI or Claude?
Yes. While the OSS version focuses on local models, WireAI supports any LLM provider that follows standard chat completion APIs, including OpenAI, Anthropic, and Gemini.