The AI-First SDK for Expo

Add generative UI to your Expo app in about 3 minutes. AI agents drive native components directly with Zod-validated props. No prompt engineering.

Native Experience

Avoid slow WebViews and brittle parsing of AI text. WireAI uses standard React Native primitives to render high-performance UI selected by your AI agent.

Zero Cost Inference

Pair WireAI with Ollama or LM Studio to run your agent and UI generation entirely locally. No API keys, no latency, no cost.

What's Included

Expo Go Compatible: no custom native modules required for the core SDK
Zod Prop Validation: LLM output is checked against your schema before render
Local LLM Support: Ollama or LM Studio for zero-cost inference
11 Starter Components: pre-built native views for AI responses
Context Budgeting: handle long conversations on mobile devices
Auto-Prompting: the system prompt updates as you register components
Custom UI: turn any React Native component into an AI-ready view
MIT Licensed: build unlimited apps for free
Type Safe: first-class TypeScript support
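The validation step in the list above can be illustrated with a minimal sketch. The `RatingProps` shape, the hand-rolled type guard (standing in for a Zod schema's `safeParse`), and the `renderIfValid` helper are all illustrative assumptions, not WireAI's actual API; the point is only the pattern of checking LLM output against a schema before anything renders.

```typescript
// Sketch of schema-gated rendering: LLM-proposed props are validated
// before a component mounts. Names here are illustrative, not WireAI's API.

interface RatingProps {
  label: string;
  stars: number;
}

// Hand-rolled check standing in for a Zod schema's .safeParse()
function isRatingProps(v: unknown): v is RatingProps {
  if (typeof v !== "object" || v === null) return false;
  const o = v as Record<string, unknown>;
  return (
    typeof o.label === "string" &&
    typeof o.stars === "number" &&
    Number.isInteger(o.stars) &&
    o.stars >= 0 &&
    o.stars <= 5
  );
}

// Gate the render on validation; malformed output falls back to plain text.
function renderIfValid(raw: unknown): string {
  if (!isRatingProps(raw)) return "fallback: plain text response";
  return `<Rating label="${raw.label}" stars=${raw.stars} />`;
}

console.log(renderIfValid({ label: "Service", stars: 4 }));
console.log(renderIfValid({ label: "Service", stars: "four" }));
```

The second call falls back because `stars` is a string, which is exactly the failure mode that unvalidated "AI text parsing" approaches let through to the UI.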

Install in 3 minutes

Open source, MIT licensed, runs in Expo Go.

View on GitHub

Questions?

Does this work with Expo Go?

Yes. The core WireAI SDK works perfectly in Expo Go using the Ollama or Webhook adapters. You only need a development build if you want to use custom native modules or on-device LLM inference.
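To make the Ollama path concrete, here is a sketch of the kind of request a local-model adapter would send to Ollama's chat endpoint (`http://localhost:11434/api/chat`). The `buildChatRequest` helper and the model name are assumptions for illustration, not part of WireAI; the endpoint and payload shape follow Ollama's documented API.

```typescript
// Build (but don't send) a chat request for a locally running Ollama
// server. No API key is needed: inference runs on your own machine.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(systemPrompt: string, userText: string) {
  return {
    url: "http://localhost:11434/api/chat",
    body: {
      model: "llama3.2", // any model pulled locally, e.g. `ollama pull llama3.2`
      stream: false,     // single JSON response instead of streamed chunks
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: userText },
      ] as ChatMessage[],
    },
  };
}

const req = buildChatRequest(
  "You may respond with registered components.",
  "Show me today's weather"
);
console.log(req.url);
console.log(req.body.messages.length); // 2
```

In the app you would POST `req.body` with `fetch(req.url, …)`; because this is plain HTTP from JavaScript, it runs inside Expo Go with no native module.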

How fast is the integration?

Integration takes about 3 minutes. Install `wireai-rn`, wrap your app in `WireAIProvider`, and register your first component. Your AI agent can now render that component immediately.
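The "register a component and the agent can use it" flow, together with the Auto-Prompting feature above, can be sketched as a small registry that rebuilds the system prompt from whatever is registered. `ComponentRegistry`, `register`, and `systemPrompt` are hypothetical names for illustration; WireAI's real `WireAIProvider` API may differ.

```typescript
// Minimal sketch: registering a component updates the system prompt,
// so the agent always knows which native views it may emit.
// All names here are illustrative, not the SDK's exact API.

interface ComponentSpec {
  name: string;
  description: string;
  propsShape: string; // human-readable schema summary for the prompt
}

class ComponentRegistry {
  private specs: ComponentSpec[] = [];

  register(spec: ComponentSpec): void {
    this.specs.push(spec);
  }

  // Auto-prompting: rebuild the prompt from the current registrations.
  systemPrompt(): string {
    const lines = this.specs.map(
      (s) => `- ${s.name}: ${s.description} (props: ${s.propsShape})`
    );
    return ["You may respond with one of these components:", ...lines].join("\n");
  }
}

const registry = new ComponentRegistry();
registry.register({
  name: "WeatherCard",
  description: "Shows current conditions",
  propsShape: "{ city: string; tempC: number }",
});
console.log(registry.systemPrompt());
```

This is why no manual prompt engineering is needed: the prompt is derived from the registrations rather than written by hand.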

Can I use it with OpenAI or Claude?

Yes. While the OSS version focuses on local models, WireAI supports any LLM provider that follows standard chat completion APIs, including OpenAI, Anthropic, and Gemini.