
Model Context Protocol (MCP) for React Native: A Mobile Developer's Guide

Malik Chohra

May 6, 2026 · 5 min read

MCP lets AI agents access tools, databases, and live data sources. This guide explains what the Model Context Protocol is, how it works, and how to call an MCP server from a React Native app.

The Model Context Protocol (MCP) is an open standard from Anthropic that lets AI agents connect to external data sources and tools through a uniform interface. From a React Native app, you don't run an MCP server on-device; instead, you call an MCP-enabled backend that your AI agent uses to fetch context (calendar events, health records, database rows, live APIs) before responding. The result is an agent that knows things beyond its training data, without you writing custom tool-calling plumbing for every data source.

When Anthropic released MCP in late 2024, most of the ecosystem coverage focused on desktop AI tools: Claude Desktop, Cursor, Cline. Mobile was treated as an afterthought. That gap is closing fast. If you are building a production AI mobile app in 2026 and your agent needs access to live data, MCP is likely the cleanest architecture for handling it. This guide explains MCP from first principles and shows how to consume an MCP server from a React Native/Expo app.

What is MCP and why does it exist?

Before MCP, every AI integration that needed external data required custom plumbing. If your agent needed to check a user's calendar, you wrote a tool function that called the Google Calendar API, formatted the result into a string the LLM could read, and included it in the system prompt or function-call schema. Do this for ten data sources and you have ten bespoke integrations to maintain.

MCP standardizes this. An MCP server exposes resources (data that can be read, like a user's health records), tools (actions the agent can invoke, like creating a calendar event), and prompts (reusable conversation templates). An MCP client (your AI agent) discovers what's available and calls the server using a standard JSON-RPC protocol. New data sources become available without rewriting any agent logic: you just point the agent at a new MCP server.

The analogy Anthropic uses is useful: MCP is to AI agents what REST was to web services. Instead of every API having its own custom SDK, you have one standard interface that everything speaks.
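Concretely, that standard interface is a JSON-RPC 2.0 exchange. A hedged sketch of the discovery step (the method name follows the MCP specification; the `log_meal` tool is a made-up example, not a real server's output):

```typescript
// Client -> server: ask which tools are available.
const listToolsRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
  params: {},
};

// Server -> client: each tool carries a name, a description, and a JSON
// Schema describing its input. Any MCP client can consume this shape.
const listToolsResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "log_meal",
        description: "Log a meal the user just ate.",
        inputSchema: {
          type: "object",
          properties: { description: { type: "string" } },
          required: ["description"],
        },
      },
    ],
  },
};
```

Because every server answers `tools/list` the same way, the agent never needs source-specific discovery code.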

MCP concepts every mobile developer needs to understand

Three primitives cover 90% of mobile use cases:

  • Resources: read-only data the agent can access. Examples: a user's meal log for the last 7 days, their saved workout templates, the last 10 conversation summaries. Resources are fetched when the agent needs context, not on every turn.
  • Tools: actions the agent can invoke during a conversation. Examples: create a calendar reminder, log a new meal, fetch the current weather at the user's location, look up a medication in a health database. Tools are called by the LLM, not the user; the agent decides when to call them.
  • Prompts: reusable instruction templates the agent can load. Examples: a cognitive behavioral therapy session template, a daily check-in prompt, a goal-setting framework. Prompts let you store complex agent behaviors outside the app binary and update them without a release.

How MCP fits into a React Native app architecture

A React Native app is not an MCP client in the traditional sense. The official MCP TypeScript SDK targets Node.js, and its stdio transport depends on Node APIs; it does not run in React Native's Hermes engine. The practical architecture for mobile is:

  • React Native app: sends user messages to your backend AI agent. This is a regular fetch call.
  • Backend AI agent: a Node.js server or edge function that runs Claude (or another MCP-compatible LLM) with the MCP client SDK. This layer orchestrates tool calls, fetches resources, and builds the final response.
  • MCP servers: one or more servers that provide tools and resources. These can be third-party (e.g., an MCP server for Google Calendar) or your own (e.g., a custom server for your app's health database).

The mobile app remains simple. All the MCP complexity lives on the server.
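The app-side call can be as small as a helper like this (a sketch: the base URL, the /api/chat path, and the payload shape are assumptions about your own backend contract, not part of MCP):

```typescript
// Hypothetical request builder for the mobile side. Endpoint path and
// body shape are placeholders; match them to your backend.
type ChatRequest = { userId: string; message: string };

function buildChatRequest(baseUrl: string, body: ChatRequest) {
  return {
    url: `${baseUrl}/api/chat`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    },
  };
}

// In a component:
// const { url, options } = buildChatRequest(API_URL, { userId, message });
// const { reply } = await (await fetch(url, options)).json();
```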

Building an MCP-enabled backend for your React Native app

Here is a minimal backend that uses the MCP client SDK with Claude. This runs in Node.js (Next.js API route, Express, Hono, etc.).

// server/agent.ts: backend that calls Claude with MCP tools
import Anthropic from "@anthropic-ai/sdk";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

const claude = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

async function runAgentWithMCP(userMessage: string, userId: string) {
  // Connect to your MCP server
  const mcpClient = new Client({ name: "wireai-mobile", version: "1.0.0" }, {});
  const transport = new SSEClientTransport(
    new URL(`${process.env.MCP_SERVER_URL}/sse`)
  );
  await mcpClient.connect(transport);

  try {
    // Get available tools from the MCP server
    const { tools } = await mcpClient.listTools();

    // Convert MCP tool definitions to Anthropic tool schema
    const anthropicTools = tools.map((tool) => ({
      name: tool.name,
      description: tool.description,
      input_schema: tool.inputSchema,
    }));

    // Accumulate the full conversation so Claude sees every prior tool
    // call and result on each turn of the loop
    const messages: Anthropic.MessageParam[] = [
      { role: "user", content: userMessage },
    ];

    // Initial Claude call
    let response = await claude.messages.create({
      model: "claude-3-5-sonnet-20241022",
      max_tokens: 1024,
      system: `You are assisting user ${userId}.`,
      tools: anthropicTools,
      messages,
    });

    // Agentic loop: keep calling tools until Claude is done
    while (response.stop_reason === "tool_use") {
      const toolUseBlock = response.content.find((b) => b.type === "tool_use");
      if (!toolUseBlock || toolUseBlock.type !== "tool_use") break;

      // Call the MCP tool
      const toolResult = await mcpClient.callTool({
        name: toolUseBlock.name,
        arguments: toolUseBlock.input as Record<string, unknown>,
      });

      // Append the assistant turn and the tool result, then continue
      messages.push({ role: "assistant", content: response.content });
      messages.push({
        role: "user",
        content: [
          {
            type: "tool_result",
            tool_use_id: toolUseBlock.id,
            content: JSON.stringify(toolResult.content),
          },
        ],
      });

      response = await claude.messages.create({
        model: "claude-3-5-sonnet-20241022",
        max_tokens: 1024,
        system: `You are assisting user ${userId}.`,
        tools: anthropicTools,
        messages,
      });
    }

    const textBlock = response.content.find((b) => b.type === "text");
    return textBlock?.type === "text" ? textBlock.text : "";
  } finally {
    // Close the transport even if a tool call throws
    await mcpClient.close();
  }
}
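The `toolResult.content` value being serialized is an array of typed content blocks, per the MCP specification. A minimal text result looks like this (the text itself is illustrative):

```typescript
// What callTool typically resolves to for a text-producing tool: an
// object whose `content` is an array of typed blocks.
const toolResult = {
  content: [{ type: "text", text: "Logged: grilled chicken salad" }],
};

// The agent loop stringifies this array into the tool_result block it
// sends back to Claude on the next turn.
const serialized = JSON.stringify(toolResult.content);
```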

Building a simple MCP server for your app's data

You will likely need a custom MCP server for your app's own data: the user's health history, their saved preferences, their activity log. Here is a minimal server that exposes a health log resource and a "log meal" tool:

// mcp-server/index.ts
import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import { z } from "zod";
import express from "express";

const server = new McpServer({ name: "health-data", version: "1.0.0" });

// fetchMealsFromDatabase and saveMealToDatabase below stand in for your
// app's own persistence helpers.

// Expose the user's recent meals as a resource
server.resource(
  "recent-meals",
  new ResourceTemplate("health://meals/{userId}", { list: undefined }),
  async (uri, { userId }) => {
    // The SDK parses the {userId} template variable for you
    const meals = await fetchMealsFromDatabase(String(userId));
    return {
      contents: [{ uri: uri.href, text: JSON.stringify(meals), mimeType: "application/json" }],
    };
  }
);

// Tool: log a new meal
server.tool(
  "log_meal",
  "Log a meal that the user just ate. Call this when the user describes eating something.",
  { userId: z.string(), description: z.string(), calories: z.number().optional() },
  async ({ userId, description, calories }) => {
    await saveMealToDatabase({ userId, description, calories, loggedAt: new Date() });
    return { content: [{ type: "text", text: `Logged: ${description}` }] };
  }
);

// Serve over SSE for remote MCP clients
const app = express();
const transports: Record<string, SSEServerTransport> = {};

app.get("/sse", async (req, res) => {
  const transport = new SSEServerTransport("/messages", res);
  transports[transport.sessionId] = transport;
  // Drop the transport when the client disconnects
  res.on("close", () => delete transports[transport.sessionId]);
  await server.connect(transport);
});

app.post("/messages", async (req, res) => {
  const { sessionId } = req.query;
  await transports[sessionId as string]?.handlePostMessage(req, res);
});

app.listen(3001);

MCP resources vs. RAG: when to use each

Both MCP resources and RAG (retrieval-augmented generation) let an agent access external knowledge. The difference is granularity and structure:

  • MCP resources are structured, addressable data the agent explicitly requests. The agent knows what it is asking for and when to ask. Best for user-specific data like health logs, preferences, and activity history.
  • RAG performs similarity search over unstructured text. The agent embeds the user's query and retrieves the most relevant chunks from a vector store. Best for large document corpora (product manuals, medical reference databases, knowledge bases) where the agent can't enumerate what it needs in advance.

Most production mobile apps need both. Use MCP for the user's own data; use RAG for the domain knowledge the agent draws on to answer questions about that data.
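A sketch of the two retrieval paths side by side; the interfaces are placeholders for whatever MCP client and vector store you actually use, and the `health://` URI mirrors the resource example above:

```typescript
// Minimal stand-in interfaces; real ones come from your MCP SDK client
// and your vector store of choice.
interface ResourceReader {
  readResource(args: { uri: string }): Promise<unknown>;
}
interface VectorStore {
  similaritySearch(query: string, k: number): Promise<string[]>;
}

async function gatherContext(
  mcp: ResourceReader,
  docs: VectorStore,
  userId: string,
  query: string
) {
  // MCP: an explicit, addressable read of this user's structured data
  const meals = await mcp.readResource({ uri: `health://meals/${userId}` });
  // RAG: similarity search over unstructured domain knowledge
  const chunks = await docs.similaritySearch(query, 5);
  return { meals, chunks };
}
```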

MCP + WireAI: tools that render native components

The combination that most WireAI users are building toward is: MCP provides the context (user's health data, live APIs), and WireAI decides what native component to render based on that context. The agent loop looks like this:

  • User says: "I just had lunch, what should I do now?"
  • Agent calls MCP tool get_recent_meals → gets the last 3 meals.
  • Agent calls MCP tool get_user_goals → gets the user's hydration and activity targets.
  • Agent reasons: user is on track for calories, behind on water. Returns JSON: { "component": "HydrationReminder", "props": { "target": 64, "logged": 24 } }.
  • WireAI validates and renders the HydrationReminder native component.

The result is an AI experience that feels genuinely intelligent: it knows the user's context and responds with a specific, tappable action rather than a generic advice paragraph.
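That validation step matters: the LLM's JSON is untrusted input. In practice you would check it with a zod schema (wireai-rn already depends on zod); a dependency-free sketch of the same idea, with names mirroring the example above:

```typescript
// Shape the agent is expected to return for this hypothetical component.
type HydrationProps = { target: number; logged: number };
type AgentOutput = { component: "HydrationReminder"; props: HydrationProps };

function parseAgentOutput(raw: string): AgentOutput | null {
  try {
    const data = JSON.parse(raw);
    if (
      data?.component === "HydrationReminder" &&
      typeof data?.props?.target === "number" &&
      typeof data?.props?.logged === "number"
    ) {
      return data as AgentOutput;
    }
  } catch {
    // Malformed JSON falls through and is rejected, not rendered
  }
  return null;
}
```

Anything that fails the check is dropped instead of reaching the native renderer.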


MCP is the infrastructure layer. WireAI is the rendering layer. Start with npm install wireai-rn zod for the mobile runtime, and follow the local LLM setup guide to get your agent running without a cloud API key during development.