createDeepAgent has the following configuration options:
const agent = createDeepAgent({
  model?: BaseLanguageModel | string,
  tools?: TTools | StructuredTool[],
  systemPrompt?: string | SystemMessage,
  middleware?: TMiddleware,
  subagents?: TSubagents,
  responseFormat?: TResponse,
  backend?: AnyBackendProtocol | ((config) => AnyBackendProtocol),
  interruptOn?: Record<string, boolean | InterruptOnConfig>,
  memory?: string[],
  skills?: string[],
  ...
});
For the full parameter list, see the createDeepAgent API reference.

Model

Pass a model string in provider:model format, or an initialized model instance. Defaults to anthropic:claude-sonnet-4-6. See supported models for all providers and suggested models for tested recommendations.
Use the provider:model format (for example openai:gpt-5) to quickly switch between models.
👉 Read the OpenAI chat model integration docs
npm install @langchain/openai deepagents
import { createDeepAgent } from "deepagents";

process.env.OPENAI_API_KEY = "your-api-key";

const agent = createDeepAgent({ model: "openai:gpt-5.4" });
// this calls initChatModel for the specified model with default parameters
// to use specific model parameters, use initChatModel directly

Connection resilience

LangChain chat models automatically retry failed API requests with exponential backoff. By default, models retry up to 6 times for network errors, rate limits (429), and server errors (5xx). Client errors like 401 (unauthorized) or 404 are not retried. You can adjust the maxRetries parameter when creating a model to tune this behavior for your environment:
import { ChatAnthropic } from "@langchain/anthropic";
import { createDeepAgent } from "deepagents";

const agent = createDeepAgent({
    model: new ChatAnthropic({
        model: "claude-sonnet-4-6",
        maxRetries: 10, // Increase for unreliable networks (default: 6)
        timeout: 120_000, // Increase timeout for slow connections
    }),
});
For long-running agent tasks on unreliable networks, consider increasing maxRetries to 10–15 and pairing it with a checkpointer so that progress is preserved across failures.

Tools

In addition to built-in tools for planning, file management, and subagent spawning, you can provide custom tools:
import { tool } from "langchain";
import { TavilySearch } from "@langchain/tavily";
import { createDeepAgent } from "deepagents";
import { z } from "zod";

const internetSearch = tool(
  async ({
    query,
    maxResults = 5,
    topic = "general",
    includeRawContent = false,
  }: {
    query: string;
    maxResults?: number;
    topic?: "general" | "news" | "finance";
    includeRawContent?: boolean;
  }) => {
    const tavilySearch = new TavilySearch({
      maxResults,
      tavilyApiKey: process.env.TAVILY_API_KEY,
      includeRawContent,
      topic,
    });
    return await tavilySearch.invoke({ query });
  },
  {
    name: "internet_search",
    description: "Run a web search",
    schema: z.object({
      query: z.string().describe("The search query"),
      maxResults: z.number().optional().default(5),
      topic: z
        .enum(["general", "news", "finance"])
        .optional()
        .default("general"),
      includeRawContent: z.boolean().optional().default(false),
    }),
  },
);

const agent = createDeepAgent({
  tools: [internetSearch],
});

System prompt

Deep Agents come with a built-in system prompt that contains detailed instructions for using the built-in planning tool, file system tools, and subagents. When middleware adds special tools, such as the filesystem tools, their instructions are appended to the system prompt. Each deep agent should also include a custom system prompt tailored to its use case:
import { createDeepAgent } from "deepagents";

const researchInstructions = `You are an expert researcher. ` +
  `Your job is to conduct thorough research, and then ` +
  `write a polished report.`;

const agent = createDeepAgent({
  systemPrompt: researchInstructions,
});

Middleware

Deep Agents include a set of built-in middleware by default. If you provide the memory, skills, or interruptOn arguments, the corresponding middleware is also included:
  • MemoryMiddleware: Persists and retrieves conversation context across sessions when the memory argument is provided
  • SkillsMiddleware: Enables custom skills when the skills argument is provided
  • HumanInTheLoopMiddleware: Pauses for human approval or input at specified points when the interruptOn argument is provided

Pre-built middleware

LangChain exposes additional pre-built middleware that lets you add features such as retries, fallbacks, or PII detection. See Prebuilt middleware for more. The deepagents package also exposes createSummarizationMiddleware; for more detail, see Summarization.

Provider-specific middleware

For provider-specific middleware that is optimized for specific LLM providers, see Official integrations and Community integrations.

Custom middleware

You can provide additional middleware to extend functionality, add tools, or implement custom hooks:
import { tool, createMiddleware } from "langchain";
import { createDeepAgent } from "deepagents";
import * as z from "zod";

const getWeather = tool(
  ({ city }: { city: string }) => {
    return `The weather in ${city} is sunny.`;
  },
  {
    name: "get_weather",
    description: "Get the weather in a city.",
    schema: z.object({
      city: z.string(),
    }),
  }
);

let callCount = 0;

const logToolCallsMiddleware = createMiddleware({
  name: "LogToolCallsMiddleware",
  wrapToolCall: async (request, handler) => {
    // Intercept and log every tool call - demonstrates cross-cutting concern
    callCount += 1;
    const toolName = request.toolCall.name;

    console.log(`[Middleware] Tool call #${callCount}: ${toolName}`);
    console.log(
      `[Middleware] Arguments: ${JSON.stringify(request.toolCall.args)}`
    );

    // Execute the tool call
    const result = await handler(request);

    // Log the result
    console.log(`[Middleware] Tool call #${callCount} completed`);

    return result;
  },
});

const agent = await createDeepAgent({
  model: "google_genai:gemini-3.1-pro-preview",
  tools: [getWeather] as any,
  middleware: [logToolCallsMiddleware] as any,
});
Do not mutate attributes after initialization. If you need to track values across hook invocations (for example, counters or accumulated data), use graph state. Graph state is scoped to a thread by design, so updates are safe under concurrency.
Do this:
const customMiddleware = createMiddleware({
  name: "CustomMiddleware",
  beforeAgent: async (state) => {
    return { x: (state.x ?? 0) + 1 }; // Update graph state instead
  },
});
Do not do this:
let x = 1;

const customMiddleware = createMiddleware({
  name: "CustomMiddleware",
  beforeAgent: async () => {
    x += 1; // Mutation causes race conditions
  },
});
In-place mutation, such as modifying state.x in beforeAgent, mutating a shared variable, or changing other shared values in hooks, can lead to subtle bugs and race conditions because many operations run concurrently: subagents, parallel tools, and parallel invocations on different threads.
For full details on extending state with custom properties, see Custom middleware - Custom state schema. If you must mutate shared values in custom middleware, consider what happens when subagents, parallel tools, or concurrent agent invocations run at the same time.

Subagents

To isolate detailed work and avoid context bloat, use subagents:
import { tool } from "langchain";
import { TavilySearch } from "@langchain/tavily";
import { createDeepAgent, type SubAgent } from "deepagents";
import { z } from "zod";

const internetSearch = tool(
  async ({
    query,
    maxResults = 5,
    topic = "general",
    includeRawContent = false,
  }: {
    query: string;
    maxResults?: number;
    topic?: "general" | "news" | "finance";
    includeRawContent?: boolean;
  }) => {
    const tavilySearch = new TavilySearch({
      maxResults,
      tavilyApiKey: process.env.TAVILY_API_KEY,
      includeRawContent,
      topic,
    });
    return await tavilySearch.invoke({ query });
  },
  {
    name: "internet_search",
    description: "Run a web search",
    schema: z.object({
      query: z.string().describe("The search query"),
      maxResults: z.number().optional().default(5),
      topic: z
        .enum(["general", "news", "finance"])
        .optional()
        .default("general"),
      includeRawContent: z.boolean().optional().default(false),
    }),
  },
);

const researchSubagent: SubAgent = {
  name: "research-agent",
  description: "Used to research questions in more depth",
  systemPrompt: "You are a great researcher",
  tools: [internetSearch],
  model: "openai:gpt-5.2",  // Optional override, defaults to main agent model
};
const subagents = [researchSubagent];

const agent = createDeepAgent({
  model: "anthropic:claude-sonnet-4-6",
  subagents,
});
For more information, see Subagents.

Backends

Deep agent tools can make use of virtual file systems to store, access, and edit files. By default, Deep Agents use a StateBackend: an ephemeral filesystem backend stored in LangGraph state that persists only for a single thread. If you are using skills or memory, you must add the expected skill or memory files to the backend before creating the agent.
import { createDeepAgent, StateBackend } from "deepagents";

// By default we provide a StateBackend
const agent = createDeepAgent();

// Under the hood, it looks like
const agent2 = createDeepAgent({
  backend: new StateBackend(),
});
For more information, see Backends.

Sandboxes

Sandboxes are specialized backends that run agent code in an isolated environment with their own filesystem and an execute tool for shell commands. Use a sandbox backend when you want your deep agent to write files, install dependencies, and run commands without changing anything on your local machine. You configure sandboxes by passing a sandbox backend to backend when creating your deep agent:
import { createDeepAgent } from "deepagents";
import { ChatAnthropic } from "@langchain/anthropic";
import { DenoSandbox } from "@langchain/deno";

// Create and initialize the sandbox
const sandbox = await DenoSandbox.create({
  memoryMb: 1024,
  lifetime: "10m",
});

try {
  const agent = createDeepAgent({
    model: new ChatAnthropic({ model: "claude-opus-4-6" }),
    systemPrompt: "You are a JavaScript coding assistant with sandbox access.",
    backend: sandbox,
  });

  const result = await agent.invoke({
    messages: [
      {
        role: "user",
        content:
          "Create a simple HTTP server using Deno.serve and test it with curl",
      },
    ],
  });
} finally {
  await sandbox.close();
}
For more information, see Sandboxes.

Human-in-the-loop

Some tool operations may be sensitive and require human approval before execution. You can configure approval behavior for each tool:
import { tool } from "langchain";
import { createDeepAgent } from "deepagents";
import { MemorySaver } from "@langchain/langgraph";
import { z } from "zod";

const deleteFile = tool(
  async ({ path }: { path: string }) => {
    return `Deleted ${path}`;
  },
  {
    name: "delete_file",
    description: "Delete a file from the filesystem.",
    schema: z.object({
      path: z.string(),
    }),
  },
);

const readFile = tool(
  async ({ path }: { path: string }) => {
    return `Contents of ${path}`;
  },
  {
    name: "read_file",
    description: "Read a file from the filesystem.",
    schema: z.object({
      path: z.string(),
    }),
  },
);

const sendEmail = tool(
  async ({ to, subject, body }: { to: string; subject: string; body: string }) => {
    return `Sent email to ${to}`;
  },
  {
    name: "send_email",
    description: "Send an email.",
    schema: z.object({
      to: z.string(),
      subject: z.string(),
      body: z.string(),
    }),
  },
);

// Checkpointer is REQUIRED for human-in-the-loop
const checkpointer = new MemorySaver();

const agent = createDeepAgent({
  model: "google_genai:gemini-3.1-pro-preview",
  tools: [deleteFile, readFile, sendEmail],
  interruptOn: {
    delete_file: true,  // Default: approve, edit, reject
    read_file: false,   // No interrupts needed
    send_email: { allowedDecisions: ["approve", "reject"] },  // No editing
  },
  checkpointer,  // Required!
});
You can configure interrupts for agents and subagents on tool calls, as well as trigger them from within tools. For more information, see Human-in-the-loop.

Skills

You can use skills to provide your deep agent with new capabilities and expertise. While tools tend to cover lower-level functionality like native file system actions or planning, skills can contain detailed instructions on how to complete tasks, reference information, and other assets such as templates. These files are only loaded when the agent determines that the skill is useful for the current prompt; this progressive disclosure reduces the tokens and context the agent has to consider at startup. For example skills, see Deep Agent example skills. To add skills to your deep agent, pass them as an argument to createDeepAgent:
import { createDeepAgent, type FileData } from "deepagents";
import { MemorySaver } from "@langchain/langgraph";

const checkpointer = new MemorySaver();

function createFileData(content: string): FileData {
  const now = new Date().toISOString();
  return {
    content: content.split("\n"),
    created_at: now,
    modified_at: now,
  };
}

const skillsFiles: Record<string, FileData> = {};

const skillUrl =
  "https://raw.githubusercontent.com/langchain-ai/deepagentsjs/refs/heads/main/examples/skills/langgraph-docs/SKILL.md";
const response = await fetch(skillUrl);
const skillContent = await response.text();

skillsFiles["/skills/langgraph-docs/SKILL.md"] = createFileData(skillContent);

const agent = await createDeepAgent({
  checkpointer,
  // IMPORTANT: deepagents skill source paths are virtual (POSIX) paths relative to the backend root.
  skills: ["/skills/"],
});

const config = {
  configurable: {
    thread_id: `thread-${Date.now()}`,
  },
};

const result = await agent.invoke(
  {
    messages: [
      {
        role: "user",
        content: "what is LangGraph? Use the langgraph-docs skill if available.",
      },
    ],
    files: skillsFiles,
  },
  config,
);

Memory

Use AGENTS.md files to provide extra context to your deep agent. You can pass one or more file paths to the memory parameter when creating your deep agent:
import { createDeepAgent, type FileData } from "deepagents";
import { MemorySaver } from "@langchain/langgraph";

const AGENTS_MD_URL =
  "https://raw.githubusercontent.com/langchain-ai/deepagents/refs/heads/main/examples/text-to-sql-agent/AGENTS.md";

async function fetchText(url: string): Promise<string> {
  const res = await fetch(url);
  if (!res.ok) {
    throw new Error(`Failed to fetch ${url}: ${res.status} ${res.statusText}`);
  }
  return await res.text();
}

const agentsMd = await fetchText(AGENTS_MD_URL);
const checkpointer = new MemorySaver();

function createFileData(content: string): FileData {
  const now = new Date().toISOString();
  return {
    content,
    mimeType: "text/plain",
    created_at: now,
    modified_at: now,
  };
}

const agent = await createDeepAgent({
  memory: ["/AGENTS.md"],
  checkpointer: checkpointer,
});

const result = await agent.invoke(
  {
    messages: [
      {
        role: "user",
        content: "Please tell me what's in your memory files.",
      },
    ],
    // Seed the default StateBackend's in-state filesystem (virtual paths must start with "/").
    files: { "/AGENTS.md": createFileData(agentsMd) },
  },
  { configurable: { thread_id: "12345" } }
);

Structured output

Deep Agents support structured output. Set the desired output schema by passing it as the responseFormat argument to createDeepAgent(). When the model generates the structured data, it is captured, validated, and returned in the structuredResponse key of the agent's state.
import { tool } from "langchain";
import { TavilySearch } from "@langchain/tavily";
import { createDeepAgent } from "deepagents";
import { z } from "zod";

const internetSearch = tool(
  async ({
    query,
    maxResults = 5,
    topic = "general",
    includeRawContent = false,
  }: {
    query: string;
    maxResults?: number;
    topic?: "general" | "news" | "finance";
    includeRawContent?: boolean;
  }) => {
    const tavilySearch = new TavilySearch({
      maxResults,
      tavilyApiKey: process.env.TAVILY_API_KEY,
      includeRawContent,
      topic,
    });
    return await tavilySearch.invoke({ query });
  },
  {
    name: "internet_search",
    description: "Run a web search",
    schema: z.object({
      query: z.string().describe("The search query"),
      maxResults: z.number().optional().default(5),
      topic: z
        .enum(["general", "news", "finance"])
        .optional()
        .default("general"),
      includeRawContent: z.boolean().optional().default(false),
    }),
  }
);

const weatherReportSchema = z.object({
  location: z.string().describe("The location for this weather report"),
  temperature: z.number().describe("Current temperature in Celsius"),
  condition: z
    .string()
    .describe("Current weather condition (e.g., sunny, cloudy, rainy)"),
  humidity: z.number().describe("Humidity percentage"),
  windSpeed: z.number().describe("Wind speed in km/h"),
  forecast: z.string().describe("Brief forecast for the next 24 hours"),
});

const agent = await createDeepAgent({
  responseFormat: weatherReportSchema,
  tools: [internetSearch],
});

const result = await agent.invoke({
  messages: [
    {
      role: "user",
      content: "What's the weather like in San Francisco?",
    },
  ],
});

console.log(result.structuredResponse);
// {
//   location: 'San Francisco, California',
//   temperature: 18.3,
//   condition: 'Sunny',
//   humidity: 48,
//   windSpeed: 7.6,
//   forecast: 'Clear skies with temperatures remaining mild. High of 18°C (64°F) during the day, dropping to around 11°C (52°F) at night.'
// }
For more information and examples, see response format.