Overview
LangChain’s create_agent runs on LangGraph’s runtime under the hood. LangGraph exposes a Runtime object with the following information:
- Context: static information like user ID, DB connections, or other dependencies for an agent invocation
- Store: a BaseStore instance used for long-term memory
- Stream writer: an object used for streaming information via the "custom" stream mode
Access
When creating an agent with create_agent, you can specify a context_schema to define the structure of the context stored in the agent runtime.
When invoking the agent, pass the context argument with the relevant configuration for the run:
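A minimal sketch, assuming the langchain v1 create_agent entry point, an OpenAI model string, and a hypothetical Context dataclass used as the context_schema:

```python
from dataclasses import dataclass

from langchain.agents import create_agent


@dataclass
class Context:
    user_id: str
    db_url: str


agent = create_agent(
    model="openai:gpt-4o",  # any supported model identifier
    tools=[],
    context_schema=Context,
)

# Pass the context for this specific run via the `context` argument
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Hello!"}]},
    context=Context(user_id="user_123", db_url="postgresql://..."),
)
```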
Inside tools
You can access the runtime information inside tools to:
- Access the context
- Read or write long-term memory
- Write to the custom stream (e.g., tool progress / updates)
Use the get_runtime function from langgraph.runtime to access the Runtime object inside a tool.
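A sketch covering all three uses, assuming a hypothetical Context dataclass with a user_id field and a ("profiles",) store namespace; the store access requires a store to be configured on the agent:

```python
from dataclasses import dataclass

from langchain_core.tools import tool
from langgraph.runtime import get_runtime


@dataclass
class Context:
    user_id: str


@tool
def fetch_user_profile() -> str:
    """Return profile details for the current user."""
    runtime = get_runtime(Context)

    # 1. Access the context
    user_id = runtime.context.user_id

    # 2. Read long-term memory from the store
    item = runtime.store.get(("profiles",), user_id)

    # 3. Write to the custom stream (e.g., tool progress / updates)
    runtime.stream_writer({"progress": f"loaded profile for {user_id}"})

    return str(item.value) if item else "no profile found"
```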
Inside prompt
Use the get_runtime function from langgraph.runtime to access the Runtime object inside a prompt function.
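A sketch, assuming create_agent accepts a callable prompt (as this version of the docs describes) and a hypothetical Context dataclass with a user_name field:

```python
from dataclasses import dataclass

from langchain.agents import create_agent
from langgraph.runtime import get_runtime


@dataclass
class Context:
    user_name: str


def prompt(state) -> list:
    """Build the message list sent to the model, personalized from the context."""
    runtime = get_runtime(Context)
    system_msg = (
        "You are a helpful assistant. "
        f"Address the user as {runtime.context.user_name}."
    )
    return [{"role": "system", "content": system_msg}, *state["messages"]]


agent = create_agent(
    model="openai:gpt-4o",
    tools=[],
    prompt=prompt,
    context_schema=Context,
)
```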
Inside pre and post model hooks
To access the underlying graph runtime information in a pre or post model hook, you can:
- Use the get_runtime function from langgraph.runtime to access the Runtime object inside the hook
- Inject the Runtime directly via the hook signature
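A sketch showing both approaches, assuming create_agent exposes pre_model_hook and post_model_hook parameters (as in this version of the docs) and a hypothetical Context dataclass:

```python
from dataclasses import dataclass

from langchain.agents import create_agent
from langgraph.runtime import Runtime, get_runtime


@dataclass
class Context:
    user_id: str


# Option 1: call get_runtime inside the hook
def pre_model_hook(state):
    runtime = get_runtime(Context)
    runtime.stream_writer({"status": f"calling model for {runtime.context.user_id}"})
    return {}  # no state update


# Option 2: inject the Runtime directly via the hook signature
def post_model_hook(state, runtime: Runtime[Context]):
    runtime.stream_writer({"status": "model call finished"})
    return {}


agent = create_agent(
    model="openai:gpt-4o",
    tools=[],
    pre_model_hook=pre_model_hook,
    post_model_hook=post_model_hook,
    context_schema=Context,
)
```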