When an agent messes up, it is usually for one of two reasons:
- The underlying LLM is just not good enough
- The “right” context was not passed to the LLM
The core agent loop
It’s important to understand the core agent loop in order to see where context should be accessed and updated. The core agent loop is quite simple:
- Get user input
- Call LLM, asking it to either respond or call tools
- If it decides to call tools, go and execute those tools
- Repeat steps 2 and 3 until it decides to finish
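In pseudocode, the loop looks roughly like this (a schematic sketch, not LangChain’s actual implementation; the message dicts and the `tool_calls` attribute follow LangChain’s chat model interface):

```python
# Schematic sketch of the core agent loop, not LangChain's actual implementation.
def run_agent(model, tools_by_name, user_input):
    messages = [{"role": "user", "content": user_input}]       # 1. get user input
    while True:
        ai_message = model.invoke(messages)                    # 2. call the LLM
        messages.append(ai_message)
        if not ai_message.tool_calls:                          # 4. no tool calls -> finished
            return ai_message
        for tool_call in ai_message.tool_calls:                # 3. execute the requested tools
            tool = tools_by_name[tool_call["name"]]
            result = tool.invoke(tool_call["args"])
            messages.append(
                {"role": "tool", "content": str(result), "tool_call_id": tool_call["id"]}
            )
```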
The model
The model (including specific model parameters) that you use is a key part of the agent loop. It drives the agent’s reasoning. One reason an agent can mess up is that the model you are using is simply not good enough. To build reliable agents, you need to be able to try different models and switch between them easily. LangChain, with its standard model interfaces, supports this - we have over 50 different provider integrations.

Model choice is also related to context engineering, in two ways. First, the way you pass context to the LLM may depend on which LLM you are using. Some model providers are better at JSON, some at XML, so the context engineering you do may be specific to the model you choose. Second, the right model to use in the agent loop may depend on the context you want to pass it. As an obvious example, models have different context windows. As the context in an agent builds up, you may want to use one model provider while the context is small, and then switch to another model once the context grows past the first model’s context window.

Types of context
There are a few different types of context that can be used to construct the context that is ultimately passed to the LLM:
- Instructions: Base instructions from the developer, commonly referred to as the system prompt. These may be static or dynamic.
- Tools: What tools the agent has access to. Their names, descriptions, and arguments are just as important as the text in the prompt.
- Structured output: What format the agent should respond in. The name, description, and fields of the output schema are just as important as the text in the prompt.
- Session context: We also call this “short term memory” in the docs. In the context of a conversation, this is most easily thought of as the list of messages that make up the conversation, but there can often be other, more structured information that you may want the agent to access or update throughout the session. The agent can read and write this context, and it is often put directly into the context passed to the LLM. Examples include: messages, files.
- Long term memory: Information that should persist across sessions (conversations). Examples include: extracted preferences.
- Runtime configuration context: Context that is not the “state” or “memory” of the agent, but rather configuration for a given agent run. It is not modified by the agent and typically isn’t passed into the LLM, but it is used to guide the agent’s behavior or look up other context. Examples include: user ID, DB connections.

Functionality our agent needs to support to enable context engineering
Now we understand the basic agent loop, the importance of the model you use, and the different types of context that exist. What functionality does our agent need to support, and how does LangChain’s agent support this?

Specify custom system prompt
You can use the `prompt` parameter to pass in a function that returns a string to use as the system prompt.
Use cases:
- Personalize the system prompt with information in session context, long term memory, or runtime context
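For example (a minimal sketch, assuming the langgraph prebuilt `create_react_agent` and the behavior described above, where the `prompt` callable returns a string used as the system prompt; the model identifier is an arbitrary choice):

```python
from langgraph.prebuilt import create_react_agent

def make_system_prompt(state):
    # Build the system prompt dynamically; this could also read session context,
    # long term memory, or runtime configuration (see the sections below).
    return "You are a helpful assistant. Answer concisely and cite your sources."

agent = create_react_agent(
    model="openai:gpt-4o",      # any chat model; a plain static string prompt also works
    tools=[],
    prompt=make_system_prompt,
)
```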
Explicit control over “messages generation” prior to calling model
You can use the `prompt` parameter to pass in a function that returns a list of messages.
Use cases:
- Reinforce instructions by dynamically adding an extra system message to the end of the messages sent in, without updating state
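For example, appending a reminder message on every model call without writing it back to state (sketched with the langgraph prebuilt `create_react_agent`):

```python
from langchain_core.messages import SystemMessage
from langgraph.prebuilt import create_react_agent

def build_messages(state):
    # Full control over what the model sees: prepend base instructions and append
    # a reminder. Nothing here is written back to state.
    base = SystemMessage("You are a helpful assistant.")
    reminder = SystemMessage("Remember: keep answers under 100 words.")
    return [base] + state["messages"] + [reminder]

agent = create_react_agent(model="openai:gpt-4o", tools=[], prompt=build_messages)
```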
Access to runtime configuration in “messages generation”/custom system prompt
You can use the `prompt` parameter to pass in a function that returns a list of messages or a custom system prompt.
You can access runtime configuration inside that function by calling `get_runtime`.
Use cases:
- Use the `user_id` passed in to look up the user’s profile, and put it in the system prompt
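A rough sketch, assuming the langgraph prebuilt `create_react_agent` with a `context_schema`; `lookup_user_profile` is a hypothetical helper standing in for your own data access:

```python
from dataclasses import dataclass
from langchain_core.messages import SystemMessage
from langgraph.prebuilt import create_react_agent
from langgraph.runtime import get_runtime

@dataclass
class Context:
    user_id: str

def build_messages(state):
    runtime = get_runtime(Context)                              # run-scoped configuration
    profile = lookup_user_profile(runtime.context.user_id)      # hypothetical helper
    system = SystemMessage(f"You are a helpful assistant. User profile: {profile}")
    return [system] + state["messages"]

agent = create_react_agent(
    model="openai:gpt-4o",
    tools=[],
    prompt=build_messages,
    context_schema=Context,
)
# agent.invoke({"messages": [...]}, context=Context(user_id="user-123"))
```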
Access to session context in “messages generation”/custom system prompt
You can use the `prompt` parameter to pass in a function that returns a list of messages or a custom system prompt.
Session context is passed to that function via the `state` parameter.
Use cases:
- Use more structured information that the user passes in at runtime (preferences) in the system prompt
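For illustration, a sketch assuming the langgraph prebuilt `create_react_agent` with a custom `state_schema` that carries a `preferences` key (an assumed field name) alongside the messages:

```python
from langchain_core.messages import SystemMessage
from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState

class State(AgentState):
    preferences: dict       # structured session context alongside the message list

def build_messages(state: State):
    prefs = state.get("preferences", {})
    system = SystemMessage(f"You are a helpful assistant. User preferences: {prefs}")
    return [system] + state["messages"]

agent = create_react_agent(
    model="openai:gpt-4o",
    tools=[],
    prompt=build_messages,
    state_schema=State,
)
# agent.invoke({"messages": [...], "preferences": {"tone": "formal"}})
```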
Access to long term memory in “messages generation”/custom system prompt
You can use the `prompt` parameter to pass in a function that returns a list of messages or a custom system prompt.
You can access long term memory inside that function by calling `get_store`.
Use cases:
- Look up user preferences from long term memory and put them in the system prompt
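A minimal sketch, assuming the langgraph prebuilt `create_react_agent` with a store attached; the namespace and key used here are assumptions:

```python
from langchain_core.messages import SystemMessage
from langgraph.config import get_store
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore

def build_messages(state):
    store = get_store()                                       # the store attached below
    item = store.get(("users", "user-123"), "preferences")    # assumed namespace and key
    prefs = item.value if item else {}
    system = SystemMessage(f"You are a helpful assistant. Known preferences: {prefs}")
    return [system] + state["messages"]

store = InMemoryStore()     # use a persistent store in production
agent = create_react_agent(model="openai:gpt-4o", tools=[], prompt=build_messages, store=store)
```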
Update session context before model invocation
You can use the `pre_model_hook` parameter to update state before the model is invoked.
Use cases:
- Filter out messages if the message list is getting long, save the filtered list in state, and only use that
- Create a summary of conversation every N messages, save that in state
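For example, trimming the history before each model call (a sketch assuming the langgraph prebuilt `create_react_agent`, whose `pre_model_hook` can return an `llm_input_messages` update to change only what the model sees, or a `messages` update to rewrite state):

```python
from langchain_core.messages import trim_messages
from langgraph.prebuilt import create_react_agent

def trim_history(state):
    # Keep only the most recent messages once the history gets long.
    trimmed = trim_messages(
        state["messages"],
        strategy="last",
        token_counter=len,   # crude: counts messages rather than tokens, for illustration
        max_tokens=20,
        start_on="human",
        include_system=True,
    )
    # "llm_input_messages" only changes this model call; return {"messages": ...}
    # (with RemoveMessage) instead to overwrite the stored history.
    return {"llm_input_messages": trimmed}

agent = create_react_agent(model="openai:gpt-4o", tools=[], pre_model_hook=trim_history)
```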
Access to runtime configuration in tools
You can use `get_runtime` to access runtime configuration in tools.
Use cases:
- Use the `user_id` to look up information inside a tool call
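A sketch assuming the langgraph prebuilt `get_runtime`; `fetch_balance` is a hypothetical data-access helper:

```python
from dataclasses import dataclass
from langchain_core.tools import tool
from langgraph.runtime import get_runtime

@dataclass
class Context:
    user_id: str
    db_connection_string: str

@tool
def get_account_balance() -> str:
    """Look up the current user's account balance."""
    runtime = get_runtime(Context)
    # Runtime configuration is never shown to the model; it is only used here.
    return fetch_balance(runtime.context.db_connection_string, runtime.context.user_id)  # hypothetical helper
```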
Access to session context in tools
You can add an argument annotated with `InjectedState` to a tool to access session context inside it.
Use cases:
- Pass the messages in state to a sub agent
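A minimal sketch, assuming the langgraph prebuilt `InjectedState`; `research_sub_agent` is a hypothetical sub agent:

```python
from typing import Annotated
from langchain_core.tools import tool
from langgraph.prebuilt import InjectedState

@tool
def delegate_to_researcher(
    task: str,
    messages: Annotated[list, InjectedState("messages")],
) -> str:
    """Hand the current conversation and a task off to a research sub agent."""
    # `messages` is injected from session context and hidden from the model's view
    # of the tool schema; only `task` is a model-visible argument.
    result = research_sub_agent.invoke({"messages": messages + [("user", task)]})  # hypothetical sub agent
    return result["messages"][-1].content
```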
Access to long term memory in tools
You can use `get_store` to access long term memory in tools.
Use cases:
- Look up memories from long term memory store
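A sketch assuming the langgraph prebuilt `get_store`; the namespace is an assumption:

```python
from langchain_core.tools import tool
from langgraph.config import get_store

@tool
def recall_memories(topic: str) -> str:
    """Look up what is already known about the user."""
    store = get_store()                                # the store attached to the agent
    items = store.search(("memories", "user-123"))     # assumed namespace
    relevant = [str(item.value) for item in items if topic.lower() in str(item.value).lower()]
    return "\n".join(relevant) or "No relevant memories found."
```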
Update session context in tools
You can return state updates from tools with `Command`.
Use cases:
- Use tools to update a “virtual file system”
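A sketch assuming the langgraph prebuilt `Command` and a custom state schema with a `files` key (an assumed field) whose reducer merges dicts:

```python
from typing import Annotated
from langchain_core.messages import ToolMessage
from langchain_core.tools import tool, InjectedToolCallId
from langgraph.types import Command

@tool
def write_file(
    filename: str,
    contents: str,
    tool_call_id: Annotated[str, InjectedToolCallId],
) -> Command:
    """Write a file to the agent's virtual file system."""
    # Update the assumed "files" key in session state and report back to the model;
    # a ToolMessage must be included so the pending tool call gets a response.
    return Command(
        update={
            "files": {filename: contents},
            "messages": [ToolMessage(f"Wrote {filename}", tool_call_id=tool_call_id)],
        }
    )
```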
Update long term memory in tools
You can use `get_store` to access long term memory and then update it inside tools.
Use cases:
- Use tools to update user preferences that are stored in long term memory
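A minimal sketch, again assuming the langgraph prebuilt `get_store`; the namespace and key layout are assumptions:

```python
from langchain_core.tools import tool
from langgraph.config import get_store

@tool
def save_preference(name: str, value: str) -> str:
    """Remember a user preference for future conversations."""
    store = get_store()                                        # the store attached to the agent
    store.put(("users", "user-123"), name, {"value": value})   # assumed namespace; key = preference name
    return f"Saved preference {name}={value}."
```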
Update tools before model call
You can pass in a function to the `model` parameter that attaches custom tools to the model.
Use cases:
- Force the agent to call a certain tool first
- Only give the agent access to certain tools after it calls other tools
- Remove access to tools (forcing the agent to respond) after N iterations
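A rough sketch, assuming the langgraph prebuilt `create_react_agent` accepts a callable for `model` that receives the state and runtime and returns a model with tools bound (the exact signature may differ), and that the provider supports `tool_choice`:

```python
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

@tool
def search(query: str) -> str:
    """Search the web."""
    return "..."   # stub result for illustration

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    return "..."   # stub result for illustration

base_model = init_chat_model("openai:gpt-4o")

def configure_model(state, runtime):
    # On the first turn, force a call to `search`; afterwards let the model choose
    # freely. Every tool returned here must also be listed in `tools=` below so the
    # agent's tool node can execute it.
    if len(state["messages"]) <= 1:
        return base_model.bind_tools([search], tool_choice="search")
    return base_model.bind_tools([search, calculator])

agent = create_react_agent(model=configure_model, tools=[search, calculator])
```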
Update model to use before model call
You can pass in a function to the `model` parameter that returns a custom model.
Use cases:
- Use a model with a longer context window once message history gets long
- Use a smarter model if the original model gets stuck
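A sketch under the same assumption that `model` accepts a callable; the model identifiers and the 50-message threshold are arbitrary choices:

```python
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent

fast_model = init_chat_model("openai:gpt-4o-mini")
long_context_model = init_chat_model("anthropic:claude-sonnet-4-5")   # assumed identifiers

def select_model(state, runtime):
    # Rough heuristic: switch to the larger-context model once the history grows.
    # A real implementation would count tokens against the context window instead.
    if len(state["messages"]) > 50:
        return long_context_model
    return fast_model

agent = create_react_agent(model=select_model, tools=[])
```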