Tools are components that Agents call to perform actions. They extend a model’s capabilities beyond text by letting it interact with the world through well-defined inputs and outputs.
The simplest way to create a tool is with the @tool decorator. By default, the function's docstring becomes the tool's description, which helps the model understand when to use it:
```python
from langchain_core.tools import tool

@tool
def search_database(query: str, limit: int = 10) -> str:
    """Search the customer database for records matching the query.

    Args:
        query: Search terms to look for
        limit: Maximum number of results to return
    """
    return f"Found {limit} results for '{query}'"
```
Type hints are required as they define the tool’s input schema. The docstring should be informative and concise to help the model understand the tool’s purpose.
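Conceptually, the decorator derives the input schema from the function signature. The following is a rough pure-Python sketch of that mapping for illustration only; it is not langchain_core's actual implementation, which builds a Pydantic model:

```python
import inspect
from typing import get_type_hints

def sketch_input_schema(fn):
    """Derive a rough tool input schema from type hints and defaults."""
    hints = get_type_hints(fn)
    hints.pop("return", None)  # the return annotation is not an input
    params = inspect.signature(fn).parameters
    return {
        name: {
            "type": hint.__name__,
            "required": params[name].default is inspect.Parameter.empty,
        }
        for name, hint in hints.items()
    }

# Plain function mirroring the search_database tool above
def search_database(query: str, limit: int = 10) -> str:
    """Search the customer database."""
    return f"Found {limit} results for '{query}'"

print(sketch_input_schema(search_database))
# {'query': {'type': 'str', 'required': True}, 'limit': {'type': 'int', 'required': False}}
```

Note how the default value for limit makes that field optional while query stays required.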
Override the auto-generated tool description for clearer model guidance:
```python
@tool("calculator", description="Performs arithmetic calculations. Use this for any math problems.")
def calc(expression: str) -> str:
    """Evaluate mathematical expressions."""
    return str(eval(expression))
```
ToolNode is a prebuilt LangGraph component that handles tool calls within an agent's workflow. It works seamlessly with create_agent(), offering advanced tool execution control, built-in parallelism, and error handling.
```python
from langchain.agents import ToolNode

tool_node = ToolNode(
    tools=[...],              # List of tools or callables
    handle_tool_errors=True,  # Error handling configuration
    ...
)
```
ToolNode provides built-in error handling for tool execution through its handle_tool_errors property. To customize the error handling behavior, configure handle_tool_errors as a boolean, a string, a callable, an exception type, or a tuple of exception types:
True: Catch all errors and return a ToolMessage with the default error template containing the exception details.
str: Catch all errors and return a ToolMessage with this custom error message string.
type[Exception]: Only catch exceptions with the specified type and return the default error message for it.
tuple[type[Exception], ...]: Only catch exceptions with the specified types and return default error messages for them.
Callable[..., str]: Catch exceptions matching the callable’s signature and return the string result of calling it with the exception.
False: Disable error handling entirely, allowing exceptions to propagate.
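The dispatch over these configuration shapes can be sketched in plain Python. This is a simplified model of the behavior described above, not ToolNode's actual code:

```python
def handle_error(config, error: Exception) -> str:
    """Simplified sketch of applying a handle_tool_errors-style setting.
    Returns a message for the model, or re-raises the error."""
    default_msg = f"Error: {error}\n Please fix your mistakes."
    if config is True:
        return default_msg          # catch everything, default template
    if isinstance(config, str):
        return config               # catch everything, custom message
    if isinstance(config, type) and issubclass(config, Exception):
        if isinstance(error, config):
            return default_msg      # only the named exception type
        raise error
    if isinstance(config, tuple):
        if isinstance(error, config):
            return default_msg      # any of the named exception types
        raise error
    if callable(config):
        # the real ToolNode also inspects the callable's signature
        return config(error)
    raise error                     # config is False: let it propagate

print(handle_error("Please rephrase your request.", ValueError("bad input")))
# Please rephrase your request.
```

Note that the exception-class and tuple checks must precede the generic callable check, since exception classes are themselves callable.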
handle_tool_errors defaults to a callable, _default_handle_tool_errors, that:
catches tool invocation errors (ToolInvocationError, raised when the model provides invalid arguments) and returns a descriptive error message
lets tool execution errors (exceptions raised inside the tool itself) propagate: they are re-raised rather than converted into a message. The default error message template is TOOL_CALL_ERROR_TEMPLATE = "Error: {error}\n Please fix your mistakes."
Examples of how to use the different error handling strategies:
```python
# Retry on all exception types with the default error message template string
tool_node = ToolNode(tools=[my_tool], handle_tool_errors=True)

# Retry on all exception types with a custom message string
tool_node = ToolNode(
    tools=[my_tool],
    handle_tool_errors="I encountered an issue. Please try rephrasing your request."
)

# Retry on ValueError with a custom message, otherwise raise
def handle_errors(e: ValueError) -> str:
    return "Invalid input provided"

tool_node = ToolNode([my_tool], handle_tool_errors=handle_errors)

# Retry on ValueError and KeyError with the default error message template string, otherwise raise
tool_node = ToolNode(
    tools=[my_tool],
    handle_tool_errors=(ValueError, KeyError)
)
```
We recommend that you familiarize yourself with create_agent() before covering this section. Read more about it here.
Pass a configured ToolNode directly to create_agent():
```python
import random

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain.agents import ToolNode, create_agent

@tool
def fetch_user_data(user_id: str) -> str:
    """Fetch user data from database."""
    if random.random() > 0.7:
        raise ConnectionError("Database connection timeout")
    return f"User {user_id}: John Doe, john@example.com, Active"

@tool
def process_transaction(amount: float, user_id: str) -> str:
    """Process a financial transaction."""
    if amount > 10000:
        raise ValueError(f"Amount {amount} exceeds maximum limit of 10000")
    return f"Processed ${amount} for user {user_id}"

def handle_errors(e: Exception) -> str:
    if isinstance(e, ConnectionError):
        return "The database is currently overloaded, but it is safe to retry. Please try again with the same parameters."
    elif isinstance(e, ValueError):
        return f"Error: {e}. Try to process the transaction in smaller amounts."
    return f"Error: {e}. Please try again."

tool_node = ToolNode(
    tools=[fetch_user_data, process_transaction],
    handle_tool_errors=handle_errors
)

agent = create_agent(
    model=ChatOpenAI(model="gpt-4o"),
    tools=tool_node,
    prompt="You are a financial assistant."
)

agent.invoke({
    "messages": [{
        "role": "user",
        "content": "Process a payment of 15000 dollars for user123. Generate a receipt email and address it to the user."
    }]
})
```
When you pass a ToolNode to create_agent(), the agent uses your exact configuration including error handling, custom names, and tags. This is useful when you need fine-grained control over tool execution behavior.
state: The agent maintains state throughout its execution - this includes messages, custom fields, and any data your tools need to track. State flows through the graph and can be accessed and modified by tools.
InjectedState: An annotation that allows tools to access the current graph state without exposing it to the LLM. This lets tools read information like message history or custom state fields while keeping the tool’s schema simple.
Tools can access the current graph state using the InjectedState annotation:
```python
from typing_extensions import Annotated
from langchain_core.tools import tool
from langchain.agents.tool_node import InjectedState

# Access the current conversation state
@tool
def summarize_conversation(
    state: Annotated[dict, InjectedState]
) -> str:
    """Summarize the conversation so far."""
    messages = state["messages"]
    human_msgs = sum(1 for m in messages if m.__class__.__name__ == "HumanMessage")
    ai_msgs = sum(1 for m in messages if m.__class__.__name__ == "AIMessage")
    tool_msgs = sum(1 for m in messages if m.__class__.__name__ == "ToolMessage")
    return f"Conversation has {human_msgs} user messages, {ai_msgs} AI responses, and {tool_msgs} tool results"

# Access custom state fields
@tool
def get_user_preference(
    pref_name: str,
    preferences: Annotated[dict, InjectedState("user_preferences")]  # InjectedState parameters are not visible to the model
) -> str:
    """Get a user preference value."""
    return preferences.get(pref_name, "Not set")
```
Important: State-injected arguments are hidden from the model. For the example above, the model only sees pref_name in the tool schema - preferences is not included in the request.
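The hiding mechanism can be sketched in plain Python: parameters whose Annotated metadata carries an injection marker are dropped from the model-visible schema. This is an illustration using a hypothetical InjectedMarker class, not LangChain's actual schema generation:

```python
from typing import Annotated, get_type_hints

class InjectedMarker:
    """Hypothetical stand-in for an injection annotation like InjectedState."""

def model_visible_params(fn) -> list[str]:
    """Return the parameter names the model would see in the tool schema:
    parameters whose Annotated metadata carries an injection marker are dropped."""
    hints = get_type_hints(fn, include_extras=True)
    hints.pop("return", None)
    visible = []
    for name, hint in hints.items():
        metadata = getattr(hint, "__metadata__", ())  # Annotated extras, if any
        if any(m is InjectedMarker or isinstance(m, InjectedMarker) for m in metadata):
            continue  # injected: hidden from the model
        visible.append(name)
    return visible

def get_user_preference(
    pref_name: str,
    preferences: Annotated[dict, InjectedMarker()],
) -> str:
    return preferences.get(pref_name, "Not set")

print(model_visible_params(get_user_preference))  # ['pref_name']
```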
Command: A special return type that tools can use to update the agent’s state or control the graph’s execution flow. Instead of just returning data, tools can return Commands to modify state or direct the agent to specific nodes.
Use a tool that returns a Command to update the agent state:
```python
from typing_extensions import Annotated
from langchain_core.messages import RemoveMessage, ToolMessage
from langchain_core.tools import tool, InjectedToolCallId
from langgraph.graph.message import REMOVE_ALL_MESSAGES
from langgraph.types import Command

# Update the conversation history by removing all messages
@tool
def clear_conversation() -> Command:
    """Clear the conversation history."""
    return Command(
        update={
            "messages": [RemoveMessage(id=REMOVE_ALL_MESSAGES)],
        }
    )

# Update the user_name in the agent state
@tool
def update_user_name(
    new_name: str,
    tool_call_id: Annotated[str, InjectedToolCallId]
) -> Command:
    """Update the user's name."""
    return Command(update={
        "user_name": new_name,
        # Answer the pending tool call so the message history stays consistent
        "messages": [ToolMessage(f"Updated user name to {new_name}", tool_call_id=tool_call_id)],
    })
```
runtime: The execution environment of your agent, containing immutable configuration and contextual data that persists throughout the agent’s execution (e.g., user IDs, session details, or application-specific configuration).
Tools can access an agent’s runtime context through get_runtime:
```python
from dataclasses import dataclass

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langgraph.runtime import get_runtime

USER_DATABASE = {
    "user123": {
        "name": "Alice Johnson",
        "account_type": "Premium",
        "balance": 5000,
        "email": "alice@example.com"
    },
    "user456": {
        "name": "Bob Smith",
        "account_type": "Standard",
        "balance": 1200,
        "email": "bob@example.com"
    }
}

@dataclass
class UserContext:
    user_id: str

@tool
def get_account_info() -> str:
    """Get the current user's account information."""
    runtime = get_runtime(UserContext)
    user_id = runtime.context.user_id
    if user_id in USER_DATABASE:
        user = USER_DATABASE[user_id]
        return f"Account holder: {user['name']}\nType: {user['account_type']}\nBalance: ${user['balance']}"
    return "User not found"

model = ChatOpenAI(model="gpt-4o")
agent = create_agent(
    model,
    tools=[get_account_info],
    context_schema=UserContext,
    prompt="You are a financial assistant."
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's my current balance?"}]},
    context=UserContext(user_id="user123")
)
```
To update long-term memory, you can use the .put() method of InMemoryStore. A complete example of persistent memory across sessions:
```python
from typing import Any

from langgraph.config import get_store
from langgraph.store.memory import InMemoryStore
from langchain.agents import create_agent
from langchain_core.tools import tool

@tool
def get_user_info(user_id: str) -> str:
    """Look up user info."""
    store = get_store()
    user_info = store.get(("users",), user_id)
    return str(user_info.value) if user_info else "Unknown user"

@tool
def save_user_info(user_id: str, user_info: dict[str, Any]) -> str:
    """Save user info."""
    store = get_store()
    store.put(("users",), user_id, user_info)
    return "Successfully saved user info."

store = InMemoryStore()

agent = create_agent(
    model,
    tools=[get_user_info, save_user_info],
    store=store
)

# First session: save user info
agent.invoke({
    "messages": [{"role": "user", "content": "Save the following user: userid: abc123, name: Foo, age: 25, email: foo@langchain.dev"}]
})

# Second session: get user info
agent.invoke({
    "messages": [{"role": "user", "content": "Get user info for user with id 'abc123'"}]
})
# Here is the user info for user with ID "abc123":
# - Name: Foo
# - Age: 25
# - Email: foo@langchain.dev
```
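The store behaves like a namespaced key-value map. Below is a minimal sketch of the put/get semantics used above, with a hypothetical SketchStore class standing in for langgraph's InMemoryStore:

```python
from typing import Any

class SketchStore:
    """Minimal sketch of a namespaced key-value store with InMemoryStore-like
    put/get semantics (hypothetical, not the langgraph implementation)."""

    class Item:
        """Wrapper exposing the stored value via .value, as in the tools above."""
        def __init__(self, value: Any):
            self.value = value

    def __init__(self):
        # namespace tuple -> {key: value}
        self._data: dict[tuple, dict[str, Any]] = {}

    def put(self, namespace: tuple, key: str, value: Any) -> None:
        self._data.setdefault(namespace, {})[key] = value

    def get(self, namespace: tuple, key: str):
        ns = self._data.get(namespace, {})
        return SketchStore.Item(ns[key]) if key in ns else None

store = SketchStore()
store.put(("users",), "abc123", {"name": "Foo", "age": 25})
item = store.get(("users",), "abc123")
print(item.value)                        # {'name': 'Foo', 'age': 25}
print(store.get(("users",), "missing"))  # None
```

The namespace tuple (here ("users",)) keeps unrelated data, such as different users or different features, from colliding under the same key.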