[Ch 3] Getting Started with LangChain & LangGraph
In Ch 1 we built the agent loop in raw Python. That worked, but it required us to manually manage the message list, handle tool routing, and build the loop ourselves. LangChain and LangGraph exist to automate exactly that — while adding persistence, streaming, debugging tooling, and a clean abstraction for complex workflows.
This chapter gives you a solid conceptual and practical foundation before we build a full agent in Ch 4.
Installation
pip install langchain-core langchain-openai langgraph
# .env.example
OPENAI_API_KEY=your-api-key-here
💡 Ollama users:
pip install langchain-ollama
We’ll show the swap at the end of this chapter.
Part 1: LangChain Core Concepts
LangChain provides the essential building blocks. You don’t need all of LangChain to use LangGraph — but these four concepts appear everywhere.
1.1 Messages
LangChain represents each conversation role with a typed message object. These correspond directly to the OpenAI Chat Completions API roles (system, user, assistant, tool):
# messages_demo.py
from langchain_core.messages import (
    SystemMessage,
    HumanMessage,
    AIMessage,
    ToolMessage,
)
# System prompt
system = SystemMessage(content="You are a helpful assistant.")
# User turn
user = HumanMessage(content="What is the capital of France?")
# Assistant reply (no tool call)
assistant = AIMessage(content="The capital of France is Paris.")
# When a tool is called, the AIMessage contains tool_calls
assistant_with_tool = AIMessage(
    content="",
    tool_calls=[
        {
            "id": "call_abc123",
            "name": "search_web",
            "args": {"query": "capital of France"},
        }
    ],
)

# The tool result is always a ToolMessage
tool_result = ToolMessage(
    tool_call_id="call_abc123",
    content="Paris is the capital and most populous city of France.",
)
# Build a conversation list (this is what gets sent to the LLM)
conversation = [system, user, assistant_with_tool, tool_result]
print(f"Conversation has {len(conversation)} messages")
Why typed messages matter: LangGraph’s state uses these types directly. When you add a message to the state, the framework knows whether it’s an AI message with tool calls or a tool result — and routes accordingly.
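To make “routes accordingly” concrete, here is a minimal sketch of type-based routing. It uses hand-rolled dataclass stand-ins (`AIMessageStub`, `ToolMessageStub` are illustration-only names, not the real LangChain classes), so the mechanism is visible without any framework code:

```python
from dataclasses import dataclass, field


# Stand-ins for the LangChain message classes (illustration only)
@dataclass
class AIMessageStub:
    content: str
    tool_calls: list = field(default_factory=list)


@dataclass
class ToolMessageStub:
    content: str
    tool_call_id: str = ""


def route(message) -> str:
    """Dispatch on the message type: an AI message with pending tool
    calls goes to tool execution; anything else ends the turn."""
    if isinstance(message, AIMessageStub) and message.tool_calls:
        return "tools"
    return "end"


print(route(AIMessageStub(content="", tool_calls=[{"name": "search_web"}])))  # tools
print(route(AIMessageStub(content="The capital of France is Paris.")))        # end
```

This is exactly the dispatch LangGraph performs for you once messages carry their type.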
1.2 ChatOpenAI
The ChatOpenAI class wraps the OpenAI API with LangChain’s interface. The most important parameters:
# llm_setup.py
import os

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",  # cheaper, fast; good for most agent tasks
    temperature=0.1,      # low randomness for consistent tool-calling
    api_key=os.environ.get("OPENAI_API_KEY"),
)

# Simple invoke
response = llm.invoke([HumanMessage(content="Hello!")])
print(response.content)  # "Hello! How can I help you today?"
print(type(response))    # <class 'langchain_core.messages.ai.AIMessage'>
💡 Ollama swap: Replace the entire ChatOpenAI block with:

from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.2", temperature=0.1)

Everything else in this chapter stays identical.
1.3 The @tool Decorator
@tool converts a Python function into a LangChain tool. The framework automatically generates the tool schema (name, description, parameters) from the function’s name, docstring, and type annotations.
# tools.py
from datetime import datetime, timezone

from langchain_core.tools import tool
from pydantic import BaseModel, Field


# Simple tool: the docstring becomes the description
@tool
def get_current_time() -> str:
    """Get the current UTC time in ISO 8601 format."""
    return datetime.now(timezone.utc).isoformat()


# Tool with a Pydantic input schema: best practice for complex inputs
class SearchInput(BaseModel):
    query: str = Field(description="The search query string")
    max_results: int = Field(default=5, description="Maximum number of results (1-20)")


@tool("search_documents", args_schema=SearchInput)
def search_documents(query: str, max_results: int = 5) -> str:
    """Search the document knowledge base for passages relevant to the query."""
    # In a real system, this calls your vector store
    return f"[Simulated] Found {max_results} results for: '{query}'"


# Inspect what the framework generated
print(get_current_time.name)         # "get_current_time"
print(get_current_time.description)  # "Get the current UTC time in ISO 8601 format."
print(search_documents.args_schema.model_json_schema())  # Full JSON schema (Pydantic v2)

# Bind tools to the LLM; this sends the schemas with every API call
tools = [get_current_time, search_documents]
llm_with_tools = llm.bind_tools(tools)
The key advantage of @tool over hand-writing raw function schemas: LangChain generates the schema for you, then parses the tool-call arguments the model returns and validates them against your Pydantic schema before your function runs.
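To demystify what the decorator does, here is a simplified, stdlib-only sketch of deriving a tool schema from a function signature. This illustrates the idea only; it is not LangChain’s actual implementation, which goes through Pydantic and handles many more cases (nested models, unions, descriptions from Field, etc.):

```python
import inspect
from typing import get_type_hints

# Map Python annotations to JSON-schema types (deliberately incomplete)
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}


def make_tool_schema(fn):
    """Sketch of what @tool derives from a function:
    name, description, and a JSON-schema-like parameters object."""
    hints = get_type_hints(fn)
    sig = inspect.signature(fn)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": PY_TO_JSON.get(hints.get(name), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default → the model must supply it
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": properties, "required": required},
    }


def search_documents(query: str, max_results: int = 5) -> str:
    """Search the document knowledge base."""
    return f"Found {max_results} results for {query!r}"


schema = make_tool_schema(search_documents)
print(schema["name"])                    # search_documents
print(schema["parameters"]["required"])  # ['query']
```

Note how the docstring becomes the description and the default value makes `max_results` optional; that is why good docstrings and type hints matter so much for tools.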
1.4 RunnableConfig
RunnableConfig is how you pass runtime configuration through any LangChain/LangGraph call without polluting the message stream. You’ll use it extensively in Ch 5 for injecting user context into tools.
# runnable_config_demo.py
from langchain_core.messages import HumanMessage
from langchain_core.runnables import RunnableConfig

# Pass config through invoke
config: RunnableConfig = {
    "configurable": {
        "user_id": "user-42",
        "session_id": "sess-abc",
        "thread_id": "thread-001",  # LangGraph uses this for checkpointing
    },
    "callbacks": [],  # attach tracing callbacks here
}

response = llm_with_tools.invoke(
    [HumanMessage(content="What time is it?")],
    config=config,
)
Inside a tool, you can access this config via the special config parameter:
@tool
def my_tool(query: str, config: RunnableConfig) -> str:  # config is injected automatically
    """A tool that knows who is calling it."""
    user_id = config.get("configurable", {}).get("user_id", "unknown")
    return f"Hello user {user_id}, you searched for: {query}"
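The injection mechanism itself is simple to picture: the framework inspects the function signature and, when a parameter is annotated with the config type, passes the runtime config there while keeping it out of the schema the LLM sees. Here is a stdlib-only sketch of that idea; `RunnableConfigStub` and `invoke_with_config` are hypothetical names for illustration, not LangChain APIs:

```python
import inspect


class RunnableConfigStub(dict):
    """Stand-in for RunnableConfig (which is itself a dict-like TypedDict)."""


def invoke_with_config(fn, args: dict, config: dict):
    """Sketch: if a parameter is annotated with the config type,
    inject the runtime config; the LLM never sees this parameter."""
    call_args = dict(args)
    for name, param in inspect.signature(fn).parameters.items():
        if param.annotation is RunnableConfigStub:
            call_args[name] = config
    return fn(**call_args)


def my_tool(query: str, config: RunnableConfigStub) -> str:
    user_id = config.get("configurable", {}).get("user_id", "unknown")
    return f"Hello user {user_id}, you searched for: {query}"


print(invoke_with_config(my_tool, {"query": "langgraph"},
                         {"configurable": {"user_id": "user-42"}}))
# Hello user user-42, you searched for: langgraph
```

The design payoff: tools stay pure functions of their declared arguments, while per-request context (user, session, thread) rides alongside the call instead of being smuggled into the prompt.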
Part 2: LangGraph Core Concepts
LangGraph builds stateful agent workflows as directed graphs. The key insight: state flows through nodes, and edges determine routing.
2.1 The State
State is a TypedDict that every node reads from and writes to. The Annotated type with add_messages is the standard pattern for message lists — it appends rather than overwrites:
# state.py
from typing import Annotated, TypedDict

from langchain_core.messages import AnyMessage
from langgraph.graph.message import add_messages


class AgentState(TypedDict):
    # add_messages reducer: new messages are appended, not replaced
    messages: Annotated[list[AnyMessage], add_messages]

    # You can add other fields for custom state
    step_count: int
    error: str | None
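The append-don’t-overwrite behavior comes from the reducer. As a mental model (a simplification, not LangGraph’s source), add_messages behaves roughly like this: append new messages, but replace an existing one when its id matches, which is how streaming updates and message edits work:

```python
def add_messages_sketch(existing: list[dict], new: list[dict]) -> list[dict]:
    """Simplified model of the add_messages reducer. The real one also
    assigns missing ids and coerces raw dicts into message objects."""
    index_by_id = {m["id"]: i for i, m in enumerate(existing)}
    merged = list(existing)
    for m in new:
        if m["id"] in index_by_id:
            merged[index_by_id[m["id"]]] = m  # same id → replace in place
        else:
            merged.append(m)                  # normal case: append
    return merged


history = [{"id": "1", "content": "hi"}]
history = add_messages_sketch(history, [{"id": "2", "content": "hello!"}])
print(len(history))  # 2
history = add_messages_sketch(history, [{"id": "2", "content": "hello (edited)"}])
print(len(history))  # still 2; message "2" was replaced, not appended
```

Fields without a reducer (like `step_count` above) are simply overwritten by whatever a node returns.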
2.2 Nodes
A node is any Python function (sync or async) that takes the current state and returns a dict of updates:
# nodes.py
from langchain_core.messages import SystemMessage

SYSTEM_PROMPT = "You are a helpful assistant. Use tools when needed."


def agent_node(state: AgentState) -> dict:
    """The brain: calls the LLM with the current message history."""
    messages = [SystemMessage(content=SYSTEM_PROMPT)] + state["messages"]
    response = llm_with_tools.invoke(messages)
    return {"messages": [response]}  # add_messages will append this


def increment_step(state: AgentState) -> dict:
    """A simple state-tracking node."""
    return {"step_count": state.get("step_count", 0) + 1}
2.3 Edges and Conditional Edges
Edges connect nodes. A conditional edge uses a function to decide where to go next:
# routing.py
def should_continue(state: AgentState) -> str:
    """
    Return "tools" if the last AI message has tool calls,
    otherwise return "end" to finish.
    """
    last_message = state["messages"][-1]
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "tools"
    return "end"
2.4 Building the Graph
# graph.py
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode


def build_graph(tools: list):
    """Assemble the graph and return the compiled, invokable version."""
    tool_node = ToolNode(tools)  # handles tool execution + ToolMessage creation

    graph = StateGraph(AgentState)

    # Add nodes
    graph.add_node("agent", agent_node)
    graph.add_node("tools", tool_node)

    # Entry point
    graph.set_entry_point("agent")

    # Conditional edge: after agent runs, decide what happens next
    graph.add_conditional_edges(
        "agent",
        should_continue,
        {
            "tools": "tools",  # if tool call → run tools
            "end": END,        # if no tool call → stop
        },
    )

    # After tools run, always go back to agent
    graph.add_edge("tools", "agent")

    return graph.compile()
Here’s the graph structure:

START → agent
agent ──(has tool calls)──► tools ──► agent
agent ──(no tool calls)──► END
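Putting the pieces together, the compiled graph conceptually executes the loop below. This is a hand-rolled sketch with stub nodes (fake_agent, fake_tools, and fake_router are hypothetical stand-ins, not LangGraph internals), but the control flow is the same agent → tools → agent cycle:

```python
def run_graph_sketch(state: dict, nodes: dict, should_continue) -> dict:
    """Conceptual sketch of the loop a compiled agent graph runs:
    run the entry node, route, run tools, repeat until END."""
    current = "agent"  # entry point
    while True:
        update = nodes[current](state)
        state = {**state, "messages": state["messages"] + update["messages"]}
        if current == "agent":
            if should_continue(state) == "end":
                return state       # conditional edge → END
            current = "tools"      # conditional edge → tools
        else:
            current = "agent"      # fixed edge: tools → agent


# Stub nodes so the control flow is visible without an LLM:
def fake_agent(state):
    # First call "requests a tool"; once a tool result exists, it answers.
    if any(m["role"] == "tool" for m in state["messages"]):
        return {"messages": [{"role": "ai", "content": "1379", "tool_calls": []}]}
    return {"messages": [{"role": "ai", "content": "", "tool_calls": [{"name": "add_numbers"}]}]}


def fake_tools(state):
    return {"messages": [{"role": "tool", "content": "1379.0"}]}


def fake_router(state):
    return "tools" if state["messages"][-1]["tool_calls"] else "end"


final = run_graph_sketch(
    {"messages": [{"role": "human", "content": "What is 1337 + 42?"}]},
    {"agent": fake_agent, "tools": fake_tools},
    fake_router,
)
print(final["messages"][-1]["content"])  # 1379
print(len(final["messages"]))            # 4: human, ai(tool call), tool, ai(answer)
```

Compare this with the raw loop from Ch 1: LangGraph’s value is that the loop, routing, and state merging are declared as a graph instead of hand-coded.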
Part 3: Hello World Agent
Let’s put it all together into a runnable Hello World agent:
# hello_agent.py
import os
from typing import Annotated, TypedDict

from langchain_core.messages import AnyMessage, HumanMessage, SystemMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import END, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode


# ── State ──────────────────────────────────────────────────────────────────
class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]


# ── LLM ────────────────────────────────────────────────────────────────────
llm = ChatOpenAI(
    model="gpt-4o-mini",
    temperature=0.1,
    api_key=os.environ.get("OPENAI_API_KEY"),
)


# ── Tools ──────────────────────────────────────────────────────────────────
@tool
def add_numbers(a: float, b: float) -> float:
    """Add two numbers together and return the result."""
    return a + b


@tool
def get_word_count(text: str) -> int:
    """Count the number of words in the given text."""
    return len(text.split())


TOOLS = [add_numbers, get_word_count]
llm_with_tools = llm.bind_tools(TOOLS)


# ── Nodes ──────────────────────────────────────────────────────────────────
SYSTEM_PROMPT = "You are a helpful assistant. Use tools when you need to compute things."


def agent_node(state: AgentState) -> dict:
    messages = [SystemMessage(content=SYSTEM_PROMPT)] + state["messages"]
    response = llm_with_tools.invoke(messages)
    return {"messages": [response]}


def should_continue(state: AgentState) -> str:
    last = state["messages"][-1]
    if hasattr(last, "tool_calls") and last.tool_calls:
        return "tools"
    return "end"


# ── Graph ──────────────────────────────────────────────────────────────────
graph = StateGraph(AgentState)
graph.add_node("agent", agent_node)
graph.add_node("tools", ToolNode(TOOLS))
graph.set_entry_point("agent")
graph.add_conditional_edges("agent", should_continue, {"tools": "tools", "end": END})
graph.add_edge("tools", "agent")
app = graph.compile()


# ── Run ────────────────────────────────────────────────────────────────────
if __name__ == "__main__":
    initial_state = {
        "messages": [
            HumanMessage(
                content="What is 1337 + 42? Also, how many words are in: 'the quick brown fox'?"
            )
        ]
    }

    # Invoke (blocking, returns final state)
    final_state = app.invoke(initial_state)
    print("\n=== Final Answer ===")
    print(final_state["messages"][-1].content)

    # Optional: print all messages to see the full trajectory
    print("\n=== Full Trajectory ===")
    for msg in final_state["messages"]:
        role = msg.__class__.__name__
        # Not every message type has tool_calls, so use getattr defensively
        content = msg.content or f"[tool calls: {getattr(msg, 'tool_calls', [])}]"
        print(f"[{role}] {str(content)[:120]}")
Expected output:
=== Final Answer ===
I calculated both for you:
- 1337 + 42 = **1379**
- The sentence "the quick brown fox" contains **4 words**
=== Full Trajectory ===
[HumanMessage] What is 1337 + 42? Also, how many words are in: 'the quick brown fox'?
[AIMessage] [tool calls: [{'name': 'add_numbers', 'args': {'a': 1337, 'b': 42}...}]]
[ToolMessage] 1379.0
[ToolMessage] 4
[AIMessage] I calculated both for you: ...
Notice the trajectory: the LLM called both tools in one step (parallel tool calling), received both results, then composed the final answer.
LangChain vs. LangGraph: When to Use Which
| | LangChain (LCEL chains) | LangGraph |
|---|---|---|
| Best for | Simple, linear pipelines | Stateful, multi-step agents |
| State | No built-in state | Full TypedDict state with reducers |
| Persistence | Manual | Built-in checkpointers |
| Loops | Not natively supported | First-class citizen |
| Human-in-the-loop | Awkward | Native interrupt() |
| Streaming | Token-level | Token-level + step-level events |
| Complexity | Low | Medium |
Rule of thumb: If you need a fixed, linear chain (e.g., extract → classify → format), use LCEL. If you need a loop, branching, or persistence across sessions, use LangGraph.
In this series, we use LangGraph for everything from Ch 4 onwards.
💡 Ollama swap: In hello_agent.py, replace the LLM block with:

from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.2", temperature=0.1)

Note: smaller local models may not reliably trigger parallel tool calls. If results are inconsistent, simplify the query to one tool at a time.
Summary
| Concept | Description |
|---|---|
| `HumanMessage` / `AIMessage` / `ToolMessage` | Typed wrappers for each conversation role |
| `ChatOpenAI` | LangChain’s OpenAI client; use `.bind_tools()` to attach tool schemas |
| `@tool` | Decorator that generates tool schemas from Python function signatures |
| `RunnableConfig` | Runtime config dict for thread IDs, user context, callbacks |
| `AgentState` | TypedDict with `Annotated[list, add_messages]` for message accumulation |
| `StateGraph` | LangGraph’s graph builder; nodes + edges + conditional routing |
| `ToolNode` | Prebuilt node that executes tool calls and produces `ToolMessage` results |
| `should_continue` | Routing function: “tools” if the last AI message has `tool_calls`, else “end” |
In the next chapter, we build a complete, production-grade agent: persistent memory with a SQLite checkpointer, proper error handling, streaming output, and multi-turn conversation support.
← Ch 2: Components & Context Engineering | Ch 4: Build Your First Agent →
