04. Structured Reasoning & LangGraph
Overview
In this session, we explore structured reasoning patterns that go beyond linear thinking:
| Pattern | Description | Use Case |
|---|---|---|
| Decomposition | Break complex problems into sub-tasks | Multi-step questions |
| Tree of Thoughts | Explore multiple reasoning paths | Creative/strategic tasks |
| LangGraph | State machine for agent workflows | Complex agent orchestration |
Part 1: Decomposition (Least-to-Most)
Complex, multi-hop queries can overwhelm a single LLM call: the model may skip steps or conflate intermediate facts. Breaking the query into distinct, sequentially answerable sub-questions keeps each step simple.
Implementation
```python
from openai import OpenAI
from pydantic import BaseModel, Field
from typing import List

client = OpenAI()

class SubQueries(BaseModel):
    queries: List[str] = Field(description="Sub-questions to solve the original query")

def decompose_query(query: str) -> List[str]:
    """Decomposes a complex query into sub-queries."""
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Break down complex problems into 3-4 sequential sub-questions."},
            {"role": "user", "content": query}
        ],
        response_format=SubQueries
    )
    return completion.choices[0].message.parsed.queries
```

Example
Complex Query: "Who is the current president of the country where Elon Musk was born, and when does their term end?"
Decomposed Sub-Questions:
- What country was Elon Musk born in?
- Who is the current president of that country?
- What is the presidential term length in that country?
- When does the current president's term end?
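Each answer is appended to a running context so later sub-questions can resolve references like "that country". With canned answers standing in for model calls (the answers below are illustrative placeholders, not live lookups), the threading looks like:

```python
# Simulate how context accumulates across sub-questions (no API calls);
# in the real pipeline each answer would come from the model.
sub_questions = [
    "What country was Elon Musk born in?",
    "Who is the current president of that country?",
]
canned_answers = ["South Africa", "<president's name>"]

context = ""
for q, a in zip(sub_questions, canned_answers):
    # Later questions see every earlier Q/A pair, so "that country"
    # can be resolved against the first answer.
    context += f"Q: {q}\nA: {a}\n\n"

print(context)
```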
Sequential Solving
```python
def solve_decomposed_queries(original_query: str):
    sub_qs = decompose_query(original_query)
    context = ""
    for i, q in enumerate(sub_qs):
        print(f"[Step {i+1}] Q: {q}")
        answer = answer_question(q, context)  # answers one sub-question given prior Q/A context
        print(f"  ✅ A: {answer}")
        context += f"Q: {q}\nA: {answer}\n\n"
    # Synthesize final answer from the accumulated Q/A context
    final_answer = answer_question(f"Answer: '{original_query}'", context)
    return final_answer
```

Part 2: Tree of Thoughts (ToT)
Tree of Thoughts explores multiple reasoning paths simultaneously—like playing chess by considering several moves before choosing.
ToT Components
- Proposer - Generate N possible next steps/thoughts
- Evaluator - Score each candidate (1-10 scale)
- Selector - Choose the best candidate (greedy or beam search)
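The selector's greedy rule can be seen in isolation before the full implementation; the candidate strings and scores below are made up for illustration:

```python
# Pretend the proposer returned three thoughts and the evaluator
# already scored them on a 1-10 scale.
candidates = ["write an outline", "draft the ending first", "brainstorm characters"]
scores = [7, 4, 9]

# Greedy selection: take the single highest-scoring candidate
best_idx = scores.index(max(scores))
print(candidates[best_idx])  # → brainstorm characters
```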
Implementation
```python
# 1. Proposer: Generate candidate continuations
def propose_next_steps(current_state: str, n: int = 3) -> List[str]:
    prompt = f"""Current state: {current_state}
Propose {n} possible next steps. Number them 1, 2, 3.
"""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}]
    )
    # Parse numbered lines like "1. <step>"
    lines = response.choices[0].message.content.strip().split("\n")
    return [line.split(". ", 1)[1] for line in lines if ". " in line][:n]

# 2. Evaluator: Score candidates
def evaluate_candidates(candidates: List[str], goal: str) -> List[int]:
    scores = []
    for cand in candidates:
        prompt = f"""Goal: {goal}
Candidate: {cand}
Rate relevance and quality (1-10). Return number only.
"""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}]
        )
        scores.append(int(response.choices[0].message.content.strip()))
    return scores
```
```python
# 3. ToT Search Loop
def run_tot(goal: str, steps: int = 3):
    current_state = ""
    for i in range(steps):
        candidates = propose_next_steps(current_state)
        scores = evaluate_candidates(candidates, goal)
        # Select best candidate
        best_idx = scores.index(max(scores))
        current_state += candidates[best_idx] + " "
    return current_state
```

ToT vs Greedy
| Approach | Method | Pros | Cons |
|---|---|---|---|
| Greedy | Always pick best score | Fast, simple | Can get stuck in local optima |
| Beam Search | Keep top-k candidates | Explores more paths | Higher compute cost |
| Backtracking | Go back if scores are low | Can recover from bad choices | Even higher cost |
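Beam search from the table above can be sketched without any API calls by plugging in deterministic propose/score functions; the toy proposer and scorer here are assumptions purely for illustration:

```python
from typing import Callable, List, Tuple

def beam_search(
    propose: Callable[[str], List[str]],  # candidate next steps for a state
    score: Callable[[str], int],          # higher is better
    steps: int = 2,
    beam_width: int = 2,
) -> str:
    # Each beam entry is (state_so_far, cumulative_score)
    beams: List[Tuple[str, int]] = [("", 0)]
    for _ in range(steps):
        expanded = []
        for state, total in beams:
            for cand in propose(state):
                expanded.append((state + cand + " ", total + score(cand)))
        # Keep the top-k partial paths instead of a single greedy pick
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]

# Toy proposer/scorer: longer candidates score higher
toy_propose = lambda state: ["a", "bb", "ccc"]
toy_score = lambda cand: len(cand)
print(beam_search(toy_propose, toy_score))  # → ccc ccc
```

Swapping the toy functions for `propose_next_steps` and a per-candidate evaluator turns this into a ToT variant that explores multiple paths at once.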
Part 3: LangGraph
LangGraph transforms agent logic into visual state machines, making complex workflows easier to understand and maintain.
Why LangGraph?
| Feature | Python Loops | LangGraph |
|---|---|---|
| Visualization | Code only | Visual graph |
| State Management | Manual | Built-in TypedDict |
| Conditional Logic | if/else | Conditional edges |
| Debugging | Print statements | Graph inspection |
| Checkpointing | Manual | Built-in |
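For contrast, here is roughly what the same plan-execute-check workflow looks like as a hand-written Python loop; the stub versions of `create_plan` and `execute_step` are illustrative assumptions, not real planners:

```python
from typing import Dict, List

def create_plan(query: str) -> List[str]:
    # Stub planner; a real one would call an LLM
    return [f"research: {query}", f"summarize: {query}"]

def execute_step(step: str) -> str:
    # Stub executor
    return f"done({step})"

def run_manually(query: str) -> Dict:
    # State management, routing, and termination are all hand-written here;
    # LangGraph replaces this with nodes, edges, and a shared state schema.
    state = {"query": query, "plan": create_plan(query), "results": [], "done": False}
    while not state["done"]:
        state["results"].append(execute_step(state["plan"].pop(0)))
        state["done"] = len(state["plan"]) == 0
    return state

print(run_manually("agent frameworks")["results"])
```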
Key Concepts
```python
from langgraph.graph import StateGraph, END
from typing import TypedDict, List

# 1. Define State Schema
class AgentState(TypedDict):
    query: str
    plan: List[str]
    results: List[str]
    done: bool

# 2. Create Graph
graph = StateGraph(AgentState)

# 3. Add Nodes
graph.add_node("planner", planner_node)
graph.add_node("executor", executor_node)
graph.add_node("checker", checker_node)

# 4. Add Edges
graph.set_entry_point("planner")
graph.add_edge("planner", "executor")
graph.add_edge("executor", "checker")

# 5. Add Conditional Edge
graph.add_conditional_edges(
    "checker",
    should_replan,  # Function returning next node name
    {
        "replan": "planner",
        "done": END
    }
)

# 6. Compile and Run
app = graph.compile()
result = app.invoke({"query": "...", "plan": [], "results": [], "done": False})
```

Example: Plan-and-Execute with LangGraph
```python
def planner_node(state: AgentState) -> AgentState:
    """Generate execution plan"""
    plan = create_plan(state["query"])
    return {"plan": plan}

def executor_node(state: AgentState) -> AgentState:
    """Execute next step in plan"""
    result = execute_step(state["plan"][0])
    remaining_plan = state["plan"][1:]
    return {"plan": remaining_plan, "results": state["results"] + [result]}

def checker_node(state: AgentState) -> AgentState:
    """Check if we're done"""
    done = len(state["plan"]) == 0
    return {"done": done}

def should_replan(state: AgentState) -> str:
    # Must return a key from the conditional-edge mapping: "replan" or "done"
    if state["done"]:
        return "done"
    return "replan"
```

Hands-on Practice
In the notebooks, you will:
- Implement Decomposition - Break complex queries into sub-questions and solve sequentially
- Build Tree of Thoughts - Create a creative writing agent that explores multiple paths
- LangGraph Basics - Transform a simple loop into a state graph
- Add Replanning - Implement conditional edges for dynamic plan adjustment
Key Takeaways
- Decomposition simplifies - Breaking problems into parts makes them tractable
- ToT explores breadth - Multiple paths find better solutions for creative tasks
- LangGraph adds structure - State machines make agent logic explicit and debuggable
- Combine patterns - Use decomposition within ToT, or LangGraph to orchestrate both
References & Further Reading
Academic Papers
- "Tree of Thoughts: Deliberate Problem Solving with Large Language Models" - Yao et al., 2023 - arXiv:2305.10601 - Foundation for tree-based reasoning
- "Least-to-Most Prompting Enables Complex Reasoning in Large Language Models" - Zhou et al., 2023 - arXiv:2205.10625 - Decomposition for complex reasoning
- "Graph of Thoughts: Solving Elaborate Problems with Large Language Models" - Besta et al., 2023 - arXiv:2308.09687 - Extends ToT to arbitrary graph structures
- "Language Agent Tree Search" - Zhou et al., 2023 - arXiv:2310.04406 - MCTS-style search for language agents
Related Tools
- LangGraph: GitHub
- AutoGen: GitHub
- DSPy: GitHub
Next Steps
Now that you understand structured reasoning, head to Advanced Self-RAG for retrieval-augmented generation with self-correction!