04. Model Context Protocol (MCP) & A2A
Overview
In this session, we explore MCP (Model Context Protocol)—a universal standard for connecting LLMs to external data sources and tools—and A2A (Agent-to-Agent) communication patterns.
The Problem: Tool Fragmentation
Before MCP, every AI application had to build custom integrations:
Problems:
- Every LLM needs its own integration code
- N LLMs × M tools = N×M integrations
- Inconsistent implementations
- Maintenance nightmare
What is MCP?
Model Context Protocol (MCP) is an open standard developed by Anthropic that provides a universal way for LLMs to connect with external systems.
Benefits:
- Write once, use everywhere
- Standardized interface
- Ecosystem of pre-built connectors
- Security and access control built-in
MCP Core Primitives
| Primitive | Description | Example |
|---|---|---|
| Resources | Data exposed by servers | Files, database records, logs |
| Prompts | Pre-defined templates | "Summarize this document" |
| Tools | Executable functions | get_user(), send_message() |
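For orientation, the official MCP Python SDK (the mcp package) exposes all three primitives through decorators. The sketch below follows the pattern from the modelcontextprotocol/python-sdk README; the server name and function bodies are illustrative assumptions:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo")  # illustrative server name

@mcp.tool()
def add(a: int, b: int) -> int:
    """Tool: an executable function the LLM can call."""
    return a + b

@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Resource: data exposed to the client, addressed by URI."""
    return f"Hello, {name}!"

@mcp.prompt()
def summarize(text: str) -> str:
    """Prompt: a pre-defined template the client can request."""
    return f"Summarize this document:\n\n{text}"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```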
MCP Architecture
MCP follows a client-server design: a host application (such as Claude Desktop or your own agent) runs an MCP client that holds a dedicated connection to each MCP server, and each server exposes its resources, prompts, and tools over a standard transport (stdio for local servers, HTTP for remote ones).
Building a Mock MCP Server
```python
import json
from typing import Dict, List


class MockMCPServer:
    def __init__(self, name: str):
        self.name = name
        # In-memory "database" the server exposes through its tools
        self.users = {
            "user_1": {"name": "Alice", "email": "alice@example.com"},
            "user_2": {"name": "Bob", "email": "bob@example.com"},
        }

    def list_tools(self) -> List[Dict]:
        """Discovery: What can this server do?"""
        return [
            {
                "name": "get_user_info",
                "description": "Retrieve user details by ID",
                "input_schema": {
                    "type": "object",
                    "properties": {"user_id": {"type": "string"}},
                    "required": ["user_id"],
                },
            }
        ]

    def call_tool(self, tool_name: str, arguments: Dict) -> str:
        """Execute a tool and return results"""
        if tool_name == "get_user_info":
            user = self.users.get(arguments["user_id"])
            return json.dumps(user) if user else "User not found"
        raise ValueError(f"Unknown tool: {tool_name}")
```
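A quick smoke test of the mock server, before any LLM is involved:

```python
server = MockMCPServer("user-directory")

# Discovery: the client learns the server's capabilities at runtime
print([t["name"] for t in server.list_tools()])
# ['get_user_info']

# Invocation: call a discovered tool with schema-conforming arguments
print(server.call_tool("get_user_info", {"user_id": "user_1"}))
# {"name": "Alice", "email": "alice@example.com"}
```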
Building an MCP Client (Agent)
```python
class MCPClient:
    def __init__(self, server: MockMCPServer):
        self.server = server

    def run(self, query: str):
        # 1. Discovery Phase: ask the server what it can do
        tools = self.server.list_tools()

        # 2. Convert the MCP tool schemas to OpenAI function-calling format
        openai_tools = [{
            "type": "function",
            "function": {
                "name": t["name"],
                "description": t["description"],
                "parameters": t["input_schema"],
            },
        } for t in tools]

        # 3. Let the LLM decide which tools to call, then execute them
        # ... (agent loop with tool calls)
```
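The elided agent loop could look like the sketch below, using the OpenAI Python SDK's chat-completions tool calling. The model name and standalone-function structure are illustrative assumptions, not part of the notebook:

```python
import json
from openai import OpenAI

client = OpenAI()

def run_agent(server: MockMCPServer, openai_tools: list, query: str) -> str:
    messages = [{"role": "user", "content": query}]
    while True:
        response = client.chat.completions.create(
            model="gpt-4o-mini",   # illustrative model choice
            messages=messages,
            tools=openai_tools,
        )
        message = response.choices[0].message
        if not message.tool_calls:
            return message.content  # no more tool calls: final answer
        messages.append(message)
        for tool_call in message.tool_calls:
            # Route each tool call to the MCP server and feed the
            # result back to the LLM as a "tool" message
            result = server.call_tool(
                tool_call.function.name,
                json.loads(tool_call.function.arguments),
            )
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": result,
            })
```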
A2A: Agent-to-Agent Communication
Just as a client talks to a server, agents can talk to each other. If Agent A exposes an ask_me tool, Agent B can use it.
A2A Implementation Pattern
```python
from typing import Dict


class ResearchAgent:
    """Agent that exposes itself as a tool"""

    def as_tool(self) -> Dict:
        # Advertise this agent using the same schema format a tool would use
        return {
            "name": "ask_researcher",
            "description": "Ask the research agent to find information",
            "input_schema": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"}
                },
                "required": ["query"],
            },
        }

    def invoke(self, query: str) -> str:
        # This agent does its own research
        return self._search_and_summarize(query)

    def _search_and_summarize(self, query: str) -> str:
        # Placeholder: a real agent would run its own LLM/tool loop here
        return f"Research summary for: {query}"


# Manager can now use ResearchAgent as a tool!
research_agent = ResearchAgent()
manager_tools = [research_agent.as_tool()]
```
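On the manager's side, routing works exactly like any other tool call. A minimal dispatch sketch (dispatch is a hypothetical helper for illustration, not part of any framework):

```python
def dispatch(tool_name: str, arguments: dict) -> str:
    # When the manager's LLM emits a tool call named "ask_researcher",
    # forward the query to the sub-agent and return its answer
    if tool_name == "ask_researcher":
        return research_agent.invoke(arguments["query"])
    raise ValueError(f"Unknown tool: {tool_name}")
```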
Real-World MCP Servers
The ecosystem already includes reference servers for filesystems, GitHub, Google Drive, Postgres, Slack, and more, so many integrations can be configured rather than coded.
Example: Using MCP with Claude Desktop
```json
// claude_desktop_config.json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token"
      }
    }
  }
}
```
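Claude Desktop reads this file from ~/Library/Application Support/Claude/claude_desktop_config.json on macOS (or the Claude folder under %APPDATA% on Windows); restart the app after editing so the servers are picked up.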
Hands-on Practice
In the notebook, you will:
1. Build a Mock MCP Server: create a server that exposes user data as tools
2. Implement Discovery: let the client discover available tools dynamically
3. Build the Agent Loop: connect the LLM to the server using OpenAI function calling
4. Experiment with A2A: create an agent that uses another agent as a tool
MCP vs Direct Tool Integration
| Aspect | Direct Integration | MCP |
|---|---|---|
| Reusability | Per-application | Universal |
| Discovery | Hardcoded | Dynamic |
| Security | Ad-hoc | Built-in scopes |
| Ecosystem | DIY | Growing library |
| Maintenance | Each app separately | Centralized |
When to Use MCP
Use MCP when:
- You want tools to work across multiple LLM providers
- You're building production systems that need standard interfaces
- You want to leverage the MCP ecosystem
Consider alternatives when:
- You need a simple one-off integration
- You're prototyping (direct function calling is faster to wire up)
- Your tools are highly specialized and proprietary
References & Further Reading
Related Concepts
- Google A2A Protocol: Agent-to-Agent Communication
- LangChain Tool Calling: LangChain Docs
- OpenAI Function Calling: OpenAI Docs
Academic Papers
- "Toolformer: Language Models Can Teach Themselves to Use Tools" - Schick et al., 2023
  - arXiv:2302.04761
  - Foundation for tool-using LLMs
- "Gorilla: Large Language Model Connected with Massive APIs" - Patil et al., 2023
  - arXiv:2305.15334
  - API-aware LLM training
Next Steps
Now that you understand MCP and A2A, head to CrewAI to see how frameworks abstract these patterns for rapid multi-agent development!