The rapid evolution of AI agents has made it easier than ever to create autonomous systems. A new example demonstrates how to build a functional agent using Python and the Model Context Protocol (MCP) in roughly 70 lines of code.
What is the Model Context Protocol?
MCP is an open protocol that standardizes how applications provide context and tools to large language models. It enables agents to act on user requests by connecting to external data sources and APIs.
The 70-Line Agent
The sample code sets up an agent that can:
- Accept natural language queries
- Use MCP to call external tools (e.g., fetch weather data, search the web)
- Return concise results
Key components:
- MCP Client: Handles communication with an MCP server.
- Tool definitions: Describe available functions the agent can invoke.
- Query loop: Reads user input, processes it via the LLM, and executes tool calls.
The example uses the openai library and assumes an OpenAI-compatible API, but the pattern works with any LLM provider that exposes a Chat Completions-compatible endpoint.
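As a concrete sketch, a tool definition in the Chat Completions "tools" format looks like the following. The get_weather name matches the weather example in this article; the exact parameter schema shown here is an illustrative assumption, not taken from the sample code.

```python
# A tool definition in the Chat Completions function-calling format.
# The parameter schema is a hypothetical example for a weather tool.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Fetch the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name, e.g. 'Tokyo'",
                },
            },
            "required": ["city"],
        },
    },
}
```

A list of such definitions is passed to the LLM on each request, which is how the model learns what it can call.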
How It Works
- The user asks a question (e.g., "What's the weather in Tokyo?")
- The LLM identifies that a tool (like get_weather) is needed.
- The agent calls the MCP server, which executes the tool and returns results.
- The LLM formats a final answer for the user.
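The tool-call step above can be sketched without a live LLM or MCP server. The snippet below is a minimal stand-in: a local registry plays the role of the MCP server's tools, and `execute_tool_call` dispatches a tool call shaped the way the Chat Completions API returns one. The registry and function names are hypothetical, not from the sample code.

```python
import json

# Hypothetical local registry standing in for tools an MCP server exposes.
TOOLS = {
    "get_weather": lambda city: f"It is sunny in {city}.",
}

def execute_tool_call(tool_call: dict) -> str:
    """Dispatch one tool call of the shape the LLM emits:
    {"function": {"name": ..., "arguments": "<JSON string>"}}.
    """
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return TOOLS[name](**args)

# Simulated tool call, as the LLM might emit for the Tokyo question.
call = {"function": {"name": "get_weather", "arguments": '{"city": "Tokyo"}'}}
result = execute_tool_call(call)
```

In the real agent, the dispatch target is the MCP server rather than an in-process dict, but the control flow (parse the call, execute it, hand the result back to the LLM) is the same.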
The entire agent logic fits in a single file, making it easy to understand and modify.
Practical Implications
Lightweight agents like this lower the barrier for developers to integrate AI into their applications. By using MCP, the agent can be extended with new tools without changing the core code. This modularity is key for building scalable, maintainable AI systems.
While this example is minimal, it illustrates the fundamental pattern behind more complex agent frameworks. For many use cases, a simple agent is all you need.