Quick Start: Two Agents Solving a Task
AutoGen’s core idea is simple: you define agents with roles, put them in a conversation, and they collaborate to solve a problem. Here’s the fastest path to a working multi-agent setup:
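A minimal sketch of that setup, assuming the classic pyautogen API, an OPENAI_API_KEY in your environment, and an illustrative model name:

```python
import os

import autogen

# Model name and env var are illustrative; adjust to your provider.
config_list = [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]

# The LLM-powered "brain": writes code, never executes it.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

# The executor: runs the assistant's code blocks inside Docker.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda m: "TERMINATE" in (m.get("content") or ""),
    code_execution_config={"work_dir": "coding", "use_docker": True},
)

# Kick off the back-and-forth.
user_proxy.initiate_chat(
    assistant,
    message="Write a Python script that prints the first 10 Fibonacci numbers, then run it.",
)
```

The AssistantAgent's default system message already tells it to reply TERMINATE when the task is done, which is what the is_termination_msg check watches for.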
That’s it. The assistant writes code, the executor runs it in a Docker container, and if something fails, the assistant fixes it. This back-and-forth continues until the task succeeds or hits the reply limit.
Understanding the Agent Types
AutoGen ships with a few agent classes, but you’ll mostly use two.
AssistantAgent
This is your LLM-powered worker. It receives messages, reasons about them, and responds — usually with code or analysis. It doesn’t execute anything itself. Think of it as the brain.
UserProxyAgent
Despite the name, this isn’t really about users. It’s an agent that can execute code, call tools, and optionally ask a human for input. The human_input_mode parameter controls this:
"ALWAYS"— asks for human approval before every reply"TERMINATE"— asks only when the conversation would end"NEVER"— fully autonomous, no human in the loop
ConversableAgent
Both AssistantAgent and UserProxyAgent inherit from ConversableAgent. If you need custom behavior, subclass this directly:
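A small sketch of such a subclass, overriding receive to log traffic before handing off to the normal pipeline (the receive signature is from the classic pyautogen API):

```python
from autogen import ConversableAgent


class LoggingAgent(ConversableAgent):
    """Logs every incoming message, then processes it normally."""

    def receive(self, message, sender, request_reply=None, silent=False):
        print(f"[{self.name}] received a message from {sender.name}")
        super().receive(message, sender, request_reply, silent)
```

The same pattern works for overriding generate_reply if you want to change how the agent responds rather than how it listens.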
Group Chat: Multi-Agent Collaboration
Two agents talking is useful. Three or more agents with distinct roles is where AutoGen gets interesting. GroupChat and GroupChatManager handle the orchestration.
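A sketch of a three-agent group chat; the agent names, system messages, and task are illustrative, and llm_config assumes a config_list like the one in the quick start:

```python
import autogen

llm_config = {"config_list": config_list}  # see the quick start above

planner = autogen.AssistantAgent(
    name="planner",
    system_message="You break the task into concrete steps.",
    llm_config=llm_config,
)
coder = autogen.AssistantAgent(
    name="coder",
    system_message="You write Python code for the current step.",
    llm_config=llm_config,
)
executor = autogen.UserProxyAgent(
    name="executor",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding", "use_docker": True},
)

groupchat = autogen.GroupChat(
    agents=[planner, coder, executor],
    messages=[],
    max_round=12,
    speaker_selection_method="auto",
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

executor.initiate_chat(manager, message="Analyze sales.csv and report the top 3 products.")
```

The manager routes every message: after each turn it picks the next speaker (here via the LLM, because of "auto") and forwards the conversation to that agent.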
The speaker_selection_method matters. "auto" uses the LLM to decide who speaks next based on conversation context. You can also use "round_robin" for predictable turn order, or pass a custom function for full control.
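As a sketch of the custom-function option: in recent pyautogen versions the callable receives the last speaker and the GroupChat, and may return an agent, one of the built-in method names, or None to end the chat. The policy and agent names below are hypothetical:

```python
def coder_after_planner(last_speaker, groupchat):
    """Hypothetical policy: the coder always speaks right after the planner;
    every other turn falls back to the default LLM-driven selection."""
    if last_speaker.name == "planner":
        return next(a for a in groupchat.agents if a.name == "coder")
    return "auto"

# Pass the function instead of a string:
#   autogen.GroupChat(..., speaker_selection_method=coder_after_planner)
```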
Tool Registration
Agents can call Python functions as tools. This is better than having the LLM generate code when you already know exactly what function to call.
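A sketch of the registration pattern; the tool itself (c_to_f) is a made-up example, and llm_config assumes a config_list like the one in the quick start:

```python
import autogen

llm_config = {"config_list": config_list}  # see the quick start above

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,  # tools only, no generated-code execution
)


# The inner decorator advertises the tool to the LLM; the outer one
# registers it for execution on the proxy side.
@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Convert a temperature from Celsius to Fahrenheit.")
def c_to_f(celsius: float) -> float:
    return celsius * 9 / 5 + 32
```

When the assistant emits a tool call for c_to_f, the user_proxy runs the function and feeds the result back into the conversation.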
The decorator pattern is clean — register_for_llm tells the assistant the tool exists, and register_for_execution tells the proxy to actually run it when the assistant calls it.
Docker Code Execution
Running LLM-generated code on your machine without sandboxing is asking for trouble. AutoGen supports Docker-based execution out of the box.
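A sketch of the relevant configuration, assuming the classic code_execution_config dict; use_docker accepts True for the default image or an image name string:

```python
import autogen

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={
        "work_dir": "coding",              # host directory mounted into the container
        "use_docker": "python:3.11-slim",  # image name; True would use the default image
        "timeout": 60,                     # seconds before a run is killed
    },
)
```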
Make sure Docker is running on your machine. AutoGen pulls the image automatically if it’s not cached. You can also pass a custom Dockerfile path if your agents need specific packages pre-installed.
If you set use_docker: False, code runs directly on your host. Only do this in throwaway environments — never on production machines.
AutoGen Studio
If you want to prototype agent workflows visually before writing code, AutoGen Studio is worth a look. It’s a web UI that lets you configure agents, define group chats, and test conversations without touching Python.
Install it with:
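The commands below assume the autogenstudio package and its CLI; the port number is arbitrary:

```shell
pip install autogenstudio
autogenstudio ui --port 8081
```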
It’s useful for experimenting with system messages and agent configurations. Once you’ve nailed the setup, export the config and move to code for production.
Common Errors
Docker is not running or docker.errors.DockerException
AutoGen tries to spin up containers for code execution. If Docker isn’t installed or the daemon isn’t running, you get this error. Start Docker or set use_docker: False (not recommended for untrusted code).
Rate limit exceeded with OpenAI
Multi-agent conversations burn through tokens fast. A 15-round group chat with 4 agents can easily hit rate limits. Use config_list with multiple API keys or add a fallback model:
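A sketch of a config_list with a fallback entry; the keys are obviously fake placeholders, and AutoGen tries the entries in order when a call fails:

```python
# Entries are tried in order; on errors such as rate limits,
# AutoGen falls through to the next configuration.
config_list = [
    {"model": "gpt-4o", "api_key": "sk-primary-..."},       # primary key/model
    {"model": "gpt-4o-mini", "api_key": "sk-backup-..."},   # cheaper fallback
]
llm_config = {"config_list": config_list}
```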
Agents loop forever without solving the task
Set max_consecutive_auto_reply on your UserProxyAgent and max_round on your GroupChat. Without these limits, agents can keep talking in circles. Also make sure at least one agent’s system message includes instructions to say TERMINATE when the task is done.
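A sketch of the termination check as a standalone callable, with the two limits shown as comments (parameter names from the AutoGen API):

```python
def is_termination_msg(message):
    """Treat any reply containing TERMINATE as the end of the conversation."""
    return "TERMINATE" in (message.get("content") or "")

# Applied to the agents:
#   UserProxyAgent(..., max_consecutive_auto_reply=5, is_termination_msg=is_termination_msg)
#   GroupChat(..., max_round=12)
```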
ModuleNotFoundError inside Docker containers
The default Docker image is minimal. If your generated code imports packages like pandas or requests, the execution fails. Either use a custom image with those packages pre-installed or add a pip install step in the generated code. You can also set up a custom Dockerfile:
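A minimal sketch of such a Dockerfile; the package list and image tag are illustrative:

```dockerfile
# Custom execution image with common packages pre-installed
FROM python:3.11-slim
RUN pip install --no-cache-dir pandas requests matplotlib
```

Build it (for example, docker build -t my-autogen-exec .) and point the use_docker key in code_execution_config at the resulting tag.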
No agent can execute the code
This happens when you have AssistantAgent instances but no UserProxyAgent with code_execution_config. At least one agent in the conversation needs execution capabilities.
Related Guides
- How to Build a Multi-Agent Pipeline Using Anthropic’s Agent SDK and MCP
- How to Build a Data Analysis Agent with Code Execution
- How to Build Multi-Agent Systems with CrewAI
- How to Build a GitHub Issue Triage Agent with LLMs and the GitHub API
- How to Build a Retrieval Agent with Tool Calling and Reranking
- How to Build a Data Pipeline Agent with LLMs and Pandas
- How to Build a Planning Agent with Task Decomposition
- How to Build a Memory-Augmented Agent with Vector Search
- How to Build a Debugging Agent with Stack Trace Analysis
- How to Build a Research Agent with LangGraph and Tavily