CrewAI takes a different approach to agents than most frameworks. Instead of building a single agent with tools, you define a team of agents – each with a specific role, goal, and backstory – and let them collaborate on a set of tasks. Think of it like assigning a project to a small team where each person has a clear job.
The framework handles delegation, context passing between agents, and process orchestration. You pick whether agents work sequentially (assembly line) or hierarchically (manager delegates to workers). As of early 2026, CrewAI is at version 0.86+ and runs on Python 3.10 through 3.13.
Install CrewAI
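Both packages install from PyPI:

```shell
pip install crewai crewai-tools
```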
The crewai package is the core framework. crewai-tools gives you pre-built tools for web search, file reading, and more. If you want the CLI for scaffolding projects:
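The CLI ships with the crewai package itself; a typical scaffolding invocation (subcommand names as in recent releases – check crewai --help for your version) looks like:

```shell
crewai create crew my_project
```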
You need an LLM provider key. CrewAI defaults to OpenAI, but supports Anthropic, Ollama, and any OpenAI-compatible endpoint:
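Set the key as an environment variable before running your crew (the key values below are placeholders):

```shell
export OPENAI_API_KEY="sk-..."        # default provider
# or, if you route agents to Anthropic:
export ANTHROPIC_API_KEY="sk-ant-..."
```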
To use a different provider, set the llm parameter on each agent or configure it globally.
Build a Research and Writing Crew
Here is a complete working example: two agents that research a topic and then write a blog post about it. The researcher gathers information, and the writer turns that research into polished content.
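A sketch of that crew, assuming the default OpenAI provider (an OPENAI_API_KEY must be set; role text and word counts are illustrative):

```python
from crewai import Agent, Task, Crew, Process

researcher = Agent(
    role="Senior Research Analyst",
    goal="Find accurate, up-to-date information about {topic}",
    backstory="You are a meticulous analyst who verifies claims "
              "against multiple sources before reporting them.",
    verbose=True,
)

writer = Agent(
    role="Technical Content Writer",
    goal="Turn research notes on {topic} into a clear, engaging blog post",
    backstory="You write for a developer audience and favor concrete "
              "examples over marketing language.",
    verbose=True,
)

research_task = Task(
    description="Research {topic}. Collect key facts, recent developments, "
                "and notable caveats.",
    expected_output="A bullet-point research brief with 5-7 findings.",
    agent=researcher,
)

write_task = Task(
    description="Using the research brief, write a blog post about {topic}.",
    expected_output="A ~600-word blog post in markdown with a title "
                    "and section headings.",
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,  # researcher runs first, writer gets its output
)

result = crew.kickoff(inputs={"topic": "vector databases"})
print(result)
```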
The {topic} placeholder gets replaced at runtime with whatever you pass to kickoff(inputs=...). The sequential process means the researcher finishes first, and its output automatically becomes context for the writer’s task.
Agent Anatomy
Every CrewAI agent has three required fields that shape its behavior:
- role – What the agent does. This appears in prompts and tells the LLM what persona to adopt. Be specific: “Senior Data Analyst” works better than “Analyst.”
- goal – What the agent is trying to achieve. This drives the agent’s decision-making. Make it concrete and measurable.
- backstory – Context that shapes how the agent approaches problems. This is where you inject domain expertise, working style, and constraints.
Optional Agent Parameters
Setting allow_delegation=True lets an agent hand off subtasks to other crew members. This is powerful in hierarchical crews but can cause unexpected loops in sequential ones. Start with it disabled and enable it only when you need it.
Process Types
CrewAI supports two process types that control how agents interact.
Sequential
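The wiring is one argument on the Crew (a fragment – the agents and tasks here stand in for ones you have already defined, as in the research/writing example):

```python
from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,  # the default: tasks run in listed order
)
```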
Tasks execute one after another. Each task’s output feeds into the next as context. This is the default and works for most pipelines where order matters.
Hierarchical
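A fragment showing the hierarchical setup (agents and tasks assumed defined elsewhere; the manager_llm model name is an example):

```python
from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.hierarchical,
    manager_llm="gpt-4o",  # required: the model the auto-created manager uses
)
```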
A manager agent (automatically created) coordinates the team. It decides which agent handles each task, reviews outputs, and can ask agents to redo work. You must specify manager_llm for this process type. Hierarchical works well when tasks have dependencies that aren’t strictly linear – say, when the editor might send the writer back to revise a section.
Adding Custom Tools
CrewAI tools are simple to build. Any class that extends BaseTool and implements _run works:
You can also wrap plain functions with the @tool decorator:
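The same tool as a decorated function – the docstring becomes the tool's description (import path per recent releases):

```python
from crewai.tools import tool

@tool("word_counter")
def count_words(text: str) -> str:
    """Counts the number of words in a piece of text."""
    return f"{len(text.split())} words"
```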
Common Errors and Fixes
pydantic ValidationError on Agent Creation
You left out a required field. Every agent needs role, goal, and backstory – omit any one of them and Pydantic rejects the whole object at construction time.
Rate Limit Errors from Your LLM Provider
CrewAI agents can fire many LLM calls in quick succession, especially with verbose=True and multiple agents. Use max_rpm on your agents to throttle them:
For longer crews, also consider setting a cheaper model for agents that do simpler work. Your researcher might need GPT-4o, but your formatter probably does fine with GPT-4o-mini.
Task Output is Empty or Generic
If an agent returns vague output like “Here is the research” without substance, tighten up your expected_output field. CrewAI uses this string to validate and guide the agent’s response. Be explicit:
ModuleNotFoundError: No module named 'crewai_tools'
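The failure typically looks like this (the file name and imported tool here are illustrative):

```
Traceback (most recent call last):
  File "crew.py", line 1, in <module>
    from crewai_tools import SerperDevTool
ModuleNotFoundError: No module named 'crewai_tools'
```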
You installed crewai but not the tools package. They are separate:
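```shell
pip install crewai-tools
```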
CrewAI vs. LangGraph
Both frameworks build multi-step agent systems, but they solve different problems. CrewAI gives you role-based agents with built-in collaboration patterns – you define who does what and the framework handles the rest. LangGraph gives you a state machine where you control every edge and node.
Pick CrewAI when you want to prototype agent teams quickly and your workflow maps naturally to roles and tasks. Pick LangGraph when you need fine-grained control over execution flow, custom state management, or complex branching logic.
They are not mutually exclusive. Some teams use CrewAI for high-level orchestration and LangGraph for individual agents that need sophisticated internal logic.
Production Tips
Pin your CrewAI version. The API has changed significantly between releases. Put crewai==0.86.0 (or whatever you tested with) in your requirements.txt so deploys don’t break.
Use callbacks for monitoring. CrewAI supports step and task callbacks that let you log progress, track token usage, and detect stuck agents:
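A sketch using the Crew-level callback hooks (agents and tasks stand in for ones defined elsewhere; the attribute task_output.raw holds the task's text result in recent releases):

```python
from crewai import Crew

def log_step(step_output):
    # Fired after every intermediate agent step (thoughts, tool calls)
    print(f"[step] {step_output}")

def log_task(task_output):
    # Fired when a task completes
    print(f"[task] {task_output.raw[:200]}")

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    step_callback=log_step,
    task_callback=log_task,
)
```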
Keep crews small. Three to five agents is the sweet spot. More agents means more LLM calls, higher latency, and harder debugging. If you need more, split into multiple crews and chain them.
Test agents individually first. Before assembling a crew, run each agent on its task alone. This isolates prompt quality issues from orchestration problems.
Related Guides
- How to Build a GitHub Issue Triage Agent with LLMs and the GitHub API
- How to Build a Multi-Agent Pipeline Using Anthropic’s Agent SDK and MCP
- How to Build Agent Workflows with Microsoft AutoGen
- How to Build an MCP Server for AI Agents with Python
- How to Build Agents with LangGraph
- How to Build a Retrieval Agent with Tool Calling and Reranking
- How to Build an Email Triage Agent with LLMs and IMAP
- How to Build a Data Pipeline Agent with LLMs and Pandas
- How to Build a Planning Agent with Task Decomposition
- How to Build a Data Analysis Agent with Code Execution