Argilla is an open-source data annotation platform that fits into your Python workflow. You define datasets, fields, and labeling questions in code, push records to the server, then annotate through the web UI. When you’re done, you pull the labeled data back into Python for training.

Here’s the fastest way to get a working annotation pipeline running.

Install Argilla and Start the Server

Install the Python SDK first:

pip install argilla

For the server, Docker is the simplest path. Download the compose file and start everything:

mkdir argilla && cd argilla
wget -O docker-compose.yaml https://raw.githubusercontent.com/argilla-io/argilla/main/examples/deployments/docker/docker-compose.yaml
docker compose up -d

This spins up the Argilla server at http://localhost:6900. The default login credentials are:

  • Username: argilla
  • Password: 12345678

Three users ship with the quickstart image: owner, admin, and argilla. For a real project, change these passwords immediately.

Create a Dataset with Fields and Questions

Connect to your server and define what annotators will see. A dataset needs at least one field (the data to display) and one question (what you’re asking annotators to label).

This example sets up a sentiment annotation task for customer reviews:

import argilla as rg

# Connect to the Argilla server
client = rg.Argilla(api_url="http://localhost:6900", api_key="argilla.apikey")

# Define dataset structure
settings = rg.Settings(
    guidelines="Read each customer review and classify its sentiment. "
               "Choose 'positive' for satisfied customers, 'negative' for "
               "complaints or dissatisfaction, and 'neutral' for factual "
               "statements without clear sentiment.",
    fields=[
        rg.TextField(
            name="review",
            title="Customer Review",
            use_markdown=False,
            required=True,
        ),
        rg.TextField(
            name="product",
            title="Product Name",
            use_markdown=False,
            required=False,
        ),
    ],
    questions=[
        rg.LabelQuestion(
            name="sentiment",
            title="What is the sentiment of this review?",
            labels=["positive", "negative", "neutral"],
            required=True,
        ),
    ],
)

# Create the dataset on the server
dataset = rg.Dataset(
    name="customer-reviews-sentiment",
    workspace="argilla",
    settings=settings,
    client=client,
)

dataset.create()
print(f"Dataset created: {dataset.name}")

The guidelines field is important. Clear instructions reduce disagreement between annotators and produce more consistent labels. Write them like you’re onboarding a new teammate.

Add Records to the Dataset

Now push your data into Argilla. Each record is a dictionary matching the field names you defined in settings.

import argilla as rg

client = rg.Argilla(api_url="http://localhost:6900", api_key="argilla.apikey")
dataset = client.datasets(name="customer-reviews-sentiment", workspace="argilla")

# Sample customer reviews
reviews = [
    {
        "review": "Absolutely love this keyboard. The mechanical switches feel great and the RGB lighting is customizable. Best purchase I've made this year.",
        "product": "MechPro K1 Keyboard",
    },
    {
        "review": "The laptop arrived with a scratched screen. Customer support took 3 weeks to respond and then refused a replacement. Never buying from this brand again.",
        "product": "UltraBook Pro 15",
    },
    {
        "review": "Battery lasts about 6 hours with moderate use. Charges in roughly 2 hours with the included adapter.",
        "product": "PowerBank 20K",
    },
    {
        "review": "Terrible noise cancellation. I can hear everything around me. The sound quality is muddy and the ear cups hurt after 30 minutes.",
        "product": "QuietMax Headphones",
    },
    {
        "review": "Works exactly as described. Paired with my phone instantly over Bluetooth 5.3. The speaker gets loud enough for a small room.",
        "product": "SoundDot Mini Speaker",
    },
    {
        "review": "Stopped working after two weeks. The charging port is flimsy and broke off. Complete waste of money.",
        "product": "FitTrack Band v2",
    },
    {
        "review": "This monitor's color accuracy is stunning. I calibrated it with a Datacolor Spyder and Delta E was under 1.0 out of the box.",
        "product": "ProVision 4K Display",
    },
    {
        "review": "The mouse weighs 58 grams. DPI goes up to 25600. Polling rate is 1000Hz. No software required for basic operation.",
        "product": "AeroGlide Mouse",
    },
]

# Log records to the dataset
dataset.records.log(reviews)
print(f"Logged {len(reviews)} records")

You can also use the mapping parameter if your source data has different column names. For example, if your CSV has a "text" column instead of "review":

import pandas as pd

df = pd.read_csv("reviews.csv")
data = df.to_dict(orient="records")

dataset.records.log(data, mapping={"text": "review", "item": "product"})

This maps text from your data to the review field in Argilla, and item to product.
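If you prefer to normalize keys yourself instead of passing mapping, a plain-Python rename before logging works just as well. A minimal sketch, where the "text" and "item" keys stand in for hypothetical source columns:

```python
# Source rows whose keys don't match the Argilla field names
rows = [
    {"text": "Great keyboard, love it.", "item": "MechPro K1 Keyboard"},
    {"text": "Screen arrived scratched.", "item": "UltraBook Pro 15"},
]

# Rename keys to match the fields defined in settings
renamed = [{"review": r["text"], "product": r["item"]} for r in rows]
print(renamed[0])
```

After the rename, dataset.records.log(renamed) needs no mapping argument.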

Export Annotated Data for Training

After your team finishes labeling, pull the annotated records back into Python. Argilla exports directly to Hugging Face datasets format, which plugs straight into most training pipelines.

import argilla as rg

client = rg.Argilla(api_url="http://localhost:6900", api_key="argilla.apikey")
dataset = client.datasets(name="customer-reviews-sentiment", workspace="argilla")

# Export to Hugging Face datasets format
hf_dataset = dataset.records.to_datasets()
print(hf_dataset)
print(hf_dataset.column_names)

# Convert to pandas for inspection
df = hf_dataset.to_pandas()
print(df.head())

# Save locally as a parquet file
hf_dataset.to_parquet("annotated_reviews.parquet")

# Or push directly to Hugging Face Hub
dataset.to_hub(repo_id="your-org/customer-reviews-sentiment")

The to_datasets() method returns a standard Hugging Face Dataset object. From there you can filter, shuffle, split into train/test, or feed it into transformers.Trainer.
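The Dataset object has its own train_test_split method, which is usually the right tool. But if you want to stay in plain Python, a minimal split over exported records might look like this (sketch with synthetic stand-in records):

```python
import random

# Synthetic stand-ins for exported annotated records
records = [{"review": f"review {i}", "sentiment": "positive"} for i in range(100)]

# Shuffle with a fixed seed so the split is reproducible
random.seed(42)
random.shuffle(records)

# 80/20 train/test split
split = int(0.8 * len(records))
train, test = records[:split], records[split:]
print(len(train), len(test))  # 80 20
```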

You can also save the full dataset (settings + records) to disk for backup or migration:

# Save everything to a local directory
dataset.to_disk(path="./backup/customer-reviews-sentiment")

# Restore later on a different server
restored = rg.Dataset.from_disk(
    path="./backup/customer-reviews-sentiment",
    client=client,
)

Common Errors and Fixes

RecordsIngestionError: field names don't match

This happens when the keys in your record dictionaries don’t match the name values in your TextField definitions. Double-check spelling and casing. If your settings define name="review", your records must use "review" as the key, not "text" or "Review".
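A quick pre-flight check can catch mismatches before you log anything. This helper is not part of the Argilla SDK, just a sketch:

```python
def unknown_keys(records, field_names):
    """Return record keys that don't match any defined field name."""
    defined = set(field_names)
    bad = set()
    for rec in records:
        bad |= set(rec) - defined
    return sorted(bad)

# "text" doesn't match the fields defined earlier, so it's flagged
print(unknown_keys([{"text": "Nice.", "product": "X"}], ["review", "product"]))
# ['text']
```

Run it over your records before dataset.records.log() and rename anything it flags.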

Dataset already exists

You’ll see this when calling dataset.create() on a name that’s already taken in that workspace. Either delete the existing dataset first through the UI, or retrieve it instead:

# Retrieve existing dataset instead of creating
dataset = client.datasets(name="customer-reviews-sentiment", workspace="argilla")

Connection refused on localhost:6900

The Argilla server isn’t running. Check your Docker containers:

docker compose ps
docker compose logs -f

If the container is restarting, it’s usually an Elasticsearch memory issue. Increase the Docker memory limit to at least 4GB.
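If raising the memory limit alone doesn't help, you can also cap Elasticsearch's JVM heap in the compose file. A sketch, assuming the service is named elasticsearch in your docker-compose.yaml (check yours; service names vary):

```yaml
services:
  elasticsearch:
    environment:
      # Limit the JVM heap so the container fits within the memory budget
      - ES_JAVA_OPTS=-Xms1g -Xmx1g
```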

BadRequestApiError: A workspace must be provided

You forgot the workspace parameter when creating or retrieving a dataset. Always specify which workspace to use:

# Wrong - missing workspace
dataset = rg.Dataset(name="my-dataset", settings=settings, client=client)

# Correct
dataset = rg.Dataset(name="my-dataset", workspace="argilla", settings=settings, client=client)

ArgillaCredentialsError or 401 Unauthorized

Your API key is wrong. The default key for the quickstart image is argilla.apikey. If you changed it or are connecting to a hosted instance, check your server configuration for the correct key.