The Short Version

Install Polars and switch your ML data pipeline from Pandas to lazy evaluation:

pip install polars pyarrow
import polars as pl

# Lazy scan — doesn't load the file into memory
df = (
    pl.scan_parquet("training_data/*.parquet")
    .filter(pl.col("label").is_not_null())
    .with_columns(
        pl.col("price").log().alias("log_price"),
        pl.col("timestamp").dt.hour().alias("hour"),
    )
    .group_by("category")
    .agg(
        pl.col("price").mean().alias("avg_price"),
        pl.col("clicks").sum().alias("total_clicks"),
    )
    .collect()  # execution happens here
)

That .scan_parquet() call is the key. Polars builds a query plan, optimizes it (predicate pushdown, projection pruning, parallel execution), and only materializes what you actually need. On a 10GB dataset, this pattern typically runs 10-50x faster than the equivalent Pandas code, depending on the operation.

Reading Large Files

Polars supports CSV, Parquet, JSON, and IPC (Arrow) formats. For ML workloads, always prefer Parquet — it’s columnar, compressed, and Polars can push filters down into the file reader so you never load columns you don’t use.

# Eager reads — loads everything into memory
df = pl.read_csv("data.csv")
df = pl.read_parquet("data.parquet")
df = pl.read_ndjson("data.jsonl")

# Lazy scans — builds a query plan, executes on .collect()
lf = pl.scan_csv("data.csv")
lf = pl.scan_parquet("data.parquet")
lf = pl.scan_ndjson("data.jsonl")

# Glob patterns work for sharded datasets
lf = pl.scan_parquet("features/part_*.parquet")

Use scan_* by default. The only reason to use read_* is when your dataset fits comfortably in memory and you need the DataFrame immediately for interactive exploration.

Feature Engineering with Expressions

Polars expressions are where the real speed comes from. Every expression runs in parallel across columns, and the query optimizer fuses operations to minimize memory allocations.

lf = pl.scan_parquet("user_events.parquet")

features = lf.with_columns(
    # Basic transforms
    pl.col("revenue").log1p().alias("log_revenue"),
    pl.col("event_time").dt.weekday().alias("day_of_week"),
    pl.col("event_time").dt.hour().alias("hour"),

    # Categorical encoding
    pl.col("country").cast(pl.Categorical),

    # Interaction feature
    (pl.col("clicks") / pl.col("impressions")).alias("ctr"),
)

Window functions are critical for time-series and sequential features. Polars expresses them by chaining .over() onto an expression:

features = lf.with_columns(
    # Rolling mean per user over last 7 entries
    pl.col("purchase_amount")
        .rolling_mean(window_size=7)
        .over("user_id")
        .alias("rolling_avg_purchase"),

    # Rank within group
    pl.col("score")
        .rank()
        .over("category")
        .alias("score_rank"),

    # Lag feature
    pl.col("purchase_amount")
        .shift(1)
        .over("user_id")
        .alias("prev_purchase"),
)

Compare this to Pandas where you’d need groupby().transform() with a lambda — Polars runs these in parallel across all groups simultaneously.

Joins and Aggregations

Joining datasets is a bread-and-butter ML task (combining user features with transaction data, merging label files, etc.). Polars joins are hash-based and multithreaded:

users = pl.scan_parquet("users.parquet")
transactions = pl.scan_parquet("transactions.parquet")

# Left join — keep all users, attach transaction summaries
training_data = (
    users
    .join(
        transactions.group_by("user_id").agg(
            pl.col("amount").sum().alias("total_spend"),
            pl.col("amount").mean().alias("avg_spend"),
            pl.len().alias("transaction_count"),
        ),
        on="user_id",
        how="left",
    )
    .collect()
)

For group-by aggregations, Polars shines because it parallelizes across groups:

stats = (
    pl.scan_parquet("events.parquet")
    .group_by("product_id")
    .agg(
        pl.col("price").mean().alias("avg_price"),
        pl.col("price").std().alias("std_price"),
        pl.col("price").quantile(0.95).alias("p95_price"),
        pl.col("rating").filter(pl.col("rating").is_not_null()).mean().alias("avg_rating"),
        pl.len().alias("count"),
    )
    .filter(pl.col("count") > 100)  # filter after aggregation
    .collect()
)

Handling Nulls for ML

Null handling is non-negotiable before feeding data into a model. Polars gives you precise control:

df = pl.scan_parquet("features.parquet")

clean = df.with_columns(
    # Fill numeric nulls with median
    pl.col("income").fill_null(pl.col("income").median()),

    # Fill categorical nulls with a sentinel
    pl.col("occupation").fill_null("unknown"),

    # Forward fill time-series data within each group
    pl.col("sensor_value").forward_fill().over("device_id"),

).filter(pl.col("label").is_not_null())  # drop rows where the label is null

You can also check null counts across all columns to catch data quality issues early:

null_report = df.collect().null_count()
print(null_report)

Converting to NumPy and PyTorch

Once your features are ready, you need to get them into NumPy arrays or PyTorch tensors. Polars uses Arrow under the hood, so conversion is zero-copy when possible:

import numpy as np

df = features.collect()

# To NumPy (zero-copy when possible)
X = df.select(["log_revenue", "hour", "ctr", "rolling_avg_purchase"]).to_numpy()
y = df.get_column("label").to_numpy()

# For PyTorch
import torch

X_tensor = torch.from_numpy(X).float()
y_tensor = torch.from_numpy(y).long()

If you have categorical columns, encode them first:

# One-hot encode before export
df_encoded = df.to_dummies(columns=["country", "device_type"])
X = df_encoded.drop("label").to_numpy()

The .to_numpy() call is fast for numeric columns because Polars stores data in Arrow format, which shares a memory layout with NumPy. A single numeric column without nulls converts without copying; a multi-column selection is consolidated into one contiguous array, which does copy, but with no serialization overhead.

Benchmarks vs Pandas

Here are rough numbers from a 5GB CSV with 50 million rows on a 16-core machine. Your results will vary, but the ratios are consistent:

Operation                        Pandas   Polars (lazy)   Speedup
Read CSV                         45s      8s              5.6x
Read Parquet                     12s      1.8s            6.7x
Filter rows                      3.2s     0.15s           21x
Group-by + agg (100k groups)     8.5s     0.4s            21x
Join two DataFrames              14s      0.9s            15x
Sort by column                   6.1s     0.5s            12x
Window function (rolling mean)   22s      1.1s            20x

The biggest wins come from lazy evaluation with Parquet files. When you chain filters and selects before .collect(), Polars pushes predicates into the file reader and only deserializes the columns and rows you need. Pandas reads everything, then filters.

Common Errors and Fixes

SchemaError: column not found after a join — Polars is strict about column names. If both DataFrames have a column with the same name (other than the join key), Polars appends _right to the duplicate. Use .rename() or .select() to clean up, or pass a custom suffix to the join:

result = left.join(right, on="id", how="left", suffix="_from_right")

ComputeError: cannot cast to numeric — You’re trying to do math on a string column. Cast explicitly:

df = df.with_columns(pl.col("price_str").cast(pl.Float64).alias("price"))

InvalidOperationError in lazy mode — Some operations aren't available on a LazyFrame. Call .collect() first, or restructure your pipeline. The most common offender is indexing with df[0] — use .head(1) or .first() instead.

Memory blowup on large CSVs — If scan_csv still uses too much memory on collect, process in batches:

reader = pl.read_csv_batched("huge.csv", batch_size=1_000_000)
batches = reader.next_batches(10)
while batches:
    for batch in batches:
        process(batch)
    batches = reader.next_batches(10)

Slow .to_pandas() conversion — Avoid round-tripping through Pandas entirely. If a downstream library requires Pandas, convert only the columns you need: df.select(["col_a", "col_b"]).to_pandas().