Color extraction shows up in more places than you’d expect. Product catalogs need dominant color tags for filtering. Design tools generate palettes from reference photos. Accessibility checkers need to know what colors are actually on screen. The good news: you can build a solid color detection pipeline with OpenCV, scikit-learn, and about 50 lines of Python.

The core idea is straightforward. Treat every pixel as a point in 3D color space, then cluster those points with K-means. The cluster centroids are your dominant colors. Layer on HSV filtering for targeted color detection, and a KDTree for mapping raw RGB values to human-readable names.

Extracting Dominant Colors with K-Means

K-means is the workhorse here. You reshape the image into a flat array of pixels, run clustering, and read off the centroids. The cluster sizes tell you each color’s proportion in the image.

import cv2
import numpy as np
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt


def extract_dominant_colors(image_path, n_colors=5):
    """Extract the N most dominant colors from an image."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(f"Could not load image: {image_path}")

    # OpenCV loads BGR -- convert to RGB for accurate color representation
    img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    # Reshape to a flat list of pixels (height*width, 3)
    pixels = img_rgb.reshape(-1, 3).astype(np.float32)

    # Run K-means clustering
    kmeans = KMeans(n_clusters=n_colors, n_init=10, max_iter=300, random_state=42)
    kmeans.fit(pixels)

    # Get dominant colors (centroids) and their proportions
    colors = kmeans.cluster_centers_.astype(int)
    counts = np.bincount(kmeans.labels_, minlength=n_colors)  # stays aligned with centroids even if a cluster is empty
    proportions = counts / counts.sum()

    # Sort by proportion (most dominant first)
    order = np.argsort(-proportions)
    colors = colors[order]
    proportions = proportions[order]

    return colors, proportions


def plot_palette(colors, proportions):
    """Display a horizontal swatch bar of the extracted palette."""
    fig, ax = plt.subplots(1, 1, figsize=(10, 2))
    start = 0.0
    for color, prop in zip(colors, proportions):
        ax.barh(0, prop, left=start, color=color / 255.0, height=1.0)
        hex_code = "#{:02x}{:02x}{:02x}".format(*color)
        ax.text(start + prop / 2, 0, f"{hex_code}\n{prop:.0%}",
                ha="center", va="center", fontsize=8,
                color="white" if np.mean(color) < 128 else "black")
        start += prop
    ax.set_xlim(0, 1)
    ax.set_ylim(-0.5, 0.5)
    ax.axis("off")
    plt.tight_layout()
    plt.savefig("palette.png", dpi=150, bbox_inches="tight")
    plt.show()


# Usage
colors, proportions = extract_dominant_colors("product_photo.jpg", n_colors=5)
plot_palette(colors, proportions)

for color, prop in zip(colors, proportions):
    print(f"RGB({color[0]}, {color[1]}, {color[2]}) - {prop:.1%}")

Pick n_colors based on your use case. For product images with clean backgrounds, 3-5 works well. For complex scenes like landscapes, bump it to 8-10. If K-means feels slow on large images, downsample first – resize to 200px wide before clustering. The dominant colors won’t change meaningfully.

HSV Color Space Filtering

K-means tells you what colors are present. HSV filtering tells you where specific colors appear. This is the approach you want when you need to locate red objects, segment green regions, or isolate blue sky.

HSV (Hue, Saturation, Value) separates color identity from brightness, which makes range-based filtering far more reliable than working in RGB directly.

import cv2
import numpy as np


def detect_color_regions(image_path, color_name="red"):
    """Detect regions of a specific color and draw bounding boxes."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(f"Could not load image: {image_path}")

    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Define HSV ranges for common colors
    color_ranges = {
        "red_low":  (np.array([0, 100, 100]),   np.array([10, 255, 255])),
        "red_high": (np.array([160, 100, 100]),  np.array([180, 255, 255])),
        "blue":     (np.array([100, 100, 100]),  np.array([130, 255, 255])),
        "green":    (np.array([35, 80, 80]),     np.array([85, 255, 255])),
        "yellow":   (np.array([20, 100, 100]),   np.array([35, 255, 255])),
    }

    # Build the mask
    if color_name == "red":
        # Red wraps around the hue wheel, so combine two ranges
        lower1, upper1 = color_ranges["red_low"]
        lower2, upper2 = color_ranges["red_high"]
        mask = cv2.inRange(hsv, lower1, upper1) | cv2.inRange(hsv, lower2, upper2)
    else:
        if color_name not in color_ranges:
            raise ValueError(f"Unknown color: {color_name}. Options: red, blue, green, yellow")
        lower, upper = color_ranges[color_name]
        mask = cv2.inRange(hsv, lower, upper)

    # Clean up noise
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Find contours and draw bounding boxes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    output = img.copy()
    detected_regions = []
    for contour in contours:
        area = cv2.contourArea(contour)
        if area < 500:  # skip tiny noise regions
            continue
        x, y, w, h = cv2.boundingRect(contour)
        cv2.rectangle(output, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(output, color_name, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        detected_regions.append({"x": x, "y": y, "w": w, "h": h, "area": area})

    cv2.imwrite(f"detected_{color_name}.jpg", output)
    print(f"Found {len(detected_regions)} {color_name} regions")
    return detected_regions


# Detect red objects in an image
regions = detect_color_regions("scene.jpg", color_name="red")
for r in regions:
    print(f"  Region at ({r['x']}, {r['y']}) size {r['w']}x{r['h']}, area={r['area']}px")

The morphological open/close operations are important. Without them, you get hundreds of tiny fragmented contours from noise and JPEG artifacts. The area < 500 threshold filters out anything too small to be meaningful – adjust based on your image resolution.
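One way to make that threshold resolution-independent is to derive it from the image size. The helper `min_area_for` and the 0.0005 fraction are illustrative choices:

```python
import numpy as np


def min_area_for(img, fraction=0.0005):
    """Scale the contour-area threshold with image resolution
    instead of hard-coding 500 pixels."""
    h, w = img.shape[:2]
    return max(1, int(h * w * fraction))


# On a 1000x1000 image this reproduces the 500px threshold above
print(min_area_for(np.zeros((1000, 1000, 3), dtype=np.uint8)))  # 500
```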

Matching to Named Colors

Raw RGB triplets are not useful for end users. You want “Tomato Red” or “Navy Blue”, not (214, 62, 48). A KDTree gives you fast nearest-neighbor lookup against a palette of named colors.

import numpy as np
from scipy.spatial import KDTree

# CSS named colors (a representative subset)
NAMED_COLORS = {
    "black":       (0, 0, 0),
    "white":       (255, 255, 255),
    "red":         (255, 0, 0),
    "lime":        (0, 255, 0),
    "blue":        (0, 0, 255),
    "yellow":      (255, 255, 0),
    "cyan":        (0, 255, 255),
    "magenta":     (255, 0, 255),
    "silver":      (192, 192, 192),
    "gray":        (128, 128, 128),
    "maroon":      (128, 0, 0),
    "olive":       (128, 128, 0),
    "navy":        (0, 0, 128),
    "teal":        (0, 128, 128),
    "coral":       (255, 127, 80),
    "salmon":      (250, 128, 114),
    "tomato":      (255, 99, 71),
    "orange":      (255, 165, 0),
    "gold":        (255, 215, 0),
    "khaki":       (240, 230, 140),
    "lavender":    (230, 230, 250),
    "plum":        (221, 160, 221),
    "sienna":      (160, 82, 45),
    "slate gray":  (112, 128, 144),
    "steel blue":  (70, 130, 180),
    "forest green": (34, 139, 34),
    "dark orange":  (255, 140, 0),
    "indian red":   (205, 92, 92),
    "sky blue":     (135, 206, 235),
    "sandy brown":  (244, 164, 96),
}

# Build the KDTree once
color_names = list(NAMED_COLORS.keys())
color_values = np.array(list(NAMED_COLORS.values()))
tree = KDTree(color_values)


def match_color_name(rgb):
    """Find the closest named color to an RGB value."""
    dist, idx = tree.query(rgb)
    return color_names[idx], color_values[idx], dist


def label_palette(dominant_colors):
    """Map an array of dominant RGB colors to named colors."""
    results = []
    for color in dominant_colors:
        name, matched_rgb, distance = match_color_name(color)
        results.append({
            "original_rgb": tuple(color),
            "matched_name": name,
            "matched_rgb": tuple(matched_rgb),
            "distance": round(distance, 1),
        })
    return results


# Tie it all together with the K-means output
# (assuming 'colors' from the extract_dominant_colors function above)
sample_colors = np.array([[214, 62, 48], [45, 52, 68], [230, 218, 200]])
labeled = label_palette(sample_colors)

for entry in labeled:
    print(f"RGB{entry['original_rgb']} -> \"{entry['matched_name']}\" "
          f"(distance: {entry['distance']})")

The distance value tells you how confident the match is. Anything under 30 is a solid match. Over 50 means you probably need more colors in your palette. You can expand NAMED_COLORS with the full CSS4 set (148 colors) or build a domain-specific palette for your use case.
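If matplotlib is available, its `CSS4_COLORS` mapping gives you the full 148-name set without typing out a table by hand. A sketch of swapping it in:

```python
import numpy as np
from matplotlib.colors import CSS4_COLORS, to_rgb
from scipy.spatial import KDTree

# CSS4_COLORS maps name -> hex string; convert each to a 0-255 RGB row
css4_names = list(CSS4_COLORS)
css4_values = np.array([to_rgb(CSS4_COLORS[n]) for n in css4_names]) * 255
css4_tree = KDTree(css4_values)


def match_css4(rgb):
    """Nearest CSS4 color name for an RGB triplet."""
    dist, idx = css4_tree.query(rgb)
    return css4_names[idx], dist


print(match_css4((214, 62, 48)))
```

The denser palette also shrinks typical match distances, so you may want to tighten the confidence thresholds accordingly.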

Common Errors and Fixes

BGR vs RGB ordering. This trips up everyone at least once. OpenCV loads images in BGR format, but matplotlib, PIL, and most other libraries expect RGB. If your “blue” image looks orange in your swatch visualization, you forgot cv2.cvtColor(img, cv2.COLOR_BGR2RGB). The K-means clustering itself works in either color space, but your centroid values will be BGR unless you convert first.

K-means not converging on small images. If your image has fewer unique pixels than n_clusters, K-means throws a warning and produces duplicate centroids. Guard against this by checking len(np.unique(pixels, axis=0)) before clustering and capping n_clusters accordingly.
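That guard can be wrapped into a small helper; `cluster_with_cap` is an illustrative name:

```python
import numpy as np
from sklearn.cluster import KMeans


def cluster_with_cap(pixels, n_colors):
    """Cap n_clusters at the number of distinct pixel values to avoid
    the duplicate-centroid warning on tiny or flat images."""
    n_unique = len(np.unique(pixels, axis=0))
    k = min(n_colors, n_unique)
    return KMeans(n_clusters=k, n_init=10, random_state=42).fit(pixels)
```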

HSV red hue wrapping. Red sits at both ends of the hue spectrum in OpenCV’s 0-180 range (roughly 0-10 and 160-180). You need two inRange calls combined with a bitwise OR, as shown in the detection function above. Forgetting this means you only catch half the red pixels.

Oversaturated masks from JPEG artifacts. JPEG compression introduces color noise at block boundaries. Always apply morphological operations (open and close) to your HSV masks before finding contours. A 5x5 or 7x7 elliptical kernel handles most cases.

Slow K-means on high-res images. A 4000x3000 image has 12 million pixels. Resize to 300px wide before clustering; it cuts runtime from seconds to milliseconds with nearly identical color results. Use cv2.resize(img, (300, int(300 * h / w))), where h, w = img.shape[:2], to maintain the aspect ratio (note that cv2.resize takes the target size as (width, height), not (height, width)).