Shot boundary detection is the backbone of any automated video editing pipeline. You need it for generating highlight reels, building video search indexes, creating chapter markers, or just chopping a long recording into manageable clips. PySceneDetect handles this well – it ships a CLI for quick one-liners and a Python API for when you need programmatic control.

Here’s the fastest way to split a video into scenes from your terminal:

pip install "scenedetect[opencv]"
scenedetect -i input.mp4 detect-content list-scenes split-video

That detects cuts, prints a scene list, and splits the file into separate clips. One invocation chains three commands – detect, list, split – and finishes in seconds. But the real power is in the Python API.

Quick Start with the CLI

The scenedetect CLI follows a pipeline pattern: you specify an input, pick a detection algorithm, then chain output commands. Each command has its own flags.

Detect scenes and save a CSV report:

scenedetect -i interview.mp4 detect-content --threshold 27.0 list-scenes

This creates an interview-Scenes.csv file with timecodes for every detected cut. The --threshold flag controls sensitivity – lower values catch more subtle changes, higher values only trigger on hard cuts.

Use adaptive detection for footage with lots of camera motion:

scenedetect -i drone_footage.mp4 detect-adaptive --threshold 3.0 list-scenes save-images

detect-adaptive uses a rolling average of frame differences instead of a fixed threshold, which cuts down on false positives when the camera is panning or tracking a subject.

Split video into separate files without re-encoding (fast, lossless):

scenedetect -i lecture.mp4 detect-content split-video --copy

The --copy flag tells ffmpeg to stream-copy rather than transcode. This is nearly instant but cut points may be slightly off since it snaps to the nearest keyframe.

You can also limit detection to a time range:

scenedetect -i movie.mp4 time --start 00:05:00 --end 00:15:00 detect-content list-scenes

Programmatic Scene Detection with Python

The CLI is great for quick jobs, but you’ll want the Python API when building pipelines. PySceneDetect gives you two ways to detect scenes: the simple detect() function and the full SceneManager class.

The simplest approach:

from scenedetect import detect, ContentDetector

scene_list = detect("input.mp4", ContentDetector(threshold=27.0))

for i, (start, end) in enumerate(scene_list):
    print(f"Scene {i+1}: {start.get_timecode()} -> {end.get_timecode()} "
          f"({end.get_frames() - start.get_frames()} frames)")

detect() returns a list of (start_timecode, end_timecode) tuples. Each timecode is a FrameTimecode object that gives you frame numbers, seconds, or formatted timecodes.
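Under the hood those accessors are simple arithmetic over a frame index and a framerate. A rough pure-Python equivalent of what get_timecode() computes (a hypothetical helper, not the library API):

```python
def frames_to_timecode(frame: int, fps: float) -> str:
    """Format a frame index as HH:MM:SS.mmm, the shape get_timecode() returns."""
    total_seconds = frame / fps
    hours = int(total_seconds // 3600)
    minutes = int((total_seconds % 3600) // 60)
    seconds = total_seconds % 60
    return f"{hours:02d}:{minutes:02d}:{seconds:06.3f}"

print(frames_to_timecode(4500, 30.0))  # 150 s -> "00:02:30.000"
```

FrameTimecode also supports comparison and arithmetic, so in practice you rarely need to do this conversion yourself.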

For more control, use SceneManager directly:

from scenedetect import open_video, SceneManager, ContentDetector

video = open_video("input.mp4")
scene_manager = SceneManager()
scene_manager.add_detector(ContentDetector(threshold=27.0, min_scene_len=15))

# Detect scenes and get frame count processed
num_frames = scene_manager.detect_scenes(video, show_progress=True)
scene_list = scene_manager.get_scene_list()

print(f"Processed {num_frames} frames, found {len(scene_list)} scenes")

for i, (start, end) in enumerate(scene_list):
    duration = end.get_seconds() - start.get_seconds()
    print(f"Scene {i+1}: {start.get_timecode()} to {end.get_timecode()} ({duration:.1f}s)")

min_scene_len=15 means at least 15 frames must pass between detected cuts. This prevents rapid-fire false detections in noisy footage. At 30fps, that’s a half-second minimum scene length.
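If you prefer to think in seconds, the conversion to frames is a single multiply; a tiny helper (hypothetical, not part of PySceneDetect) keeps the intent readable:

```python
def seconds_to_min_scene_len(min_seconds: float, fps: float) -> int:
    """Convert a minimum scene length in seconds to the frame count
    that min_scene_len expects."""
    return round(min_seconds * fps)

print(seconds_to_min_scene_len(0.5, 30.0))  # 15
```

Pass the result straight to the detector, e.g. ContentDetector(threshold=27.0, min_scene_len=seconds_to_min_scene_len(0.5, 30.0)).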

For detecting fade-ins and fade-outs instead of hard cuts, use ThresholdDetector:

from scenedetect import open_video, SceneManager, ThresholdDetector

video = open_video("presentation.mp4")
scene_manager = SceneManager()
scene_manager.add_detector(ThresholdDetector(threshold=12, min_scene_len=15))

scene_manager.detect_scenes(video, show_progress=True)
scene_list = scene_manager.get_scene_list()

for i, (start, end) in enumerate(scene_list):
    print(f"Scene {i+1}: {start.get_timecode()} -> {end.get_timecode()}")

ThresholdDetector monitors average pixel intensity. When a frame drops below the threshold (fade to black) and then rises back above it, that triggers a scene boundary. It works well for presentations, lectures, and anything with deliberate fade transitions.
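That fade logic is easy to picture as a tiny state machine over per-frame average intensities. A toy sketch (illustrative only, not ThresholdDetector's actual implementation):

```python
def fade_cuts(avg_intensities, threshold=12):
    """Emit a boundary each time the average intensity rises back above
    the threshold after having dropped below it (fade-out then fade-in)."""
    cuts, faded_out = [], False
    for i, level in enumerate(avg_intensities):
        if level < threshold:
            faded_out = True       # currently faded to black
        elif faded_out:
            cuts.append(i)         # fade-in: new scene starts here
            faded_out = False
    return cuts

print(fade_cuts([50, 50, 5, 2, 40, 50], 12))  # [4]
```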

Extracting Scene Thumbnails and Metadata

Once you have a scene list, you’ll often want thumbnails for each scene and a structured export for downstream processing. PySceneDetect has built-in functions for both.

from scenedetect import open_video, SceneManager, ContentDetector
from scenedetect.scene_manager import save_images, write_scene_list
import os

video_path = "input.mp4"
video = open_video(video_path)
scene_manager = SceneManager()
scene_manager.add_detector(ContentDetector(threshold=27.0))
scene_manager.detect_scenes(video, show_progress=True)
scene_list = scene_manager.get_scene_list()

# Create output directory
output_dir = "scene_output"
os.makedirs(output_dir, exist_ok=True)

# Save 3 images per scene (start, middle, end)
image_map = save_images(
    scene_list=scene_list,
    video=video,
    num_images=3,
    output_dir=output_dir,
    image_extension="jpg",
    encoder_param=95,
    image_name_template="$VIDEO_NAME-Scene-$SCENE_NUMBER-$IMAGE_NUMBER",
    show_progress=True,
)

print(f"Saved images for {len(image_map)} scenes")
for scene_num, paths in image_map.items():
    print(f"  Scene {scene_num}: {[os.path.basename(p) for p in paths]}")

# Export scene list to CSV
csv_path = os.path.join(output_dir, "scenes.csv")
with open(csv_path, "w") as csv_file:
    write_scene_list(
        output_csv_file=csv_file,
        scene_list=scene_list,
        include_cut_list=True,
    )
print(f"Scene list saved to {csv_path}")

save_images returns a dictionary mapping scene numbers to file paths. The num_images parameter controls how many frames get extracted per scene – 3 gives you the first frame, a middle frame, and the last frame. Set encoder_param to control JPEG quality (0-100, default 95).

The CSV output includes start/end timecodes, frame numbers, and duration for every scene. You can feed this directly into a video editor’s EDL importer or parse it in a downstream pipeline.
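Parsing that CSV downstream needs nothing beyond the standard library. The column names below match what recent PySceneDetect versions write, but check your own output – the header layout can vary between releases, and with include_cut_list=True some versions write a cut-list row above the header that you need to skip first:

```python
import csv
import io

# A two-row sample in the shape write_scene_list produces (assumed header names).
sample = """Scene Number,Start Frame,Start Timecode,Start Time (seconds),End Frame,End Timecode,End Time (seconds),Length (frames),Length (timecode),Length (seconds)
1,1,00:00:00.000,0.000,120,00:00:04.000,4.000,120,00:00:04.000,4.000
2,121,00:00:04.000,4.000,300,00:00:10.000,10.000,180,00:00:06.000,6.000
"""

scenes = list(csv.DictReader(io.StringIO(sample)))
total = sum(float(row["Length (seconds)"]) for row in scenes)
print(f"{len(scenes)} scenes, {total:.1f}s total")
```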

To split the video into separate files programmatically:

from scenedetect import split_video_ffmpeg

split_video_ffmpeg(
    video_path,
    scene_list,
    output_dir=output_dir,
    show_progress=True,
)

This calls ffmpeg under the hood. Make sure ffmpeg is on your PATH.

Fine-Tuning Detection Parameters

Picking the right detector and threshold depends on your footage. Here’s a practical breakdown.

ContentDetector measures frame-to-frame change in HSV color space – a weighted average of the differences in hue, saturation, and luma between consecutive frames. It works best on footage with distinct hard cuts – interviews, vlogs, edited content.

from scenedetect import ContentDetector

# Aggressive: catches subtle changes (more scenes, more false positives)
sensitive = ContentDetector(threshold=20.0, min_scene_len=10)

# Default: good balance for most edited video
balanced = ContentDetector(threshold=27.0, min_scene_len=15)

# Conservative: only hard cuts (fewer scenes, fewer false positives)
conservative = ContentDetector(threshold=40.0, min_scene_len=30)

AdaptiveDetector uses a rolling average of frame-to-frame differences. The threshold is relative to the local average, not absolute. This makes it much better for footage with camera movement, varying lighting, or handheld shots.

from scenedetect import AdaptiveDetector

# Works well for documentary/cinema footage
detector = AdaptiveDetector(
    adaptive_threshold=3.0,  # multiplier over rolling average
    min_scene_len=15,
    window_width=2,          # frames on each side for averaging
    min_content_val=15.0,    # minimum absolute score to consider
)

window_width controls how many frames contribute to the rolling average. A wider window smooths out more noise but reacts slower to real cuts. min_content_val sets a floor – any frame-to-frame difference below this value is ignored entirely, even if it’s above the adaptive threshold.
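The mechanism can be sketched in a few lines of plain Python. This is an illustrative toy, not the library's implementation – AdaptiveDetector computes its per-frame scores from HSV differences internally:

```python
def adaptive_cuts(frame_scores, ratio=3.0, window=2, min_val=15.0):
    """Flag frame i as a cut when its score is at least `ratio` times the
    average of its neighbors and also above an absolute floor (toy sketch)."""
    cuts = []
    for i, score in enumerate(frame_scores):
        # Rolling window: up to `window` scores on each side of frame i.
        neighbors = frame_scores[max(0, i - window):i] + frame_scores[i + 1:i + 1 + window]
        if not neighbors:
            continue
        avg = sum(neighbors) / len(neighbors)
        if score >= min_val and avg > 0 and score / avg >= ratio:
            cuts.append(i)
    return cuts

print(adaptive_cuts([1, 1, 1, 30, 1, 1]))  # [3]
```

The key property: a score of 30 only registers as a cut because its neighbors average 1; the same score during constant heavy motion would not.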

A quick comparison to help you choose:

Scenario                        Detector            Threshold
Edited videos, hard cuts        ContentDetector     25-30
Camera motion, documentaries    AdaptiveDetector    2.5-3.5
Presentations with fades        ThresholdDetector   10-15
Music videos, fast edits        ContentDetector     35-45
Surveillance footage            AdaptiveDetector    4.0-5.0

When in doubt, start with AdaptiveDetector at its default settings. It handles the widest range of footage without tuning.
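To keep those starting points in code, a small lookup can centralize the choice. This is a hypothetical helper; the values are rounded picks from the ranges in the table above:

```python
# Starting points per scenario: (detector class name, threshold).
DETECTOR_PRESETS = {
    "edited": ("ContentDetector", 27.0),
    "camera_motion": ("AdaptiveDetector", 3.0),
    "fades": ("ThresholdDetector", 12.0),
    "fast_edits": ("ContentDetector", 40.0),
    "surveillance": ("AdaptiveDetector", 4.5),
}

def preset_for(scenario: str):
    """Look up a starting detector and threshold; fall back to AdaptiveDetector."""
    return DETECTOR_PRESETS.get(scenario, ("AdaptiveDetector", 3.0))
```

Treat these as starting points for tuning, not final answers – verify against a sample of your own footage.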

Common Errors and Fixes

ffmpeg not found when splitting video

OSError: ffmpeg not found on system

PySceneDetect uses ffmpeg for split-video and split_video_ffmpeg(). Install it:

# Ubuntu/Debian
sudo apt install ffmpeg

# macOS
brew install ffmpeg

# Check it's available
ffmpeg -version

Scene detection itself only needs OpenCV, not ffmpeg. So detect() and list-scenes work without it.
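Since detection runs fine without ffmpeg, it's worth checking for the binary up front rather than failing mid-pipeline. A standard-library guard:

```python
import shutil

def have_ffmpeg() -> bool:
    """Return True when an ffmpeg binary is on PATH; split_video_ffmpeg()
    and the split-video command both need one."""
    return shutil.which("ffmpeg") is not None

if not have_ffmpeg():
    print("ffmpeg not found on PATH; install it before splitting video")
```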

Video codec not supported

cv2.error: OpenCV(4.x.x) ... error: (-215:Assertion failed) !_src.empty()

OpenCV can’t decode the video. This usually happens with HEVC/H.265 or AV1 files. Fix it by installing an OpenCV build with full codec support, or transcode the file first:

ffmpeg -i input.hevc.mp4 -c:v libx264 -crf 18 input_h264.mp4

Memory issues with large files

PySceneDetect processes frames sequentially, so memory usage stays flat regardless of video length. But if you’re also loading all scene images into memory, that adds up fast. Process scenes in batches:

from scenedetect import detect, ContentDetector, open_video
from scenedetect.scene_manager import save_images

scene_list = detect("long_video.mp4", ContentDetector())

# Process scenes in batches of 50 to limit memory usage
video = open_video("long_video.mp4")
batch_size = 50
for batch_start in range(0, len(scene_list), batch_size):
    batch = scene_list[batch_start:batch_start + batch_size]
    save_images(
        scene_list=batch,
        video=video,
        num_images=1,
        output_dir="thumbnails",
        # Scene numbering restarts at 1 on each call, so embed the batch
        # offset in the filename to keep batches from overwriting each other.
        image_name_template=f"$VIDEO_NAME-Scene-{batch_start:04d}-$SCENE_NUMBER",
    )
    print(f"Processed scenes {batch_start + 1} to {batch_start + len(batch)}")

Scenes too short or too many false detections

Increase min_scene_len or raise the threshold. A min_scene_len of 30 at 30fps means no scene can be shorter than 1 second. Combine this with a higher threshold to filter out noise:

from scenedetect import ContentDetector

detector = ContentDetector(threshold=35.0, min_scene_len=30)
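If tuning alone doesn't get you there, you can also post-filter the result. The sketch below is a hypothetical helper that works on plain (start_seconds, end_seconds) pairs; with PySceneDetect's scene list you'd call get_seconds() on each FrameTimecode first:

```python
def drop_short_scenes(scenes, min_duration):
    """Keep only (start_s, end_s) pairs at least min_duration seconds long."""
    return [(start, end) for start, end in scenes if end - start >= min_duration]

print(drop_short_scenes([(0.0, 0.4), (0.4, 5.0), (5.0, 5.8)], 1.0))  # [(0.4, 5.0)]
```

Note that filtering after detection leaves gaps in the timeline, which is usually what you want for thumbnails but not for splitting; for splitting, raise min_scene_len instead so adjacent scenes stay contiguous.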