Lane detection is one of those problems that looks simple until you actually try it. Shadows, worn markings, curved roads, rain – the real world fights you at every step. The good news: you can get surprisingly far with classical OpenCV methods, and when those break down, YOLOv8 segmentation picks up the slack.
Here is the fastest path to a working pipeline. We start with the classical approach, then layer in deep learning.
Quick Start: Edge Detection and Hough Lines
This gets you lane lines on a single frame in under 20 lines of code:
That works. But it produces a mess of overlapping line segments instead of clean lane boundaries. Let us fix that.
Building the Classical Pipeline
The classical approach has five stages: grayscale conversion, Gaussian blur, Canny edge detection, region-of-interest masking, and Hough line detection. We already saw these above. The missing piece is turning raw Hough segments into two clean lane lines.
Separating Left and Right Lanes
Each Hough segment has a slope. Negative slope (in image coordinates where y increases downward) means left lane, positive means right lane. Filter out nearly horizontal lines – those are noise.
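One way to sketch that split. `separate_lanes` is a name introduced here, and the 0.3 slope cutoff is an assumption to tune for your footage:

```python
def separate_lanes(lines, slope_threshold=0.3):
    """Split HoughLinesP segments into left/right lanes by slope sign.

    In image coordinates y grows downward, so the left lane slopes
    negatively and the right lane positively. Segments flatter than
    slope_threshold are dropped as noise.
    """
    left, right = [], []
    if lines is None:
        return left, right
    for line in lines:
        x1, y1, x2, y2 = line[0]
        if x2 == x1:
            continue  # perfectly vertical: skip to avoid dividing by zero
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < slope_threshold:
            continue  # nearly horizontal segment -> noise
        (left if slope < 0 else right).append((x1, y1, x2, y2))
    return left, right
```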
Fitting Lane Lines with Polynomial Regression
Raw Hough segments are noisy. Fitting a first-degree polynomial through all the points on each side gives you a single clean line. For curved roads, bump it up to second degree.
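A sketch of that fit. `fit_lane_line` is a name introduced here; it fits one side's segments and returns sampled (x, y) points between two vertical bounds:

```python
import numpy as np

def fit_lane_line(segments, y_top, y_bottom, degree=1):
    """Fit x = f(y) through all endpoints of one side's segments.

    Fitting x as a function of y avoids the infinite slopes of
    near-vertical lane lines; use degree=2 on curved roads.
    Returns an (N, 2) array of (x, y) points, or None.
    """
    if not segments:
        return None
    xs, ys = [], []
    for x1, y1, x2, y2 in segments:
        xs.extend([x1, x2])
        ys.extend([y1, y2])
    coeffs = np.polyfit(ys, xs, degree)
    y_vals = np.linspace(y_top, y_bottom, num=20)
    x_vals = np.polyval(coeffs, y_vals)
    return np.stack([x_vals, y_vals], axis=1).astype(int)
```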
We fit x as a function of y (not the other way around) because lane lines are nearly vertical in image space. Fitting y = f(x) would blow up when the line is steep.
Drawing the Lane Overlay
Once you have left and right lane points, fill the area between them with a translucent green polygon:
Putting It All Together on Video
This classical pipeline runs at hundreds of FPS on a CPU. It handles clear highway footage well. It falls apart on curved roads, poor lighting, and missing lane markings. That is where deep learning comes in.
YOLOv8 Segmentation for Lane Detection
YOLOv8-seg outputs pixel-level segmentation masks. The pretrained COCO model does not include a “lane” class, but you can fine-tune it on a lane dataset like TuSimple, CULane, or BDD100K. Even without fine-tuning, the road segmentation mask from a model trained on driving data gives you the drivable area, which you can post-process into lane boundaries.
Here is the inference workflow with a fine-tuned or custom-trained YOLOv8-seg model:
Training YOLOv8-seg on Lane Data
If you have lane annotations in YOLO segmentation format (polygon points per image), training is straightforward:
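A sketch of the training call (hyperparameters are starting-point assumptions, not tuned values; `lane_dataset.yaml` is your dataset config):

```python
from ultralytics import YOLO

# Start from pretrained segmentation weights and fine-tune on lane data.
model = YOLO("yolov8n-seg.pt")
model.train(data="lane_dataset.yaml", epochs=100, imgsz=640, batch=16)
```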
Your lane_dataset.yaml needs train and val image paths plus a names dictionary mapping class indices to labels like lane_marking or drivable_area. The TuSimple dataset is a solid starting point – it has 6,408 annotated clips of US highway driving.
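A minimal shape for that file; the paths and class names here are examples to adapt:

```yaml
# lane_dataset.yaml -- example layout
path: datasets/lanes    # dataset root
train: images/train     # relative to path
val: images/val
names:
  0: lane_marking
  1: drivable_area
```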
Overlaying Detected Lanes on Video Frames
The strongest pipeline uses both approaches. Run YOLOv8-seg for drivable area detection, then apply classical Hough lines within that mask to find precise lane boundaries. This gives you the reliability of deep learning with the geometric precision of traditional CV:
For temporal smoothing across frames, keep a running average of the polynomial coefficients. This eliminates jitter without adding much latency:
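A sketch of that smoother: it averages np.polyfit coefficients over a short deque window.

```python
from collections import deque
import numpy as np

class LaneSmoother:
    """Running average of lane-polynomial coefficients over recent frames."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def smooth(self, coeffs):
        """Feed this frame's np.polyfit coefficients (or None when the
        fit failed) and get back the windowed average."""
        if coeffs is not None:
            self.history.append(np.asarray(coeffs, dtype=float))
        if not self.history:
            return None
        return np.mean(self.history, axis=0)
```

Passing None when a frame's fit fails lets the smoother coast on recent history instead of dropping the lane for a frame.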
Use this smoother in your video loop by replacing the direct fit_lane_line calls with smoother.smooth(). A window of 5 frames works well at 30 FPS – it gives you about 167ms of smoothing, which is enough to kill flicker without introducing visible lag.
Common Errors and Fixes
Canny detects too many or too few edges. The two threshold parameters control sensitivity. Start with (50, 150) and adjust. A 1:3 ratio between the low and high threshold is a reasonable default. If you get too much noise, raise both. If lanes disappear, lower them.
HoughLinesP returns None. This happens when no line segments pass the threshold. Lower the threshold parameter or reduce minLineLength. Also check that your ROI mask actually covers the lane area – save the mask image with cv2.imwrite("debug_mask.png", roi) and visually inspect it.
polyfit raises “poorly conditioned” warnings. You have too few points or they are nearly collinear in a degenerate way. Add a check: only call np.polyfit when you have at least 4 points. Wrap it in a try/except and fall back to the previous frame’s lane line:
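A sketch of that guard. `safe_fit` and the 4-point cutoff follow the advice above; the dual import is needed because RankWarning moved to numpy.exceptions in NumPy 1.25:

```python
import warnings
import numpy as np

try:
    from numpy.exceptions import RankWarning  # NumPy >= 1.25
except ImportError:
    from numpy import RankWarning

def safe_fit(xs, ys, prev_coeffs, degree=1):
    """Fit x = f(y), falling back to the previous frame's coefficients
    when the points are too few or degenerate."""
    if len(xs) < 4:
        return prev_coeffs
    try:
        with warnings.catch_warnings():
            # Promote the "poorly conditioned" warning to an error.
            warnings.simplefilter("error", RankWarning)
            return np.polyfit(ys, xs, degree)
    except (RankWarning, np.linalg.LinAlgError):
        return prev_coeffs
```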
YOLOv8 mask resolution does not match frame size. YOLOv8 outputs masks at the model’s internal resolution (often 160x160). Always resize with cv2.resize() and cv2.INTER_NEAREST interpolation to preserve hard mask boundaries. Bilinear interpolation creates soft edges that mess up downstream thresholding.
Lane overlay flickers between frames. Use the LaneSmoother class from the previous section. A window of 5-10 frames eliminates jitter. If lanes still jump around, increase the minLineLength parameter in HoughLinesP to reject short spurious segments.
Video output is empty or corrupted. Make sure the VideoWriter codec matches your system. On Linux, mp4v works with .mp4 files. On macOS, try avc1. Always verify that cap.isOpened() returns True before entering the loop, and check that the frame dimensions you pass to VideoWriter match what VideoCapture actually returns.
CUDA out of memory with YOLOv8. Resize frames before inference with cv2.resize(frame, (640, 480)), or force CPU inference by setting the device: results = model(frame, device="cpu", verbose=False). The nano model (yolov8n-seg) is your best bet for constrained hardware – it runs at 40+ FPS on a modern CPU.
Related Guides
- How to Build a Document Comparison Pipeline with Vision Models
- How to Build a Real-Time Pose Estimation Pipeline with MediaPipe
- How to Build a Vehicle Counting Pipeline with YOLOv8 and OpenCV
- How to Build Video Analytics Pipelines with OpenCV and Deep Learning
- How to Build a Scene Text Recognition Pipeline with PaddleOCR
- How to Build a Video Shot Boundary Detection Pipeline with PySceneDetect
- How to Build a Video Surveillance Analytics Pipeline with YOLOv8
- How to Build a Product Defect Detector with YOLOv8 and OpenCV
- How to Build Multi-Object Tracking with DeepSORT and YOLOv8
- How to Build a Receipt Scanner with OCR and Structured Extraction