Seamless patterns repeat infinitely in every direction without visible seams or edges. They show up everywhere: fabric prints, wallpaper designs, game textures, website backgrounds, wrapping paper. Traditionally, designers create these by hand in tools like Photoshop, carefully aligning edges pixel by pixel. That process is slow and tedious.

Stable Diffusion can generate pattern images in seconds, but the raw output won’t tile cleanly. The edges won’t match. You need a post-processing step to blend the seams, and optionally an img2img refinement pass to keep the result coherent. This guide walks through the full pipeline: generate a base pattern, make it seamless with the offset-and-blend technique, refine it with img2img, and verify the result tiles correctly.

Generating Base Patterns with Stable Diffusion

Start by generating a base pattern image. The prompt matters a lot here. You want to describe a repeating, flat pattern rather than a scene with perspective or a single object. Words like “seamless pattern,” “repeating tile,” “flat design,” and “textile print” push the model toward pattern-like outputs.

import torch
from diffusers import StableDiffusionPipeline
from PIL import Image

model_id = "stabilityai/stable-diffusion-2-1-base"  # 512x512-native checkpoint
pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
pipe.enable_attention_slicing()

prompt = (
    "seamless repeating pattern of tropical leaves and flowers, "
    "flat design, textile print, high detail, vibrant colors, "
    "top-down view, no perspective, tiling texture"
)
negative_prompt = (
    "blurry, low quality, 3d render, perspective, depth of field, "
    "single object, portrait, text, watermark"
)

generator = torch.Generator(device="cuda").manual_seed(42)

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=512,
    height=512,
    num_inference_steps=30,
    guidance_scale=7.5,
    generator=generator,
).images[0]

image.save("base_pattern.png")
print(f"Generated base pattern: {image.size}")

Use 512x512 here; the -base variant of SD 2.1 is trained at that resolution, while the non-base 2.1 weights expect 768x768. Square images work best for tiling. The seed is fixed so you can reproduce results while iterating on prompts. The negative prompt steers the model away from 3D scenes and single-subject compositions that won’t tile well.

Making Patterns Seamless with Offset Tiling

The classic technique for making any image tileable: shift the image by half its width and height so the edges move to the center, then blend the visible seam in the middle. This is the same “Offset” filter that Photoshop uses, but done programmatically with numpy.
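To see why the shift works, note that np.roll moves rows and columns that were adjacent in the original to opposite borders, so the shifted image's outer edges already line up; only the old edges, now a cross through the center, need blending. A quick numpy check (wrap_diff is an illustrative helper, not part of the pipeline):

```python
import numpy as np

def wrap_diff(arr: np.ndarray) -> tuple[float, float]:
    """Mean absolute difference across the top/bottom and left/right wrap edges."""
    a = arr.astype(np.float32)
    horizontal = float(np.mean(np.abs(a[0] - a[-1])))      # top row vs bottom row
    vertical = float(np.mean(np.abs(a[:, 0] - a[:, -1])))  # left col vs right col
    return horizontal, vertical

# A vertical gradient does not tile: its top and bottom rows differ by 255.
h = w = 64
gradient = np.tile(np.linspace(0, 255, h)[:, None], (1, w))
print(wrap_diff(gradient))  # → (255.0, 0.0)

# After rolling by half the size, the borders come from formerly adjacent
# rows and columns, so the wrap differences become small; the hard seam
# now runs through the center instead.
shifted = np.roll(gradient, shift=(h // 2, w // 2), axis=(0, 1))
print(wrap_diff(shifted))   # small horizontal difference, zero vertical
```
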

import numpy as np
from PIL import Image, ImageFilter

def make_seamless(image: Image.Image, blend_width: int = 64) -> Image.Image:
    """Make an image seamlessly tileable using offset and gradient blending."""
    img_array = np.array(image, dtype=np.float32)
    h, w, _ = img_array.shape

    # Shift by half the width and height. Rows and columns that were adjacent
    # in the original now sit on opposite borders, so the shifted image's
    # outer edges wrap perfectly; the old edges form a seam cross through
    # the center.
    shifted = np.roll(img_array, shift=(h // 2, w // 2), axis=(0, 1))

    # Build a feathered cross mask: 1 on the center seam lines, falling to 0
    # within blend_width / 2 pixels. Where the mask is 1, the original image
    # covers the seam; where it is 0, the shifted image (with its tileable
    # outer edges) shows through.
    half = blend_width / 2.0
    row_dist = np.abs(np.arange(h, dtype=np.float32) - h // 2)
    col_dist = np.abs(np.arange(w, dtype=np.float32) - w // 2)
    row_mask = np.clip(1.0 - row_dist / half, 0.0, 1.0)
    col_mask = np.clip(1.0 - col_dist / half, 0.0, 1.0)
    mask_3d = np.maximum(row_mask[:, None], col_mask[None, :])[:, :, None]

    # Blend: where the mask is 0, use shifted; where it is 1, use the original
    blended = shifted * (1 - mask_3d) + img_array * mask_3d
    blended = np.clip(blended, 0, 255).astype(np.uint8)

    result = Image.fromarray(blended)
    # A light Gaussian blur smooths residual hard transitions. Keep the radius
    # small: PIL's blur does not wrap around the borders, so a large radius
    # softens the very edges you just made tileable.
    return result.filter(ImageFilter.GaussianBlur(radius=0.5))


base_image = Image.open("base_pattern.png").convert("RGB")
seamless_image = make_seamless(base_image, blend_width=80)
seamless_image.save("seamless_pattern.png")
print("Seamless pattern saved")

The blend_width parameter controls how wide the feathered transition is. Wider blends hide seams better but can soften details. Start with 64-80 pixels for 512x512 images and adjust from there. The final Gaussian blur with a small radius smooths any remaining hard transitions without destroying detail.

Using img2img for Pattern Refinement

The offset-and-blend step can leave the center area looking slightly different from the rest of the image. An img2img pass at low strength cleans this up while preserving the seamless edges. The key is using a low strength value (0.2-0.4) so the model refines without completely redrawing the image.

from diffusers import StableDiffusionImg2ImgPipeline

img2img_pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base",
    torch_dtype=torch.float16,
)
img2img_pipe = img2img_pipe.to("cuda")
img2img_pipe.enable_attention_slicing()

seamless_input = Image.open("seamless_pattern.png").convert("RGB")
seamless_input = seamless_input.resize((512, 512))

refined = img2img_pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    image=seamless_input,
    strength=0.3,
    num_inference_steps=30,
    guidance_scale=7.5,
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]

# Run the seamless step again after img2img to fix any new edge artifacts
final_pattern = make_seamless(refined, blend_width=48)
final_pattern.save("final_pattern.png")
print("Refined seamless pattern saved")

Run make_seamless again after the img2img pass. The model can introduce subtle edge differences even at low strength, so a second blending pass with a narrower blend width locks the tiling back in. This two-pass approach (blend, refine, blend again) produces the cleanest results.
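Wired together, the pipeline is three calls in a fixed order. A sketch with the generation and refinement steps injected as callables (two_pass_pipeline and its parameter names are illustrative, not part of any library):

```python
from typing import Callable
from PIL import Image

def two_pass_pipeline(
    generate: Callable[[], Image.Image],
    refine: Callable[[Image.Image], Image.Image],
    blend: Callable[[Image.Image, int], Image.Image],
    wide_blend: int = 80,
    narrow_blend: int = 48,
) -> Image.Image:
    """Blend wide, refine with img2img, then blend again with a narrow feather."""
    base = generate()                    # e.g. the txt2img call from the first section
    seamless = blend(base, wide_blend)   # first make_seamless pass
    refined = refine(seamless)           # low-strength img2img pass
    return blend(refined, narrow_blend)  # second, narrower blending pass
```

Keeping the steps as plain callables makes it easy to swap in a different model or blending routine without touching the order of operations.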

Verifying Seamless Tiling

The only real test for a seamless pattern is tiling it into a grid. If you can’t spot the seams in a 3x3 grid, the pattern works.

def verify_tiling(pattern_path: str, grid_size: int = 3, output_path: str = "tiled_preview.png"):
    """Create an NxN tiled grid to verify seamless tiling."""
    tile = Image.open(pattern_path).convert("RGB")
    tw, th = tile.size

    grid = Image.new("RGB", (tw * grid_size, th * grid_size))

    for row in range(grid_size):
        for col in range(grid_size):
            grid.paste(tile, (col * tw, row * th))

    grid.save(output_path)
    print(f"Tiled {grid_size}x{grid_size} preview saved to {output_path}")
    print(f"Grid dimensions: {grid.size[0]}x{grid.size[1]}")
    return grid


tiled = verify_tiling("final_pattern.png", grid_size=3)

Open tiled_preview.png and look for visible seam lines. If you see horizontal or vertical lines where tiles meet, increase the blend_width or add another img2img refinement pass. A good seamless pattern should look like one continuous image at any zoom level.

For automated verification, you can also compare pixel values along the edges:

def check_edge_continuity(pattern_path: str, tolerance: int = 10) -> dict:
    """Check how well edges match for seamless tiling."""
    img = np.array(Image.open(pattern_path).convert("RGB"), dtype=np.float32)
    h, w, _ = img.shape

    # Compare top row with bottom row
    horizontal_diff = np.mean(np.abs(img[0, :, :] - img[-1, :, :]))

    # Compare left column with right column
    vertical_diff = np.mean(np.abs(img[:, 0, :] - img[:, -1, :]))

    result = {
        "horizontal_edge_diff": round(float(horizontal_diff), 2),
        "vertical_edge_diff": round(float(vertical_diff), 2),
        "seamless": bool(horizontal_diff < tolerance and vertical_diff < tolerance),
    }
    print(f"Edge continuity: {result}")
    return result


check_edge_continuity("final_pattern.png", tolerance=15)

A mean pixel difference under 10-15 across edges generally means the pattern tiles cleanly; values above roughly 30 tend to produce visible seam lines.

Common Errors and Fixes

CUDA out of memory: SD 2.1 at 512x512 needs roughly 4-5 GB of VRAM with float16. If you’re running out, add pipe.enable_vae_slicing() alongside enable_attention_slicing(), or drop to 384x384 and upscale afterward.

Visible seam cross in the center: The blend width is too narrow. Increase it to 96 or 128. If the center still looks different, run a second img2img pass at strength 0.2.

Pattern looks blurry after blending: The Gaussian blur radius is too high. Set it to 0.3 or remove it entirely. You can also skip the blur and rely purely on the gradient blending.

img2img changes the pattern too much: Lower the strength parameter. At 0.3 the model makes subtle refinements. At 0.5+ it starts redrawing significant portions.

Pattern doesn’t repeat in all directions: Make sure your input image is square. Non-square images need different shift amounts for each axis. The make_seamless function handles this, but the visual result is better with square inputs.

Prompt generates scenes instead of patterns: Add “top-down view, no perspective, flat design” to your prompt and “3d render, perspective, depth of field” to the negative prompt. The model defaults to generating scenes with depth, so you need to explicitly push it toward flat pattern layouts.

Edge continuity check fails but pattern looks fine visually: The tolerance might be too strict. Values under 20 are generally imperceptible when tiled. Increase the tolerance threshold or rely on the visual grid test.