Shadow Mapping
Rendering depth from the light's perspective to create shadows
The Shadow Problem
Shadows ground objects in a scene. Without them, objects appear to float—disconnected from the world around them. Yet for all their visual importance, shadows are conceptually simple: a point is in shadow if something blocks the light from reaching it.
The challenge is determining, for every visible pixel, whether light reaches it. Direct computation is expensive—tracing rays from each pixel toward the light, checking for intersections. Real-time rendering needs a faster approach.
Shadow mapping solves this by reframing the question. Instead of asking "Can light reach this pixel?" we ask "What can the light see?" Render the scene from the light's point of view, storing only depth information. Then, when rendering the scene from the camera, compare each fragment's distance from the light to what the light "saw"—if the fragment is farther, something is blocking it.
Two-Pass Rendering
Shadow mapping requires two rendering passes.
Pass 1: Render the shadow map. Place the camera at the light's position, looking in the light's direction. Render the scene, but instead of computing colors, write only depth values to a texture. This texture becomes the shadow map.
Pass 2: Render the scene. Now render from the actual camera. For each fragment, transform its world position into the light's view space and compare against the shadow map. If the fragment's depth exceeds the stored depth, it is in shadow.
Interactive: Shadow Mapping Pipeline
Pass 1: Shadow Map
- Camera at light position
- Render depth only (no color)
- Output: depth texture
Pass 2: Scene Render
- Camera at viewer position
- Sample shadow map for each fragment
- Compare depth to determine shadows
The two-pass structure is fundamental to many rendering techniques. You will see it again in reflection probes, deferred rendering, and post-processing. Shadow mapping is often a programmer's first encounter with render-to-texture workflows.
The Shadow Map
The shadow map is a depth texture rendered from the light's perspective. For a directional light (like the sun), you use an orthographic projection. For a point light, you render six faces into a cube map. For a spotlight, a single perspective projection suffices.
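For a directional light, the view-projection matrix can be assembled on the CPU from a look-at view matrix and an orthographic projection. The sketch below is a minimal, self-contained version in plain JavaScript (column-major matrices, WebGPU's [0, 1] clip-space depth); the helper names are illustrative, and in practice you would use a math library such as wgpu-matrix or gl-matrix:

```javascript
// Minimal column-major 4x4 helpers for building a directional light's
// view-projection matrix (WebGPU convention: clip-space depth in [0, 1]).

function lookAt(eye, target, up) {
  const sub = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
  const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  const cross = (a, b) => [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
  const norm = (a) => { const l = Math.hypot(...a); return [a[0] / l, a[1] / l, a[2] / l]; };
  const f = norm(sub(target, eye)); // forward
  const s = norm(cross(f, up));     // right
  const u = cross(s, f);            // corrected up
  return new Float32Array([
    s[0], u[0], -f[0], 0,
    s[1], u[1], -f[1], 0,
    s[2], u[2], -f[2], 0,
    -dot(s, eye), -dot(u, eye), dot(f, eye), 1,
  ]);
}

function ortho(l, r, b, t, near, far) {
  // Maps view-space z in [-near, -far] to clip depth [0, 1].
  return new Float32Array([
    2 / (r - l), 0, 0, 0,
    0, 2 / (t - b), 0, 0,
    0, 0, -1 / (far - near), 0,
    -(r + l) / (r - l), -(t + b) / (t - b), -near / (far - near), 1,
  ]);
}

function mul4(a, b) { // a * b, column-major
  const out = new Float32Array(16);
  for (let c = 0; c < 4; c++)
    for (let r = 0; r < 4; r++)
      for (let k = 0; k < 4; k++)
        out[c * 4 + r] += a[k * 4 + r] * b[c * 4 + k];
  return out;
}

function transformPoint(m, p) { // apply matrix to [x, y, z, 1], return xyz/w
  const o = [0, 1, 2, 3].map(r => m[r] * p[0] + m[4 + r] * p[1] + m[8 + r] * p[2] + m[12 + r]);
  return [o[0] / o[3], o[1] / o[3], o[2] / o[3]];
}

// Sun-style light 10 units above the origin, looking straight down.
const lightView = lookAt([0, 10, 0], [0, 0, 0], [0, 0, -1]);
const lightProj = ortho(-5, 5, -5, 5, 1, 20);
const lightViewProjection = mul4(lightProj, lightView);
```

The resulting matrix is what gets uploaded to the `lightViewProjection` uniform used in the shaders below.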
Creating the shadow map texture in WebGPU:
const shadowMapSize = 1024;
const shadowMap = device.createTexture({
  size: [shadowMapSize, shadowMapSize],
  format: 'depth32float',
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
});

The format depth32float stores 32-bit floating-point depth values. You can also use depth24plus for better performance when high precision is not critical.
The first pass uses a render pass with only a depth attachment—no color:
const shadowPassDescriptor = {
  colorAttachments: [],
  depthStencilAttachment: {
    view: shadowMap.createView(),
    depthClearValue: 1.0,
    depthLoadOp: 'clear',
    depthStoreOp: 'store',
  },
};

The vertex shader for the shadow pass transforms vertices into the light's clip space:
@group(0) @binding(0) var<uniform> lightViewProjection: mat4x4f;

@vertex
fn shadowVertex(@location(0) position: vec3f) -> @builtin(position) vec4f {
  return lightViewProjection * vec4f(position, 1.0);
}

There is no fragment shader output needed—the depth buffer is written automatically by the rasterizer.
Interactive: Shadow Map Texture
The shadow map visualization shows what the light "sees." Brighter values are farther from the light. Dark regions are close. When rendering the scene, you compare against these stored depths to determine shadows.
Sampling the Shadow Map
In the main rendering pass, each fragment must query the shadow map. First, transform the fragment's world position into the light's clip space:
@group(0) @binding(0) var<uniform> lightViewProjection: mat4x4f;
@group(0) @binding(1) var shadowMap: texture_depth_2d;
@group(0) @binding(2) var shadowSampler: sampler_comparison;

@fragment
fn fragmentMain(input: VertexOutput) -> @location(0) vec4f {
  // Transform to light space
  let lightSpacePos = lightViewProjection * vec4f(input.worldPosition, 1.0);

  // Perspective divide; xy move from [-1,1] to [0,1], with y flipped because
  // texture coordinates have their origin at the top-left. In WebGPU,
  // clip-space z is already in [0,1], so the depth needs no remapping.
  let shadowCoord = lightSpacePos.xyz / lightSpacePos.w;
  let uv = shadowCoord.xy * vec2f(0.5, -0.5) + vec2f(0.5, 0.5);
  let depthFromLight = shadowCoord.z;

  // Sample shadow map with comparison
  let shadow = textureSampleCompare(shadowMap, shadowSampler, uv, depthFromLight);

  // shadow is 0.0 if in shadow, 1.0 if lit
  let finalColor = input.color * (0.3 + 0.7 * shadow);
  return vec4f(finalColor, 1.0);
}

The textureSampleCompare function compares the provided depth against the shadow map and returns the result—1.0 for lit, 0.0 for shadowed, with fractional values possible when the comparison sampler uses linear filtering. Using a comparison sampler enables hardware-accelerated shadow testing.
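The same light-space logic is easy to prototype on the CPU, which helps when debugging why a fragment lands in shadow. Below is a sketch (the function name and the plain depth array standing in for the texture are illustrative), using a nearest-neighbor lookup in place of the comparison sampler:

```javascript
// CPU reference for the fragment shader's shadow test.
// lightSpacePos: [x, y, z, w] after multiplying by lightViewProjection.
// depthGrid: size*size array of depths previously rendered from the light.
function shadowTest(lightSpacePos, depthGrid, size, bias = 0.005) {
  const [x, y, z, w] = lightSpacePos;
  // Perspective divide (a no-op for orthographic projections, where w = 1).
  const ndcX = x / w, ndcY = y / w, depthFromLight = z / w;
  // NDC xy in [-1, 1] -> texture uv in [0, 1]; v is flipped because
  // texture coordinates start at the top-left.
  const u = ndcX * 0.5 + 0.5;
  const v = ndcY * -0.5 + 0.5;
  if (u < 0 || u > 1 || v < 0 || v > 1) return 1.0; // outside the map: treat as lit
  const tx = Math.min(size - 1, Math.floor(u * size));
  const ty = Math.min(size - 1, Math.floor(v * size));
  const storedDepth = depthGrid[ty * size + tx];
  // Lit if the fragment is not behind what the light saw (minus bias).
  return depthFromLight - bias <= storedDepth ? 1.0 : 0.0;
}
```

The bias parameter anticipates the shadow-acne fix discussed next; setting it to zero reproduces the naive comparison.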
Shadow Acne
If you implement basic shadow mapping and render a lit surface, you will likely see a disturbing pattern of stripes or noise across surfaces that should be uniformly lit. This artifact is called shadow acne.
Shadow acne occurs because of precision limitations. When a surface faces the light directly, its depth in the shadow map nearly matches its actual depth. But floating-point precision and the shadow map's discrete resolution mean the comparison sometimes fails—the surface thinks it is shadowing itself.
Interactive: Shadow Acne Artifact
Shadow Acne
Surfaces shadow themselves due to precision limits in depth comparison.
Depth Bias Fix
Offset the depth test to prevent surfaces from failing their own shadow test.
The cause is geometric: imagine the shadow map capturing depth at sample points. Between samples, the actual surface might be slightly closer or farther than the stored value. When rendering, if the surface is even a tiny bit farther than the shadow map's sample, it registers as shadowed.
Bias: The Standard Fix
The standard solution is a depth bias—offsetting the depth comparison to avoid self-shadowing:
let bias = 0.005; // Tune this per-scene
let shadow = textureSampleCompare(shadowMap, shadowSampler, uv, depthFromLight - bias);

By subtracting a small value from the depth being compared, you allow surfaces to pass the shadow test even when precision errors would cause failure.
But bias introduces its own artifact: Peter Panning. If the bias is too large, shadows detach from their casters, appearing to float. Objects look like Peter Pan hovering above the ground.
The art is finding the sweet spot—enough bias to eliminate acne, not so much that shadows disconnect.
Slope-scale bias improves on constant bias. Surfaces that face the light head-on need less bias; surfaces at grazing angles need more. The GPU can compute this automatically:
// In the shadow pass's render pipeline descriptor
depthStencil: {
  format: 'depth32float',
  depthWriteEnabled: true,
  depthCompare: 'less',
  depthBias: 1,
  depthBiasSlopeScale: 1.0,
  depthBiasClamp: 0.01,
},

Hardware slope-scale bias adjusts per-fragment, providing better results than a constant offset.
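If you prefer to keep the bias in the shader rather than the pipeline state, the same idea can be computed from N·L, the cosine of the angle between the surface normal and the light direction. A sketch of one common formulation (the function name and constants are illustrative, not a standard API, and need per-scene tuning):

```javascript
// Slope-scaled bias computed from N·L. Grazing surfaces (N·L near 0)
// get a larger offset; surfaces facing the light get close to the
// constant minimum. The clamp plays the role of depthBiasClamp,
// capping the bias at very steep angles.
function slopeScaledBias(nDotL, constantBias = 0.0005, slopeScale = 0.005, maxBias = 0.01) {
  const clamped = Math.min(Math.max(nDotL, 0.0), 1.0);
  // tan(theta) = sin/cos = sqrt(1 - (N·L)^2) / (N·L)
  const tanTheta = Math.sqrt(1.0 - clamped * clamped) / Math.max(clamped, 1e-4);
  return Math.min(constantBias + slopeScale * tanTheta, maxBias);
}
```

The result is subtracted from depthFromLight exactly as the constant bias was above.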
PCF: Soft Shadow Edges
Basic shadow mapping produces hard-edged shadows. Every pixel is either fully lit or fully shadowed. In reality, shadows have soft edges—penumbras where light is partially blocked.
Percentage Closer Filtering (PCF) softens shadow edges by sampling the shadow map multiple times and averaging the results:
fn pcfShadow(shadowCoord: vec3f, shadowMap: texture_depth_2d, shadowSampler: sampler_comparison) -> f32 {
  let texelSize = 1.0 / 1024.0; // Shadow map resolution
  var shadow = 0.0;

  // Sample in a 3x3 grid
  for (var x = -1; x <= 1; x++) {
    for (var y = -1; y <= 1; y++) {
      let offset = vec2f(f32(x), f32(y)) * texelSize;
      shadow += textureSampleCompare(
        shadowMap, shadowSampler,
        shadowCoord.xy + offset,
        shadowCoord.z
      );
    }
  }
  return shadow / 9.0;
}

Each sample tests a slightly different position. Near shadow edges, some samples pass and others fail, producing intermediate values that soften the transition.
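The averaging logic itself is worth verifying in isolation. The CPU sketch below mirrors the shader, with a plain depth array standing in for the texture and a hard comparison standing in for the sampler (function names are illustrative):

```javascript
// CPU reference of 3x3 PCF: average nine hard depth comparisons.
// depthGrid is a size*size array of shadow-map depths; (u, v) in [0, 1].
function pcfShadow(u, v, depthFromLight, depthGrid, size) {
  const sampleCompare = (su, sv) => {
    // Nearest texel, clamped to the edge of the map.
    const tx = Math.min(size - 1, Math.max(0, Math.floor(su * size)));
    const ty = Math.min(size - 1, Math.max(0, Math.floor(sv * size)));
    return depthFromLight <= depthGrid[ty * size + tx] ? 1.0 : 0.0;
  };
  const texelSize = 1.0 / size;
  let shadow = 0.0;
  for (let x = -1; x <= 1; x++) {
    for (let y = -1; y <= 1; y++) {
      shadow += sampleCompare(u + x * texelSize, v + y * texelSize);
    }
  }
  return shadow / 9.0; // 1.0 fully lit, 0.0 fully shadowed, fractional at edges
}
```

Sampling across a shadow boundary returns an intermediate value, which is exactly the soft edge PCF produces on screen.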
Interactive: Hard vs Soft Shadows (PCF)
PCF samples the shadow map multiple times with small offsets and averages the results. More samples = smoother edges but higher cost.
A 3×3 kernel (9 samples) is a common starting point; for very smooth shadows, a 5×5 kernel or larger may be needed.
Poisson disk sampling or rotated grid patterns can reduce the banding that regular grid sampling sometimes produces. Advanced techniques like PCSS (Percentage Closer Soft Shadows) vary the blur radius based on distance from the occluder, producing contact-hardening shadows that sharpen near contact points.
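One simple way to break up that banding is to rotate a fixed sample pattern by a per-pixel pseudo-random angle, trading structured artifacts for noise. A sketch, where the offsets are an illustrative Poisson-style set (unit-disk points, not a canonical table) and the hash is the common sin-based fract trick:

```javascript
// Rotating a fixed sample pattern per pixel trades PCF's banding for noise.
const BASE_OFFSETS = [
  [-0.94, -0.40], [0.95, -0.77], [-0.09, 0.93], [0.34, 0.29],
];

function rotatedOffsets(pixelX, pixelY) {
  // Cheap hash of the pixel position -> angle in [0, 2*pi)
  const h = Math.abs(Math.sin(pixelX * 12.9898 + pixelY * 78.233) * 43758.5453) % 1.0;
  const angle = h * 2.0 * Math.PI;
  const c = Math.cos(angle), s = Math.sin(angle);
  // Rotation preserves each offset's distance from the center.
  return BASE_OFFSETS.map(([x, y]) => [x * c - y * s, x * s + y * c]);
}
```

In a shader the rotated offsets would be scaled by the texel size and fed to the same comparison sampling as the grid version.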
Cascaded Shadow Maps
Directional lights like the sun illuminate everything—from objects at your feet to mountains on the horizon. A single shadow map cannot provide good quality at all distances. Near the camera, you want high resolution. Far away, lower resolution is acceptable.
Cascaded Shadow Maps (CSM) divide the view frustum into slices, rendering a separate shadow map for each. Near cascades cover small areas at high resolution; far cascades cover large areas at lower resolution.
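Rather than hand-tuning the split distances, they are often generated with the "practical split scheme", which blends uniform and logarithmic spacing. A sketch, assuming a near plane greater than zero (the function name and default lambda are illustrative):

```javascript
// "Practical split scheme": blend uniform and logarithmic splits.
// lambda = 0 gives uniform spacing, lambda = 1 fully logarithmic;
// values around 0.5-0.95 are typical. near must be > 0 for the log term.
function computeCascadeSplits(near, far, cascadeCount, lambda = 0.75) {
  const splits = [near];
  for (let i = 1; i < cascadeCount; i++) {
    const t = i / cascadeCount;
    const logSplit = near * Math.pow(far / near, t);
    const uniformSplit = near + (far - near) * t;
    splits.push(lambda * logSplit + (1 - lambda) * uniformSplit);
  }
  splits.push(far);
  return splits;
}
```

Logarithmic spacing concentrates resolution near the camera, where a texel covers the least screen area, which is why lambda is usually biased toward 1.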
// Example cascade splits (in view-space Z)
const cascadeSplits = [0, 10, 50, 200, 1000];

// Render a shadow map per cascade
for (let i = 0; i < cascadeSplits.length - 1; i++) {
  const near = cascadeSplits[i];
  const far = cascadeSplits[i + 1];
  const lightMatrix = computeCascadeLightMatrix(near, far);
  renderShadowMap(shadowMaps[i], lightMatrix);
}

In the fragment shader, you select which cascade to sample based on the fragment's distance from the camera:
fn getCascadeIndex(viewZ: f32) -> u32 {
  if (viewZ < 10.0) { return 0u; }
  if (viewZ < 50.0) { return 1u; }
  if (viewZ < 200.0) { return 2u; }
  return 3u;
}

CSM is standard for outdoor scenes in modern games. It provides consistent shadow quality from nearby blades of grass to distant buildings.
Implementation Checklist
When implementing shadow mapping, work through these steps:
First, create the shadow map texture and its sampler. Use a comparison sampler for hardware shadow testing.
Second, set up the shadow render pass. Render geometry with a simple shader that only outputs position—no color computation needed.
Third, compute the light's view-projection matrix. For directional lights, use orthographic projection sized to fit the scene or camera frustum.
Fourth, in the main pass, bind the shadow map and light matrix. Transform fragments to light space and sample with comparison.
Fifth, add bias to fix shadow acne. Start with a small constant, then consider slope-scale bias.
Sixth, implement PCF if hard shadows are unacceptable. A 3×3 kernel is a reasonable default.
Seventh, for large scenes with directional lights, consider cascaded shadow maps.
Key Takeaways
- Shadow mapping determines shadows by comparing fragment depth against a depth texture rendered from the light
- Two-pass rendering: first render the shadow map (depth from light), then render the scene (sample shadow map)
- Shadow acne occurs from self-shadowing due to precision limits; fix with depth bias
- Peter Panning occurs when bias is too large and shadows detach from casters
- PCF softens shadow edges by averaging multiple shadow map samples
- Cascaded Shadow Maps provide consistent quality across distances for directional lights
- Shadow mapping is foundational—the two-pass pattern appears throughout graphics programming