Render to Texture

Framebuffer attachments and multi-pass rendering

Drawing Off-Screen

By default, render passes draw to the canvas—the visible screen. But many techniques require drawing to an intermediate texture first: shadow mapping needs a depth buffer from the light's perspective, post-processing effects need the full scene as input, and reflections require rendering the world from a mirror's viewpoint.

Render-to-texture means using a texture as the destination for a render pass instead of the canvas. The GPU draws geometry as usual, but the pixels land in your texture rather than on screen. That texture can then be sampled in subsequent passes, creating multi-stage rendering pipelines.

Creating Render Targets

A texture that receives render output needs the RENDER_ATTACHMENT usage flag:

```typescript
const renderTarget = device.createTexture({
  size: [512, 512],
  format: "rgba8unorm",
  usage:
    GPUTextureUsage.RENDER_ATTACHMENT |  // Can be used as color attachment
    GPUTextureUsage.TEXTURE_BINDING,     // Can be sampled later
});
```

This texture can now serve as a color attachment in a render pass. The combination with TEXTURE_BINDING is essential—you want to write to it in one pass, then read from it in the next.

For depth testing in off-screen renders, create a matching depth texture:

```typescript
const depthTexture = device.createTexture({
  size: [512, 512],
  format: "depth24plus",
  usage: GPUTextureUsage.RENDER_ATTACHMENT,
});
```

Render Passes with Custom Attachments

Instead of using a view of context.getCurrentTexture() as the attachment, pass a view of your render target:

```typescript
const commandEncoder = device.createCommandEncoder();

// First pass: render scene to texture
const offscreenPass = commandEncoder.beginRenderPass({
  colorAttachments: [{
    view: renderTarget.createView(),
    clearValue: { r: 0, g: 0, b: 0, a: 1 },
    loadOp: "clear",
    storeOp: "store",
  }],
  depthStencilAttachment: {
    view: depthTexture.createView(),
    depthClearValue: 1.0,
    depthLoadOp: "clear",
    depthStoreOp: "store",
  },
});

// Draw your scene...
offscreenPass.setPipeline(scenePipeline);
offscreenPass.draw(vertexCount);
offscreenPass.end();

// Second pass: draw to screen, sampling from renderTarget
const screenPass = commandEncoder.beginRenderPass({
  colorAttachments: [{
    view: context.getCurrentTexture().createView(),
    clearValue: { r: 0, g: 0, b: 0, a: 1 },
    loadOp: "clear",
    storeOp: "store",
  }],
});

screenPass.setPipeline(postProcessPipeline);
screenPass.setBindGroup(0, textureBindGroup); // binds renderTarget for sampling
screenPass.draw(6); // full-screen quad
screenPass.end();

device.queue.submit([commandEncoder.finish()]);
```

Interactive: Render scene to texture, display on quad

Pass 1: Render the rotating triangle to a 256×256 texture.
Pass 2: Sample that texture onto a full-screen quad.

The first pass draws a 3D scene to an intermediate texture. The second pass samples that texture onto a full-screen quad, displaying the result. This two-pass structure is the foundation for all post-processing effects.
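The full-screen quad itself is just six vertices covering clip space. A sketch of interleaved vertex data the second pass might use — the layout (clip-space position plus UV, with V flipped so UV (0,0) lands at the top-left) is an assumption about the post-process vertex shader, not something the passes above require:

```typescript
// Six vertices (two triangles) covering all of clip space,
// interleaved as [x, y, u, v]. V is flipped so uv (0, 0) samples
// the top-left of the render target.
function fullScreenQuad(): Float32Array {
  return new Float32Array([
    // x,  y,  u, v
    -1, -1,   0, 1,
     1, -1,   1, 1,
     1,  1,   1, 0,
    -1, -1,   0, 1,
     1,  1,   1, 0,
    -1,  1,   0, 0,
  ]);
}
```

Uploaded to a vertex buffer, this is what `screenPass.draw(6)` consumes.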

The Post-Processing Pattern

Post-processing transforms the rendered image before display. The pattern is consistent:

  1. Render the scene to an off-screen texture
  2. Sample that texture in a fragment shader that applies an effect
  3. Output to the canvas (or to another texture for chaining)

A blur effect, for example:

```wgsl
@group(0) @binding(0) var inputTexture: texture_2d<f32>;
@group(0) @binding(1) var texSampler: sampler;

@fragment
fn blurFragment(@location(0) uv: vec2f) -> @location(0) vec4f {
  let texelSize = 1.0 / vec2f(textureDimensions(inputTexture));
  var color = vec4f(0.0);

  for (var y = -2; y <= 2; y++) {
    for (var x = -2; x <= 2; x++) {
      color += textureSample(inputTexture, texSampler, uv + vec2f(f32(x), f32(y)) * texelSize);
    }
  }

  return color / 25.0;
}
```
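A small CPU reference of the same 5×5 box blur is useful for sanity-checking shader output. This sketch works on a single-channel image and clamps coordinates at the edges, mimicking clamp-to-edge sampler addressing — an assumption about how the sampler above is configured:

```typescript
// CPU reference for a 5x5 box blur over a single-channel image.
// Out-of-range taps are clamped to the nearest edge texel.
function boxBlur5(src: number[][], w: number, h: number): number[][] {
  const out: number[][] = [];
  for (let y = 0; y < h; y++) {
    const row: number[] = [];
    for (let x = 0; x < w; x++) {
      let sum = 0;
      for (let dy = -2; dy <= 2; dy++) {
        for (let dx = -2; dx <= 2; dx++) {
          const sx = Math.min(w - 1, Math.max(0, x + dx));
          const sy = Math.min(h - 1, Math.max(0, y + dy));
          sum += src[sy][sx];
        }
      }
      row.push(sum / 25); // 5x5 kernel, uniform weights
    }
    out.push(row);
  }
  return out;
}
```

A uniform image must come out unchanged — a quick invariant to test either implementation against.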

Interactive: Post-processing chain


Multiple effects chain together: scene → blur → color correction → vignette → screen. Each step reads from the previous output and writes to a new target. The final step writes to the canvas.
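Only two intermediate textures are ever needed for such a chain, since each step reads one texture and writes another. A sketch of the bookkeeping — the texture names ("scene", "A", "B", "canvas") are illustrative, not WebGPU API:

```typescript
interface ChainStep {
  source: string; // texture sampled by this effect
  dest: string;   // texture this effect renders into
}

// Plans source/destination textures for a chain of post effects.
// The scene is first rendered to "scene"; intermediates alternate
// between "A" and "B"; the last effect writes to "canvas".
function planChain(effectCount: number): ChainStep[] {
  const steps: ChainStep[] = [];
  let source = "scene";
  for (let i = 0; i < effectCount; i++) {
    const last = i === effectCount - 1;
    const dest = last ? "canvas" : i % 2 === 0 ? "A" : "B";
    steps.push({ source, dest });
    source = dest;
  }
  return steps;
}
```

For three effects this yields scene→A, A→B, B→canvas: however long the chain grows, two intermediates suffice.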

Ping-Pong Buffers

Some effects require iterative refinement—each pass reads the previous frame's output and writes an updated version. Blur with large radii, for instance, is often computed in multiple passes. Feedback effects explicitly depend on what was drawn last frame.

The problem: you cannot read from and write to the same texture simultaneously. The solution is ping-pong buffers: two textures that alternate roles.

Interactive: Ping-pong buffer technique


Each frame, the textures swap roles. The shader reads from one and writes to the other. This allows each frame to build on the previous one—essential for feedback effects, motion blur accumulation, and iterative blur.

```typescript
const textureA = device.createTexture({ /* ... */ });
const textureB = device.createTexture({ /* ... */ });

let readTexture = textureA;
let writeTexture = textureB;

function renderFrame() {
  const commandEncoder = device.createCommandEncoder();

  // Render pass writes to writeTexture, samples from readTexture
  const pass = commandEncoder.beginRenderPass({
    colorAttachments: [{
      view: writeTexture.createView(),
      loadOp: "clear",
      storeOp: "store",
    }],
  });

  pass.setPipeline(pipeline);
  pass.setBindGroup(0, createBindGroup(readTexture)); // sample from read
  pass.draw(6);
  pass.end();

  device.queue.submit([commandEncoder.finish()]);

  // Swap roles for next frame
  [readTexture, writeTexture] = [writeTexture, readTexture];
}
```

Frame 1: read from A, write to B. Frame 2: read from B, write to A. The textures alternate, and each frame builds on the previous one. This pattern enables temporal effects, motion blur accumulation, and iterative simulations.

Multi-Pass Rendering

Complex scenes often require multiple render passes before the final output:

Shadow mapping: First pass renders the scene from the light's perspective into a depth texture. Second pass renders from the camera, sampling the shadow map to determine visibility.

Deferred rendering: First pass writes geometry attributes (position, normal, albedo) to multiple render targets (G-buffer). Second pass performs lighting calculations by sampling those textures.

Reflections: Render the scene from the reflection viewpoint to a texture. Sample that texture when drawing reflective surfaces.

Interactive: Multi-pass visualization

Pass 1: Shadow Map Pass (depth values) → Pass 2: G-Buffer Pass (multiple textures) → Pass 3: Lighting Pass (lit scene) → Pass 4: Post-Process Pass (final image)

Each pass has a distinct purpose: early passes generate intermediate data, later passes consume it, and the final pass produces the visible output. The shadow map informs lighting, the G-buffer enables deferred shading, and post-processing adds the final polish.
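One way to reason about such a frame is as a dependency graph between passes: a pass must run after every pass whose output it samples. A sketch that derives a valid submission order — the pass names are illustrative, and it assumes the graph is acyclic:

```typescript
// Orders render passes so every pass runs after the passes it reads
// from: a depth-first topological sort over a dependency map.
// deps[pass] lists the passes whose output that pass samples.
function orderPasses(deps: Record<string, string[]>): string[] {
  const order: string[] = [];
  const seen = new Set<string>();
  const visit = (pass: string): void => {
    if (seen.has(pass)) return;
    seen.add(pass);
    for (const dep of deps[pass] ?? []) visit(dep); // producers first
    order.push(pass);
  };
  for (const pass of Object.keys(deps)) visit(pass);
  return order;
}

const frame = orderPasses({
  shadow: [],
  gbuffer: [],
  lighting: ["shadow", "gbuffer"],
  post: ["lighting"],
});
// shadow and gbuffer come before lighting; lighting before post
```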

Multiple Render Targets (MRT)

A single render pass can write to multiple textures simultaneously. This is essential for deferred rendering, where you need position, normal, and color outputs from one geometry pass.

```typescript
const gBufferPosition = device.createTexture({
  size: [width, height],
  format: "rgba16float",
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
});

const gBufferNormal = device.createTexture({
  size: [width, height],
  format: "rgba16float",
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
});

const gBufferAlbedo = device.createTexture({
  size: [width, height],
  format: "rgba8unorm",
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
});

const pass = commandEncoder.beginRenderPass({
  colorAttachments: [
    { view: gBufferPosition.createView(), loadOp: "clear", storeOp: "store" },
    { view: gBufferNormal.createView(), loadOp: "clear", storeOp: "store" },
    { view: gBufferAlbedo.createView(), loadOp: "clear", storeOp: "store" },
  ],
  depthStencilAttachment: { /* ... */ },
});
```

The fragment shader outputs to all three:

```wgsl
struct GBufferOutput {
  @location(0) position: vec4f,
  @location(1) normal: vec4f,
  @location(2) albedo: vec4f,
}

@fragment
fn main(input: VertexOutput) -> GBufferOutput {
  var output: GBufferOutput;
  output.position = vec4f(input.worldPosition, 1.0);
  output.normal = vec4f(normalize(input.normal), 0.0);
  output.albedo = vec4f(input.color, 1.0);
  return output;
}
```

Each @location(n) corresponds to the nth color attachment. One draw call populates all three textures.
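MRT has a memory cost worth keeping in mind: every G-buffer texture is allocated at full resolution. A rough calculator for the targets above — the bytes-per-pixel values assume tightly packed formats and cover only the formats used here:

```typescript
// Bytes per pixel for the formats used in the G-buffer above.
// These assume tight packing; drivers may add padding.
const bytesPerPixel: Record<string, number> = {
  rgba16float: 8, // 4 channels * 16 bits
  rgba8unorm: 4,  // 4 channels * 8 bits
};

// Total G-buffer memory for a set of color targets at one resolution.
function gBufferBytes(width: number, height: number, formats: string[]): number {
  return formats.reduce(
    (sum, format) => sum + width * height * bytesPerPixel[format],
    0,
  );
}

// Position + normal (rgba16float) and albedo (rgba8unorm) at 1080p
// come to roughly 41.5 MB, before counting the depth buffer.
const total = gBufferBytes(1920, 1080, ["rgba16float", "rgba16float", "rgba8unorm"]);
```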

Resolving Multisampled Textures

When using multisampling (MSAA) for anti-aliasing, render targets are multisampled textures. Before you can sample from them in a shader, you must resolve them to a regular texture.

```typescript
const msaaTexture = device.createTexture({
  size: [width, height],
  format: "rgba8unorm",
  sampleCount: 4,
  usage: GPUTextureUsage.RENDER_ATTACHMENT,
});

const resolveTexture = device.createTexture({
  size: [width, height],
  format: "rgba8unorm",
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
});

const pass = commandEncoder.beginRenderPass({
  colorAttachments: [{
    view: msaaTexture.createView(),
    resolveTarget: resolveTexture.createView(),  // Resolve to this texture
    loadOp: "clear",
    storeOp: "store",
  }],
});
```

The resolveTarget field tells WebGPU to average the multisampled pixels and write the result to the resolve texture. After the pass ends, resolveTexture contains the anti-aliased image, ready for sampling.
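Conceptually, the resolve is just a per-pixel average of the samples. A sketch of what happens to one pixel under 4x MSAA (a box resolve over single-channel sample values; real hardware works on full color values but the arithmetic is the same per channel):

```typescript
// Averages the N samples of one multisampled pixel into a single
// resolved value: a box resolve, which is what WebGPU's resolveTarget
// performs per channel.
function resolvePixel(samples: number[]): number {
  const sum = samples.reduce((a, b) => a + b, 0);
  return sum / samples.length;
}

// A pixel half-covered by geometry: two samples hit (1.0), two miss (0.0).
const edgePixel = resolvePixel([1.0, 1.0, 0.0, 0.0]); // 0.5 — a smoothed edge
```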

Texture Size Considerations

Render targets do not need to match the canvas size. You might render shadows at 1024×1024 regardless of screen resolution. You might render a reflection at half resolution for performance. You might render a minimap at a fixed small size.

When sampling a render target of different size, the sampler handles scaling. Linear filtering smooths the result; nearest filtering preserves pixel boundaries. For effects like bloom, rendering at lower resolution and upscaling is a common optimization.
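For bloom-style downsampling, the chain of target sizes follows directly from the halving rule. A sketch that computes the sizes, clamping at 1×1 so deep chains stay valid (the level count is a parameter you would tune per effect):

```typescript
// Sizes for a downsample chain: each level halves the previous one,
// rounding down and never dropping below 1x1.
function downsampleChain(
  width: number,
  height: number,
  levels: number,
): [number, number][] {
  const sizes: [number, number][] = [];
  for (let i = 0; i < levels; i++) {
    width = Math.max(1, Math.floor(width / 2));
    height = Math.max(1, Math.floor(height / 2));
    sizes.push([width, height]);
  }
  return sizes;
}

// A 256x128 scene downsampled three times: 128x64, 64x32, 32x16.
const chain = downsampleChain(256, 128, 3);
```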

Key Takeaways

  • Render-to-texture uses a texture as the render pass destination instead of the canvas
  • Create render targets with GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING
  • Post-processing renders the scene to a texture, then samples it with an effect shader
  • Ping-pong buffers alternate read/write roles to enable iterative and temporal effects
  • Multiple Render Targets (MRT) write to several textures in one pass—essential for deferred rendering
  • Multisampled render targets require a resolve step before sampling
  • Render target size is independent of canvas size—use this for performance optimization