Multi-Canvas and Offscreen

OffscreenCanvas and multiple views

Real applications often need more than a single canvas. A 3D editor might show perspective, top, front, and side views simultaneously. A game might render to multiple windows or display debug visualizations alongside the main view. WebGPU handles these cases through device sharing and OffscreenCanvas, which can even move rendering off the main thread entirely.

One Device, Multiple Canvases

A single WebGPU device can render to multiple canvases. Each canvas gets its own context, but they all share the same device, pipelines, buffers, and textures:

```typescript
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

// Configure multiple canvases with the same device
const canvas1 = document.getElementById('main') as HTMLCanvasElement;
const canvas2 = document.getElementById('preview') as HTMLCanvasElement;

const context1 = canvas1.getContext('webgpu')!;
const context2 = canvas2.getContext('webgpu')!;

const format = navigator.gpu.getPreferredCanvasFormat();

context1.configure({ device, format });
context2.configure({ device, format });
```

Interactive: Rendering to multiple canvases

All views share the same GPU device and mesh data. Only the camera differs per canvas.

This pattern works well when:

  • All canvases are on the same page
  • You want to share resources between views
  • Rendering happens on the main thread

Each canvas needs a separate render pass (different color attachments), but you can batch them into a single command encoder submission:

```typescript
const encoder = device.createCommandEncoder();

// Render to canvas 1
const pass1 = encoder.beginRenderPass({
  colorAttachments: [{
    view: context1.getCurrentTexture().createView(),
    loadOp: 'clear',
    storeOp: 'store',
    clearValue: { r: 0.1, g: 0.1, b: 0.1, a: 1 },
  }],
});
renderScene(pass1, camera1);
pass1.end();

// Render to canvas 2 (same encoder)
const pass2 = encoder.beginRenderPass({
  colorAttachments: [{
    view: context2.getCurrentTexture().createView(),
    loadOp: 'clear',
    storeOp: 'store',
    clearValue: { r: 0.1, g: 0.1, b: 0.1, a: 1 },
  }],
});
renderScene(pass2, camera2);
pass2.end();

device.queue.submit([encoder.finish()]);
```

Interactive: Shared device architecture

One device, shared resources, multiple output targets. Memory is allocated once.

OffscreenCanvas

OffscreenCanvas provides a canvas that isn't attached to the DOM. You can create one programmatically and render to it without displaying anything, which is useful for:

  • Generating textures procedurally
  • Server-side rendering (in Node.js with appropriate WebGPU bindings)
  • Moving rendering to a web worker

```typescript
// Create an offscreen canvas
const offscreen = new OffscreenCanvas(512, 512);
const context = offscreen.getContext('webgpu')!;

context.configure({
  device,
  format: navigator.gpu.getPreferredCanvasFormat(),
});

// Render to it
const encoder = device.createCommandEncoder();
const pass = encoder.beginRenderPass({
  colorAttachments: [{
    view: context.getCurrentTexture().createView(),
    loadOp: 'clear',
    storeOp: 'store',
    clearValue: { r: 1, g: 0, b: 0, a: 1 },
  }],
});
// ... draw commands ...
pass.end();
device.queue.submit([encoder.finish()]);

// Get the result as a bitmap
const bitmap = offscreen.transferToImageBitmap();
```

The resulting ImageBitmap can be drawn to a regular 2D canvas, sent to another thread, or used as a texture source.
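For the texture-source case, a minimal sketch of uploading the bitmap back to the GPU, assuming an already-configured device (the helper name and the `rgba8unorm` format choice are illustrative, not prescribed by the text above):

```typescript
// Ambient WebGPU global (provided by the browser or @webgpu/types).
declare const GPUTextureUsage: {
  TEXTURE_BINDING: number; COPY_DST: number; RENDER_ATTACHMENT: number;
};

// Hypothetical helper: turn an ImageBitmap into a sampleable GPU texture.
// copyExternalImageToTexture requires COPY_DST and RENDER_ATTACHMENT usage
// on the destination texture.
function textureFromBitmap(device: any, bitmap: ImageBitmap): any {
  const texture = device.createTexture({
    size: [bitmap.width, bitmap.height],
    format: 'rgba8unorm',
    usage: GPUTextureUsage.TEXTURE_BINDING |
           GPUTextureUsage.COPY_DST |
           GPUTextureUsage.RENDER_ATTACHMENT,
  });
  // copyExternalImageToTexture accepts an ImageBitmap as its source directly.
  device.queue.copyExternalImageToTexture(
    { source: bitmap },
    { texture },
    [bitmap.width, bitmap.height],
  );
  return texture;
}
```

The returned texture can then be bound like any other sampled texture in a bind group.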

Web Workers

The most significant use of OffscreenCanvas is enabling WebGPU rendering in web workers. This moves all GPU work off the main thread, keeping the UI responsive even during heavy rendering.

```typescript
// Main thread
const canvas = document.getElementById('canvas') as HTMLCanvasElement;
const offscreen = canvas.transferControlToOffscreen();

const worker = new Worker('render-worker.js');
worker.postMessage({ canvas: offscreen }, [offscreen]);
```

```typescript
// render-worker.js
self.onmessage = async (e) => {
  const { canvas } = e.data;

  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();

  const context = canvas.getContext('webgpu')!;
  context.configure({
    device,
    format: navigator.gpu.getPreferredCanvasFormat(),
  });

  function render() {
    const encoder = device.createCommandEncoder();
    // ... rendering ...
    device.queue.submit([encoder.finish()]);
    requestAnimationFrame(render);
  }

  render();
};
```

Interactive: Worker-based rendering architecture

Main thread sends commands (canvas, camera, resize). Worker renders independently and can report back (frame completion, errors).

After transferControlToOffscreen(), the original canvas can no longer be drawn to from the main thread. All control passes to the offscreen canvas in the worker.
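Because the transfer is irreversible, it is worth feature-detecting the whole worker path before committing to it. A sketch, with a made-up function name (note that what ultimately matters is `navigator.gpu` in the *worker's* scope; checking it on the main thread is only a best-effort proxy):

```typescript
// Hypothetical guard: only hand the canvas to a worker when every piece of
// the worker path exists; otherwise render on the main thread as usual.
function canRenderInWorker(canvas: { transferControlToOffscreen?: unknown }): boolean {
  return typeof canvas.transferControlToOffscreen === 'function'
    && typeof Worker !== 'undefined'
    && typeof navigator !== 'undefined'
    && 'gpu' in navigator;
}
```

If the check fails, configure the context on the main thread exactly as in the earlier sections.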

Communication Patterns

Workers communicate via message passing. For render configuration changes:

```typescript
// Main thread
function setCamera(position, target) {
  worker.postMessage({
    type: 'camera',
    position: position.toArray(),
    target: target.toArray(),
  });
}

// Worker
self.onmessage = (e) => {
  if (e.data.type === 'camera') {
    cameraPosition = new Vec3(...e.data.position);
    cameraTarget = new Vec3(...e.data.target);
  }
};
```

For high-frequency updates (like mouse position), consider:

  • Debouncing/throttling messages
  • Using SharedArrayBuffer for lock-free data sharing (requires cross-origin isolation)
  • Batching multiple updates into single messages

```typescript
// SharedArrayBuffer approach (requires COOP/COEP headers)
const sharedBuffer = new SharedArrayBuffer(16);
const sharedView = new Float32Array(sharedBuffer);
// (Send sharedBuffer to the worker once via postMessage; both sides
// then read and write the same memory.)

// Main thread updates directly
canvas.onmousemove = (e) => {
  sharedView[0] = e.clientX;
  sharedView[1] = e.clientY;
};

// Worker reads without message passing
function render() {
  const mouseX = sharedView[0];
  const mouseY = sharedView[1];
  // ...
}
```

Canvas Size Changes

Handle resize events by messaging the worker:

```typescript
// Main thread
const resizeObserver = new ResizeObserver(entries => {
  const entry = entries[0];
  const width = entry.contentBoxSize[0].inlineSize;
  const height = entry.contentBoxSize[0].blockSize;

  worker.postMessage({
    type: 'resize',
    width: Math.max(1, Math.floor(width)),
    height: Math.max(1, Math.floor(height)),
  });
});
resizeObserver.observe(canvas);
```

```typescript
// Worker
self.onmessage = (e) => {
  if (e.data.type === 'resize') {
    canvas.width = e.data.width;
    canvas.height = e.data.height;
    // Recreate depth buffer, update projection matrix, etc.
  }
};
```

The worker must recreate any size-dependent resources (depth textures, projection matrices) when dimensions change.
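A sketch of that recreation step inside the worker. The helper shape and the `depth24plus` format are assumptions; adapt to whatever attachments the pipeline actually uses:

```typescript
// Ambient WebGPU global (provided by the browser).
declare const GPUTextureUsage: { RENDER_ATTACHMENT: number };

let depthTexture: { destroy(): void } | null = null;

// Hypothetical resize handler: swap the depth attachment for one matching
// the new canvas size. The projection matrix needs the new aspect ratio
// too (not shown).
function recreateSizeDependentResources(device: any, width: number, height: number) {
  depthTexture?.destroy(); // release the stale attachment
  depthTexture = device.createTexture({
    size: [width, height],
    format: 'depth24plus',
    usage: GPUTextureUsage.RENDER_ATTACHMENT,
  });
}
```

Destroying the old texture promptly matters here: resizes can fire in bursts, and each stale depth texture otherwise holds GPU memory until garbage collection.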

Interactive: OffscreenCanvas rendering

Main thread rendering is blocked by other work, causing stutters.

When to Use Workers

Worker-based rendering adds complexity. Use it when:

  • Main thread handles heavy non-GPU work (physics, AI, networking)
  • Frame rate must stay stable regardless of UI interactions
  • You're building a library where users control the main thread

Skip workers when:

  • The application is GPU-bound anyway (worker won't help)
  • Rendering is simple and main thread has spare capacity
  • You need tight synchronization between rendering and other systems

Resource Sharing Across Workers

Each worker that uses WebGPU needs its own adapter and device. GPU resources (buffers, textures) cannot be directly shared between workers. However, you can:

  1. Render to a texture in one worker
  2. Read it back to CPU (copyTextureToBuffer + mapAsync)
  3. Send the data to another worker
  4. Upload to a new texture there

This is expensive. For most applications, keep rendering in a single worker and send only high-level commands across the boundary.
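The four steps above can be sketched as follows. The one non-obvious detail is that WebGPU requires `bytesPerRow` on texture-to-buffer copies to be a multiple of 256, so rows must be padded (`readbackTexture` is an illustrative name, not a library API):

```typescript
// Ambient WebGPU globals (provided by the browser).
declare const GPUBufferUsage: { COPY_DST: number; MAP_READ: number };
declare const GPUMapMode: { READ: number };

// bytesPerRow for copyTextureToBuffer must be a 256-byte multiple.
function paddedBytesPerRow(width: number, bytesPerPixel = 4): number {
  return Math.ceil((width * bytesPerPixel) / 256) * 256;
}

// Steps 1-2: copy the rendered texture into a mappable buffer, then read it
// back on the CPU. Assumes a texture with COPY_SRC usage.
async function readbackTexture(device: any, texture: any, width: number, height: number) {
  const bytesPerRow = paddedBytesPerRow(width);
  const buffer = device.createBuffer({
    size: bytesPerRow * height,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
  });

  const encoder = device.createCommandEncoder();
  encoder.copyTextureToBuffer({ texture }, { buffer, bytesPerRow }, [width, height]);
  device.queue.submit([encoder.finish()]);

  await buffer.mapAsync(GPUMapMode.READ);
  const pixels = new Uint8Array(buffer.getMappedRange()).slice(); // copy before unmap
  buffer.unmap();
  return pixels; // step 3: postMessage this to the other worker (transferably)
}
```

The receiving worker completes step 4 by writing the bytes into a fresh texture with `device.queue.writeTexture`, stripping the row padding as it goes.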

Key Takeaways

  • One WebGPU device can render to multiple canvases by configuring multiple contexts
  • OffscreenCanvas enables rendering without a DOM-attached canvas
  • transferControlToOffscreen() moves canvas control to a worker, enabling off-main-thread rendering
  • Worker communication uses message passing; use SharedArrayBuffer for high-frequency data when possible
  • Handle canvas resizing by messaging new dimensions to the worker
  • Use worker-based rendering when main thread responsiveness is critical; skip it for simple applications