The WebGPU API
Understanding the objects that connect your code to the GPU
The Initialization Sequence
Every WebGPU application begins the same way: a carefully orchestrated handshake between your JavaScript and the GPU hardware. This sequence involves four objects, each representing a different layer of abstraction.
The four steps:
1. navigator.gpu — check if WebGPU is available
2. requestAdapter() — get access to a physical GPU
3. requestDevice() — create your interface to the GPU
4. configure() — connect the canvas to the device

Each step depends on the previous one succeeding.
The flow is always the same: check availability, request an adapter, request a device, configure the canvas. Each step can fail, and each failure means something different. Understanding this sequence is essential before writing any GPU code.
Checking Availability
WebGPU is not universally available. Before attempting anything else, you must verify that the browser supports it.
if (!navigator.gpu) {
console.error("WebGPU is not supported in this browser");
return;
}

The navigator.gpu object is your entry point. If it exists, WebGPU is available. If it does not, the browser lacks support entirely: either the browser is too old, the feature is disabled, or the operating system cannot provide GPU access.
This check is synchronous and immediate. No promises, no async—just a simple property access that tells you whether to proceed.
The Adapter
Once you know WebGPU exists, you request an adapter. The adapter represents a specific GPU on the system.
const adapter = await navigator.gpu.requestAdapter();
if (!adapter) {
console.error("No adapter found");
return;
}

Most machines have exactly one GPU, so requestAdapter() returns it. But laptops often have two: an integrated GPU for power efficiency and a discrete GPU for performance. Desktops might have multiple high-performance cards. The adapter abstraction lets you specify preferences:
const adapter = await navigator.gpu.requestAdapter({
powerPreference: "high-performance", // or "low-power"
});

Setting powerPreference to "high-performance" asks for the most capable GPU, typically a discrete graphics card. Setting it to "low-power" prefers integrated graphics, which extends battery life on laptops.
The adapter can also describe the GPU it represents: its vendor, architecture, and device strings are exposed through adapter.info.
The adapter also knows its limits—maximum texture sizes, maximum buffer sizes, supported features. These limits constrain what resources you can create. Requesting higher limits than the adapter supports causes device creation to fail.
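Those capabilities can be read directly off the adapter. A minimal sketch, guarded so it simply reports nothing where WebGPU is absent (describeAdapter is a hypothetical helper name; the features and limits properties are part of the WebGPU API):

```javascript
// Sketch: inspect what the adapter offers. The guard makes this safe to call
// in environments without WebGPU (it resolves to null there).
async function describeAdapter() {
  if (typeof navigator === "undefined" || !navigator.gpu) return null;
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) return null;
  return {
    features: [...adapter.features], // optional capabilities, e.g. "texture-compression-bc"
    maxTextureDimension2D: adapter.limits.maxTextureDimension2D,
    maxBufferSize: adapter.limits.maxBufferSize,
  };
}
```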
The Device
The adapter describes a GPU. The device is your interface to it.
const device = await adapter.requestDevice();

Creating a device is like opening a connection. Through this object you create buffers, textures, shaders, pipelines—every GPU resource comes from the device. The device also provides error handling and the command queue.
You can request specific features or higher limits when creating a device:
const device = await adapter.requestDevice({
requiredFeatures: ["texture-compression-bc"],
requiredLimits: {
maxStorageBufferBindingSize: 256 * 1024 * 1024,
},
});

If the adapter cannot provide what you request, device creation fails. This is intentional: better to fail explicitly than to discover mid-render that a required feature is missing.
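When a feature is nice-to-have rather than required, a common pattern is to filter your wish list against the adapter's feature set before requesting the device, so requestDevice() cannot fail over it. A sketch (the helper name is hypothetical; features.has() and requestDevice() are the real API):

```javascript
// Sketch: request optional features only when the adapter actually has them,
// so requestDevice() never fails over a missing optional feature.
async function requestDeviceWithOptionalFeatures(adapter, wantedFeatures) {
  const requiredFeatures = wantedFeatures.filter((f) => adapter.features.has(f));
  return adapter.requestDevice({ requiredFeatures });
}
```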
The device is the workhorse of WebGPU. Every other object either comes from it or operates on resources created through it.
The Queue
Every device comes with a queue. This is where you submit commands for the GPU to execute.
device.queue.submit([commandBuffer]);

Think of the queue as an inbox. You write a list of instructions (a command buffer), seal it, and drop it in the inbox. The GPU picks it up when ready and executes the commands in order.
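The write-seal-submit cycle can be sketched as a small helper (submitCommands is a hypothetical name; createCommandEncoder(), finish(), and queue.submit() are the real API):

```javascript
// Sketch of the record-seal-submit cycle: `record` is a callback that
// records passes or copies into the encoder.
function submitCommands(device, record) {
  const encoder = device.createCommandEncoder(); // start recording
  record(encoder);                               // caller records work here
  const commandBuffer = encoder.finish();        // seal the command list
  device.queue.submit([commandBuffer]);          // drop it in the queue's inbox
}
```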
The queue is also how you upload data:
device.queue.writeBuffer(buffer, 0, data);
device.queue.writeTexture({ texture }, data, layout, size);

These methods schedule data transfers from CPU memory to GPU memory. The transfer happens asynchronously: the call returns immediately, but the data arrives on the GPU later.
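Putting buffer creation and upload together, a typical sketch looks like this (createVertexBuffer is a hypothetical name; note that the buffer's usage flags must include COPY_DST for writeBuffer to target it):

```javascript
// Sketch: size a buffer to the data and upload it in one step. COPY_DST is
// required in the usage flags so writeBuffer may target this buffer.
function createVertexBuffer(device, data) {
  const buffer = device.createBuffer({
    size: data.byteLength,
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
  });
  device.queue.writeBuffer(buffer, 0, data); // byte offset 0; returns immediately
  return buffer;
}
```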
There is no readBuffer on the queue. Reading data back from the GPU requires explicit synchronization through buffer mapping, which we will cover when discussing compute shaders.
The Canvas Context
To display GPU-rendered content, you need a canvas context. This connects your WebGPU device to an HTML canvas element.
const canvas = document.querySelector("canvas");
const context = canvas.getContext("webgpu");
context.configure({
device: device,
format: navigator.gpu.getPreferredCanvasFormat(),
alphaMode: "premultiplied",
});

The configuration specifies which device will render to this canvas and what pixel format to use. navigator.gpu.getPreferredCanvasFormat() returns the optimal format for the display—typically "bgra8unorm" or "rgba8unorm" depending on the platform.
The alphaMode controls how the canvas blends with the page background. "premultiplied" assumes RGB values are already multiplied by alpha, which is standard for compositing. "opaque" ignores alpha entirely, which is slightly faster if you never need transparency.
Each frame, you request a texture from the context:
const texture = context.getCurrentTexture();
const view = texture.createView();

This texture is the render target. Draw to it, and when the frame ends, the browser displays it on screen.
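A minimal frame loop ties this together. This is a sketch (startClearLoop is a hypothetical name); the key point is that getCurrentTexture() must be called inside the loop, because the context hands out a fresh texture every frame:

```javascript
// Sketch: a frame loop that clears the canvas to dark gray each frame.
function startClearLoop(device, context) {
  function frame() {
    const view = context.getCurrentTexture().createView(); // fresh texture each frame
    const encoder = device.createCommandEncoder();
    const pass = encoder.beginRenderPass({
      colorAttachments: [{
        view,
        loadOp: "clear",                              // wipe previous contents
        clearValue: { r: 0.1, g: 0.1, b: 0.1, a: 1 }, // dark gray
        storeOp: "store",                             // keep the result for display
      }],
    });
    pass.end();
    device.queue.submit([encoder.finish()]);
    requestAnimationFrame(frame); // schedule the next frame
  }
  requestAnimationFrame(frame);
}
```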
The Complete Initialization
Here is the full sequence, with proper error handling:
async function initWebGPU(canvas: HTMLCanvasElement) {
// 1. Check availability
if (!navigator.gpu) {
throw new Error("WebGPU not supported");
}
// 2. Request adapter
const adapter = await navigator.gpu.requestAdapter({
powerPreference: "high-performance",
});
if (!adapter) {
throw new Error("No GPU adapter found");
}
// 3. Request device
const device = await adapter.requestDevice();
// 4. Configure canvas
const context = canvas.getContext("webgpu");
if (!context) {
throw new Error("Could not get WebGPU context");
}
const format = navigator.gpu.getPreferredCanvasFormat();
context.configure({ device, format });
return { adapter, device, context, format };
}

This function returns everything you need to start rendering. The format is included because you will need it when creating render pipelines; it must match the canvas configuration.
Error Handling
GPUs are complex hardware. Things go wrong. WebGPU provides several mechanisms to detect and handle errors.
Device Lost
The device can be lost at any time. This happens when:
- The GPU driver crashes
- The GPU is physically unplugged (for external GPUs)
- The user switches to a different GPU (on laptops)
- The system suspends and the GPU state is invalidated
device.lost.then((info) => {
console.error(`Device lost: ${info.message}`);
if (info.reason === "destroyed") {
// We called device.destroy() ourselves
} else {
// External cause—attempt recovery
reinitialize();
}
});

The lost promise resolves when the device becomes unusable. The reason field indicates whether you caused the loss ("destroyed") or something external did ("unknown").
Validation Errors
Most WebGPU operations validate their inputs. Invalid operations push errors to an error scope:
device.pushErrorScope("validation");
// Operations that might fail
device.createBuffer({ size: 16, usage: 0 }); // Invalid! usage must be nonzero
device.popErrorScope().then((error) => {
if (error) {
console.error(`Validation error: ${error.message}`);
}
});

Error scopes are like try-catch blocks for GPU operations. Push a scope, do some work, pop the scope and check for errors. Scopes can filter by error type: "validation", "out-of-memory", or "internal".
Uncaptured Errors
Errors outside any error scope trigger the uncapturederror event:
device.addEventListener("uncapturederror", (event) => {
console.error(`Uncaptured GPU error: ${event.error.message}`);
});

In development, this helps catch errors you forgot to scope. In production, it acts as a safety net.
Shader Compilation Errors
Shader errors work differently. When you create a shader module with invalid WGSL, the module is created but marked as invalid. The error appears when you try to use the module in a pipeline:
const module = device.createShaderModule({
code: `@vertex fn main() -> @builtin(position) vec4f { return 1; }`, // Type error: 1 is not a vec4f
});
// Error surfaces here, not above
const pipeline = device.createRenderPipeline({
vertex: { module, entryPoint: "main" },
// ...
});

To get shader errors early, check the compilation info:
const module = device.createShaderModule({ code: shaderCode });
const info = await module.getCompilationInfo();
for (const message of info.messages) {
console.log(`${message.type}: ${message.message}`);
}

Resource Cleanup
GPUs have limited memory. When you are done with a resource, release it:
buffer.destroy();
texture.destroy();
device.destroy();

In practice, JavaScript's garbage collector eventually releases unreferenced resources. But GPU memory pressure can cause problems before GC runs. Explicit cleanup is faster and more predictable.
Context configuration can also be reset:
context.unconfigure();

This releases the canvas association and any textures held by the swap chain.
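The cleanup calls above can be bundled into a small helper so teardown is deterministic rather than GC-dependent. A sketch (teardown is a hypothetical name, not a WebGPU API; destroy() and unconfigure() are real):

```javascript
// Sketch: release GPU resources explicitly instead of waiting on the
// garbage collector. The device goes last, after its resources.
function teardown({ buffers = [], textures = [], context = null, device = null }) {
  for (const b of buffers) b.destroy();
  for (const t of textures) t.destroy();
  if (context) context.unconfigure(); // release the canvas association
  if (device) device.destroy();       // invalidate the device last
}
```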
Key Takeaways
- WebGPU initialization follows a strict sequence: check navigator.gpu, request an adapter, request a device, configure the canvas
- The adapter represents a physical GPU; use powerPreference to choose between performance and battery life
- The device is your connection to the GPU; all resources are created through it
- The queue is where you submit commands and upload data
- The canvas context connects rendering to the screen; always use getPreferredCanvasFormat() for the pixel format
- Handle errors at three levels: device lost events, error scopes for validation, and uncaptured error events as a fallback
- Explicitly destroy resources when done to avoid GPU memory pressure