wgpu_hal/lib.rs
1//! A cross-platform unsafe graphics abstraction.
2//!
3//! This crate defines a set of traits abstracting over modern graphics APIs,
4//! with implementations ("backends") for Vulkan, Metal, Direct3D, and GL.
5//!
6//! `wgpu-hal` is a spiritual successor to
7//! [gfx-hal](https://github.com/gfx-rs/gfx), but with reduced scope, and
8//! oriented towards WebGPU implementation goals. It has no overhead for
9//! validation or tracking, and the API translation overhead is kept to the bare
10//! minimum by the design of WebGPU. This API can be used for resource-demanding
11//! applications and engines.
12//!
13//! The `wgpu-hal` crate's main design choices:
14//!
15//! - Our traits are meant to be *portable*: proper use
16//! should get equivalent results regardless of the backend.
17//!
18//! - Our traits' contracts are *unsafe*: implementations perform minimal
19//! validation, if any, and incorrect use will often cause undefined behavior.
20//! This allows us to minimize the overhead we impose over the underlying
21//! graphics system. If you need safety, the [`wgpu-core`] crate provides a
22//! safe API for driving `wgpu-hal`, implementing all necessary validation,
23//! resource state tracking, and so on. (Note that `wgpu-core` is designed for
24//! use via FFI; the [`wgpu`] crate provides more idiomatic Rust bindings for
25//! `wgpu-core`.) Or, you can do your own validation.
26//!
27//! - In the same vein, returned errors *only cover cases the user can't
28//! anticipate*, like running out of memory or losing the device. Any errors
29//! that the user could reasonably anticipate are their responsibility to
30//! avoid. For example, `wgpu-hal` returns no error for mapping a buffer that's
31//! not mappable: as the buffer creator, the user should already know if they
32//! can map it.
33//!
34//! - We use *static dispatch*. The traits are not
35//! generally object-safe. You must select a specific backend type
36//! like [`vulkan::Api`] or [`metal::Api`], and then use that
37//! according to the main traits, or call backend-specific methods.
38//!
39//! - We use *idiomatic Rust parameter passing*,
40//! taking objects by reference, returning them by value, and so on,
41//! unlike `wgpu-core`, which refers to objects by ID.
42//!
43//! - We map buffer contents *persistently*. This means that the buffer can
44//! remain mapped on the CPU while the GPU reads or writes to it. You must
45//! explicitly indicate when data might need to be transferred between CPU and
46//! GPU, if [`Device::map_buffer`] indicates that this is necessary.
47//!
48//! - You must record *explicit barriers* between different usages of a
49//! resource. For example, if a buffer is written to by a compute
50//! shader, and then used as an index buffer in a draw call, you
51//! must use [`CommandEncoder::transition_buffers`] between those two
52//! operations.
53//!
54//! - Pipeline layouts are *explicitly specified* when setting bind groups.
55//! Incompatible layouts disturb groups bound at higher indices.
56//!
57//! - The API *accepts collections as iterators*, to avoid forcing the user to
58//! store data in particular containers. The implementation doesn't guarantee
59//! that any of the iterators are drained, unless stated otherwise by the
60//! function documentation. For this reason, we recommend that iterators don't
61//! do any mutating work.
62//!
63//! Unfortunately, `wgpu-hal`'s safety requirements are not fully documented.
64//! Ideally, all trait methods would have doc comments setting out the
65//! requirements users must meet to ensure correct and portable behavior. If you
66//! are aware of a specific requirement that a backend imposes that is not
67//! ensured by the traits' documented rules, please file an issue. Or, if you are
68//! a capable technical writer, please file a pull request!
69//!
70//! [`wgpu-core`]: https://crates.io/crates/wgpu-core
71//! [`wgpu`]: https://crates.io/crates/wgpu
72//! [`vulkan::Api`]: vulkan/struct.Api.html
73//! [`metal::Api`]: metal/struct.Api.html
74//!
75//! ## Primary backends
76//!
77//! The `wgpu-hal` crate has full-featured backends implemented on the following
78//! platform graphics APIs:
79//!
80//! - Vulkan, available on Linux, Android, and Windows, using the [`ash`] crate's
81//! Vulkan bindings. It's also available on macOS, if you install [MoltenVK].
82//!
83//! - Metal on macOS, using the [`metal`] crate's bindings.
84//!
85//! - Direct3D 12 on Windows, using the [`windows`] crate's bindings.
86//!
87//! [`ash`]: https://crates.io/crates/ash
88//! [MoltenVK]: https://github.com/KhronosGroup/MoltenVK
89//! [`metal`]: https://crates.io/crates/metal
90//! [`windows`]: https://crates.io/crates/windows
91//!
92//! ## Secondary backends
93//!
94//! The `wgpu-hal` crate has a partial implementation based on the following
95//! platform graphics API:
96//!
97//! - The GL backend is available anywhere OpenGL, OpenGL ES, or WebGL are
98//! available. See the [`gles`] module documentation for details.
99//!
100//! [`gles`]: gles/index.html
101//!
102//! You can see what capabilities an adapter is missing by checking the
103//! [`DownlevelCapabilities`][tdc] in [`ExposedAdapter::capabilities`], available
104//! from [`Instance::enumerate_adapters`].
105//!
106//! The API is generally designed to fit the primary backends better than the
107//! secondary backends, so the latter may impose more overhead.
108//!
109//! [tdc]: wgt::DownlevelCapabilities
110//!
111//! ## Traits
112//!
113//! The `wgpu-hal` crate defines a handful of traits that together
114//! represent a cross-platform abstraction for modern GPU APIs.
115//!
116//! - The [`Api`] trait represents a `wgpu-hal` backend. It has no methods of its
117//! own, only a collection of associated types.
118//!
119//! - [`Api::Instance`] implements the [`Instance`] trait. [`Instance::init`]
120//! creates an instance value, which you can use to enumerate the adapters
121//! available on the system. For example, [`vulkan::Api::Instance::init`][Ii]
122//! returns an instance that can enumerate the Vulkan physical devices on your
123//! system.
124//!
125//! - [`Api::Adapter`] implements the [`Adapter`] trait, representing a
126//! particular device from a particular backend. For example, a Vulkan instance
127//! might have a Lavapipe software adapter and a GPU-based adapter.
128//!
129//! - [`Api::Device`] implements the [`Device`] trait, representing an active
130//! link to a device. You get a device value by calling [`Adapter::open`], and
131//! then use it to create buffers, textures, shader modules, and so on.
132//!
133//! - [`Api::Queue`] implements the [`Queue`] trait, which you use to submit
134//! command buffers to a given device.
135//!
136//! - [`Api::CommandEncoder`] implements the [`CommandEncoder`] trait, which you
137//! use to build buffers of commands to submit to a queue. This has all the
138//! methods for drawing and running compute shaders, which is presumably what
139//! you're here for.
140//!
141//! - [`Api::Surface`] implements the [`Surface`] trait, which represents a
142//! swapchain for presenting images on the screen, via interaction with the
143//! system's window manager.
144//!
145//! The [`Api`] trait has various other associated types like [`Api::Buffer`] and
146//! [`Api::Texture`] that represent resources the rest of the interface can
147//! operate on, but these generally do not have their own traits.
148//!
149//! [Ii]: Instance::init
150//!
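//! A minimal sketch of the flow described above, generic over any backend `A`
//! (a hypothetical helper; `desc`, `features`, `limits`, and `memory_hints`
//! are assumed to be prepared by the caller):
//!
//! ```no_run
//! use wgpu_hal::{Adapter as _, Api, Instance};
//!
//! unsafe fn open_first_adapter<A: Api>(
//!     desc: &wgpu_hal::InstanceDescriptor,
//!     features: wgpu_types::Features,
//!     limits: &wgpu_types::Limits,
//!     memory_hints: &wgpu_types::MemoryHints,
//! ) -> Option<wgpu_hal::OpenDevice<A>> {
//!     // Create the backend instance and list the adapters it can see.
//!     let instance = unsafe { <A::Instance as Instance>::init(desc) }.ok()?;
//!     let mut adapters = unsafe { instance.enumerate_adapters(None) };
//!     let exposed = adapters.pop()?;
//!     // Open a logical device (and its queue) on the chosen adapter.
//!     unsafe { exposed.adapter.open(features, limits, memory_hints) }.ok()
//! }
//! ```
//!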
151//! ## Validation is the calling code's responsibility, not `wgpu-hal`'s
152//!
153//! As much as possible, `wgpu-hal` traits place the burden of validation,
154//! resource tracking, and state tracking on the caller, not on the trait
155//! implementations themselves. Anything which can reasonably be handled in
156//! backend-independent code should be. A `wgpu_hal` backend's sole obligation is
157//! to provide portable behavior, and report conditions that the calling code
158//! can't reasonably anticipate, like device loss or running out of memory.
159//!
160//! The `wgpu` crate collection is intended for use in security-sensitive
161//! applications, like web browsers, where the API is available to untrusted
162//! code. This means that `wgpu-core`'s validation is not simply a service to
163//! developers, to be provided opportunistically when the performance costs are
164//! acceptable and the necessary data is ready at hand. Rather, `wgpu-core`'s
165//! validation must be exhaustive, to ensure that even malicious content cannot
166//! provoke and exploit undefined behavior in the platform's graphics API.
167//!
168//! Because graphics APIs' requirements are complex, the only practical way for
169//! `wgpu` to provide exhaustive validation is to comprehensively track the
170//! lifetime and state of all the resources in the system. Implementing this
171//! separately for each backend is infeasible; effort would be better spent
172//! making the cross-platform validation in `wgpu-core` legible and trustworthy.
173//! Fortunately, the requirements are largely similar across the various
174//! platforms, so cross-platform validation is practical.
175//!
176//! Some backends have specific requirements that aren't practical to foist off
177//! on the `wgpu-hal` user. For example, properly managing macOS Objective-C or
178//! Microsoft COM reference counts is best handled by using appropriate pointer
179//! types within the backend.
180//!
181//! A desire for "defense in depth" may suggest performing additional validation
182//! in `wgpu-hal` when the opportunity arises, but this must be done with
183//! caution. Even experienced contributors infer the expectations their changes
184//! must meet by considering not just requirements made explicit in types, tests,
185//! assertions, and comments, but also those implicit in the surrounding code.
186//! When one sees validation or state-tracking code in `wgpu-hal`, it is tempting
187//! to conclude, "Oh, `wgpu-hal` checks for this, so `wgpu-core` needn't worry
188//! about it - that would be redundant!" The responsibility for exhaustive
189//! validation always rests with `wgpu-core`, regardless of what may or may not
190//! be checked in `wgpu-hal`.
191//!
192//! To this end, any "defense in depth" validation that does appear in `wgpu-hal`
193//! for requirements that `wgpu-core` should have enforced should report failure
194//! via the `unreachable!` macro, because problems detected at this stage always
195//! indicate a bug in `wgpu-core`.
196//!
197//! ## Debugging
198//!
199//! Most of the information on the wiki [Debugging wgpu Applications][wiki-debug]
200//! page still applies to this API, with the exception of API tracing/replay
201//! functionality, which is only available in `wgpu-core`.
202//!
203//! [wiki-debug]: https://github.com/gfx-rs/wgpu/wiki/Debugging-wgpu-Applications
204
205#![no_std]
206#![cfg_attr(docsrs, feature(doc_cfg))]
207#![allow(
208 // this happens on the GL backend, where it is both thread safe and non-thread safe in the same code.
209 clippy::arc_with_non_send_sync,
210 // We don't use syntax sugar where it's not necessary.
211 clippy::match_like_matches_macro,
212 // Redundant matching is more explicit.
213 clippy::redundant_pattern_matching,
214 // Explicit lifetimes are often easier to reason about.
215 clippy::needless_lifetimes,
216 // No need for defaults in the internal types.
217 clippy::new_without_default,
218 // Matches are good and extendable, no need to make an exception here.
219 clippy::single_match,
220 // Push commands are more regular than macros.
221 clippy::vec_init_then_push,
222 // We unsafe impl `Send` for a reason.
223 clippy::non_send_fields_in_send_ty,
224 // TODO!
225 clippy::missing_safety_doc,
226 // It gets in the way a lot and does not prevent bugs in practice.
227 clippy::pattern_type_mismatch,
228 // We should investigate these.
229 clippy::large_enum_variant
230)]
231#![warn(
232 clippy::alloc_instead_of_core,
233 clippy::ptr_as_ptr,
234 clippy::std_instead_of_alloc,
235 clippy::std_instead_of_core,
236 trivial_casts,
237 trivial_numeric_casts,
238 unsafe_op_in_unsafe_fn,
239 unused_extern_crates,
240 unused_qualifications
241)]
242
243extern crate alloc;
244extern crate wgpu_types as wgt;
245// Each of these backends needs `std` in some fashion; usually `std::thread` functions.
246#[cfg(any(dx12, gles_with_std, metal, vulkan))]
247#[macro_use]
248extern crate std;
249
250/// DirectX12 API internals.
251#[cfg(dx12)]
252pub mod dx12;
253/// GLES API internals.
254#[cfg(gles)]
255pub mod gles;
256/// Metal API internals.
257#[cfg(metal)]
258pub mod metal;
259/// A dummy API implementation.
260// TODO(https://github.com/gfx-rs/wgpu/issues/7120): this should have a cfg
261pub mod noop;
262/// Vulkan API internals.
263#[cfg(vulkan)]
264pub mod vulkan;
265
266pub mod auxil;
267pub mod api {
268 #[cfg(dx12)]
269 pub use super::dx12::Api as Dx12;
270 #[cfg(gles)]
271 pub use super::gles::Api as Gles;
272 #[cfg(metal)]
273 pub use super::metal::Api as Metal;
274 pub use super::noop::Api as Noop;
275 #[cfg(vulkan)]
276 pub use super::vulkan::Api as Vulkan;
277}
278
279mod dynamic;
280#[cfg(feature = "validation_canary")]
281mod validation_canary;
282
283#[cfg(feature = "validation_canary")]
284pub use validation_canary::{ValidationCanary, VALIDATION_CANARY};
285
286pub(crate) use dynamic::impl_dyn_resource;
287pub use dynamic::{
288 DynAccelerationStructure, DynAcquiredSurfaceTexture, DynAdapter, DynBindGroup,
289 DynBindGroupLayout, DynBuffer, DynCommandBuffer, DynCommandEncoder, DynComputePipeline,
290 DynDevice, DynExposedAdapter, DynFence, DynInstance, DynOpenDevice, DynPipelineCache,
291 DynPipelineLayout, DynQuerySet, DynQueue, DynRenderPipeline, DynResource, DynSampler,
292 DynShaderModule, DynSurface, DynSurfaceTexture, DynTexture, DynTextureView,
293};
294
295#[allow(unused)]
296use alloc::boxed::Box;
297use alloc::{borrow::Cow, string::String, vec::Vec};
298use core::{
299 borrow::Borrow,
300 error::Error,
301 fmt,
302 num::{NonZeroU32, NonZeroU64},
303 ops::{Range, RangeInclusive},
304 ptr::NonNull,
305};
306
307use bitflags::bitflags;
308use thiserror::Error;
309use wgt::WasmNotSendSync;
310
311cfg_if::cfg_if! {
312 if #[cfg(supports_ptr_atomics)] {
313 use alloc::sync::Arc;
314 } else if #[cfg(feature = "portable-atomic")] {
315 use portable_atomic_util::Arc;
316 }
317}
318
319// - Vertex + Fragment
320// - Compute
321// - Task + Mesh + Fragment
322pub const MAX_CONCURRENT_SHADER_STAGES: usize = 3;
323pub const MAX_ANISOTROPY: u8 = 16;
324pub const MAX_BIND_GROUPS: usize = 8;
325pub const MAX_VERTEX_BUFFERS: usize = 16;
326pub const MAX_COLOR_ATTACHMENTS: usize = 8;
327pub const MAX_MIP_LEVELS: u32 = 16;
328/// Size of a single occlusion/timestamp query, when copied into a buffer, in bytes.
329/// cbindgen:ignore
330pub const QUERY_SIZE: wgt::BufferAddress = 8;
331
332pub type Label<'a> = Option<&'a str>;
333pub type MemoryRange = Range<wgt::BufferAddress>;
334pub type FenceValue = u64;
335#[cfg(supports_64bit_atomics)]
336pub type AtomicFenceValue = core::sync::atomic::AtomicU64;
337#[cfg(not(supports_64bit_atomics))]
338pub type AtomicFenceValue = portable_atomic::AtomicU64;
339
340/// A callback to signal that wgpu is no longer using a resource.
341#[cfg(any(gles, vulkan))]
342pub type DropCallback = Box<dyn FnOnce() + Send + Sync + 'static>;
343
344#[cfg(any(gles, vulkan))]
345pub struct DropGuard {
346 callback: Option<DropCallback>,
347}
348
349#[cfg(all(any(gles, vulkan), any(native, Emscripten)))]
350impl DropGuard {
351 fn from_option(callback: Option<DropCallback>) -> Option<Self> {
352 callback.map(|callback| Self {
353 callback: Some(callback),
354 })
355 }
356}
357
358#[cfg(any(gles, vulkan))]
359impl Drop for DropGuard {
360 fn drop(&mut self) {
361 if let Some(cb) = self.callback.take() {
362 (cb)();
363 }
364 }
365}
366
367#[cfg(any(gles, vulkan))]
368impl fmt::Debug for DropGuard {
369 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
370 f.debug_struct("DropGuard").finish()
371 }
372}
373
374#[derive(Clone, Debug, PartialEq, Eq, Error)]
375pub enum DeviceError {
376 #[error("Out of memory")]
377 OutOfMemory,
378 #[error("Device is lost")]
379 Lost,
380 #[error("Unexpected error variant (driver implementation is at fault)")]
381 Unexpected,
382}
383
384#[allow(dead_code)] // may be unused on some platforms
385#[cold]
386fn hal_usage_error<T: fmt::Display>(txt: T) -> ! {
387 panic!("wgpu-hal invariant was violated (usage error): {txt}")
388}
389
390#[allow(dead_code)] // may be unused on some platforms
391#[cold]
392fn hal_internal_error<T: fmt::Display>(txt: T) -> ! {
393 panic!("wgpu-hal ran into a preventable internal error: {txt}")
394}
395
396#[derive(Clone, Debug, Eq, PartialEq, Error)]
397pub enum ShaderError {
398 #[error("Compilation failed: {0:?}")]
399 Compilation(String),
400 #[error(transparent)]
401 Device(#[from] DeviceError),
402}
403
404#[derive(Clone, Debug, Eq, PartialEq, Error)]
405pub enum PipelineError {
406 #[error("Linkage failed for stage {0:?}: {1}")]
407 Linkage(wgt::ShaderStages, String),
408 #[error("Entry point for stage {0:?} is invalid")]
409 EntryPoint(naga::ShaderStage),
410 #[error(transparent)]
411 Device(#[from] DeviceError),
412 #[error("Pipeline constant error for stage {0:?}: {1}")]
413 PipelineConstants(wgt::ShaderStages, String),
414}
415
416#[derive(Clone, Debug, Eq, PartialEq, Error)]
417pub enum PipelineCacheError {
418 #[error(transparent)]
419 Device(#[from] DeviceError),
420}
421
422#[derive(Clone, Debug, Eq, PartialEq, Error)]
423pub enum SurfaceError {
424 #[error("Surface is lost")]
425 Lost,
426 #[error("Surface is outdated, needs to be re-created")]
427 Outdated,
428 #[error(transparent)]
429 Device(#[from] DeviceError),
430 #[error("Other reason: {0}")]
431 Other(&'static str),
432}
433
434/// Error occurring while trying to create an instance, or create a surface from an instance;
435/// typically relating to the state of the underlying graphics API or hardware.
436#[derive(Clone, Debug, Error)]
437#[error("{message}")]
438pub struct InstanceError {
439 /// These errors are very platform specific, so do not attempt to encode them as an enum.
440 ///
441 /// This message should describe the problem in sufficient detail to be useful for a
442 /// user-to-developer “why won't this work on my machine” bug report, and otherwise follow
443 /// <https://rust-lang.github.io/api-guidelines/interoperability.html#error-types-are-meaningful-and-well-behaved-c-good-err>.
444 message: String,
445
446 /// Underlying error value, if any is available.
447 #[source]
448 source: Option<Arc<dyn Error + Send + Sync + 'static>>,
449}
450
451impl InstanceError {
452 #[allow(dead_code)] // may be unused on some platforms
453 pub(crate) fn new(message: String) -> Self {
454 Self {
455 message,
456 source: None,
457 }
458 }
459 #[allow(dead_code)] // may be unused on some platforms
460 pub(crate) fn with_source(message: String, source: impl Error + Send + Sync + 'static) -> Self {
461 cfg_if::cfg_if! {
462 if #[cfg(supports_ptr_atomics)] {
463 let source = Arc::new(source);
464 } else {
465 // TODO(https://github.com/rust-lang/rust/issues/18598): avoid indirection via Box once arbitrary types support unsized coercion
466 let source: Box<dyn Error + Send + Sync + 'static> = Box::new(source);
467 let source = Arc::from(source);
468 }
469 }
470 Self {
471 message,
472 source: Some(source),
473 }
474 }
475}
476
477/// All the types and methods that make up an implementation on top of a backend.
478///
479/// Only the types that have non-dyn trait bounds have methods on them. Most methods
480/// are either on [`CommandEncoder`] or [`Device`].
481///
482/// The API can be used either through generics (through this trait and its associated
483/// types) or dynamically through the `Dyn*` traits.
484pub trait Api: Clone + fmt::Debug + Sized + WasmNotSendSync + 'static {
485 const VARIANT: wgt::Backend;
486
487 type Instance: DynInstance + Instance<A = Self>;
488 type Surface: DynSurface + Surface<A = Self>;
489 type Adapter: DynAdapter + Adapter<A = Self>;
490 type Device: DynDevice + Device<A = Self>;
491
492 type Queue: DynQueue + Queue<A = Self>;
493 type CommandEncoder: DynCommandEncoder + CommandEncoder<A = Self>;
494
495 /// This API's command buffer type.
496 ///
497 /// The only thing you can do with `CommandBuffer`s is build them
498 /// with a [`CommandEncoder`] and then pass them to
499 /// [`Queue::submit`] for execution, or destroy them by passing
500 /// them to [`CommandEncoder::reset_all`].
501 ///
502 /// [`CommandEncoder`]: Api::CommandEncoder
503 type CommandBuffer: DynCommandBuffer;
504
505 type Buffer: DynBuffer;
506 type Texture: DynTexture;
507 type SurfaceTexture: DynSurfaceTexture + Borrow<Self::Texture>;
508 type TextureView: DynTextureView;
509 type Sampler: DynSampler;
510 type QuerySet: DynQuerySet;
511
512 /// A value you can block on to wait for something to finish.
513 ///
514 /// A `Fence` holds a monotonically increasing [`FenceValue`]. You can call
515 /// [`Device::wait`] to block until a fence reaches or passes a value you
516 /// choose. [`Queue::submit`] can take a `Fence` and a [`FenceValue`] to
517 /// store in it when the submitted work is complete.
518 ///
519 /// Attempting to set a fence to a value less than its current value has no
520 /// effect.
521 ///
522 /// Waiting on a fence returns as soon as the fence reaches *or passes* the
523 /// requested value. This implies that, in order to reliably determine when
524 /// an operation has completed, operations must finish in order of
525 /// increasing fence values: if a higher-valued operation were to finish
526 /// before a lower-valued operation, then waiting for the fence to reach the
527 /// lower value could return before the lower-valued operation has actually
528 /// finished.
529 type Fence: DynFence;
530
531 type BindGroupLayout: DynBindGroupLayout;
532 type BindGroup: DynBindGroup;
533 type PipelineLayout: DynPipelineLayout;
534 type ShaderModule: DynShaderModule;
535 type RenderPipeline: DynRenderPipeline;
536 type ComputePipeline: DynComputePipeline;
537 type PipelineCache: DynPipelineCache;
538
539 type AccelerationStructure: DynAccelerationStructure + 'static;
540}
541
542pub trait Instance: Sized + WasmNotSendSync {
543 type A: Api;
544
545 unsafe fn init(desc: &InstanceDescriptor) -> Result<Self, InstanceError>;
546 unsafe fn create_surface(
547 &self,
548 display_handle: raw_window_handle::RawDisplayHandle,
549 window_handle: raw_window_handle::RawWindowHandle,
550 ) -> Result<<Self::A as Api>::Surface, InstanceError>;
551 /// `surface_hint` is only used by the GLES backend targeting WebGL2
552 unsafe fn enumerate_adapters(
553 &self,
554 surface_hint: Option<&<Self::A as Api>::Surface>,
555 ) -> Vec<ExposedAdapter<Self::A>>;
556}
557
558pub trait Surface: WasmNotSendSync {
559 type A: Api;
560
561 /// Configure `self` to use `device`.
562 ///
563 /// # Safety
564 ///
565 /// - All GPU work using `self` must have been completed.
566 /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
567 /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
568 /// - The surface `self` must not currently be configured to use any other [`Device`].
569 unsafe fn configure(
570 &self,
571 device: &<Self::A as Api>::Device,
572 config: &SurfaceConfiguration,
573 ) -> Result<(), SurfaceError>;
574
575 /// Unconfigure `self` on `device`.
576 ///
577 /// # Safety
578 ///
579    /// - All GPU work that uses `self` must have been completed.
580 /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
581 /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
582 /// - The surface `self` must have been configured on `device`.
583 unsafe fn unconfigure(&self, device: &<Self::A as Api>::Device);
584
585 /// Return the next texture to be presented by `self`, for the caller to draw on.
586 ///
587 /// On success, return an [`AcquiredSurfaceTexture`] representing the
588 /// texture into which the caller should draw the image to be displayed on
589 /// `self`.
590 ///
591 /// If `timeout` elapses before `self` has a texture ready to be acquired,
592 /// return `Ok(None)`. If `timeout` is `None`, wait indefinitely, with no
593 /// timeout.
594 ///
595 /// # Using an [`AcquiredSurfaceTexture`]
596 ///
597 /// On success, this function returns an [`AcquiredSurfaceTexture`] whose
598 /// [`texture`] field is a [`SurfaceTexture`] from which the caller can
599 /// [`borrow`] a [`Texture`] to draw on. The [`AcquiredSurfaceTexture`] also
600 /// carries some metadata about that [`SurfaceTexture`].
601 ///
602 /// All calls to [`Queue::submit`] that draw on that [`Texture`] must also
603 /// include the [`SurfaceTexture`] in the `surface_textures` argument.
604 ///
605 /// When you are done drawing on the texture, you can display it on `self`
606 /// by passing the [`SurfaceTexture`] and `self` to [`Queue::present`].
607 ///
608 /// If you do not wish to display the texture, you must pass the
609 /// [`SurfaceTexture`] to [`self.discard_texture`], so that it can be reused
610 /// by future acquisitions.
611 ///
612 /// # Portability
613 ///
614 /// Some backends can't support a timeout when acquiring a texture. On these
615 /// backends, `timeout` is ignored.
616 ///
617 /// # Safety
618 ///
619 /// - The surface `self` must currently be configured on some [`Device`].
620 ///
621 /// - The `fence` argument must be the same [`Fence`] passed to all calls to
622 /// [`Queue::submit`] that used [`Texture`]s acquired from this surface.
623 ///
624 /// - You may only have one texture acquired from `self` at a time. When
625 /// `acquire_texture` returns `Ok(Some(ast))`, you must pass the returned
626 /// [`SurfaceTexture`] `ast.texture` to either [`Queue::present`] or
627 /// [`Surface::discard_texture`] before calling `acquire_texture` again.
628 ///
629 /// [`texture`]: AcquiredSurfaceTexture::texture
630 /// [`SurfaceTexture`]: Api::SurfaceTexture
631 /// [`borrow`]: alloc::borrow::Borrow::borrow
632 /// [`Texture`]: Api::Texture
633 /// [`Fence`]: Api::Fence
634 /// [`self.discard_texture`]: Surface::discard_texture
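    ///
    /// A sketch of acquiring and presenting one frame (a hypothetical helper;
    /// recording and submitting the actual drawing work is elided, and `fence`
    /// is the fence used for all submissions that draw to this surface):
    ///
    /// ```no_run
    /// use wgpu_hal::{Api, Queue as _, Surface as _};
    ///
    /// unsafe fn present_one_frame<A: Api>(
    ///     surface: &A::Surface,
    ///     queue: &A::Queue,
    ///     fence: &A::Fence,
    /// ) -> Result<(), wgpu_hal::SurfaceError> {
    ///     let Some(acquired) = (unsafe { surface.acquire_texture(None, fence)? }) else {
    ///         return Ok(()); // no texture was acquired
    ///     };
    ///     // ... record and submit command buffers drawing to `acquired.texture`,
    ///     // passing it in `surface_textures` and signaling `fence` ...
    ///     unsafe { queue.present(surface, acquired.texture) }
    /// }
    /// ```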
635 unsafe fn acquire_texture(
636 &self,
637 timeout: Option<core::time::Duration>,
638 fence: &<Self::A as Api>::Fence,
639 ) -> Result<Option<AcquiredSurfaceTexture<Self::A>>, SurfaceError>;
640
641 /// Relinquish an acquired texture without presenting it.
642 ///
643 /// After this call, the texture underlying [`SurfaceTexture`] may be
644 /// returned by subsequent calls to [`self.acquire_texture`].
645 ///
646 /// # Safety
647 ///
648 /// - The surface `self` must currently be configured on some [`Device`].
649 ///
650 /// - `texture` must be a [`SurfaceTexture`] returned by a call to
651 /// [`self.acquire_texture`] that has not yet been passed to
652 /// [`Queue::present`].
653 ///
654 /// [`SurfaceTexture`]: Api::SurfaceTexture
655 /// [`self.acquire_texture`]: Surface::acquire_texture
656 unsafe fn discard_texture(&self, texture: <Self::A as Api>::SurfaceTexture);
657}
658
659pub trait Adapter: WasmNotSendSync {
660 type A: Api;
661
662 unsafe fn open(
663 &self,
664 features: wgt::Features,
665 limits: &wgt::Limits,
666 memory_hints: &wgt::MemoryHints,
667 ) -> Result<OpenDevice<Self::A>, DeviceError>;
668
669 /// Return the set of supported capabilities for a texture format.
670 unsafe fn texture_format_capabilities(
671 &self,
672 format: wgt::TextureFormat,
673 ) -> TextureFormatCapabilities;
674
675 /// Returns the capabilities of working with a specified surface.
676 ///
677 /// `None` means presentation is not supported for it.
678 unsafe fn surface_capabilities(
679 &self,
680 surface: &<Self::A as Api>::Surface,
681 ) -> Option<SurfaceCapabilities>;
682
683 /// Creates a [`PresentationTimestamp`] using the adapter's WSI.
684 ///
685 /// [`PresentationTimestamp`]: wgt::PresentationTimestamp
686 unsafe fn get_presentation_timestamp(&self) -> wgt::PresentationTimestamp;
687}
688
689/// A connection to a GPU and a pool of resources to use with it.
690///
691/// A `wgpu-hal` `Device` represents an open connection to a specific graphics
692/// processor, controlled via the backend [`Device::A`]. A `Device` is mostly
693/// used for creating resources. Each `Device` has an associated [`Queue`] used
694/// for command submission.
695///
696/// On Vulkan a `Device` corresponds to a logical device ([`VkDevice`]). Other
697/// backends don't have an exact analog: for example, [`ID3D12Device`]s and
698/// [`MTLDevice`]s are owned by the backends' [`wgpu_hal::Adapter`]
699/// implementations, and shared by all [`wgpu_hal::Device`]s created from that
700/// `Adapter`.
701///
702/// A `Device`'s life cycle is generally:
703///
704/// 1) Obtain a `Device` and its associated [`Queue`] by calling
705/// [`Adapter::open`].
706///
707/// Alternatively, the backend-specific types that implement [`Adapter`] often
708/// have methods for creating a `wgpu-hal` `Device` from a platform-specific
709/// handle. For example, [`vulkan::Adapter::device_from_raw`] can create a
710/// [`vulkan::Device`] from an [`ash::Device`].
711///
712/// 1) Create resources to use on the device by calling methods like
713/// [`Device::create_texture`] or [`Device::create_shader_module`].
714///
715/// 1) Call [`Device::create_command_encoder`] to obtain a [`CommandEncoder`],
716/// which you can use to build [`CommandBuffer`]s holding commands to be
717/// executed on the GPU.
718///
719/// 1) Call [`Queue::submit`] on the `Device`'s associated [`Queue`] to submit
720/// [`CommandBuffer`]s for execution on the GPU. If needed, call
721/// [`Device::wait`] to wait for them to finish execution.
722///
723/// 1) Free resources with methods like [`Device::destroy_texture`] or
724/// [`Device::destroy_shader_module`].
725///
726/// 1) Drop the device.
727///
728/// [`VkDevice`]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VkDevice
729/// [`ID3D12Device`]: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/nn-d3d12-id3d12device
730/// [`MTLDevice`]: https://developer.apple.com/documentation/metal/mtldevice
731/// [`wgpu_hal::Adapter`]: Adapter
732/// [`wgpu_hal::Device`]: Device
733/// [`vulkan::Adapter::device_from_raw`]: vulkan/struct.Adapter.html#method.device_from_raw
734/// [`vulkan::Device`]: vulkan/struct.Device.html
735/// [`ash::Device`]: https://docs.rs/ash/latest/ash/struct.Device.html
736/// [`CommandBuffer`]: Api::CommandBuffer
737///
738/// # Safety
739///
740/// As with other `wgpu-hal` APIs, [validation] is the caller's
741/// responsibility. Here are the general requirements for all `Device`
742/// methods:
743///
744/// - Any resource passed to a `Device` method must have been created by that
745/// `Device`. For example, a [`Texture`] passed to [`Device::destroy_texture`] must
746/// have been created with the `Device` passed as `self`.
747///
748/// - Resources may not be destroyed if they are used by any submitted command
749/// buffers that have not yet finished execution.
750///
751/// [validation]: index.html#validation-is-the-calling-codes-responsibility-not-wgpu-hals
752/// [`Texture`]: Api::Texture
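///
/// A sketch of the create/use/destroy pattern for a single resource (a
/// hypothetical helper; the [`BufferDescriptor`] is assumed to be prepared by
/// the caller, and the buffer must no longer be in use by the GPU when it is
/// destroyed):
///
/// ```no_run
/// use wgpu_hal::{Api, Device as _};
///
/// unsafe fn buffer_round_trip<A: Api>(
///     device: &A::Device,
///     desc: &wgpu_hal::BufferDescriptor,
/// ) -> Result<(), wgpu_hal::DeviceError> {
///     let buffer = unsafe { device.create_buffer(desc)? };
///     // ... use `buffer` in submitted command buffers and wait for them ...
///     unsafe { device.destroy_buffer(buffer) };
///     Ok(())
/// }
/// ```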
753pub trait Device: WasmNotSendSync {
754 type A: Api;
755
756 /// Creates a new buffer.
757 ///
758 /// The initial usage is `wgt::BufferUses::empty()`.
759 unsafe fn create_buffer(
760 &self,
761 desc: &BufferDescriptor,
762 ) -> Result<<Self::A as Api>::Buffer, DeviceError>;
763
764 /// Free `buffer` and any GPU resources it owns.
765 ///
766 /// Note that backends are allowed to allocate GPU memory for buffers from
767 /// allocation pools, and this call is permitted to simply return `buffer`'s
768 /// storage to that pool, without making it available to other applications.
769 ///
770 /// # Safety
771 ///
772 /// - The given `buffer` must not currently be mapped.
773 unsafe fn destroy_buffer(&self, buffer: <Self::A as Api>::Buffer);
774
775 /// A hook for when a wgpu-core buffer is created from a raw wgpu-hal buffer.
776 unsafe fn add_raw_buffer(&self, buffer: &<Self::A as Api>::Buffer);
777
778 /// Return a pointer to CPU memory mapping the contents of `buffer`.
779 ///
780 /// Buffer mappings are persistent: the buffer may remain mapped on the CPU
781 /// while the GPU reads or writes to it. (Note that `wgpu_core` does not use
782 /// this feature: when a `wgpu_core::Buffer` is unmapped, the underlying
783 /// `wgpu_hal` buffer is also unmapped.)
784 ///
785 /// If this function returns `Ok(mapping)`, then:
786 ///
787 /// - `mapping.ptr` is the CPU address of the start of the mapped memory.
788 ///
789 /// - If `mapping.is_coherent` is `true`, then CPU writes to the mapped
790 /// memory are immediately visible on the GPU, and vice versa.
791 ///
792 /// # Safety
793 ///
794 /// - The given `buffer` must have been created with the [`MAP_READ`] or
795 /// [`MAP_WRITE`] flags set in [`BufferDescriptor::usage`].
796 ///
797 /// - The given `range` must fall within the size of `buffer`.
798 ///
799 /// - The caller must avoid data races between the CPU and the GPU. A data
800 /// race is any pair of accesses to a particular byte, one of which is a
801 /// write, that are not ordered with respect to each other by some sort of
802 /// synchronization operation.
803 ///
804 /// - If this function returns `Ok(mapping)` and `mapping.is_coherent` is
805 /// `false`, then:
806 ///
807 /// - Every CPU write to a mapped byte followed by a GPU read of that byte
808 /// must have at least one call to [`Device::flush_mapped_ranges`]
809 /// covering that byte that occurs between those two accesses.
810 ///
811 /// - Every GPU write to a mapped byte followed by a CPU read of that byte
812 /// must have at least one call to [`Device::invalidate_mapped_ranges`]
813 /// covering that byte that occurs between those two accesses.
814 ///
815 /// Note that the data race rule above requires that all such access pairs
816 /// be ordered, so it is meaningful to talk about what must occur
817 /// "between" them.
818 ///
819 /// - Zero-sized mappings are not allowed.
820 ///
821 /// - The returned [`BufferMapping::ptr`] must not be used after a call to
822 /// [`Device::unmap_buffer`].
823 ///
824 /// [`MAP_READ`]: wgt::BufferUses::MAP_READ
825 /// [`MAP_WRITE`]: wgt::BufferUses::MAP_WRITE
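    ///
    /// A sketch of a CPU write that honors the coherence rules above (a
    /// hypothetical helper; `buffer` is assumed to be mappable for writing,
    /// at least 256 bytes long, and not in use by the GPU):
    ///
    /// ```no_run
    /// use wgpu_hal::{Api, Device as _};
    ///
    /// unsafe fn zero_first_256_bytes<A: Api>(
    ///     device: &A::Device,
    ///     buffer: &A::Buffer,
    /// ) -> Result<(), wgpu_hal::DeviceError> {
    ///     let mapping = unsafe { device.map_buffer(buffer, 0..256)? };
    ///     unsafe {
    ///         core::ptr::write_bytes(mapping.ptr.as_ptr(), 0, 256);
    ///         if !mapping.is_coherent {
    ///             // Make the CPU write visible to later GPU reads.
    ///             device.flush_mapped_ranges(buffer, core::iter::once(0..256));
    ///         }
    ///         device.unmap_buffer(buffer);
    ///     }
    ///     Ok(())
    /// }
    /// ```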
826 unsafe fn map_buffer(
827 &self,
828 buffer: &<Self::A as Api>::Buffer,
829 range: MemoryRange,
830 ) -> Result<BufferMapping, DeviceError>;
831
832 /// Remove the mapping established by the last call to [`Device::map_buffer`].
833 ///
834 /// # Safety
835 ///
836 /// - The given `buffer` must be currently mapped.
837 unsafe fn unmap_buffer(&self, buffer: &<Self::A as Api>::Buffer);
838
839 /// Indicate that CPU writes to mapped buffer memory should be made visible to the GPU.
840 ///
841 /// # Safety
842 ///
843 /// - The given `buffer` must be currently mapped.
844 ///
845 /// - All ranges produced by `ranges` must fall within `buffer`'s size.
846 unsafe fn flush_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
847 where
848 I: Iterator<Item = MemoryRange>;
849
850 /// Indicate that GPU writes to mapped buffer memory should be made visible to the CPU.
851 ///
852 /// # Safety
853 ///
854 /// - The given `buffer` must be currently mapped.
855 ///
856 /// - All ranges produced by `ranges` must fall within `buffer`'s size.
857 unsafe fn invalidate_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
858 where
859 I: Iterator<Item = MemoryRange>;
860
861 /// Creates a new texture.
862 ///
863 /// The initial usage for all subresources is `wgt::TextureUses::UNINITIALIZED`.
864 unsafe fn create_texture(
865 &self,
866 desc: &TextureDescriptor,
867 ) -> Result<<Self::A as Api>::Texture, DeviceError>;
868 unsafe fn destroy_texture(&self, texture: <Self::A as Api>::Texture);
869
870 /// A hook for when a wgpu-core texture is created from a raw wgpu-hal texture.
871 unsafe fn add_raw_texture(&self, texture: &<Self::A as Api>::Texture);
872
873 unsafe fn create_texture_view(
874 &self,
875 texture: &<Self::A as Api>::Texture,
876 desc: &TextureViewDescriptor,
877 ) -> Result<<Self::A as Api>::TextureView, DeviceError>;
878 unsafe fn destroy_texture_view(&self, view: <Self::A as Api>::TextureView);
879 unsafe fn create_sampler(
880 &self,
881 desc: &SamplerDescriptor,
882 ) -> Result<<Self::A as Api>::Sampler, DeviceError>;
883 unsafe fn destroy_sampler(&self, sampler: <Self::A as Api>::Sampler);
884
885 /// Create a fresh [`CommandEncoder`].
886 ///
887 /// The new `CommandEncoder` is in the "closed" state.
888 unsafe fn create_command_encoder(
889 &self,
890 desc: &CommandEncoderDescriptor<<Self::A as Api>::Queue>,
891 ) -> Result<<Self::A as Api>::CommandEncoder, DeviceError>;
892
893 /// Creates a bind group layout.
894 unsafe fn create_bind_group_layout(
895 &self,
896 desc: &BindGroupLayoutDescriptor,
897 ) -> Result<<Self::A as Api>::BindGroupLayout, DeviceError>;
898 unsafe fn destroy_bind_group_layout(&self, bg_layout: <Self::A as Api>::BindGroupLayout);
899 unsafe fn create_pipeline_layout(
900 &self,
901 desc: &PipelineLayoutDescriptor<<Self::A as Api>::BindGroupLayout>,
902 ) -> Result<<Self::A as Api>::PipelineLayout, DeviceError>;
903 unsafe fn destroy_pipeline_layout(&self, pipeline_layout: <Self::A as Api>::PipelineLayout);
904
905 #[allow(clippy::type_complexity)]
906 unsafe fn create_bind_group(
907 &self,
908 desc: &BindGroupDescriptor<
909 <Self::A as Api>::BindGroupLayout,
910 <Self::A as Api>::Buffer,
911 <Self::A as Api>::Sampler,
912 <Self::A as Api>::TextureView,
913 <Self::A as Api>::AccelerationStructure,
914 >,
915 ) -> Result<<Self::A as Api>::BindGroup, DeviceError>;
916 unsafe fn destroy_bind_group(&self, group: <Self::A as Api>::BindGroup);
917
918 unsafe fn create_shader_module(
919 &self,
920 desc: &ShaderModuleDescriptor,
921 shader: ShaderInput,
922 ) -> Result<<Self::A as Api>::ShaderModule, ShaderError>;
923 unsafe fn destroy_shader_module(&self, module: <Self::A as Api>::ShaderModule);
924
925 #[allow(clippy::type_complexity)]
926 unsafe fn create_render_pipeline(
927 &self,
928 desc: &RenderPipelineDescriptor<
929 <Self::A as Api>::PipelineLayout,
930 <Self::A as Api>::ShaderModule,
931 <Self::A as Api>::PipelineCache,
932 >,
933 ) -> Result<<Self::A as Api>::RenderPipeline, PipelineError>;
934 unsafe fn destroy_render_pipeline(&self, pipeline: <Self::A as Api>::RenderPipeline);
935
936 #[allow(clippy::type_complexity)]
937 unsafe fn create_compute_pipeline(
938 &self,
939 desc: &ComputePipelineDescriptor<
940 <Self::A as Api>::PipelineLayout,
941 <Self::A as Api>::ShaderModule,
942 <Self::A as Api>::PipelineCache,
943 >,
944 ) -> Result<<Self::A as Api>::ComputePipeline, PipelineError>;
945 unsafe fn destroy_compute_pipeline(&self, pipeline: <Self::A as Api>::ComputePipeline);
946
947 unsafe fn create_pipeline_cache(
948 &self,
949 desc: &PipelineCacheDescriptor<'_>,
950 ) -> Result<<Self::A as Api>::PipelineCache, PipelineCacheError>;
951 fn pipeline_cache_validation_key(&self) -> Option<[u8; 16]> {
952 None
953 }
954 unsafe fn destroy_pipeline_cache(&self, cache: <Self::A as Api>::PipelineCache);
955
956 unsafe fn create_query_set(
957 &self,
958 desc: &wgt::QuerySetDescriptor<Label>,
959 ) -> Result<<Self::A as Api>::QuerySet, DeviceError>;
960 unsafe fn destroy_query_set(&self, set: <Self::A as Api>::QuerySet);
961 unsafe fn create_fence(&self) -> Result<<Self::A as Api>::Fence, DeviceError>;
962 unsafe fn destroy_fence(&self, fence: <Self::A as Api>::Fence);
963 unsafe fn get_fence_value(
964 &self,
965 fence: &<Self::A as Api>::Fence,
966 ) -> Result<FenceValue, DeviceError>;
967
968 /// Wait for `fence` to reach `value`.
969 ///
970 /// Operations like [`Queue::submit`] can accept a [`Fence`] and a
971 /// [`FenceValue`] to store in it, so you can use this `wait` function
972 /// to wait for a given queue submission to finish execution.
973 ///
974 /// The `value` argument must be a value that some actual operation you have
975 /// already presented to the device is going to store in `fence`. You cannot
976 /// wait for values yet to be submitted. (This restriction accommodates
977 /// implementations like the `vulkan` backend's [`FencePool`] that must
978 /// allocate a distinct synchronization object for each fence value one is
979 /// able to wait for.)
980 ///
981 /// Calling `wait` with a lower [`FenceValue`] than `fence`'s current value
982 /// returns immediately.
983 ///
984    /// If `timeout` is `None`, this function blocks indefinitely until `fence`
985    /// reaches `value` or an error is encountered.
986 ///
987 /// Returns `Ok(true)` on success and `Ok(false)` on timeout.
988 ///
989 /// [`Fence`]: Api::Fence
990 /// [`FencePool`]: vulkan/enum.Fence.html#variant.FencePool
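    ///
    /// A sketch of waiting for a submission to finish (a hypothetical helper;
    /// `fence` was created with [`Device::create_fence`] and `command_buffer`
    /// was recorded on this device):
    ///
    /// ```no_run
    /// use wgpu_hal::{Api, Device as _, Queue as _};
    ///
    /// unsafe fn submit_and_wait<A: Api>(
    ///     device: &A::Device,
    ///     queue: &A::Queue,
    ///     fence: &mut A::Fence,
    ///     command_buffer: &A::CommandBuffer,
    /// ) -> Result<(), wgpu_hal::DeviceError> {
    ///     // Signal one past the fence's current value when the work completes.
    ///     let value = unsafe { device.get_fence_value(fence)? } + 1;
    ///     unsafe { queue.submit(&[command_buffer], &[], (&mut *fence, value))? };
    ///     // Block (no timeout) until the GPU reaches `value`.
    ///     let reached = unsafe { device.wait(fence, value, None)? };
    ///     debug_assert!(reached);
    ///     Ok(())
    /// }
    /// ```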
991 unsafe fn wait(
992 &self,
993 fence: &<Self::A as Api>::Fence,
994 value: FenceValue,
995 timeout: Option<core::time::Duration>,
996 ) -> Result<bool, DeviceError>;
997
998 /// Start a graphics debugger capture.
999 ///
1000 /// # Safety
1001 ///
1002 /// See [`wgpu::Device::start_graphics_debugger_capture`][api] for more details.
1003 ///
1004 /// [api]: ../wgpu/struct.Device.html#method.start_graphics_debugger_capture
1005 unsafe fn start_graphics_debugger_capture(&self) -> bool;
1006
1007 /// Stop a graphics debugger capture.
1008 ///
1009 /// # Safety
1010 ///
1011 /// See [`wgpu::Device::stop_graphics_debugger_capture`][api] for more details.
1012 ///
1013 /// [api]: ../wgpu/struct.Device.html#method.stop_graphics_debugger_capture
1014 unsafe fn stop_graphics_debugger_capture(&self);
1015
1016 #[allow(unused_variables)]
1017 unsafe fn pipeline_cache_get_data(
1018 &self,
1019 cache: &<Self::A as Api>::PipelineCache,
1020 ) -> Option<Vec<u8>> {
1021 None
1022 }
1023
1024 unsafe fn create_acceleration_structure(
1025 &self,
1026 desc: &AccelerationStructureDescriptor,
1027 ) -> Result<<Self::A as Api>::AccelerationStructure, DeviceError>;
1028 unsafe fn get_acceleration_structure_build_sizes(
1029 &self,
1030 desc: &GetAccelerationStructureBuildSizesDescriptor<<Self::A as Api>::Buffer>,
1031 ) -> AccelerationStructureBuildSizes;
1032 unsafe fn get_acceleration_structure_device_address(
1033 &self,
1034 acceleration_structure: &<Self::A as Api>::AccelerationStructure,
1035 ) -> wgt::BufferAddress;
1036 unsafe fn destroy_acceleration_structure(
1037 &self,
1038 acceleration_structure: <Self::A as Api>::AccelerationStructure,
1039 );
1040 fn tlas_instance_to_bytes(&self, instance: TlasInstance) -> Vec<u8>;
1041
1042 fn get_internal_counters(&self) -> wgt::HalCounters;
1043
1044 fn generate_allocator_report(&self) -> Option<wgt::AllocatorReport> {
1045 None
1046 }
1047
1048 fn check_if_oom(&self) -> Result<(), DeviceError>;
1049}
1050
1051pub trait Queue: WasmNotSendSync {
1052 type A: Api;
1053
1054 /// Submit `command_buffers` for execution on GPU.
1055 ///
1056 /// Update `fence` to `value` when the operation is complete. See
1057 /// [`Fence`] for details.
1058 ///
1059 /// All command buffers submitted to a `wgpu_hal` queue are executed in the
1060 /// order they're submitted, with each buffer able to observe the effects of
1061 /// previous buffers' execution. Specifically:
1062 ///
1063 /// - If two calls to `submit` on a single `Queue` occur in a particular
1064 /// order (that is, they happen on the same thread, or on two threads that
1065 /// have synchronized to establish an ordering), then the first
1066 /// submission's commands all complete execution before any of the second
1067 /// submission's commands begin. All results produced by one submission
1068 /// are visible to the next.
1069 ///
1070 /// - Within a submission, command buffers execute in the order in which they
1071 /// appear in `command_buffers`. All results produced by one buffer are
1072 /// visible to the next.
1073 ///
1074 /// If two calls to `submit` on a single `Queue` from different threads are
1075 /// not synchronized to occur in a particular order, they must pass distinct
1076 /// [`Fence`]s. As explained in the [`Fence`] documentation, waiting for
1077 /// operations to complete is only trustworthy when operations finish in
1078 /// order of increasing fence value, but submissions from different threads
1079 /// cannot determine how to order the fence values if the submissions
1080 /// themselves are unordered. If each thread uses a separate [`Fence`], this
1081 /// problem does not arise.
1082 ///
1083 /// # Safety
1084 ///
1085 /// - Each [`CommandBuffer`][cb] in `command_buffers` must have been created
1086 /// from a [`CommandEncoder`][ce] that was constructed from the
1087 /// [`Device`][d] associated with this [`Queue`].
1088 ///
1089 /// - Each [`CommandBuffer`][cb] must remain alive until the submitted
1090 /// commands have finished execution. Since command buffers must not
1091 /// outlive their encoders, this implies that the encoders must remain
1092 /// alive as well.
1093 ///
1094 /// - All resources used by a submitted [`CommandBuffer`][cb]
1095 /// ([`Texture`][t]s, [`BindGroup`][bg]s, [`RenderPipeline`][rp]s, and so
1096 /// on) must remain alive until the command buffer finishes execution.
1097 ///
1098 /// - Every [`SurfaceTexture`][st] that any command in `command_buffers`
1099 /// writes to must appear in the `surface_textures` argument.
1100 ///
1101 /// - No [`SurfaceTexture`][st] may appear in the `surface_textures`
1102 /// argument more than once.
1103 ///
1104 /// - Each [`SurfaceTexture`][st] in `surface_textures` must be configured
1105 /// for use with the [`Device`][d] associated with this [`Queue`],
1106 /// typically by calling [`Surface::configure`].
1107 ///
1108 /// - All calls to this function that include a given [`SurfaceTexture`][st]
1109 /// in `surface_textures` must use the same [`Fence`].
1110 ///
1111 /// - The [`Fence`] passed as `signal_fence.0` must remain alive until
1112 /// all submissions that will signal it have completed.
1113 ///
1114 /// [`Fence`]: Api::Fence
1115 /// [cb]: Api::CommandBuffer
1116 /// [ce]: Api::CommandEncoder
1117 /// [d]: Api::Device
1118 /// [t]: Api::Texture
1119 /// [bg]: Api::BindGroup
1120 /// [rp]: Api::RenderPipeline
1121 /// [st]: Api::SurfaceTexture
1122 unsafe fn submit(
1123 &self,
1124 command_buffers: &[&<Self::A as Api>::CommandBuffer],
1125 surface_textures: &[&<Self::A as Api>::SurfaceTexture],
1126 signal_fence: (&mut <Self::A as Api>::Fence, FenceValue),
1127 ) -> Result<(), DeviceError>;
1128 unsafe fn present(
1129 &self,
1130 surface: &<Self::A as Api>::Surface,
1131 texture: <Self::A as Api>::SurfaceTexture,
1132 ) -> Result<(), SurfaceError>;
1133 unsafe fn get_timestamp_period(&self) -> f32;
1134}
1135
1136/// Encoder and allocation pool for `CommandBuffer`s.
1137///
1138/// A `CommandEncoder` not only constructs `CommandBuffer`s but also
1139/// acts as the allocation pool that owns the buffers' underlying
1140/// storage. Thus, `CommandBuffer`s must not outlive the
1141/// `CommandEncoder` that created them.
1142///
1143/// The life cycle of a `CommandBuffer` is as follows:
1144///
1145/// - Call [`Device::create_command_encoder`] to create a new
1146/// `CommandEncoder`, in the "closed" state.
1147///
1148/// - Call `begin_encoding` on a closed `CommandEncoder` to begin
1149/// recording commands. This puts the `CommandEncoder` in the
1150/// "recording" state.
1151///
1152/// - Call methods like `copy_buffer_to_buffer`, `begin_render_pass`,
1153/// etc. on a "recording" `CommandEncoder` to add commands to the
1154/// list. (If an error occurs, you must call `discard_encoding`; see
1155/// below.)
1156///
1157/// - Call `end_encoding` on a recording `CommandEncoder` to close the
1158/// encoder and construct a fresh `CommandBuffer` consisting of the
1159/// list of commands recorded up to that point.
1160///
1161/// - Call `discard_encoding` on a recording `CommandEncoder` to drop
1162/// the commands recorded thus far and close the encoder. This is
1163/// the only safe thing to do on a `CommandEncoder` if an error has
1164/// occurred while recording commands.
1165///
1166/// - Call `reset_all` on a closed `CommandEncoder`, passing all the
1167/// live `CommandBuffer`s built from it. All the `CommandBuffer`s
1168/// are destroyed, and their resources are freed.
1169///
1170/// # Safety
1171///
1172/// - The `CommandEncoder` must be in the states described above to
1173/// make the given calls.
1174///
1175/// - A `CommandBuffer` that has been submitted for execution on the
1176/// GPU must live until its execution is complete.
1177///
1178/// - A `CommandBuffer` must not outlive the `CommandEncoder` that
1179/// built it.
1180///
1181/// It is the user's responsibility to meet these requirements. This
1182/// allows `CommandEncoder` implementations to keep their state
1183/// tracking to a minimum.
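///
/// A sketch of one recording cycle (a hypothetical helper; `encoder` was
/// created by [`Device::create_command_encoder`] and is currently "closed",
/// and `buffer`'s current usage allows it to be cleared):
///
/// ```no_run
/// use wgpu_hal::{Api, CommandEncoder as _};
///
/// unsafe fn record_clear<A: Api>(
///     encoder: &mut A::CommandEncoder,
///     buffer: &A::Buffer,
/// ) -> Result<A::CommandBuffer, wgpu_hal::DeviceError> {
///     unsafe { encoder.begin_encoding(Some("clear"))? }; // "closed" -> "recording"
///     unsafe { encoder.clear_buffer(buffer, 0..256) };
///     // On a recording error, call `discard_encoding` instead of `end_encoding`.
///     unsafe { encoder.end_encoding() } // "recording" -> "closed"
/// }
/// ```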
1184pub trait CommandEncoder: WasmNotSendSync + fmt::Debug {
1185 type A: Api;
1186
1187 /// Begin encoding a new command buffer.
1188 ///
1189 /// This puts this `CommandEncoder` in the "recording" state.
1190 ///
1191 /// # Safety
1192 ///
1193 /// This `CommandEncoder` must be in the "closed" state.
1194 unsafe fn begin_encoding(&mut self, label: Label) -> Result<(), DeviceError>;
1195
1196 /// Discard the command list under construction.
1197 ///
1198 /// If an error has occurred while recording commands, this
1199 /// is the only safe thing to do with the encoder.
1200 ///
1201 /// This puts this `CommandEncoder` in the "closed" state.
1202 ///
1203 /// # Safety
1204 ///
1205 /// This `CommandEncoder` must be in the "recording" state.
1206 ///
1207 /// Callers must not assume that implementations of this
1208 /// function are idempotent, and thus should not call it
1209 /// multiple times in a row.
1210 unsafe fn discard_encoding(&mut self);
1211
1212 /// Return a fresh [`CommandBuffer`] holding the recorded commands.
1213 ///
1214 /// The returned [`CommandBuffer`] holds all the commands recorded
1215 /// on this `CommandEncoder` since the last call to
1216 /// [`begin_encoding`].
1217 ///
1218 /// This puts this `CommandEncoder` in the "closed" state.
1219 ///
1220 /// # Safety
1221 ///
1222 /// This `CommandEncoder` must be in the "recording" state.
1223 ///
1224 /// The returned [`CommandBuffer`] must not outlive this
1225 /// `CommandEncoder`. Implementations are allowed to build
1226 /// `CommandBuffer`s that depend on storage owned by this
1227 /// `CommandEncoder`.
1228 ///
1229 /// [`CommandBuffer`]: Api::CommandBuffer
1230 /// [`begin_encoding`]: CommandEncoder::begin_encoding
1231 unsafe fn end_encoding(&mut self) -> Result<<Self::A as Api>::CommandBuffer, DeviceError>;
1232
1233 /// Reclaim all resources belonging to this `CommandEncoder`.
1234 ///
1235 /// # Safety
1236 ///
1237 /// This `CommandEncoder` must be in the "closed" state.
1238 ///
1239 /// The `command_buffers` iterator must produce all the live
1240 /// [`CommandBuffer`]s built using this `CommandEncoder` --- that
1241 /// is, every extant `CommandBuffer` returned from `end_encoding`.
1242 ///
1243 /// [`CommandBuffer`]: Api::CommandBuffer
1244 unsafe fn reset_all<I>(&mut self, command_buffers: I)
1245 where
1246 I: Iterator<Item = <Self::A as Api>::CommandBuffer>;
1247
1248 unsafe fn transition_buffers<'a, T>(&mut self, barriers: T)
1249 where
1250 T: Iterator<Item = BufferBarrier<'a, <Self::A as Api>::Buffer>>;
1251
1252 unsafe fn transition_textures<'a, T>(&mut self, barriers: T)
1253 where
1254 T: Iterator<Item = TextureBarrier<'a, <Self::A as Api>::Texture>>;
1255
1256 // copy operations
1257
1258 unsafe fn clear_buffer(&mut self, buffer: &<Self::A as Api>::Buffer, range: MemoryRange);
1259
1260 unsafe fn copy_buffer_to_buffer<T>(
1261 &mut self,
1262 src: &<Self::A as Api>::Buffer,
1263 dst: &<Self::A as Api>::Buffer,
1264 regions: T,
1265 ) where
1266 T: Iterator<Item = BufferCopy>;
1267
1268 /// Copy from an external image to an internal texture.
1269 /// Works with a single array layer.
1270    /// Note: `dst`'s current usage must be `wgt::TextureUses::COPY_DST`.
1271 /// Note: the copy extent is in physical size (rounded to the block size)
1272 #[cfg(webgl)]
1273 unsafe fn copy_external_image_to_texture<T>(
1274 &mut self,
1275 src: &wgt::CopyExternalImageSourceInfo,
1276 dst: &<Self::A as Api>::Texture,
1277 dst_premultiplication: bool,
1278 regions: T,
1279 ) where
1280 T: Iterator<Item = TextureCopy>;
1281
1282 /// Copy from one texture to another.
1283 /// Works with a single array layer.
1284    /// Note: `dst`'s current usage must be `wgt::TextureUses::COPY_DST`.
1285 /// Note: the copy extent is in physical size (rounded to the block size)
1286 unsafe fn copy_texture_to_texture<T>(
1287 &mut self,
1288 src: &<Self::A as Api>::Texture,
1289 src_usage: wgt::TextureUses,
1290 dst: &<Self::A as Api>::Texture,
1291 regions: T,
1292 ) where
1293 T: Iterator<Item = TextureCopy>;
1294
1295 /// Copy from buffer to texture.
1296 /// Works with a single array layer.
1297    /// Note: `dst`'s current usage must be `wgt::TextureUses::COPY_DST`.
1298 /// Note: the copy extent is in physical size (rounded to the block size)
1299 unsafe fn copy_buffer_to_texture<T>(
1300 &mut self,
1301 src: &<Self::A as Api>::Buffer,
1302 dst: &<Self::A as Api>::Texture,
1303 regions: T,
1304 ) where
1305 T: Iterator<Item = BufferTextureCopy>;
1306
1307 /// Copy from texture to buffer.
1308 /// Works with a single array layer.
1309 /// Note: the copy extent is in physical size (rounded to the block size)
1310 unsafe fn copy_texture_to_buffer<T>(
1311 &mut self,
1312 src: &<Self::A as Api>::Texture,
1313 src_usage: wgt::TextureUses,
1314 dst: &<Self::A as Api>::Buffer,
1315 regions: T,
1316 ) where
1317 T: Iterator<Item = BufferTextureCopy>;
1318
1319 unsafe fn copy_acceleration_structure_to_acceleration_structure(
1320 &mut self,
1321 src: &<Self::A as Api>::AccelerationStructure,
1322 dst: &<Self::A as Api>::AccelerationStructure,
1323 copy: wgt::AccelerationStructureCopy,
1324 );
1325 // pass common
1326
1327 /// Sets the bind group at `index` to `group`.
1328 ///
1329 /// If this is not the first call to `set_bind_group` within the current
1330 /// render or compute pass:
1331 ///
1332 /// - If `layout` contains `n` bind group layouts, then any previously set
1333 /// bind groups at indices `n` or higher are cleared.
1334 ///
1335 /// - If the first `m` bind group layouts of `layout` are equal to those of
1336 /// the previously passed layout, but no more, then any previously set
1337 /// bind groups at indices `m` or higher are cleared.
1338 ///
1339 /// It follows from the above that passing the same layout as before doesn't
1340 /// clear any bind groups.
1341 ///
1342 /// # Safety
1343 ///
1344 /// - This [`CommandEncoder`] must be within a render or compute pass.
1345 ///
1346 /// - `index` must be the valid index of some bind group layout in `layout`.
1347 /// Call this the "relevant bind group layout".
1348 ///
1349 /// - The layout of `group` must be equal to the relevant bind group layout.
1350 ///
1351 /// - The length of `dynamic_offsets` must match the number of buffer
1352 /// bindings [with dynamic offsets][hdo] in the relevant bind group
1353 /// layout.
1354 ///
1355 /// - If those buffer bindings are ordered by increasing [`binding` number]
1356 /// and paired with elements from `dynamic_offsets`, then each offset must
1357 /// be a valid offset for the binding's corresponding buffer in `group`.
1358 ///
1359 /// [hdo]: wgt::BindingType::Buffer::has_dynamic_offset
1360 /// [`binding` number]: wgt::BindGroupLayoutEntry::binding
1361 unsafe fn set_bind_group(
1362 &mut self,
1363 layout: &<Self::A as Api>::PipelineLayout,
1364 index: u32,
1365 group: &<Self::A as Api>::BindGroup,
1366 dynamic_offsets: &[wgt::DynamicOffset],
1367 );
1368
1369 /// Sets a range in push constant data.
1370 ///
1371 /// IMPORTANT: while the data is passed as words, the offset is in bytes!
1372 ///
1373 /// # Safety
1374 ///
1375 /// - `offset_bytes` must be a multiple of 4.
1376 /// - The range of push constants written must be valid for the pipeline layout at draw time.
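    ///
    /// For example, writing two words (8 bytes) at byte offset 8 of the
    /// vertex-stage push constants (a sketch; `layout` must declare a matching
    /// push-constant range):
    ///
    /// ```no_run
    /// use wgpu_hal::{Api, CommandEncoder as _};
    ///
    /// unsafe fn write_two_words<A: Api>(
    ///     encoder: &mut A::CommandEncoder,
    ///     layout: &A::PipelineLayout,
    /// ) {
    ///     // `data` is given in 32-bit words; the offset is given in bytes.
    ///     unsafe {
    ///         encoder.set_push_constants(layout, wgpu_types::ShaderStages::VERTEX, 8, &[1, 2]);
    ///     }
    /// }
    /// ```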
1377 unsafe fn set_push_constants(
1378 &mut self,
1379 layout: &<Self::A as Api>::PipelineLayout,
1380 stages: wgt::ShaderStages,
1381 offset_bytes: u32,
1382 data: &[u32],
1383 );
1384
1385 unsafe fn insert_debug_marker(&mut self, label: &str);
1386 unsafe fn begin_debug_marker(&mut self, group_label: &str);
1387 unsafe fn end_debug_marker(&mut self);
1388
1389 // queries
1390
1391 /// # Safety:
1392 ///
1393 /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1394 unsafe fn begin_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1395    /// # Safety
1396 ///
1397 /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1398 unsafe fn end_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1399 unsafe fn write_timestamp(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1400 unsafe fn reset_queries(&mut self, set: &<Self::A as Api>::QuerySet, range: Range<u32>);
1401 unsafe fn copy_query_results(
1402 &mut self,
1403 set: &<Self::A as Api>::QuerySet,
1404 range: Range<u32>,
1405 buffer: &<Self::A as Api>::Buffer,
1406 offset: wgt::BufferAddress,
1407 stride: wgt::BufferSize,
1408 );
1409
1410 // render passes
1411
1412 /// Begin a new render pass, clearing all active bindings.
1413 ///
1414 /// This clears any bindings established by the following calls:
1415 ///
1416 /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1417 /// - [`set_push_constants`](CommandEncoder::set_push_constants)
1418 /// - [`begin_query`](CommandEncoder::begin_query)
1419 /// - [`set_render_pipeline`](CommandEncoder::set_render_pipeline)
1420 /// - [`set_index_buffer`](CommandEncoder::set_index_buffer)
1421 /// - [`set_vertex_buffer`](CommandEncoder::set_vertex_buffer)
1422 ///
1423 /// # Safety
1424 ///
1425 /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1426 /// by a call to [`end_render_pass`].
1427 ///
1428 /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1429 /// by a call to [`end_compute_pass`].
1430 ///
1431 /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1432 /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1433 /// [`end_render_pass`]: CommandEncoder::end_render_pass
1434 /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
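    ///
    /// A sketch of the expected pairing (`encoder` and `desc` are assumed to
    /// exist):
    ///
    /// ```ignore
    /// encoder.begin_render_pass(&desc)?;
    /// // set_render_pipeline, set_bind_group, draw, ...
    /// encoder.end_render_pass();
    /// // Bindings established inside the pass are no longer active here.
    /// ```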
1435 unsafe fn begin_render_pass(
1436 &mut self,
1437 desc: &RenderPassDescriptor<<Self::A as Api>::QuerySet, <Self::A as Api>::TextureView>,
1438 ) -> Result<(), DeviceError>;
1439
1440 /// End the current render pass.
1441 ///
1442 /// # Safety
1443 ///
1444 /// - There must have been a prior call to [`begin_render_pass`] on this [`CommandEncoder`]
1445 /// that has not been followed by a call to [`end_render_pass`].
1446 ///
1447 /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1448 /// [`end_render_pass`]: CommandEncoder::end_render_pass
1449 unsafe fn end_render_pass(&mut self);
1450
1451 unsafe fn set_render_pipeline(&mut self, pipeline: &<Self::A as Api>::RenderPipeline);
1452
1453 unsafe fn set_index_buffer<'a>(
1454 &mut self,
1455 binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1456 format: wgt::IndexFormat,
1457 );
1458 unsafe fn set_vertex_buffer<'a>(
1459 &mut self,
1460 index: u32,
1461 binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1462 );
1463 unsafe fn set_viewport(&mut self, rect: &Rect<f32>, depth_range: Range<f32>);
1464 unsafe fn set_scissor_rect(&mut self, rect: &Rect<u32>);
1465 unsafe fn set_stencil_reference(&mut self, value: u32);
1466 unsafe fn set_blend_constants(&mut self, color: &[f32; 4]);
1467
1468 unsafe fn draw(
1469 &mut self,
1470 first_vertex: u32,
1471 vertex_count: u32,
1472 first_instance: u32,
1473 instance_count: u32,
1474 );
1475 unsafe fn draw_indexed(
1476 &mut self,
1477 first_index: u32,
1478 index_count: u32,
1479 base_vertex: i32,
1480 first_instance: u32,
1481 instance_count: u32,
1482 );
1483 unsafe fn draw_indirect(
1484 &mut self,
1485 buffer: &<Self::A as Api>::Buffer,
1486 offset: wgt::BufferAddress,
1487 draw_count: u32,
1488 );
1489 unsafe fn draw_indexed_indirect(
1490 &mut self,
1491 buffer: &<Self::A as Api>::Buffer,
1492 offset: wgt::BufferAddress,
1493 draw_count: u32,
1494 );
1495 unsafe fn draw_indirect_count(
1496 &mut self,
1497 buffer: &<Self::A as Api>::Buffer,
1498 offset: wgt::BufferAddress,
1499 count_buffer: &<Self::A as Api>::Buffer,
1500 count_offset: wgt::BufferAddress,
1501 max_count: u32,
1502 );
1503 unsafe fn draw_indexed_indirect_count(
1504 &mut self,
1505 buffer: &<Self::A as Api>::Buffer,
1506 offset: wgt::BufferAddress,
1507 count_buffer: &<Self::A as Api>::Buffer,
1508 count_offset: wgt::BufferAddress,
1509 max_count: u32,
1510 );
1511 unsafe fn draw_mesh_tasks(
1512 &mut self,
1513 group_count_x: u32,
1514 group_count_y: u32,
1515 group_count_z: u32,
1516 );
1517 unsafe fn draw_mesh_tasks_indirect(
1518 &mut self,
1519 buffer: &<Self::A as Api>::Buffer,
1520 offset: wgt::BufferAddress,
1521 draw_count: u32,
1522 );
1523 unsafe fn draw_mesh_tasks_indirect_count(
1524 &mut self,
1525 buffer: &<Self::A as Api>::Buffer,
1526 offset: wgt::BufferAddress,
1527 count_buffer: &<Self::A as Api>::Buffer,
1528 count_offset: wgt::BufferAddress,
1529 max_count: u32,
1530 );
1531
1532 // compute passes
1533
1534 /// Begin a new compute pass, clearing all active bindings.
1535 ///
1536 /// This clears any bindings established by the following calls:
1537 ///
1538 /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1539 /// - [`set_push_constants`](CommandEncoder::set_push_constants)
1540 /// - [`begin_query`](CommandEncoder::begin_query)
1541 /// - [`set_compute_pipeline`](CommandEncoder::set_compute_pipeline)
1542 ///
1543 /// # Safety
1544 ///
1545 /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1546 /// by a call to [`end_render_pass`].
1547 ///
1548 /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1549 /// by a call to [`end_compute_pass`].
1550 ///
1551 /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1552 /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1553 /// [`end_render_pass`]: CommandEncoder::end_render_pass
1554 /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
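    ///
    /// A sketch of the expected pairing (`encoder` and `desc` are assumed to
    /// exist):
    ///
    /// ```ignore
    /// encoder.begin_compute_pass(&desc);
    /// // set_compute_pipeline, set_bind_group, dispatch, ...
    /// encoder.end_compute_pass();
    /// ```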
1555 unsafe fn begin_compute_pass(
1556 &mut self,
1557 desc: &ComputePassDescriptor<<Self::A as Api>::QuerySet>,
1558 );
1559
1560 /// End the current compute pass.
1561 ///
1562 /// # Safety
1563 ///
1564 /// - There must have been a prior call to [`begin_compute_pass`] on this [`CommandEncoder`]
1565 /// that has not been followed by a call to [`end_compute_pass`].
1566 ///
1567 /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1568 /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1569 unsafe fn end_compute_pass(&mut self);
1570
1571 unsafe fn set_compute_pipeline(&mut self, pipeline: &<Self::A as Api>::ComputePipeline);
1572
1573 unsafe fn dispatch(&mut self, count: [u32; 3]);
1574 unsafe fn dispatch_indirect(
1575 &mut self,
1576 buffer: &<Self::A as Api>::Buffer,
1577 offset: wgt::BufferAddress,
1578 );
1579
1580    /// To get the required sizes for the buffer allocations, use `get_acceleration_structure_build_sizes` per descriptor.
1581    /// All buffers must be synchronized externally.
1582    /// Each buffer region that is written to may only be passed once per function call,
1583    /// with the exception of updates in the same descriptor.
1584    /// Consequences of this limitation (see the sketch below):
1585    /// - scratch buffers need to be unique
1586    /// - a TLAS can't be built in the same call as a BLAS it contains
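    ///
    /// A sketch of how the "unique scratch buffer" rule plays out (everything
    /// named here is assumed to already exist and to be synchronized):
    ///
    /// ```ignore
    /// // Build the BLAS first, with its own scratch buffer.
    /// encoder.build_acceleration_structures(1, [BuildAccelerationStructureDescriptor {
    ///     entries: &blas_entries,
    ///     mode: AccelerationStructureBuildMode::Build,
    ///     flags: AccelerationStructureBuildFlags::empty(),
    ///     source_acceleration_structure: None,
    ///     destination_acceleration_structure: &blas,
    ///     scratch_buffer: &blas_scratch,
    ///     scratch_buffer_offset: 0,
    /// }]);
    /// // A TLAS that contains `blas` must be built in a *separate* call,
    /// // with a different scratch buffer.
    /// ```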
1587 unsafe fn build_acceleration_structures<'a, T>(
1588 &mut self,
1589 descriptor_count: u32,
1590 descriptors: T,
1591 ) where
1592 Self::A: 'a,
1593 T: IntoIterator<
1594 Item = BuildAccelerationStructureDescriptor<
1595 'a,
1596 <Self::A as Api>::Buffer,
1597 <Self::A as Api>::AccelerationStructure,
1598 >,
1599 >;
1600
1601 unsafe fn place_acceleration_structure_barrier(
1602 &mut self,
1603 barrier: AccelerationStructureBarrier,
1604 );
1605    // Modeled on DX12, because this can be polyfilled in Vulkan, as opposed to the other way round.
1606 unsafe fn read_acceleration_structure_compact_size(
1607 &mut self,
1608 acceleration_structure: &<Self::A as Api>::AccelerationStructure,
1609 buf: &<Self::A as Api>::Buffer,
1610 );
1611}
1612
1613bitflags!(
1614 /// Pipeline layout creation flags.
1615 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1616 pub struct PipelineLayoutFlags: u32 {
1617 /// D3D12: Add support for `first_vertex` and `first_instance` builtins
1618 /// via push constants for direct execution.
1619 const FIRST_VERTEX_INSTANCE = 1 << 0;
1620 /// D3D12: Add support for `num_workgroups` builtins via push constants
1621 /// for direct execution.
1622 const NUM_WORK_GROUPS = 1 << 1;
1623 /// D3D12: Add support for the builtins that the other flags enable for
1624 /// indirect execution.
1625 const INDIRECT_BUILTIN_UPDATE = 1 << 2;
1626 }
1627);
1628
1629bitflags!(
1630    /// Bind group layout creation flags.
1631 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1632 pub struct BindGroupLayoutFlags: u32 {
1633 /// Allows for bind group binding arrays to be shorter than the array in the BGL.
1634 const PARTIALLY_BOUND = 1 << 0;
1635 }
1636);
1637
1638bitflags!(
1639 /// Texture format capability flags.
1640 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1641 pub struct TextureFormatCapabilities: u32 {
1642 /// Format can be sampled.
1643 const SAMPLED = 1 << 0;
1644 /// Format can be sampled with a linear sampler.
1645 const SAMPLED_LINEAR = 1 << 1;
1646 /// Format can be sampled with a min/max reduction sampler.
1647 const SAMPLED_MINMAX = 1 << 2;
1648
1649 /// Format can be used as storage with read-only access.
1650 const STORAGE_READ_ONLY = 1 << 3;
1651 /// Format can be used as storage with write-only access.
1652 const STORAGE_WRITE_ONLY = 1 << 4;
1653 /// Format can be used as storage with both read and write access.
1654 const STORAGE_READ_WRITE = 1 << 5;
1655 /// Format can be used as storage with atomics.
1656 const STORAGE_ATOMIC = 1 << 6;
1657
1658 /// Format can be used as color and input attachment.
1659 const COLOR_ATTACHMENT = 1 << 7;
1660 /// Format can be used as color (with blending) and input attachment.
1661 const COLOR_ATTACHMENT_BLEND = 1 << 8;
1662 /// Format can be used as depth-stencil and input attachment.
1663 const DEPTH_STENCIL_ATTACHMENT = 1 << 9;
1664
1665 /// Format can be multisampled by x2.
1666 const MULTISAMPLE_X2 = 1 << 10;
1667 /// Format can be multisampled by x4.
1668 const MULTISAMPLE_X4 = 1 << 11;
1669 /// Format can be multisampled by x8.
1670 const MULTISAMPLE_X8 = 1 << 12;
1671 /// Format can be multisampled by x16.
1672 const MULTISAMPLE_X16 = 1 << 13;
1673
1674 /// Format can be used for render pass resolve targets.
1675 const MULTISAMPLE_RESOLVE = 1 << 14;
1676
1677 /// Format can be copied from.
1678 const COPY_SRC = 1 << 15;
1679 /// Format can be copied to.
1680 const COPY_DST = 1 << 16;
1681 }
1682);
1683
1684bitflags!(
1685    /// Texture format aspect flags.
1686 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1687 pub struct FormatAspects: u8 {
1688 const COLOR = 1 << 0;
1689 const DEPTH = 1 << 1;
1690 const STENCIL = 1 << 2;
1691 const PLANE_0 = 1 << 3;
1692 const PLANE_1 = 1 << 4;
1693 const PLANE_2 = 1 << 5;
1694
1695 const DEPTH_STENCIL = Self::DEPTH.bits() | Self::STENCIL.bits();
1696 }
1697);
1698
1699impl FormatAspects {
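    /// Returns the aspects of `format` that are selected by `aspect`.
    ///
    /// A small illustration (a sketch, not a doc test):
    ///
    /// ```ignore
    /// let aspects = FormatAspects::new(
    ///     wgt::TextureFormat::Depth24PlusStencil8,
    ///     wgt::TextureAspect::DepthOnly,
    /// );
    /// assert_eq!(aspects, FormatAspects::DEPTH);
    /// ```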
1700 pub fn new(format: wgt::TextureFormat, aspect: wgt::TextureAspect) -> Self {
1701 let aspect_mask = match aspect {
1702 wgt::TextureAspect::All => Self::all(),
1703 wgt::TextureAspect::DepthOnly => Self::DEPTH,
1704 wgt::TextureAspect::StencilOnly => Self::STENCIL,
1705 wgt::TextureAspect::Plane0 => Self::PLANE_0,
1706 wgt::TextureAspect::Plane1 => Self::PLANE_1,
1707 wgt::TextureAspect::Plane2 => Self::PLANE_2,
1708 };
1709 Self::from(format) & aspect_mask
1710 }
1711
1712 /// Returns `true` if only one flag is set
1713 pub fn is_one(&self) -> bool {
1714 self.bits().is_power_of_two()
1715 }
1716
1717 pub fn map(&self) -> wgt::TextureAspect {
1718 match *self {
1719 Self::COLOR => wgt::TextureAspect::All,
1720 Self::DEPTH => wgt::TextureAspect::DepthOnly,
1721 Self::STENCIL => wgt::TextureAspect::StencilOnly,
1722 Self::PLANE_0 => wgt::TextureAspect::Plane0,
1723 Self::PLANE_1 => wgt::TextureAspect::Plane1,
1724 Self::PLANE_2 => wgt::TextureAspect::Plane2,
1725 _ => unreachable!(),
1726 }
1727 }
1728}
1729
1730impl From<wgt::TextureFormat> for FormatAspects {
1731 fn from(format: wgt::TextureFormat) -> Self {
1732 match format {
1733 wgt::TextureFormat::Stencil8 => Self::STENCIL,
1734 wgt::TextureFormat::Depth16Unorm
1735 | wgt::TextureFormat::Depth32Float
1736 | wgt::TextureFormat::Depth24Plus => Self::DEPTH,
1737 wgt::TextureFormat::Depth32FloatStencil8 | wgt::TextureFormat::Depth24PlusStencil8 => {
1738 Self::DEPTH_STENCIL
1739 }
1740 wgt::TextureFormat::NV12 | wgt::TextureFormat::P010 => Self::PLANE_0 | Self::PLANE_1,
1741 _ => Self::COLOR,
1742 }
1743 }
1744}
1745
1746bitflags!(
1747 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1748 pub struct MemoryFlags: u32 {
1749 const TRANSIENT = 1 << 0;
1750 const PREFER_COHERENT = 1 << 1;
1751 }
1752);
1753
1754// TODO: it's not intuitive for the backends to treat `LOAD` as optional.
1755
1756bitflags!(
1757 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1758 pub struct AttachmentOps: u8 {
1759 const LOAD = 1 << 0;
1760 const STORE = 1 << 1;
1761 }
1762);
1763
1764#[derive(Clone, Debug)]
1765pub struct InstanceDescriptor<'a> {
1766 pub name: &'a str,
1767 pub flags: wgt::InstanceFlags,
1768 pub memory_budget_thresholds: wgt::MemoryBudgetThresholds,
1769 pub backend_options: wgt::BackendOptions,
1770}
1771
1772#[derive(Clone, Debug)]
1773pub struct Alignments {
1774 /// The alignment of the start of the buffer used as a GPU copy source.
1775 pub buffer_copy_offset: wgt::BufferSize,
1776
1777 /// The alignment of the row pitch of the texture data stored in a buffer that is
1778 /// used in a GPU copy operation.
1779 pub buffer_copy_pitch: wgt::BufferSize,
1780
1781 /// The finest alignment of bound range checking for uniform buffers.
1782 ///
1783 /// When `wgpu_hal` restricts shader references to the [accessible
1784 /// region][ar] of a [`Uniform`] buffer, the size of the accessible region
1785 /// is the bind group binding's stated [size], rounded up to the next
1786 /// multiple of this value.
1787 ///
1788 /// We don't need an analogous field for storage buffer bindings, because
1789 /// all our backends promise to enforce the size at least to a four-byte
1790 /// alignment, and `wgpu_hal` requires bound range lengths to be a multiple
1791 /// of four anyway.
1792 ///
1793 /// [ar]: struct.BufferBinding.html#accessible-region
1794 /// [`Uniform`]: wgt::BufferBindingType::Uniform
1795 /// [size]: BufferBinding::size
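    ///
    /// For example, if this alignment were 16 bytes (an assumed value, for
    /// illustration only), a uniform binding with a stated size of 17 bytes
    /// would have a 32-byte accessible region.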
1796 pub uniform_bounds_check_alignment: wgt::BufferSize,
1797
1798 /// The size of the raw TLAS instance
1799 pub raw_tlas_instance_size: usize,
1800
1801 /// What the scratch buffer for building an acceleration structure must be aligned to
1802 pub ray_tracing_scratch_buffer_alignment: u32,
1803}
1804
1805#[derive(Clone, Debug)]
1806pub struct Capabilities {
1807 pub limits: wgt::Limits,
1808 pub alignments: Alignments,
1809 pub downlevel: wgt::DownlevelCapabilities,
1810}
1811
1812/// An adapter with all the information needed to reason about its capabilities.
1813///
1814/// These are either made by [`Instance::enumerate_adapters`] or by backend specific
1815/// methods on the backend [`Instance`] or [`Adapter`].
1816#[derive(Debug)]
1817pub struct ExposedAdapter<A: Api> {
1818 pub adapter: A::Adapter,
1819 pub info: wgt::AdapterInfo,
1820 pub features: wgt::Features,
1821 pub capabilities: Capabilities,
1822}
1823
1824/// Describes a `Surface`'s presentation capabilities.
1825/// Fetch this with [`Adapter::surface_capabilities`].
1826#[derive(Debug, Clone)]
1827pub struct SurfaceCapabilities {
1828 /// List of supported texture formats.
1829 ///
1830    /// Must contain at least one format.
1831 pub formats: Vec<wgt::TextureFormat>,
1832
1833 /// Range for the number of queued frames.
1834 ///
1835    /// Depending on the backend, this either adjusts the swapchain frame count to `value + 1`, sets `SetMaximumFrameLatency` to the value given,
1836    /// or uses a wait-for-present in the acquire method to limit rendering so that it behaves like a swapchain with `value + 1` frames.
1837    ///
1838    /// - `maximum_frame_latency.start` must be at least 1.
1839    /// - `maximum_frame_latency.end` must be larger than or equal to `maximum_frame_latency.start`.
1840 pub maximum_frame_latency: RangeInclusive<u32>,
1841
1842 /// Current extent of the surface, if known.
1843 pub current_extent: Option<wgt::Extent3d>,
1844
1845 /// Supported texture usage flags.
1846 ///
1847 /// Must have at least `wgt::TextureUses::COLOR_TARGET`
1848 pub usage: wgt::TextureUses,
1849
1850 /// List of supported V-sync modes.
1851 ///
1852    /// Must contain at least one mode.
1853 pub present_modes: Vec<wgt::PresentMode>,
1854
1855 /// List of supported alpha composition modes.
1856 ///
1857    /// Must contain at least one mode.
1858 pub composite_alpha_modes: Vec<wgt::CompositeAlphaMode>,
1859}
1860
1861#[derive(Debug)]
1862pub struct AcquiredSurfaceTexture<A: Api> {
1863 pub texture: A::SurfaceTexture,
1864 /// The presentation configuration no longer matches
1865 /// the surface properties exactly, but can still be used to present
1866 /// to the surface successfully.
1867 pub suboptimal: bool,
1868}
1869
1870/// An open connection to a device and a queue.
1871///
1872/// This can be created from [`Adapter::open`] or backend
1873/// specific methods on the backend's [`Instance`] or [`Adapter`].
1874#[derive(Debug)]
1875pub struct OpenDevice<A: Api> {
1876 pub device: A::Device,
1877 pub queue: A::Queue,
1878}
1879
1880#[derive(Clone, Debug)]
1881pub struct BufferMapping {
1882 pub ptr: NonNull<u8>,
1883 pub is_coherent: bool,
1884}
1885
1886#[derive(Clone, Debug)]
1887pub struct BufferDescriptor<'a> {
1888 pub label: Label<'a>,
1889 pub size: wgt::BufferAddress,
1890 pub usage: wgt::BufferUses,
1891 pub memory_flags: MemoryFlags,
1892}
1893
1894#[derive(Clone, Debug)]
1895pub struct TextureDescriptor<'a> {
1896 pub label: Label<'a>,
1897 pub size: wgt::Extent3d,
1898 pub mip_level_count: u32,
1899 pub sample_count: u32,
1900 pub dimension: wgt::TextureDimension,
1901 pub format: wgt::TextureFormat,
1902 pub usage: wgt::TextureUses,
1903 pub memory_flags: MemoryFlags,
1904 /// Allows views of this texture to have a different format
1905 /// than the texture does.
1906 pub view_formats: Vec<wgt::TextureFormat>,
1907}
1908
1909impl TextureDescriptor<'_> {
1910 pub fn copy_extent(&self) -> CopyExtent {
1911 CopyExtent::map_extent_to_copy_size(&self.size, self.dimension)
1912 }
1913
1914 pub fn is_cube_compatible(&self) -> bool {
1915 self.dimension == wgt::TextureDimension::D2
1916 && self.size.depth_or_array_layers % 6 == 0
1917 && self.sample_count == 1
1918 && self.size.width == self.size.height
1919 }
1920
1921 pub fn array_layer_count(&self) -> u32 {
1922 match self.dimension {
1923 wgt::TextureDimension::D1 | wgt::TextureDimension::D3 => 1,
1924 wgt::TextureDimension::D2 => self.size.depth_or_array_layers,
1925 }
1926 }
1927}
1928
1929/// TextureView descriptor.
1930///
1931/// Valid usage:
1932/// - `format` has to be the same as `TextureDescriptor::format`
1933/// - `dimension` has to be compatible with `TextureDescriptor::dimension`
1934/// - `usage` has to be a subset of `TextureDescriptor::usage`
1935/// - `range` has to be a subset of the parent texture
1936#[derive(Clone, Debug)]
1937pub struct TextureViewDescriptor<'a> {
1938 pub label: Label<'a>,
1939 pub format: wgt::TextureFormat,
1940 pub dimension: wgt::TextureViewDimension,
1941 pub usage: wgt::TextureUses,
1942 pub range: wgt::ImageSubresourceRange,
1943}
1944
1945#[derive(Clone, Debug)]
1946pub struct SamplerDescriptor<'a> {
1947 pub label: Label<'a>,
1948 pub address_modes: [wgt::AddressMode; 3],
1949 pub mag_filter: wgt::FilterMode,
1950 pub min_filter: wgt::FilterMode,
1951 pub mipmap_filter: wgt::MipmapFilterMode,
1952 pub lod_clamp: Range<f32>,
1953 pub compare: Option<wgt::CompareFunction>,
1954    // Must be in the range [1, 16].
1955 //
1956 // Anisotropic filtering must be supported if this is not 1.
1957 pub anisotropy_clamp: u16,
1958 pub border_color: Option<wgt::SamplerBorderColor>,
1959}
1960
1961/// BindGroupLayout descriptor.
1962///
1963/// Valid usage:
1964/// - `entries` are sorted by ascending `wgt::BindGroupLayoutEntry::binding`
1965#[derive(Clone, Debug)]
1966pub struct BindGroupLayoutDescriptor<'a> {
1967 pub label: Label<'a>,
1968 pub flags: BindGroupLayoutFlags,
1969 pub entries: &'a [wgt::BindGroupLayoutEntry],
1970}
1971
1972#[derive(Clone, Debug)]
1973pub struct PipelineLayoutDescriptor<'a, B: DynBindGroupLayout + ?Sized> {
1974 pub label: Label<'a>,
1975 pub flags: PipelineLayoutFlags,
1976 pub bind_group_layouts: &'a [&'a B],
1977 pub push_constant_ranges: &'a [wgt::PushConstantRange],
1978}
1979
1980/// A region of a buffer made visible to shaders via a [`BindGroup`].
1981///
1982/// [`BindGroup`]: Api::BindGroup
1983///
1984/// ## Construction
1985///
1986/// The recommended way to construct a `BufferBinding` is using the `binding`
1987/// method on a wgpu-core `Buffer`, which will validate the binding size
1988/// against the buffer size. A `new_unchecked` constructor is also provided for
1989/// cases where direct construction is necessary.
1990///
1991/// ## Accessible region
1992///
1993/// `wgpu_hal` guarantees that shaders compiled with
1994/// [`ShaderModuleDescriptor::runtime_checks`] set to `true` cannot read or
1995/// write data via this binding outside the *accessible region* of a buffer:
1996///
1997/// - The accessible region starts at [`offset`].
1998///
1999/// - For [`Storage`] bindings, the size of the accessible region is [`size`],
2000/// which must be a multiple of 4.
2001///
2002/// - For [`Uniform`] bindings, the size of the accessible region is [`size`]
2003/// rounded up to the next multiple of
2004/// [`Alignments::uniform_bounds_check_alignment`].
2005///
2006/// Note that this guarantee is stricter than WGSL's requirements for
2007/// [out-of-bounds accesses][woob], as WGSL allows them to return values from
2008/// elsewhere in the buffer. But this guarantee is necessary anyway, to permit
2009/// `wgpu-core` to avoid clearing uninitialized regions of buffers that will
2010/// never be read by the application before they are overwritten. This
2011/// optimization consults bind group buffer binding regions to determine which
2012/// parts of which buffers shaders might observe. This optimization is only
2013/// sound if shader access is bounds-checked.
2014///
2015/// ## Zero-length bindings
2016///
2017/// Some back ends cannot tolerate zero-length regions; for example, see
2018/// [VUID-VkDescriptorBufferInfo-offset-00340][340] and
2019/// [VUID-VkDescriptorBufferInfo-range-00341][341], or the
2020/// documentation for GLES's [glBindBufferRange][bbr]. This documentation
2021/// previously stated that a `BufferBinding` must have `offset` strictly less
2022/// than the size of the buffer, but this restriction was not honored elsewhere
2023/// in the code, so has been removed. However, it remains the case that
2024/// some backends do not support zero-length bindings, so additional
2025/// logic is needed somewhere to handle this properly. See
2026/// [#3170](https://github.com/gfx-rs/wgpu/issues/3170).
2027///
2028/// [`offset`]: BufferBinding::offset
2029/// [`size`]: BufferBinding::size
2030/// [`Storage`]: wgt::BufferBindingType::Storage
2031/// [`Uniform`]: wgt::BufferBindingType::Uniform
2032/// [340]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-offset-00340
2033/// [341]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-range-00341
2034/// [bbr]: https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glBindBufferRange.xhtml
2035/// [woob]: https://gpuweb.github.io/gpuweb/wgsl/#out-of-bounds-access-sec
2036#[derive(Debug)]
2037pub struct BufferBinding<'a, B: DynBuffer + ?Sized> {
2038 /// The buffer being bound.
2039 ///
2040 /// This is not fully `pub` to prevent direct construction of
2041 /// `BufferBinding`s, while still allowing public read access to the `offset`
2042 /// and `size` properties.
2043 pub(crate) buffer: &'a B,
2044
2045 /// The offset at which the bound region starts.
2046 ///
2047    /// This must be less than or equal to the size of the buffer.
2048 pub offset: wgt::BufferAddress,
2049
2050 /// The size of the region bound, in bytes.
2051 ///
2052 /// If `None`, the region extends from `offset` to the end of the
2053 /// buffer. Given the restrictions on `offset`, this means that
2054    /// buffer. Given the restrictions on `offset`, such a region is empty
2055    /// only when `offset` equals the size of the buffer.
2056}
2057
2058// We must implement this manually because `B` is not necessarily `Clone`.
2059impl<B: DynBuffer + ?Sized> Clone for BufferBinding<'_, B> {
2060 fn clone(&self) -> Self {
2061 BufferBinding {
2062 buffer: self.buffer,
2063 offset: self.offset,
2064 size: self.size,
2065 }
2066 }
2067}
2068
2069/// Temporary convenience trait to let us call `.get()` on `u64`s in code that
2070/// really wants to be using `NonZeroU64`.
2071/// TODO(<https://github.com/gfx-rs/wgpu/issues/3170>): remove this
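///
/// A small usage sketch:
///
/// ```ignore
/// use wgpu_hal::ShouldBeNonZeroExt;
///
/// assert_eq!(5_u64.get(), 5);
/// assert_eq!(None::<core::num::NonZeroU64>.get(), 0);
/// ```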
2072pub trait ShouldBeNonZeroExt {
2073 fn get(&self) -> u64;
2074}
2075
2076impl ShouldBeNonZeroExt for NonZeroU64 {
2077 fn get(&self) -> u64 {
2078 NonZeroU64::get(*self)
2079 }
2080}
2081
2082impl ShouldBeNonZeroExt for u64 {
2083 fn get(&self) -> u64 {
2084 *self
2085 }
2086}
2087
2088impl ShouldBeNonZeroExt for Option<NonZeroU64> {
2089 fn get(&self) -> u64 {
2090 match *self {
2091 Some(non_zero) => non_zero.get(),
2092 None => 0,
2093 }
2094 }
2095}
2096
2097impl<'a, B: DynBuffer + ?Sized> BufferBinding<'a, B> {
2098 /// Construct a `BufferBinding` with the given contents.
2099 ///
2100 /// When possible, use the `binding` method on a wgpu-core `Buffer` instead
2101 /// of this method. `Buffer::binding` validates the size of the binding
2102 /// against the size of the buffer.
2103 ///
2104 /// It is more difficult to provide a validating constructor here, due to
2105 /// not having direct access to the size of a `DynBuffer`.
2106 ///
2107 /// SAFETY: The caller is responsible for ensuring that a binding of `size`
2108 /// bytes starting at `offset` is contained within the buffer.
2109 ///
2110 /// The `S` type parameter is a temporary convenience to allow callers to
2111 /// pass a zero size. When the zero-size binding issue is resolved, the
2112 /// argument should just match the type of the member.
2113 /// TODO(<https://github.com/gfx-rs/wgpu/issues/3170>): remove the parameter
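    ///
    /// A small usage sketch (`buffer` is assumed to be a backend buffer of at
    /// least 256 bytes):
    ///
    /// ```ignore
    /// let binding = BufferBinding::new_unchecked(&buffer, 0, NonZeroU64::new(256));
    /// ```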
2114 pub fn new_unchecked<S: Into<Option<NonZeroU64>>>(
2115 buffer: &'a B,
2116 offset: wgt::BufferAddress,
2117 size: S,
2118 ) -> Self {
2119 Self {
2120 buffer,
2121 offset,
2122 size: size.into(),
2123 }
2124 }
2125}
2126
2127#[derive(Debug)]
2128pub struct TextureBinding<'a, T: DynTextureView + ?Sized> {
2129 pub view: &'a T,
2130 pub usage: wgt::TextureUses,
2131}
2132
2133impl<'a, T: DynTextureView + ?Sized> Clone for TextureBinding<'a, T> {
2134 fn clone(&self) -> Self {
2135 TextureBinding {
2136 view: self.view,
2137 usage: self.usage,
2138 }
2139 }
2140}
2141
2142#[derive(Debug)]
2143pub struct ExternalTextureBinding<'a, B: DynBuffer + ?Sized, T: DynTextureView + ?Sized> {
2144 pub planes: [TextureBinding<'a, T>; 3],
2145 pub params: BufferBinding<'a, B>,
2146}
2147
2148impl<'a, B: DynBuffer + ?Sized, T: DynTextureView + ?Sized> Clone
2149 for ExternalTextureBinding<'a, B, T>
2150{
2151 fn clone(&self) -> Self {
2152 ExternalTextureBinding {
2153 planes: self.planes.clone(),
2154 params: self.params.clone(),
2155 }
2156 }
2157}
2158
2159/// cbindgen:ignore
2160#[derive(Clone, Debug)]
2161pub struct BindGroupEntry {
2162 pub binding: u32,
2163 pub resource_index: u32,
2164 pub count: u32,
2165}
2166
2167/// BindGroup descriptor.
2168///
2169/// Valid usage:
2170/// - `entries` has to be sorted by ascending `BindGroupEntry::binding`
2171/// - `entries` has to have the same set of `BindGroupEntry::binding` as `layout`
2172/// - each entry has to be compatible with the `layout`
2173/// - each entry's `BindGroupEntry::resource_index` is within range
2174/// of the corresponding resource array, selected by the relevant
2175/// `BindGroupLayoutEntry`.
2176#[derive(Clone, Debug)]
2177pub struct BindGroupDescriptor<
2178 'a,
2179 Bgl: DynBindGroupLayout + ?Sized,
2180 B: DynBuffer + ?Sized,
2181 S: DynSampler + ?Sized,
2182 T: DynTextureView + ?Sized,
2183 A: DynAccelerationStructure + ?Sized,
2184> {
2185 pub label: Label<'a>,
2186 pub layout: &'a Bgl,
2187 pub buffers: &'a [BufferBinding<'a, B>],
2188 pub samplers: &'a [&'a S],
2189 pub textures: &'a [TextureBinding<'a, T>],
2190 pub entries: &'a [BindGroupEntry],
2191 pub acceleration_structures: &'a [&'a A],
2192 pub external_textures: &'a [ExternalTextureBinding<'a, B, T>],
2193}
2194
2195#[derive(Clone, Debug)]
2196pub struct CommandEncoderDescriptor<'a, Q: DynQueue + ?Sized> {
2197 pub label: Label<'a>,
2198 pub queue: &'a Q,
2199}
2200
2201/// Naga shader module.
2202#[derive(Default)]
2203pub struct NagaShader {
2204 /// Shader module IR.
2205 pub module: Cow<'static, naga::Module>,
2206 /// Analysis information of the module.
2207 pub info: naga::valid::ModuleInfo,
2208    /// Source code for debugging.
2209 pub debug_source: Option<DebugSource>,
2210}
2211
2212// Custom implementation avoids the need to generate Debug impl code
2213// for the whole Naga module and info.
2214impl fmt::Debug for NagaShader {
2215 fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
2216 write!(formatter, "Naga shader")
2217 }
2218}
2219
2220/// Shader input.
2221#[allow(clippy::large_enum_variant)]
2222pub enum ShaderInput<'a> {
2223 Naga(NagaShader),
2224 Msl {
2225 shader: &'a str,
2226 entry_point: String,
2227 num_workgroups: (u32, u32, u32),
2228 },
2229 SpirV(&'a [u32]),
2230 Dxil {
2231 shader: &'a [u8],
2232 entry_point: String,
2233 num_workgroups: (u32, u32, u32),
2234 },
2235 Hlsl {
2236 shader: &'a str,
2237 entry_point: String,
2238 num_workgroups: (u32, u32, u32),
2239 },
2240 Glsl {
2241 shader: &'a str,
2242 entry_point: String,
2243 num_workgroups: (u32, u32, u32),
2244 },
2245}
2246
2247pub struct ShaderModuleDescriptor<'a> {
2248 pub label: Label<'a>,
2249
2250 /// # Safety
2251 ///
2252 /// See the documentation for each flag in [`ShaderRuntimeChecks`][src].
2253 ///
2254 /// [src]: wgt::ShaderRuntimeChecks
2255 pub runtime_checks: wgt::ShaderRuntimeChecks,
2256}
2257
2258#[derive(Debug, Clone)]
2259pub struct DebugSource {
2260 pub file_name: Cow<'static, str>,
2261 pub source_code: Cow<'static, str>,
2262}
2263
2264/// Describes a programmable pipeline stage.
2265#[derive(Debug)]
2266pub struct ProgrammableStage<'a, M: DynShaderModule + ?Sized> {
2267 /// The compiled shader module for this stage.
2268 pub module: &'a M,
2269 /// The name of the entry point in the compiled shader. There must be a function with this name
2270 /// in the shader.
2271 pub entry_point: &'a str,
2272 /// Pipeline constants
2273 pub constants: &'a naga::back::PipelineConstants,
2274 /// Whether workgroup scoped memory will be initialized with zero values for this stage.
2275 ///
2276    /// This is required by the WebGPU spec, but may have overhead which can be avoided
2277    /// for cross-platform applications.
2278 pub zero_initialize_workgroup_memory: bool,
2279}
2280
2281impl<M: DynShaderModule + ?Sized> Clone for ProgrammableStage<'_, M> {
2282 fn clone(&self) -> Self {
2283 Self {
2284 module: self.module,
2285 entry_point: self.entry_point,
2286 constants: self.constants,
2287 zero_initialize_workgroup_memory: self.zero_initialize_workgroup_memory,
2288 }
2289 }
2290}
2291
2292/// Describes a compute pipeline.
2293#[derive(Clone, Debug)]
2294pub struct ComputePipelineDescriptor<
2295 'a,
2296 Pl: DynPipelineLayout + ?Sized,
2297 M: DynShaderModule + ?Sized,
2298 Pc: DynPipelineCache + ?Sized,
2299> {
2300 pub label: Label<'a>,
2301 /// The layout of bind groups for this pipeline.
2302 pub layout: &'a Pl,
2303 /// The compiled compute stage and its entry point.
2304 pub stage: ProgrammableStage<'a, M>,
2305 /// The cache which will be used and filled when compiling this pipeline
2306 pub cache: Option<&'a Pc>,
2307}
2308
2309pub struct PipelineCacheDescriptor<'a> {
2310 pub label: Label<'a>,
2311 pub data: Option<&'a [u8]>,
2312}
2313
2314/// Describes how the vertex buffer is interpreted.
2315#[derive(Clone, Debug)]
2316pub struct VertexBufferLayout<'a> {
2317 /// The stride, in bytes, between elements of this buffer.
2318 pub array_stride: wgt::BufferAddress,
2319 /// How often this vertex buffer is "stepped" forward.
2320 pub step_mode: wgt::VertexStepMode,
2321 /// The list of attributes which comprise a single vertex.
2322 pub attributes: &'a [wgt::VertexAttribute],
2323}
2324
2325#[derive(Clone, Debug)]
2326pub enum VertexProcessor<'a, M: DynShaderModule + ?Sized> {
2327 Standard {
2328 /// The format of any vertex buffers used with this pipeline.
2329 vertex_buffers: &'a [VertexBufferLayout<'a>],
2330 /// The vertex stage for this pipeline.
2331 vertex_stage: ProgrammableStage<'a, M>,
2332 },
2333 Mesh {
2334 task_stage: Option<ProgrammableStage<'a, M>>,
2335 mesh_stage: ProgrammableStage<'a, M>,
2336 },
2337}
2338
2339/// Describes a render (graphics) pipeline.
2340#[derive(Clone, Debug)]
2341pub struct RenderPipelineDescriptor<
2342 'a,
2343 Pl: DynPipelineLayout + ?Sized,
2344 M: DynShaderModule + ?Sized,
2345 Pc: DynPipelineCache + ?Sized,
2346> {
2347 pub label: Label<'a>,
2348 /// The layout of bind groups for this pipeline.
2349 pub layout: &'a Pl,
2350    /// The vertex processing state (vertex shader + buffers, or task + mesh shaders).
2351 pub vertex_processor: VertexProcessor<'a, M>,
2352 /// The properties of the pipeline at the primitive assembly and rasterization level.
2353 pub primitive: wgt::PrimitiveState,
2354 /// The effect of draw calls on the depth and stencil aspects of the output target, if any.
2355 pub depth_stencil: Option<wgt::DepthStencilState>,
2356 /// The multi-sampling properties of the pipeline.
2357 pub multisample: wgt::MultisampleState,
2358 /// The fragment stage for this pipeline.
2359 pub fragment_stage: Option<ProgrammableStage<'a, M>>,
2360 /// The effect of draw calls on the color aspect of the output target.
2361 pub color_targets: &'a [Option<wgt::ColorTargetState>],
2362 /// If the pipeline will be used with a multiview render pass, this indicates how many array
2363 /// layers the attachments will have.
2364 pub multiview: Option<NonZeroU32>,
2365 /// The cache which will be used and filled when compiling this pipeline
2366 pub cache: Option<&'a Pc>,
2367}
2368
2369#[derive(Debug, Clone)]
2370pub struct SurfaceConfiguration {
2371 /// Maximum number of queued frames. Must be in
2372 /// `SurfaceCapabilities::maximum_frame_latency` range.
2373 pub maximum_frame_latency: u32,
2374 /// Vertical synchronization mode.
2375 pub present_mode: wgt::PresentMode,
2376 /// Alpha composition mode.
2377 pub composite_alpha_mode: wgt::CompositeAlphaMode,
2378 /// Format of the surface textures.
2379 pub format: wgt::TextureFormat,
2380 /// Requested texture extent. Must be in
2381 /// `SurfaceCapabilities::extents` range.
2382 pub extent: wgt::Extent3d,
2383    /// Allowed usage of surface textures.
2384    pub usage: wgt::TextureUses,
2385    /// Allows views of the swapchain texture to have a different format
2386    /// than the texture does.
2387 pub view_formats: Vec<wgt::TextureFormat>,
2388}
2389
2390#[derive(Debug, Clone)]
2391pub struct Rect<T> {
2392 pub x: T,
2393 pub y: T,
2394 pub w: T,
2395 pub h: T,
2396}
2397
2398#[derive(Debug, Clone, PartialEq)]
2399pub struct StateTransition<T> {
2400 pub from: T,
2401 pub to: T,
2402}
2403
2404#[derive(Debug, Clone)]
2405pub struct BufferBarrier<'a, B: DynBuffer + ?Sized> {
2406 pub buffer: &'a B,
2407 pub usage: StateTransition<wgt::BufferUses>,
2408}
2409
2410#[derive(Debug, Clone)]
2411pub struct TextureBarrier<'a, T: DynTexture + ?Sized> {
2412 pub texture: &'a T,
2413 pub range: wgt::ImageSubresourceRange,
2414 pub usage: StateTransition<wgt::TextureUses>,
2415}
2416
2417#[derive(Clone, Copy, Debug)]
2418pub struct BufferCopy {
2419 pub src_offset: wgt::BufferAddress,
2420 pub dst_offset: wgt::BufferAddress,
2421 pub size: wgt::BufferSize,
2422}
2423
2424#[derive(Clone, Debug)]
2425pub struct TextureCopyBase {
2426 pub mip_level: u32,
2427 pub array_layer: u32,
2428 /// Origin within a texture.
2429 /// Note: for 1D and 2D textures, Z must be 0.
2430 pub origin: wgt::Origin3d,
2431 pub aspect: FormatAspects,
2432}
2433
2434#[derive(Clone, Copy, Debug)]
2435pub struct CopyExtent {
2436 pub width: u32,
2437 pub height: u32,
2438 pub depth: u32,
2439}
2440
2441#[derive(Clone, Debug)]
2442pub struct TextureCopy {
2443 pub src_base: TextureCopyBase,
2444 pub dst_base: TextureCopyBase,
2445 pub size: CopyExtent,
2446}
2447
2448#[derive(Clone, Debug)]
2449pub struct BufferTextureCopy {
2450 pub buffer_layout: wgt::TexelCopyBufferLayout,
2451 pub texture_base: TextureCopyBase,
2452 pub size: CopyExtent,
2453}
2454
2455#[derive(Clone, Debug)]
2456pub struct Attachment<'a, T: DynTextureView + ?Sized> {
2457 pub view: &'a T,
2458 /// Contains either a single mutating usage as a target,
2459 /// or a valid combination of read-only usages.
2460 pub usage: wgt::TextureUses,
2461}
2462
2463#[derive(Clone, Debug)]
2464pub struct ColorAttachment<'a, T: DynTextureView + ?Sized> {
2465 pub target: Attachment<'a, T>,
2466 pub depth_slice: Option<u32>,
2467 pub resolve_target: Option<Attachment<'a, T>>,
2468 pub ops: AttachmentOps,
2469 pub clear_value: wgt::Color,
2470}
2471
2472#[derive(Clone, Debug)]
2473pub struct DepthStencilAttachment<'a, T: DynTextureView + ?Sized> {
2474 pub target: Attachment<'a, T>,
2475 pub depth_ops: AttachmentOps,
2476 pub stencil_ops: AttachmentOps,
2477 pub clear_value: (f32, u32),
2478}
2479
2480#[derive(Clone, Debug)]
2481pub struct PassTimestampWrites<'a, Q: DynQuerySet + ?Sized> {
2482 pub query_set: &'a Q,
2483 pub beginning_of_pass_write_index: Option<u32>,
2484 pub end_of_pass_write_index: Option<u32>,
2485}
2486
2487#[derive(Clone, Debug)]
2488pub struct RenderPassDescriptor<'a, Q: DynQuerySet + ?Sized, T: DynTextureView + ?Sized> {
2489 pub label: Label<'a>,
2490 pub extent: wgt::Extent3d,
2491 pub sample_count: u32,
2492 pub color_attachments: &'a [Option<ColorAttachment<'a, T>>],
2493 pub depth_stencil_attachment: Option<DepthStencilAttachment<'a, T>>,
2494 pub multiview: Option<NonZeroU32>,
2495 pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2496 pub occlusion_query_set: Option<&'a Q>,
2497}
2498
2499#[derive(Clone, Debug)]
2500pub struct ComputePassDescriptor<'a, Q: DynQuerySet + ?Sized> {
2501 pub label: Label<'a>,
2502 pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2503}
2504
2505#[test]
2506fn test_default_limits() {
2507 let limits = wgt::Limits::default();
2508 assert!(limits.max_bind_groups <= MAX_BIND_GROUPS as u32);
2509}
2510
2511#[derive(Clone, Debug)]
2512pub struct AccelerationStructureDescriptor<'a> {
2513 pub label: Label<'a>,
2514 pub size: wgt::BufferAddress,
2515 pub format: AccelerationStructureFormat,
2516 pub allow_compaction: bool,
2517}
2518
2519#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2520pub enum AccelerationStructureFormat {
2521 TopLevel,
2522 BottomLevel,
2523}
2524
2525#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2526pub enum AccelerationStructureBuildMode {
2527 Build,
2528 Update,
2529}
2530
2531/// Information about the required sizes for a corresponding entries struct (plus flags).
2532#[derive(Copy, Clone, Debug, Default, Eq, PartialEq)]
2533pub struct AccelerationStructureBuildSizes {
2534 pub acceleration_structure_size: wgt::BufferAddress,
2535 pub update_scratch_size: wgt::BufferAddress,
2536 pub build_scratch_size: wgt::BufferAddress,
2537}
2538
2539/// Updates use `source_acceleration_structure` if present; otherwise the update is performed in place.
2540/// For updates, only the data is allowed to change (not the metadata or sizes).
2541#[derive(Clone, Debug)]
2542pub struct BuildAccelerationStructureDescriptor<
2543 'a,
2544 B: DynBuffer + ?Sized,
2545 A: DynAccelerationStructure + ?Sized,
2546> {
2547 pub entries: &'a AccelerationStructureEntries<'a, B>,
2548 pub mode: AccelerationStructureBuildMode,
2549 pub flags: AccelerationStructureBuildFlags,
2550 pub source_acceleration_structure: Option<&'a A>,
2551 pub destination_acceleration_structure: &'a A,
2552 pub scratch_buffer: &'a B,
2553 pub scratch_buffer_offset: wgt::BufferAddress,
2554}
2555
2556/// - All buffers, buffer addresses and offsets will be ignored.
2557/// - The build mode will be ignored.
2558/// - Reducing the number of instances, triangle groups, or AABB groups (or the number of triangles/AABBs in the corresponding groups)
2559///   may result in reduced size requirements.
2560/// - Any other change may result in a bigger or smaller size requirement.
2561#[derive(Clone, Debug)]
2562pub struct GetAccelerationStructureBuildSizesDescriptor<'a, B: DynBuffer + ?Sized> {
2563 pub entries: &'a AccelerationStructureEntries<'a, B>,
2564 pub flags: AccelerationStructureBuildFlags,
2565}
2566
2567/// Entries for a single descriptor
2568/// * `Instances` - Multiple instances for a top level acceleration structure
2569/// * `Triangles` - Multiple triangle meshes for a bottom level acceleration structure
2570/// * `AABBs` - List of lists of axis-aligned bounding boxes for a bottom level acceleration structure
2571#[derive(Debug)]
2572pub enum AccelerationStructureEntries<'a, B: DynBuffer + ?Sized> {
2573 Instances(AccelerationStructureInstances<'a, B>),
2574 Triangles(Vec<AccelerationStructureTriangles<'a, B>>),
2575 AABBs(Vec<AccelerationStructureAABBs<'a, B>>),
2576}
2577
2578/// * `first_vertex` - offset in the vertex buffer (as number of vertices)
2579/// * `indices` - optional index buffer with attributes
2580/// * `transform` - optional transform
2581#[derive(Clone, Debug)]
2582pub struct AccelerationStructureTriangles<'a, B: DynBuffer + ?Sized> {
2583 pub vertex_buffer: Option<&'a B>,
2584 pub vertex_format: wgt::VertexFormat,
2585 pub first_vertex: u32,
2586 pub vertex_count: u32,
2587 pub vertex_stride: wgt::BufferAddress,
2588 pub indices: Option<AccelerationStructureTriangleIndices<'a, B>>,
2589 pub transform: Option<AccelerationStructureTriangleTransform<'a, B>>,
2590 pub flags: AccelerationStructureGeometryFlags,
2591}
2592
2593/// * `offset` - offset in bytes
2594#[derive(Clone, Debug)]
2595pub struct AccelerationStructureAABBs<'a, B: DynBuffer + ?Sized> {
2596 pub buffer: Option<&'a B>,
2597 pub offset: u32,
2598 pub count: u32,
2599 pub stride: wgt::BufferAddress,
2600 pub flags: AccelerationStructureGeometryFlags,
2601}
2602
2603pub struct AccelerationStructureCopy {
2604 pub copy_flags: wgt::AccelerationStructureCopy,
2605 pub type_flags: wgt::AccelerationStructureType,
2606}
2607
2608/// * `offset` - offset in bytes
2609#[derive(Clone, Debug)]
2610pub struct AccelerationStructureInstances<'a, B: DynBuffer + ?Sized> {
2611 pub buffer: Option<&'a B>,
2612 pub offset: u32,
2613 pub count: u32,
2614}
2615
2616/// * `offset` - offset in bytes
2617#[derive(Clone, Debug)]
2618pub struct AccelerationStructureTriangleIndices<'a, B: DynBuffer + ?Sized> {
2619 pub format: wgt::IndexFormat,
2620 pub buffer: Option<&'a B>,
2621 pub offset: u32,
2622 pub count: u32,
2623}
2624
2625/// * `offset` - offset in bytes
2626#[derive(Clone, Debug)]
2627pub struct AccelerationStructureTriangleTransform<'a, B: DynBuffer + ?Sized> {
2628 pub buffer: &'a B,
2629 pub offset: u32,
2630}
2631
2632pub use wgt::AccelerationStructureFlags as AccelerationStructureBuildFlags;
2633pub use wgt::AccelerationStructureGeometryFlags;
2634
2635bitflags::bitflags! {
2636 #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
2637 pub struct AccelerationStructureUses: u8 {
2638        // For BLAS used as input for TLAS
2639        const BUILD_INPUT = 1 << 0;
2640        // Target for acceleration structure build
2641        const BUILD_OUTPUT = 1 << 1;
2642        // TLAS used in a shader
2643        const SHADER_INPUT = 1 << 2;
2644        // BLAS used to query compacted size
2645        const QUERY_INPUT = 1 << 3;
2646        // BLAS used as a src for a copy operation
2647        const COPY_SRC = 1 << 4;
2648        // BLAS used as a dst for a copy operation
2649        const COPY_DST = 1 << 5;
2650 }
2651}
2652
2653#[derive(Debug, Clone)]
2654pub struct AccelerationStructureBarrier {
2655 pub usage: StateTransition<AccelerationStructureUses>,
2656}
2657
2658#[derive(Debug, Copy, Clone)]
2659pub struct TlasInstance {
2660 pub transform: [f32; 12],
2661 pub custom_data: u32,
2662 pub mask: u8,
2663 pub blas_address: u64,
2664}