wgpu_hal/lib.rs

1//! A cross-platform unsafe graphics abstraction.
2//!
3//! This crate defines a set of traits abstracting over modern graphics APIs,
4//! with implementations ("backends") for Vulkan, Metal, Direct3D, and GL.
5//!
6//! `wgpu-hal` is a spiritual successor to
7//! [gfx-hal](https://github.com/gfx-rs/gfx), but with reduced scope, and
8//! oriented towards WebGPU implementation goals. It has no overhead for
9//! validation or tracking, and the API translation overhead is kept to the bare
10//! minimum by the design of WebGPU. This API can be used for resource-demanding
11//! applications and engines.
12//!
13//! The `wgpu-hal` crate's main design choices:
14//!
15//! - Our traits are meant to be *portable*: proper use
16//!   should get equivalent results regardless of the backend.
17//!
18//! - Our traits' contracts are *unsafe*: implementations perform minimal
19//!   validation, if any, and incorrect use will often cause undefined behavior.
20//!   This allows us to minimize the overhead we impose over the underlying
21//!   graphics system. If you need safety, the [`wgpu-core`] crate provides a
22//!   safe API for driving `wgpu-hal`, implementing all necessary validation,
23//!   resource state tracking, and so on. (Note that `wgpu-core` is designed for
24//!   use via FFI; the [`wgpu`] crate provides more idiomatic Rust bindings for
25//!   `wgpu-core`.) Or, you can do your own validation.
26//!
27//! - In the same vein, returned errors *only cover cases the user can't
28//!   anticipate*, like running out of memory or losing the device. Any errors
29//!   that the user could reasonably anticipate are their responsibility to
30//!   avoid. For example, `wgpu-hal` returns no error for mapping a buffer that's
31//!   not mappable: as the buffer creator, the user should already know if they
32//!   can map it.
33//!
34//! - We use *static dispatch*. The traits are not
35//!   generally object-safe. You must select a specific backend type
36//!   like [`vulkan::Api`] or [`metal::Api`], and then use that
37//!   according to the main traits, or call backend-specific methods (see the sketch after this list).
38//!
39//! - We use *idiomatic Rust parameter passing*,
40//!   taking objects by reference, returning them by value, and so on,
41//!   unlike `wgpu-core`, which refers to objects by ID.
42//!
43//! - We map buffer contents *persistently*. This means that the buffer can
44//!   remain mapped on the CPU while the GPU reads or writes to it. You must
45//!   explicitly indicate when data might need to be transferred between CPU and
46//!   GPU, if [`Device::map_buffer`] indicates that this is necessary.
47//!
48//! - You must record *explicit barriers* between different usages of a
49//!   resource. For example, if a buffer is written to by a compute
50//!   shader, and then used as an index buffer in a draw call, you
51//!   must use [`CommandEncoder::transition_buffers`] between those two
52//!   operations.
53//!
54//! - Pipeline layouts are *explicitly specified* when setting bind groups.
55//!   Incompatible layouts disturb groups bound at higher indices.
56//!
57//! - The API *accepts collections as iterators*, to avoid forcing the user to
58//!   store data in particular containers. The implementation doesn't guarantee
59//!   that any of the iterators are drained, unless stated otherwise by the
60//!   function documentation. For this reason, we recommend that iterators don't
61//!   do any mutating work.
62//!
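//! For example, here is a rough sketch (the function, its body, and the call
//! site are hypothetical) of how the static-dispatch point above plays out in
//! calling code:
//!
//! ```ignore
//! // Code that drives `wgpu-hal` is written against the traits, generic over
//! // the backend:
//! fn upload<A: Api>(device: &A::Device, queue: &A::Queue) -> Result<(), DeviceError> {
//!     // ... create buffers with `device`, encode and submit work with `queue` ...
//!     Ok(())
//! }
//!
//! // A concrete backend type is chosen once, at the top level:
//! // upload::<api::Vulkan>(&device, &queue)
//! ```
//!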
63//! Unfortunately, `wgpu-hal`'s safety requirements are not fully documented.
64//! Ideally, all trait methods would have doc comments setting out the
65//! requirements users must meet to ensure correct and portable behavior. If you
66//! are aware of a specific requirement that a backend imposes that is not
67//! ensured by the traits' documented rules, please file an issue. Or, if you are
68//! a capable technical writer, please file a pull request!
69//!
70//! [`wgpu-core`]: https://crates.io/crates/wgpu-core
71//! [`wgpu`]: https://crates.io/crates/wgpu
72//! [`vulkan::Api`]: vulkan/struct.Api.html
73//! [`metal::Api`]: metal/struct.Api.html
74//!
75//! ## Primary backends
76//!
77//! The `wgpu-hal` crate has full-featured backends implemented on the following
78//! platform graphics APIs:
79//!
80//! - Vulkan, available on Linux, Android, and Windows, using the [`ash`] crate's
81//!   Vulkan bindings. It's also available on macOS, if you install [MoltenVK].
82//!
83//! - Metal on macOS, using the [`metal`] crate's bindings.
84//!
85//! - Direct3D 12 on Windows, using the [`windows`] crate's bindings.
86//!
87//! [`ash`]: https://crates.io/crates/ash
88//! [MoltenVK]: https://github.com/KhronosGroup/MoltenVK
89//! [`metal`]: https://crates.io/crates/metal
90//! [`windows`]: https://crates.io/crates/windows
91//!
92//! ## Secondary backends
93//!
94//! The `wgpu-hal` crate has a partial implementation based on the following
95//! platform graphics API:
96//!
97//! - The GL backend is available anywhere OpenGL, OpenGL ES, or WebGL are
98//!   available. See the [`gles`] module documentation for details.
99//!
100//! [`gles`]: gles/index.html
101//!
102//! You can see what capabilities an adapter is missing by checking the
103//! [`DownlevelCapabilities`][tdc] in [`ExposedAdapter::capabilities`], available
104//! from [`Instance::enumerate_adapters`].
105//!
106//! The API is generally designed to fit the primary backends better than the
107//! secondary backends, so the latter may impose more overhead.
108//!
109//! [tdc]: wgt::DownlevelCapabilities
110//!
111//! ## Traits
112//!
113//! The `wgpu-hal` crate defines a handful of traits that together
114//! represent a cross-platform abstraction for modern GPU APIs.
115//!
116//! - The [`Api`] trait represents a `wgpu-hal` backend. It has no methods of its
117//!   own, only a collection of associated types.
118//!
119//! - [`Api::Instance`] implements the [`Instance`] trait. [`Instance::init`]
120//!   creates an instance value, which you can use to enumerate the adapters
121//!   available on the system. For example, [`vulkan::Api::Instance::init`][Ii]
122//!   returns an instance that can enumerate the Vulkan physical devices on your
123//!   system.
124//!
125//! - [`Api::Adapter`] implements the [`Adapter`] trait, representing a
126//!   particular device from a particular backend. For example, a Vulkan instance
127//!   might have a Lavapipe software adapter and a GPU-based adapter.
128//!
129//! - [`Api::Device`] implements the [`Device`] trait, representing an active
130//!   link to a device. You get a device value by calling [`Adapter::open`], and
131//!   then use it to create buffers, textures, shader modules, and so on.
132//!
133//! - [`Api::Queue`] implements the [`Queue`] trait, which you use to submit
134//!   command buffers to a given device.
135//!
136//! - [`Api::CommandEncoder`] implements the [`CommandEncoder`] trait, which you
137//!   use to build buffers of commands to submit to a queue. This has all the
138//!   methods for drawing and running compute shaders, which is presumably what
139//!   you're here for.
140//!
141//! - [`Api::Surface`] implements the [`Surface`] trait, which represents a
142//!   swapchain for presenting images on the screen, via interaction with the
143//!   system's window manager.
144//!
145//! The [`Api`] trait has various other associated types like [`Api::Buffer`] and
146//! [`Api::Texture`] that represent resources the rest of the interface can
147//! operate on, but these generally do not have their own traits.
148//!
149//! [Ii]: Instance::init
150//!
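//! Putting those pieces together, a rough sketch of bringing up a device might
//! look like the following (error handling is elided, `desc` is a caller-supplied
//! [`InstanceDescriptor`], and the exact fields destructured from
//! [`ExposedAdapter`] and [`OpenDevice`] are abbreviated for illustration):
//!
//! ```ignore
//! unsafe {
//!     // Create an instance for one particular backend.
//!     let instance = <vulkan::Api as Api>::Instance::init(&desc)?;
//!
//!     // Pick an adapter; each `ExposedAdapter` also carries capability info.
//!     let exposed = instance.enumerate_adapters(None).remove(0);
//!
//!     // Open a device and its associated queue on that adapter.
//!     let OpenDevice { device, queue } = exposed.adapter.open(
//!         wgt::Features::empty(),
//!         &wgt::Limits::default(),
//!         &wgt::MemoryHints::Performance,
//!     )?;
//! }
//! ```
//!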
151//! ## Validation is the calling code's responsibility, not `wgpu-hal`'s
152//!
153//! As much as possible, `wgpu-hal` traits place the burden of validation,
154//! resource tracking, and state tracking on the caller, not on the trait
155//! implementations themselves. Anything which can reasonably be handled in
156//! backend-independent code should be. A `wgpu_hal` backend's sole obligation is
157//! to provide portable behavior, and report conditions that the calling code
158//! can't reasonably anticipate, like device loss or running out of memory.
159//!
160//! The `wgpu` crate collection is intended for use in security-sensitive
161//! applications, like web browsers, where the API is available to untrusted
162//! code. This means that `wgpu-core`'s validation is not simply a service to
163//! developers, to be provided opportunistically when the performance costs are
164//! acceptable and the necessary data is ready at hand. Rather, `wgpu-core`'s
165//! validation must be exhaustive, to ensure that even malicious content cannot
166//! provoke and exploit undefined behavior in the platform's graphics API.
167//!
168//! Because graphics APIs' requirements are complex, the only practical way for
169//! `wgpu` to provide exhaustive validation is to comprehensively track the
170//! lifetime and state of all the resources in the system. Implementing this
171//! separately for each backend is infeasible; effort would be better spent
172//! making the cross-platform validation in `wgpu-core` legible and trustworthy.
173//! Fortunately, the requirements are largely similar across the various
174//! platforms, so cross-platform validation is practical.
175//!
176//! Some backends have specific requirements that aren't practical to foist off
177//! on the `wgpu-hal` user. For example, properly managing macOS Objective-C or
178//! Microsoft COM reference counts is best handled by using appropriate pointer
179//! types within the backend.
180//!
181//! A desire for "defense in depth" may suggest performing additional validation
182//! in `wgpu-hal` when the opportunity arises, but this must be done with
183//! caution. Even experienced contributors infer the expectations their changes
184//! must meet by considering not just requirements made explicit in types, tests,
185//! assertions, and comments, but also those implicit in the surrounding code.
186//! When one sees validation or state-tracking code in `wgpu-hal`, it is tempting
187//! to conclude, "Oh, `wgpu-hal` checks for this, so `wgpu-core` needn't worry
188//! about it - that would be redundant!" The responsibility for exhaustive
189//! validation always rests with `wgpu-core`, regardless of what may or may not
190//! be checked in `wgpu-hal`.
191//!
192//! To this end, any "defense in depth" validation that does appear in `wgpu-hal`
193//! for requirements that `wgpu-core` should have enforced should report failure
194//! via the `unreachable!` macro, because problems detected at this stage always
195//! indicate a bug in `wgpu-core`.
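//!
//! For instance, a backend that wants to double-check such a requirement might
//! write something like the following (the condition shown is purely
//! illustrative):
//!
//! ```ignore
//! // wgpu-core's validation should make this branch unreachable; if it is ever
//! // taken, that is a bug in wgpu-core, not a user error.
//! if desc.size == 0 {
//!     unreachable!("zero-sized buffer should have been rejected by wgpu-core");
//! }
//! ```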
196//!
197//! ## Debugging
198//!
199//! Most of the information on the wiki [Debugging wgpu Applications][wiki-debug]
200//! page still applies to this API, with the exception of API tracing/replay
201//! functionality, which is only available in `wgpu-core`.
202//!
203//! [wiki-debug]: https://github.com/gfx-rs/wgpu/wiki/Debugging-wgpu-Applications
204
205#![no_std]
206#![cfg_attr(docsrs, feature(doc_cfg))]
207#![allow(
208    // This happens on the GL backend, where the same code is used in both thread-safe and non-thread-safe configurations.
209    clippy::arc_with_non_send_sync,
210    // We don't use syntax sugar where it's not necessary.
211    clippy::match_like_matches_macro,
212    // Redundant matching is more explicit.
213    clippy::redundant_pattern_matching,
214    // Explicit lifetimes are often easier to reason about.
215    clippy::needless_lifetimes,
216    // No need for defaults in the internal types.
217    clippy::new_without_default,
218    // Matches are good and extendable, no need to make an exception here.
219    clippy::single_match,
220    // Push commands are more regular than macros.
221    clippy::vec_init_then_push,
222    // We unsafe impl `Send` for a reason.
223    clippy::non_send_fields_in_send_ty,
224    // TODO!
225    clippy::missing_safety_doc,
226    // It gets in the way a lot and does not prevent bugs in practice.
227    clippy::pattern_type_mismatch,
228    // We should investigate these.
229    clippy::large_enum_variant
230)]
231#![warn(
232    clippy::alloc_instead_of_core,
233    clippy::ptr_as_ptr,
234    clippy::std_instead_of_alloc,
235    clippy::std_instead_of_core,
236    trivial_casts,
237    trivial_numeric_casts,
238    unsafe_op_in_unsafe_fn,
239    unused_extern_crates,
240    unused_qualifications
241)]
242
243extern crate alloc;
244extern crate wgpu_types as wgt;
245// Each of these backends needs `std` in some fashion; usually `std::thread` functions.
246#[cfg(any(dx12, gles_with_std, metal, vulkan))]
247#[macro_use]
248extern crate std;
249
250/// DirectX12 API internals.
251#[cfg(dx12)]
252pub mod dx12;
253/// GLES API internals.
254#[cfg(gles)]
255pub mod gles;
256/// Metal API internals.
257#[cfg(metal)]
258pub mod metal;
259/// A dummy API implementation.
260// TODO(https://github.com/gfx-rs/wgpu/issues/7120): this should have a cfg
261pub mod noop;
262/// Vulkan API internals.
263#[cfg(vulkan)]
264pub mod vulkan;
265
266pub mod auxil;
267pub mod api {
268    #[cfg(dx12)]
269    pub use super::dx12::Api as Dx12;
270    #[cfg(gles)]
271    pub use super::gles::Api as Gles;
272    #[cfg(metal)]
273    pub use super::metal::Api as Metal;
274    pub use super::noop::Api as Noop;
275    #[cfg(vulkan)]
276    pub use super::vulkan::Api as Vulkan;
277}
278
279mod dynamic;
280#[cfg(feature = "validation_canary")]
281mod validation_canary;
282
283#[cfg(feature = "validation_canary")]
284pub use validation_canary::{ValidationCanary, VALIDATION_CANARY};
285
286pub(crate) use dynamic::impl_dyn_resource;
287pub use dynamic::{
288    DynAccelerationStructure, DynAcquiredSurfaceTexture, DynAdapter, DynBindGroup,
289    DynBindGroupLayout, DynBuffer, DynCommandBuffer, DynCommandEncoder, DynComputePipeline,
290    DynDevice, DynExposedAdapter, DynFence, DynInstance, DynOpenDevice, DynPipelineCache,
291    DynPipelineLayout, DynQuerySet, DynQueue, DynRenderPipeline, DynResource, DynSampler,
292    DynShaderModule, DynSurface, DynSurfaceTexture, DynTexture, DynTextureView,
293};
294
295#[allow(unused)]
296use alloc::boxed::Box;
297use alloc::{borrow::Cow, string::String, vec::Vec};
298use core::{
299    borrow::Borrow,
300    error::Error,
301    fmt,
302    num::{NonZeroU32, NonZeroU64},
303    ops::{Range, RangeInclusive},
304    ptr::NonNull,
305};
306
307use bitflags::bitflags;
308use raw_window_handle::DisplayHandle;
309use thiserror::Error;
310use wgt::WasmNotSendSync;
311
312cfg_if::cfg_if! {
313    if #[cfg(supports_ptr_atomics)] {
314        use alloc::sync::Arc;
315    } else if #[cfg(feature = "portable-atomic")] {
316        use portable_atomic_util::Arc;
317    }
318}
319
320// - Vertex + Fragment
321// - Compute
322// - Task + Mesh + Fragment
323pub const MAX_CONCURRENT_SHADER_STAGES: usize = 3;
324pub const MAX_ANISOTROPY: u8 = 16;
325pub const MAX_BIND_GROUPS: usize = 8;
326pub const MAX_VERTEX_BUFFERS: usize = 16;
327pub const MAX_COLOR_ATTACHMENTS: usize = 8;
328pub const MAX_MIP_LEVELS: u32 = 16;
329/// Size of a single occlusion/timestamp query, when copied into a buffer, in bytes.
330/// cbindgen:ignore
331pub const QUERY_SIZE: wgt::BufferAddress = 8;
332
333pub type Label<'a> = Option<&'a str>;
334pub type MemoryRange = Range<wgt::BufferAddress>;
335pub type FenceValue = u64;
336#[cfg(supports_64bit_atomics)]
337pub type AtomicFenceValue = core::sync::atomic::AtomicU64;
338#[cfg(not(supports_64bit_atomics))]
339pub type AtomicFenceValue = portable_atomic::AtomicU64;
340
341/// A callback to signal that wgpu is no longer using a resource.
342#[cfg(any(gles, vulkan))]
343pub type DropCallback = Box<dyn FnOnce() + Send + Sync + 'static>;
344
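/// Invokes its [`DropCallback`] when dropped, signaling that wgpu-hal is no
/// longer using the resource associated with that callback.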
345#[cfg(any(gles, vulkan))]
346pub struct DropGuard {
347    callback: Option<DropCallback>,
348}
349
350#[cfg(all(any(gles, vulkan), any(native, Emscripten)))]
351impl DropGuard {
352    fn from_option(callback: Option<DropCallback>) -> Option<Self> {
353        callback.map(|callback| Self {
354            callback: Some(callback),
355        })
356    }
357}
358
359#[cfg(any(gles, vulkan))]
360impl Drop for DropGuard {
361    fn drop(&mut self) {
362        if let Some(cb) = self.callback.take() {
363            (cb)();
364        }
365    }
366}
367
368#[cfg(any(gles, vulkan))]
369impl fmt::Debug for DropGuard {
370    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
371        f.debug_struct("DropGuard").finish()
372    }
373}
374
375#[derive(Clone, Debug, PartialEq, Eq, Error)]
376pub enum DeviceError {
377    #[error("Out of memory")]
378    OutOfMemory,
379    #[error("Device is lost")]
380    Lost,
381    #[error("Unexpected error variant (driver implementation is at fault)")]
382    Unexpected,
383}
384
385#[cfg(any(dx12, vulkan))]
386impl From<gpu_allocator::AllocationError> for DeviceError {
387    fn from(result: gpu_allocator::AllocationError) -> Self {
388        match result {
389            gpu_allocator::AllocationError::OutOfMemory => Self::OutOfMemory,
390            gpu_allocator::AllocationError::FailedToMap(e) => {
391                log::error!("gpu-allocator: Failed to map: {e}");
392                Self::Lost
393            }
394            gpu_allocator::AllocationError::NoCompatibleMemoryTypeFound => {
395                log::error!("gpu-allocator: No Compatible Memory Type Found");
396                Self::Lost
397            }
398            gpu_allocator::AllocationError::InvalidAllocationCreateDesc => {
399                log::error!("gpu-allocator: Invalid Allocation Creation Description");
400                Self::Lost
401            }
402            gpu_allocator::AllocationError::InvalidAllocatorCreateDesc(e) => {
403                log::error!("gpu-allocator: Invalid Allocator Creation Description: {e}");
404                Self::Lost
405            }
406
407            gpu_allocator::AllocationError::Internal(e) => {
408                log::error!("gpu-allocator: Internal Error: {e}");
409                Self::Lost
410            }
411            gpu_allocator::AllocationError::BarrierLayoutNeedsDevice10
412            | gpu_allocator::AllocationError::CastableFormatsRequiresEnhancedBarriers
413            | gpu_allocator::AllocationError::CastableFormatsRequiresAtLeastDevice12 => {
414                unreachable!()
415            }
416        }
417    }
418}
419
420// A copy of gpu_allocator::AllocationSizes that lets us read the configured values
421// in the dx12 backend. Ideally, we should instead add getters to
422// gpu_allocator::AllocationSizes and remove this type:
423// https://github.com/Traverse-Research/gpu-allocator/issues/295
424#[cfg_attr(not(any(dx12, vulkan)), expect(dead_code))]
425pub(crate) struct AllocationSizes {
426    pub(crate) min_device_memblock_size: u64,
427    pub(crate) max_device_memblock_size: u64,
428    pub(crate) min_host_memblock_size: u64,
429    pub(crate) max_host_memblock_size: u64,
430}
431
432impl AllocationSizes {
433    #[allow(dead_code)] // may be unused on some platforms
434    pub(crate) fn from_memory_hints(memory_hints: &wgt::MemoryHints) -> Self {
435        // TODO: the allocator's configuration should take hardware capability into
436        // account.
437        const MB: u64 = 1024 * 1024;
438
439        match memory_hints {
440            wgt::MemoryHints::Performance => Self {
441                min_device_memblock_size: 128 * MB,
442                max_device_memblock_size: 256 * MB,
443                min_host_memblock_size: 64 * MB,
444                max_host_memblock_size: 128 * MB,
445            },
446            wgt::MemoryHints::MemoryUsage => Self {
447                min_device_memblock_size: 8 * MB,
448                max_device_memblock_size: 64 * MB,
449                min_host_memblock_size: 4 * MB,
450                max_host_memblock_size: 32 * MB,
451            },
452            wgt::MemoryHints::Manual {
453                suballocated_device_memory_block_size,
454            } => {
455                // TODO: https://github.com/gfx-rs/wgpu/issues/8625
456                // Would it be useful to expose the host size in memory hints
457                // instead of always using half of the device size?
458                let device_size = suballocated_device_memory_block_size;
459                let host_size = device_size.start / 2..device_size.end / 2;
460
461                // gpu_allocator clamps the sizes between 4MiB and 256MiB, but we clamp them ourselves since we use
462                // the sizes when detecting high memory pressure and there is no way to query the values otherwise.
463                Self {
464                    min_device_memblock_size: device_size.start.clamp(4 * MB, 256 * MB),
465                    max_device_memblock_size: device_size.end.clamp(4 * MB, 256 * MB),
466                    min_host_memblock_size: host_size.start.clamp(4 * MB, 256 * MB),
467                    max_host_memblock_size: host_size.end.clamp(4 * MB, 256 * MB),
468                }
469            }
470        }
471    }
472}
473
474#[cfg(any(dx12, vulkan))]
475impl From<AllocationSizes> for gpu_allocator::AllocationSizes {
476    fn from(value: AllocationSizes) -> gpu_allocator::AllocationSizes {
477        gpu_allocator::AllocationSizes::new(
478            value.min_device_memblock_size,
479            value.min_host_memblock_size,
480        )
481        .with_max_device_memblock_size(value.max_device_memblock_size)
482        .with_max_host_memblock_size(value.max_host_memblock_size)
483    }
484}
485
486#[allow(dead_code)] // may be unused on some platforms
487#[cold]
488fn hal_usage_error<T: fmt::Display>(txt: T) -> ! {
489    panic!("wgpu-hal invariant was violated (usage error): {txt}")
490}
491
492#[allow(dead_code)] // may be unused on some platforms
493#[cold]
494fn hal_internal_error<T: fmt::Display>(txt: T) -> ! {
495    panic!("wgpu-hal ran into a preventable internal error: {txt}")
496}
497
498#[derive(Clone, Debug, Eq, PartialEq, Error)]
499pub enum ShaderError {
500    #[error("Compilation failed: {0:?}")]
501    Compilation(String),
502    #[error(transparent)]
503    Device(#[from] DeviceError),
504}
505
506#[derive(Clone, Debug, Eq, PartialEq, Error)]
507pub enum PipelineError {
508    #[error("Linkage failed for stage {0:?}: {1}")]
509    Linkage(wgt::ShaderStages, String),
510    #[error("Entry point for stage {0:?} is invalid")]
511    EntryPoint(naga::ShaderStage),
512    #[error(transparent)]
513    Device(#[from] DeviceError),
514    #[error("Pipeline constant error for stage {0:?}: {1}")]
515    PipelineConstants(wgt::ShaderStages, String),
516}
517
518#[derive(Clone, Debug, Eq, PartialEq, Error)]
519pub enum PipelineCacheError {
520    #[error(transparent)]
521    Device(#[from] DeviceError),
522}
523
524#[derive(Clone, Debug, Eq, PartialEq, Error)]
525pub enum SurfaceError {
526    #[error("Surface is lost")]
527    Lost,
528    #[error("Surface is outdated, needs to be re-created")]
529    Outdated,
530    #[error(transparent)]
531    Device(#[from] DeviceError),
532    #[error("Other reason: {0}")]
533    Other(&'static str),
534}
535
536/// Error occurring while trying to create an instance, or create a surface from an instance;
537/// typically relating to the state of the underlying graphics API or hardware.
538#[derive(Clone, Debug, Error)]
539#[error("{message}")]
540pub struct InstanceError {
541    /// These errors are very platform specific, so do not attempt to encode them as an enum.
542    ///
543    /// This message should describe the problem in sufficient detail to be useful for a
544    /// user-to-developer “why won't this work on my machine” bug report, and otherwise follow
545    /// <https://rust-lang.github.io/api-guidelines/interoperability.html#error-types-are-meaningful-and-well-behaved-c-good-err>.
546    message: String,
547
548    /// Underlying error value, if any is available.
549    #[source]
550    source: Option<Arc<dyn Error + Send + Sync + 'static>>,
551}
552
553impl InstanceError {
554    #[allow(dead_code)] // may be unused on some platforms
555    pub(crate) fn new(message: String) -> Self {
556        Self {
557            message,
558            source: None,
559        }
560    }
561    #[allow(dead_code)] // may be unused on some platforms
562    pub(crate) fn with_source(message: String, source: impl Error + Send + Sync + 'static) -> Self {
563        cfg_if::cfg_if! {
564            if #[cfg(supports_ptr_atomics)] {
565                let source = Arc::new(source);
566            } else {
567                // TODO(https://github.com/rust-lang/rust/issues/18598): avoid indirection via Box once arbitrary types support unsized coercion
568                let source: Box<dyn Error + Send + Sync + 'static> = Box::new(source);
569                let source = Arc::from(source);
570            }
571        }
572        Self {
573            message,
574            source: Some(source),
575        }
576    }
577}
578
579/// All the types and methods that make up an implementation on top of a backend.
580///
581/// Only the types that have non-dyn trait bounds have methods on them. Most methods
582/// are either on [`CommandEncoder`] or [`Device`].
583///
584/// The API can be used either through generics (via this trait and its associated
585/// types) or dynamically through the `Dyn*` traits.
586pub trait Api: Clone + fmt::Debug + Sized + WasmNotSendSync + 'static {
587    const VARIANT: wgt::Backend;
588
589    type Instance: DynInstance + Instance<A = Self>;
590    type Surface: DynSurface + Surface<A = Self>;
591    type Adapter: DynAdapter + Adapter<A = Self>;
592    type Device: DynDevice + Device<A = Self>;
593
594    type Queue: DynQueue + Queue<A = Self>;
595    type CommandEncoder: DynCommandEncoder + CommandEncoder<A = Self>;
596
597    /// This API's command buffer type.
598    ///
599    /// The only thing you can do with `CommandBuffer`s is build them
600    /// with a [`CommandEncoder`] and then pass them to
601    /// [`Queue::submit`] for execution, or destroy them by passing
602    /// them to [`CommandEncoder::reset_all`].
603    ///
604    /// [`CommandEncoder`]: Api::CommandEncoder
605    type CommandBuffer: DynCommandBuffer;
606
607    type Buffer: DynBuffer;
608    type Texture: DynTexture;
609    type SurfaceTexture: DynSurfaceTexture + Borrow<Self::Texture>;
610    type TextureView: DynTextureView;
611    type Sampler: DynSampler;
612    type QuerySet: DynQuerySet;
613
614    /// A value you can block on to wait for something to finish.
615    ///
616    /// A `Fence` holds a monotonically increasing [`FenceValue`]. You can call
617    /// [`Device::wait`] to block until a fence reaches or passes a value you
618    /// choose. [`Queue::submit`] can take a `Fence` and a [`FenceValue`] to
619    /// store in it when the submitted work is complete.
620    ///
621    /// Attempting to set a fence to a value less than its current value has no
622    /// effect.
623    ///
624    /// Waiting on a fence returns as soon as the fence reaches *or passes* the
625    /// requested value. This implies that, in order to reliably determine when
626    /// an operation has completed, operations must finish in order of
627    /// increasing fence values: if a higher-valued operation were to finish
628    /// before a lower-valued operation, then waiting for the fence to reach the
629    /// lower value could return before the lower-valued operation has actually
630    /// finished.
631    type Fence: DynFence;
632
633    type BindGroupLayout: DynBindGroupLayout;
634    type BindGroup: DynBindGroup;
635    type PipelineLayout: DynPipelineLayout;
636    type ShaderModule: DynShaderModule;
637    type RenderPipeline: DynRenderPipeline;
638    type ComputePipeline: DynComputePipeline;
639    type PipelineCache: DynPipelineCache;
640
641    type AccelerationStructure: DynAccelerationStructure + 'static;
642}
643
644pub trait Instance: Sized + WasmNotSendSync {
645    type A: Api;
646
647    unsafe fn init(desc: &InstanceDescriptor<'_>) -> Result<Self, InstanceError>;
648    unsafe fn create_surface(
649        &self,
650        display_handle: raw_window_handle::RawDisplayHandle,
651        window_handle: raw_window_handle::RawWindowHandle,
652    ) -> Result<<Self::A as Api>::Surface, InstanceError>;
653    /// `surface_hint` is only used by the GLES backend targeting WebGL2
654    unsafe fn enumerate_adapters(
655        &self,
656        surface_hint: Option<&<Self::A as Api>::Surface>,
657    ) -> Vec<ExposedAdapter<Self::A>>;
658}
659
660pub trait Surface: WasmNotSendSync {
661    type A: Api;
662
663    /// Configure `self` to use `device`.
664    ///
665    /// # Safety
666    ///
667    /// - All GPU work using `self` must have been completed.
668    /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
669    /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
670    /// - The surface `self` must not currently be configured to use any other [`Device`].
671    unsafe fn configure(
672        &self,
673        device: &<Self::A as Api>::Device,
674        config: &SurfaceConfiguration,
675    ) -> Result<(), SurfaceError>;
676
677    /// Unconfigure `self` on `device`.
678    ///
679    /// # Safety
680    ///
681    /// - All GPU work that uses `self` must have been completed.
682    /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
683    /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
684    /// - The surface `self` must have been configured on `device`.
685    unsafe fn unconfigure(&self, device: &<Self::A as Api>::Device);
686
687    /// Return the next texture to be presented by `self`, for the caller to draw on.
688    ///
689    /// On success, return an [`AcquiredSurfaceTexture`] representing the
690    /// texture into which the caller should draw the image to be displayed on
691    /// `self`.
692    ///
693    /// If `timeout` elapses before `self` has a texture ready to be acquired,
694    /// return `Ok(None)`. If `timeout` is `None`, wait indefinitely, with no
695    /// timeout.
696    ///
697    /// # Using an [`AcquiredSurfaceTexture`]
698    ///
699    /// On success, this function returns an [`AcquiredSurfaceTexture`] whose
700    /// [`texture`] field is a [`SurfaceTexture`] from which the caller can
701    /// [`borrow`] a [`Texture`] to draw on. The [`AcquiredSurfaceTexture`] also
702    /// carries some metadata about that [`SurfaceTexture`].
703    ///
704    /// All calls to [`Queue::submit`] that draw on that [`Texture`] must also
705    /// include the [`SurfaceTexture`] in the `surface_textures` argument.
706    ///
707    /// When you are done drawing on the texture, you can display it on `self`
708    /// by passing the [`SurfaceTexture`] and `self` to [`Queue::present`].
709    ///
710    /// If you do not wish to display the texture, you must pass the
711    /// [`SurfaceTexture`] to [`self.discard_texture`], so that it can be reused
712    /// by future acquisitions.
713    ///
714    /// # Portability
715    ///
716    /// Some backends can't support a timeout when acquiring a texture. On these
717    /// backends, `timeout` is ignored.
718    ///
719    /// # Safety
720    ///
721    /// - The surface `self` must currently be configured on some [`Device`].
722    ///
723    /// - The `fence` argument must be the same [`Fence`] passed to all calls to
724    ///   [`Queue::submit`] that used [`Texture`]s acquired from this surface.
725    ///
726    /// - You may only have one texture acquired from `self` at a time. When
727    ///   `acquire_texture` returns `Ok(Some(ast))`, you must pass the returned
728    ///   [`SurfaceTexture`] `ast.texture` to either [`Queue::present`] or
729    ///   [`Surface::discard_texture`] before calling `acquire_texture` again.
730    ///
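    /// # Example
    ///
    /// A rough sketch of one frame (render-pass recording, error handling, and
    /// the surrounding setup are omitted; the variable names are illustrative):
    ///
    /// ```ignore
    /// if let Some(ast) = unsafe { surface.acquire_texture(None, &fence)? } {
    ///     // ... record drawing commands targeting a view of the acquired texture ...
    ///     unsafe {
    ///         queue.submit(&[&cmd_buf], &[&ast.texture], (&mut fence, value))?;
    ///         queue.present(&surface, ast.texture)?;
    ///     }
    /// }
    /// ```
    ///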
731    /// [`texture`]: AcquiredSurfaceTexture::texture
732    /// [`SurfaceTexture`]: Api::SurfaceTexture
733    /// [`borrow`]: alloc::borrow::Borrow::borrow
734    /// [`Texture`]: Api::Texture
735    /// [`Fence`]: Api::Fence
736    /// [`self.discard_texture`]: Surface::discard_texture
737    unsafe fn acquire_texture(
738        &self,
739        timeout: Option<core::time::Duration>,
740        fence: &<Self::A as Api>::Fence,
741    ) -> Result<Option<AcquiredSurfaceTexture<Self::A>>, SurfaceError>;
742
743    /// Relinquish an acquired texture without presenting it.
744    ///
745    /// After this call, the texture underlying [`SurfaceTexture`] may be
746    /// returned by subsequent calls to [`self.acquire_texture`].
747    ///
748    /// # Safety
749    ///
750    /// - The surface `self` must currently be configured on some [`Device`].
751    ///
752    /// - `texture` must be a [`SurfaceTexture`] returned by a call to
753    ///   [`self.acquire_texture`] that has not yet been passed to
754    ///   [`Queue::present`].
755    ///
756    /// [`SurfaceTexture`]: Api::SurfaceTexture
757    /// [`self.acquire_texture`]: Surface::acquire_texture
758    unsafe fn discard_texture(&self, texture: <Self::A as Api>::SurfaceTexture);
759}
760
761pub trait Adapter: WasmNotSendSync {
762    type A: Api;
763
764    unsafe fn open(
765        &self,
766        features: wgt::Features,
767        limits: &wgt::Limits,
768        memory_hints: &wgt::MemoryHints,
769    ) -> Result<OpenDevice<Self::A>, DeviceError>;
770
771    /// Return the set of supported capabilities for a texture format.
772    unsafe fn texture_format_capabilities(
773        &self,
774        format: wgt::TextureFormat,
775    ) -> TextureFormatCapabilities;
776
777    /// Returns the capabilities of working with a specified surface.
778    ///
779    /// `None` means presentation is not supported for it.
780    unsafe fn surface_capabilities(
781        &self,
782        surface: &<Self::A as Api>::Surface,
783    ) -> Option<SurfaceCapabilities>;
784
785    /// Creates a [`PresentationTimestamp`] using the adapter's WSI.
786    ///
787    /// [`PresentationTimestamp`]: wgt::PresentationTimestamp
788    unsafe fn get_presentation_timestamp(&self) -> wgt::PresentationTimestamp;
789}
790
791/// A connection to a GPU and a pool of resources to use with it.
792///
793/// A `wgpu-hal` `Device` represents an open connection to a specific graphics
794/// processor, controlled via the backend [`Device::A`]. A `Device` is mostly
795/// used for creating resources. Each `Device` has an associated [`Queue`] used
796/// for command submission.
797///
798/// On Vulkan a `Device` corresponds to a logical device ([`VkDevice`]). Other
799/// backends don't have an exact analog: for example, [`ID3D12Device`]s and
800/// [`MTLDevice`]s are owned by the backends' [`wgpu_hal::Adapter`]
801/// implementations, and shared by all [`wgpu_hal::Device`]s created from that
802/// `Adapter`.
803///
804/// A `Device`'s life cycle is generally:
805///
806/// 1)  Obtain a `Device` and its associated [`Queue`] by calling
807///     [`Adapter::open`].
808///
809///     Alternatively, the backend-specific types that implement [`Adapter`] often
810///     have methods for creating a `wgpu-hal` `Device` from a platform-specific
811///     handle. For example, [`vulkan::Adapter::device_from_raw`] can create a
812///     [`vulkan::Device`] from an [`ash::Device`].
813///
814/// 1)  Create resources to use on the device by calling methods like
815///     [`Device::create_texture`] or [`Device::create_shader_module`].
816///
817/// 1)  Call [`Device::create_command_encoder`] to obtain a [`CommandEncoder`],
818///     which you can use to build [`CommandBuffer`]s holding commands to be
819///     executed on the GPU.
820///
821/// 1)  Call [`Queue::submit`] on the `Device`'s associated [`Queue`] to submit
822///     [`CommandBuffer`]s for execution on the GPU. If needed, call
823///     [`Device::wait`] to wait for them to finish execution.
824///
825/// 1)  Free resources with methods like [`Device::destroy_texture`] or
826///     [`Device::destroy_shader_module`].
827///
828/// 1)  Drop the device.
829///
830/// [`VkDevice`]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VkDevice
831/// [`ID3D12Device`]: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/nn-d3d12-id3d12device
832/// [`MTLDevice`]: https://developer.apple.com/documentation/metal/mtldevice
833/// [`wgpu_hal::Adapter`]: Adapter
834/// [`wgpu_hal::Device`]: Device
835/// [`vulkan::Adapter::device_from_raw`]: vulkan/struct.Adapter.html#method.device_from_raw
836/// [`vulkan::Device`]: vulkan/struct.Device.html
837/// [`ash::Device`]: https://docs.rs/ash/latest/ash/struct.Device.html
838/// [`CommandBuffer`]: Api::CommandBuffer
839///
840/// # Safety
841///
842/// As with other `wgpu-hal` APIs, [validation] is the caller's
843/// responsibility. Here are the general requirements for all `Device`
844/// methods:
845///
846/// - Any resource passed to a `Device` method must have been created by that
847///   `Device`. For example, a [`Texture`] passed to [`Device::destroy_texture`] must
848///   have been created with the `Device` passed as `self`.
849///
850/// - Resources may not be destroyed if they are used by any submitted command
851///   buffers that have not yet finished execution.
852///
853/// [validation]: index.html#validation-is-the-calling-codes-responsibility-not-wgpu-hals
854/// [`Texture`]: Api::Texture
855pub trait Device: WasmNotSendSync {
856    type A: Api;
857
858    /// Creates a new buffer.
859    ///
860    /// The initial usage is `wgt::BufferUses::empty()`.
861    unsafe fn create_buffer(
862        &self,
863        desc: &BufferDescriptor,
864    ) -> Result<<Self::A as Api>::Buffer, DeviceError>;
865
866    /// Free `buffer` and any GPU resources it owns.
867    ///
868    /// Note that backends are allowed to allocate GPU memory for buffers from
869    /// allocation pools, and this call is permitted to simply return `buffer`'s
870    /// storage to that pool, without making it available to other applications.
871    ///
872    /// # Safety
873    ///
874    /// - The given `buffer` must not currently be mapped.
875    unsafe fn destroy_buffer(&self, buffer: <Self::A as Api>::Buffer);
876
877    /// A hook for when a wgpu-core buffer is created from a raw wgpu-hal buffer.
878    unsafe fn add_raw_buffer(&self, buffer: &<Self::A as Api>::Buffer);
879
880    /// Return a pointer to CPU memory mapping the contents of `buffer`.
881    ///
882    /// Buffer mappings are persistent: the buffer may remain mapped on the CPU
883    /// while the GPU reads or writes to it. (Note that `wgpu_core` does not use
884    /// this feature: when a `wgpu_core::Buffer` is unmapped, the underlying
885    /// `wgpu_hal` buffer is also unmapped.)
886    ///
887    /// If this function returns `Ok(mapping)`, then:
888    ///
889    /// - `mapping.ptr` is the CPU address of the start of the mapped memory.
890    ///
891    /// - If `mapping.is_coherent` is `true`, then CPU writes to the mapped
892    ///   memory are immediately visible on the GPU, and vice versa.
893    ///
894    /// # Safety
895    ///
896    /// - The given `buffer` must have been created with the [`MAP_READ`] or
897    ///   [`MAP_WRITE`] flags set in [`BufferDescriptor::usage`].
898    ///
899    /// - The given `range` must fall within the size of `buffer`.
900    ///
901    /// - The caller must avoid data races between the CPU and the GPU. A data
902    ///   race is any pair of accesses to a particular byte, one of which is a
903    ///   write, that are not ordered with respect to each other by some sort of
904    ///   synchronization operation.
905    ///
906    /// - If this function returns `Ok(mapping)` and `mapping.is_coherent` is
907    ///   `false`, then:
908    ///
909    ///   - Every CPU write to a mapped byte followed by a GPU read of that byte
910    ///     must have at least one call to [`Device::flush_mapped_ranges`]
911    ///     covering that byte that occurs between those two accesses.
912    ///
913    ///   - Every GPU write to a mapped byte followed by a CPU read of that byte
914    ///     must have at least one call to [`Device::invalidate_mapped_ranges`]
915    ///     covering that byte that occurs between those two accesses.
916    ///
917    ///   Note that the data race rule above requires that all such access pairs
918    ///   be ordered, so it is meaningful to talk about what must occur
919    ///   "between" them.
920    ///
921    /// - Zero-sized mappings are not allowed.
922    ///
923    /// - The returned [`BufferMapping::ptr`] must not be used after a call to
924    ///   [`Device::unmap_buffer`].
925    ///
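    /// # Example
    ///
    /// A rough sketch (assuming some `data: &[u8]`, and eliding error handling)
    /// of writing to a mappable buffer and flushing the write when needed:
    ///
    /// ```ignore
    /// unsafe {
    ///     let range = 0..data.len() as wgt::BufferAddress;
    ///     let mapping = device.map_buffer(&buffer, range.clone())?;
    ///     core::ptr::copy_nonoverlapping(data.as_ptr(), mapping.ptr.as_ptr(), data.len());
    ///     if !mapping.is_coherent {
    ///         // Make the CPU write visible to subsequent GPU reads.
    ///         device.flush_mapped_ranges(&buffer, core::iter::once(range));
    ///     }
    ///     device.unmap_buffer(&buffer);
    /// }
    /// ```
    ///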
926    /// [`MAP_READ`]: wgt::BufferUses::MAP_READ
927    /// [`MAP_WRITE`]: wgt::BufferUses::MAP_WRITE
928    unsafe fn map_buffer(
929        &self,
930        buffer: &<Self::A as Api>::Buffer,
931        range: MemoryRange,
932    ) -> Result<BufferMapping, DeviceError>;
933
934    /// Remove the mapping established by the last call to [`Device::map_buffer`].
935    ///
936    /// # Safety
937    ///
938    /// - The given `buffer` must be currently mapped.
939    unsafe fn unmap_buffer(&self, buffer: &<Self::A as Api>::Buffer);
940
941    /// Indicate that CPU writes to mapped buffer memory should be made visible to the GPU.
942    ///
943    /// # Safety
944    ///
945    /// - The given `buffer` must be currently mapped.
946    ///
947    /// - All ranges produced by `ranges` must fall within `buffer`'s size.
948    unsafe fn flush_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
949    where
950        I: Iterator<Item = MemoryRange>;
951
952    /// Indicate that GPU writes to mapped buffer memory should be made visible to the CPU.
953    ///
954    /// # Safety
955    ///
956    /// - The given `buffer` must be currently mapped.
957    ///
958    /// - All ranges produced by `ranges` must fall within `buffer`'s size.
959    unsafe fn invalidate_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
960    where
961        I: Iterator<Item = MemoryRange>;
962
963    /// Creates a new texture.
964    ///
965    /// The initial usage for all subresources is `wgt::TextureUses::UNINITIALIZED`.
966    unsafe fn create_texture(
967        &self,
968        desc: &TextureDescriptor,
969    ) -> Result<<Self::A as Api>::Texture, DeviceError>;
970    unsafe fn destroy_texture(&self, texture: <Self::A as Api>::Texture);
971
972    /// A hook for when a wgpu-core texture is created from a raw wgpu-hal texture.
973    unsafe fn add_raw_texture(&self, texture: &<Self::A as Api>::Texture);
974
975    unsafe fn create_texture_view(
976        &self,
977        texture: &<Self::A as Api>::Texture,
978        desc: &TextureViewDescriptor,
979    ) -> Result<<Self::A as Api>::TextureView, DeviceError>;
980    unsafe fn destroy_texture_view(&self, view: <Self::A as Api>::TextureView);
981    unsafe fn create_sampler(
982        &self,
983        desc: &SamplerDescriptor,
984    ) -> Result<<Self::A as Api>::Sampler, DeviceError>;
985    unsafe fn destroy_sampler(&self, sampler: <Self::A as Api>::Sampler);
986
987    /// Create a fresh [`CommandEncoder`].
988    ///
989    /// The new `CommandEncoder` is in the "closed" state.
990    unsafe fn create_command_encoder(
991        &self,
992        desc: &CommandEncoderDescriptor<<Self::A as Api>::Queue>,
993    ) -> Result<<Self::A as Api>::CommandEncoder, DeviceError>;
994
995    /// Creates a bind group layout.
996    unsafe fn create_bind_group_layout(
997        &self,
998        desc: &BindGroupLayoutDescriptor,
999    ) -> Result<<Self::A as Api>::BindGroupLayout, DeviceError>;
1000    unsafe fn destroy_bind_group_layout(&self, bg_layout: <Self::A as Api>::BindGroupLayout);
1001    unsafe fn create_pipeline_layout(
1002        &self,
1003        desc: &PipelineLayoutDescriptor<<Self::A as Api>::BindGroupLayout>,
1004    ) -> Result<<Self::A as Api>::PipelineLayout, DeviceError>;
1005    unsafe fn destroy_pipeline_layout(&self, pipeline_layout: <Self::A as Api>::PipelineLayout);
1006
1007    #[allow(clippy::type_complexity)]
1008    unsafe fn create_bind_group(
1009        &self,
1010        desc: &BindGroupDescriptor<
1011            <Self::A as Api>::BindGroupLayout,
1012            <Self::A as Api>::Buffer,
1013            <Self::A as Api>::Sampler,
1014            <Self::A as Api>::TextureView,
1015            <Self::A as Api>::AccelerationStructure,
1016        >,
1017    ) -> Result<<Self::A as Api>::BindGroup, DeviceError>;
1018    unsafe fn destroy_bind_group(&self, group: <Self::A as Api>::BindGroup);
1019
1020    unsafe fn create_shader_module(
1021        &self,
1022        desc: &ShaderModuleDescriptor,
1023        shader: ShaderInput,
1024    ) -> Result<<Self::A as Api>::ShaderModule, ShaderError>;
1025    unsafe fn destroy_shader_module(&self, module: <Self::A as Api>::ShaderModule);
1026
1027    #[allow(clippy::type_complexity)]
1028    unsafe fn create_render_pipeline(
1029        &self,
1030        desc: &RenderPipelineDescriptor<
1031            <Self::A as Api>::PipelineLayout,
1032            <Self::A as Api>::ShaderModule,
1033            <Self::A as Api>::PipelineCache,
1034        >,
1035    ) -> Result<<Self::A as Api>::RenderPipeline, PipelineError>;
1036    unsafe fn destroy_render_pipeline(&self, pipeline: <Self::A as Api>::RenderPipeline);
1037
1038    #[allow(clippy::type_complexity)]
1039    unsafe fn create_compute_pipeline(
1040        &self,
1041        desc: &ComputePipelineDescriptor<
1042            <Self::A as Api>::PipelineLayout,
1043            <Self::A as Api>::ShaderModule,
1044            <Self::A as Api>::PipelineCache,
1045        >,
1046    ) -> Result<<Self::A as Api>::ComputePipeline, PipelineError>;
1047    unsafe fn destroy_compute_pipeline(&self, pipeline: <Self::A as Api>::ComputePipeline);
1048
1049    unsafe fn create_pipeline_cache(
1050        &self,
1051        desc: &PipelineCacheDescriptor<'_>,
1052    ) -> Result<<Self::A as Api>::PipelineCache, PipelineCacheError>;
1053    fn pipeline_cache_validation_key(&self) -> Option<[u8; 16]> {
1054        None
1055    }
1056    unsafe fn destroy_pipeline_cache(&self, cache: <Self::A as Api>::PipelineCache);
1057
1058    unsafe fn create_query_set(
1059        &self,
1060        desc: &wgt::QuerySetDescriptor<Label>,
1061    ) -> Result<<Self::A as Api>::QuerySet, DeviceError>;
1062    unsafe fn destroy_query_set(&self, set: <Self::A as Api>::QuerySet);
1063    unsafe fn create_fence(&self) -> Result<<Self::A as Api>::Fence, DeviceError>;
1064    unsafe fn destroy_fence(&self, fence: <Self::A as Api>::Fence);
1065    unsafe fn get_fence_value(
1066        &self,
1067        fence: &<Self::A as Api>::Fence,
1068    ) -> Result<FenceValue, DeviceError>;
1069
1070    /// Wait for `fence` to reach `value`.
1071    ///
1072    /// Operations like [`Queue::submit`] can accept a [`Fence`] and a
1073    /// [`FenceValue`] to store in it, so you can use this `wait` function
1074    /// to wait for a given queue submission to finish execution.
1075    ///
1076    /// The `value` argument must be a value that some actual operation you have
1077    /// already presented to the device is going to store in `fence`. You cannot
1078    /// wait for values yet to be submitted. (This restriction accommodates
1079    /// implementations like the `vulkan` backend's [`FencePool`] that must
1080    /// allocate a distinct synchronization object for each fence value one is
1081    /// able to wait for.)
1082    ///
1083    /// Calling `wait` with a lower [`FenceValue`] than `fence`'s current value
1084    /// returns immediately.
1085    ///
1086    /// If `timeout` is `None`, this function blocks until `fence` reaches `value`
1087    /// or an error is encountered; otherwise it waits at most `timeout`.
1088    ///
1089    /// Returns `Ok(true)` on success and `Ok(false)` on timeout.
1090    ///
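    /// For example, a rough sketch (variable names are illustrative) of waiting
    /// for a particular submission to finish:
    ///
    /// ```ignore
    /// unsafe {
    ///     queue.submit(&[&cmd_buf], &[], (&mut fence, value))?;
    ///     // Block until the GPU reaches (or passes) `value`, with no timeout.
    ///     let signaled = device.wait(&fence, value, None)?;
    ///     assert!(signaled);
    /// }
    /// ```
    ///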
1091    /// [`Fence`]: Api::Fence
1092    /// [`FencePool`]: vulkan/enum.Fence.html#variant.FencePool
1093    unsafe fn wait(
1094        &self,
1095        fence: &<Self::A as Api>::Fence,
1096        value: FenceValue,
1097        timeout: Option<core::time::Duration>,
1098    ) -> Result<bool, DeviceError>;
1099
1100    /// Start a graphics debugger capture.
1101    ///
1102    /// # Safety
1103    ///
1104    /// See [`wgpu::Device::start_graphics_debugger_capture`][api] for more details.
1105    ///
1106    /// [api]: ../wgpu/struct.Device.html#method.start_graphics_debugger_capture
1107    unsafe fn start_graphics_debugger_capture(&self) -> bool;
1108
1109    /// Stop a graphics debugger capture.
1110    ///
1111    /// # Safety
1112    ///
1113    /// See [`wgpu::Device::stop_graphics_debugger_capture`][api] for more details.
1114    ///
1115    /// [api]: ../wgpu/struct.Device.html#method.stop_graphics_debugger_capture
1116    unsafe fn stop_graphics_debugger_capture(&self);
1117
1118    #[allow(unused_variables)]
1119    unsafe fn pipeline_cache_get_data(
1120        &self,
1121        cache: &<Self::A as Api>::PipelineCache,
1122    ) -> Option<Vec<u8>> {
1123        None
1124    }
1125
1126    unsafe fn create_acceleration_structure(
1127        &self,
1128        desc: &AccelerationStructureDescriptor,
1129    ) -> Result<<Self::A as Api>::AccelerationStructure, DeviceError>;
1130    unsafe fn get_acceleration_structure_build_sizes(
1131        &self,
1132        desc: &GetAccelerationStructureBuildSizesDescriptor<<Self::A as Api>::Buffer>,
1133    ) -> AccelerationStructureBuildSizes;
1134    unsafe fn get_acceleration_structure_device_address(
1135        &self,
1136        acceleration_structure: &<Self::A as Api>::AccelerationStructure,
1137    ) -> wgt::BufferAddress;
1138    unsafe fn destroy_acceleration_structure(
1139        &self,
1140        acceleration_structure: <Self::A as Api>::AccelerationStructure,
1141    );
1142    fn tlas_instance_to_bytes(&self, instance: TlasInstance) -> Vec<u8>;
1143
1144    fn get_internal_counters(&self) -> wgt::HalCounters;
1145
1146    fn generate_allocator_report(&self) -> Option<wgt::AllocatorReport> {
1147        None
1148    }
1149
1150    fn check_if_oom(&self) -> Result<(), DeviceError>;
1151}
1152
1153pub trait Queue: WasmNotSendSync {
1154    type A: Api;
1155
1156    /// Submit `command_buffers` for execution on GPU.
1157    ///
1158    /// Update `fence` to `value` when the operation is complete. See
1159    /// [`Fence`] for details.
1160    ///
1161    /// All command buffers submitted to a `wgpu_hal` queue are executed in the
1162    /// order they're submitted, with each buffer able to observe the effects of
1163    /// previous buffers' execution. Specifically:
1164    ///
1165    /// - If two calls to `submit` on a single `Queue` occur in a particular
1166    ///   order (that is, they happen on the same thread, or on two threads that
1167    ///   have synchronized to establish an ordering), then the first
1168    ///   submission's commands all complete execution before any of the second
1169    ///   submission's commands begin. All results produced by one submission
1170    ///   are visible to the next.
1171    ///
1172    /// - Within a submission, command buffers execute in the order in which they
1173    ///   appear in `command_buffers`. All results produced by one buffer are
1174    ///   visible to the next.
1175    ///
1176    /// If two calls to `submit` on a single `Queue` from different threads are
1177    /// not synchronized to occur in a particular order, they must pass distinct
1178    /// [`Fence`]s. As explained in the [`Fence`] documentation, waiting for
1179    /// operations to complete is only trustworthy when operations finish in
1180    /// order of increasing fence value, but submissions from different threads
1181    /// cannot determine how to order the fence values if the submissions
1182    /// themselves are unordered. If each thread uses a separate [`Fence`], this
1183    /// problem does not arise.
1184    ///
1185    /// # Safety
1186    ///
1187    /// - Each [`CommandBuffer`][cb] in `command_buffers` must have been created
1188    ///   from a [`CommandEncoder`][ce] that was constructed from the
1189    ///   [`Device`][d] associated with this [`Queue`].
1190    ///
1191    /// - Each [`CommandBuffer`][cb] must remain alive until the submitted
1192    ///   commands have finished execution. Since command buffers must not
1193    ///   outlive their encoders, this implies that the encoders must remain
1194    ///   alive as well.
1195    ///
1196    /// - All resources used by a submitted [`CommandBuffer`][cb]
1197    ///   ([`Texture`][t]s, [`BindGroup`][bg]s, [`RenderPipeline`][rp]s, and so
1198    ///   on) must remain alive until the command buffer finishes execution.
1199    ///
1200    /// - Every [`SurfaceTexture`][st] that any command in `command_buffers`
1201    ///   writes to must appear in the `surface_textures` argument.
1202    ///
1203    /// - No [`SurfaceTexture`][st] may appear in the `surface_textures`
1204    ///   argument more than once.
1205    ///
1206    /// - Each [`SurfaceTexture`][st] in `surface_textures` must be configured
1207    ///   for use with the [`Device`][d] associated with this [`Queue`],
1208    ///   typically by calling [`Surface::configure`].
1209    ///
1210    /// - All calls to this function that include a given [`SurfaceTexture`][st]
1211    ///   in `surface_textures` must use the same [`Fence`].
1212    ///
1213    /// - The [`Fence`] passed as `signal_fence.0` must remain alive until
1214    ///   all submissions that will signal it have completed.
1215    ///
1216    /// [`Fence`]: Api::Fence
1217    /// [cb]: Api::CommandBuffer
1218    /// [ce]: Api::CommandEncoder
1219    /// [d]: Api::Device
1220    /// [t]: Api::Texture
1221    /// [bg]: Api::BindGroup
1222    /// [rp]: Api::RenderPipeline
1223    /// [st]: Api::SurfaceTexture
1224    unsafe fn submit(
1225        &self,
1226        command_buffers: &[&<Self::A as Api>::CommandBuffer],
1227        surface_textures: &[&<Self::A as Api>::SurfaceTexture],
1228        signal_fence: (&mut <Self::A as Api>::Fence, FenceValue),
1229    ) -> Result<(), DeviceError>;
1230    unsafe fn present(
1231        &self,
1232        surface: &<Self::A as Api>::Surface,
1233        texture: <Self::A as Api>::SurfaceTexture,
1234    ) -> Result<(), SurfaceError>;
1235    unsafe fn get_timestamp_period(&self) -> f32;
1236}
1237
1238/// Encoder and allocation pool for `CommandBuffer`s.
1239///
1240/// A `CommandEncoder` not only constructs `CommandBuffer`s but also
1241/// acts as the allocation pool that owns the buffers' underlying
1242/// storage. Thus, `CommandBuffer`s must not outlive the
1243/// `CommandEncoder` that created them.
1244///
1245/// The life cycle of a `CommandBuffer` is as follows:
1246///
1247/// - Call [`Device::create_command_encoder`] to create a new
1248///   `CommandEncoder`, in the "closed" state.
1249///
1250/// - Call `begin_encoding` on a closed `CommandEncoder` to begin
1251///   recording commands. This puts the `CommandEncoder` in the
1252///   "recording" state.
1253///
1254/// - Call methods like `copy_buffer_to_buffer`, `begin_render_pass`,
1255///   etc. on a "recording" `CommandEncoder` to add commands to the
1256///   list. (If an error occurs, you must call `discard_encoding`; see
1257///   below.)
1258///
1259/// - Call `end_encoding` on a recording `CommandEncoder` to close the
1260///   encoder and construct a fresh `CommandBuffer` consisting of the
1261///   list of commands recorded up to that point.
1262///
1263/// - Call `discard_encoding` on a recording `CommandEncoder` to drop
1264///   the commands recorded thus far and close the encoder. This is
1265///   the only safe thing to do on a `CommandEncoder` if an error has
1266///   occurred while recording commands.
1267///
1268/// - Call `reset_all` on a closed `CommandEncoder`, passing all the
1269///   live `CommandBuffers` built from it. All the `CommandBuffer`s
1270///   are destroyed, and their resources are freed.
1271///
1272/// # Safety
1273///
1274/// - The `CommandEncoder` must be in the states described above to
1275///   make the given calls.
1276///
1277/// - A `CommandBuffer` that has been submitted for execution on the
1278///   GPU must live until its execution is complete.
1279///
1280/// - A `CommandBuffer` must not outlive the `CommandEncoder` that
1281///   built it.
1282///
1283/// It is the user's responsibility to meet these requirements. This
1284/// allows `CommandEncoder` implementations to keep their state
1285/// tracking to a minimum.
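///
/// A rough sketch of one trip around this life cycle (resource creation, the
/// recorded commands themselves, and error handling are omitted; the names are
/// illustrative):
///
/// ```ignore
/// unsafe {
///     encoder.begin_encoding(Some("frame"))?;             // closed -> recording
///     encoder.copy_buffer_to_buffer(&src, &dst, regions); // record some commands
///     let cmd_buf = encoder.end_encoding()?;              // recording -> closed
///     queue.submit(&[&cmd_buf], &[], (&mut fence, value))?;
///     device.wait(&fence, value, None)?;                  // wait for execution
///     encoder.reset_all([cmd_buf].into_iter());           // reclaim the storage
/// }
/// ```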
1286pub trait CommandEncoder: WasmNotSendSync + fmt::Debug {
1287    type A: Api;
1288
1289    /// Begin encoding a new command buffer.
1290    ///
1291    /// This puts this `CommandEncoder` in the "recording" state.
1292    ///
1293    /// # Safety
1294    ///
1295    /// This `CommandEncoder` must be in the "closed" state.
1296    unsafe fn begin_encoding(&mut self, label: Label) -> Result<(), DeviceError>;
1297
1298    /// Discard the command list under construction.
1299    ///
1300    /// If an error has occurred while recording commands, this
1301    /// is the only safe thing to do with the encoder.
1302    ///
1303    /// This puts this `CommandEncoder` in the "closed" state.
1304    ///
1305    /// # Safety
1306    ///
1307    /// This `CommandEncoder` must be in the "recording" state.
1308    ///
1309    /// Callers must not assume that implementations of this
1310    /// function are idempotent, and thus should not call it
1311    /// multiple times in a row.
1312    unsafe fn discard_encoding(&mut self);
1313
1314    /// Return a fresh [`CommandBuffer`] holding the recorded commands.
1315    ///
1316    /// The returned [`CommandBuffer`] holds all the commands recorded
1317    /// on this `CommandEncoder` since the last call to
1318    /// [`begin_encoding`].
1319    ///
1320    /// This puts this `CommandEncoder` in the "closed" state.
1321    ///
1322    /// # Safety
1323    ///
1324    /// This `CommandEncoder` must be in the "recording" state.
1325    ///
1326    /// The returned [`CommandBuffer`] must not outlive this
1327    /// `CommandEncoder`. Implementations are allowed to build
1328    /// `CommandBuffer`s that depend on storage owned by this
1329    /// `CommandEncoder`.
1330    ///
1331    /// [`CommandBuffer`]: Api::CommandBuffer
1332    /// [`begin_encoding`]: CommandEncoder::begin_encoding
1333    unsafe fn end_encoding(&mut self) -> Result<<Self::A as Api>::CommandBuffer, DeviceError>;
1334
1335    /// Reclaim all resources belonging to this `CommandEncoder`.
1336    ///
1337    /// # Safety
1338    ///
1339    /// This `CommandEncoder` must be in the "closed" state.
1340    ///
1341    /// The `command_buffers` iterator must produce all the live
1342    /// [`CommandBuffer`]s built using this `CommandEncoder` --- that
1343    /// is, every extant `CommandBuffer` returned from `end_encoding`.
1344    ///
1345    /// [`CommandBuffer`]: Api::CommandBuffer
1346    unsafe fn reset_all<I>(&mut self, command_buffers: I)
1347    where
1348        I: Iterator<Item = <Self::A as Api>::CommandBuffer>;
1349
1350    unsafe fn transition_buffers<'a, T>(&mut self, barriers: T)
1351    where
1352        T: Iterator<Item = BufferBarrier<'a, <Self::A as Api>::Buffer>>;
1353
1354    unsafe fn transition_textures<'a, T>(&mut self, barriers: T)
1355    where
1356        T: Iterator<Item = TextureBarrier<'a, <Self::A as Api>::Texture>>;
1357
1358    // copy operations
1359
1360    unsafe fn clear_buffer(&mut self, buffer: &<Self::A as Api>::Buffer, range: MemoryRange);
1361
1362    unsafe fn copy_buffer_to_buffer<T>(
1363        &mut self,
1364        src: &<Self::A as Api>::Buffer,
1365        dst: &<Self::A as Api>::Buffer,
1366        regions: T,
1367    ) where
1368        T: Iterator<Item = BufferCopy>;
1369
1370    /// Copy from an external image to an internal texture.
1371    /// Works with a single array layer.
1372    /// Note: the current usage of `dst` must be `wgt::TextureUses::COPY_DST`.
1373    /// Note: the copy extent is in physical size (rounded to the block size).
1374    #[cfg(webgl)]
1375    unsafe fn copy_external_image_to_texture<T>(
1376        &mut self,
1377        src: &wgt::CopyExternalImageSourceInfo,
1378        dst: &<Self::A as Api>::Texture,
1379        dst_premultiplication: bool,
1380        regions: T,
1381    ) where
1382        T: Iterator<Item = TextureCopy>;
1383
1384    /// Copy from one texture to another.
1385    /// Works with a single array layer.
1386    /// Note: the current usage of `dst` must be `wgt::TextureUses::COPY_DST`.
1387    /// Note: the copy extent is in physical size (rounded to the block size).
1388    unsafe fn copy_texture_to_texture<T>(
1389        &mut self,
1390        src: &<Self::A as Api>::Texture,
1391        src_usage: wgt::TextureUses,
1392        dst: &<Self::A as Api>::Texture,
1393        regions: T,
1394    ) where
1395        T: Iterator<Item = TextureCopy>;
1396
1397    /// Copy from buffer to texture.
1398    /// Works with a single array layer.
1399    /// Note: the current usage of `dst` must be `wgt::TextureUses::COPY_DST`.
1400    /// Note: the copy extent is in physical size (rounded to the block size).
1401    unsafe fn copy_buffer_to_texture<T>(
1402        &mut self,
1403        src: &<Self::A as Api>::Buffer,
1404        dst: &<Self::A as Api>::Texture,
1405        regions: T,
1406    ) where
1407        T: Iterator<Item = BufferTextureCopy>;
1408
1409    /// Copy from texture to buffer.
1410    /// Works with a single array layer.
1411    /// Note: the copy extent is in physical size (rounded to the block size).
1412    unsafe fn copy_texture_to_buffer<T>(
1413        &mut self,
1414        src: &<Self::A as Api>::Texture,
1415        src_usage: wgt::TextureUses,
1416        dst: &<Self::A as Api>::Buffer,
1417        regions: T,
1418    ) where
1419        T: Iterator<Item = BufferTextureCopy>;
1420
1421    unsafe fn copy_acceleration_structure_to_acceleration_structure(
1422        &mut self,
1423        src: &<Self::A as Api>::AccelerationStructure,
1424        dst: &<Self::A as Api>::AccelerationStructure,
1425        copy: wgt::AccelerationStructureCopy,
1426    );
1427    // pass common
1428
1429    /// Sets the bind group at `index` to `group`.
1430    ///
1431    /// If this is not the first call to `set_bind_group` within the current
1432    /// render or compute pass:
1433    ///
1434    /// - If `layout` contains `n` bind group layouts, then any previously set
1435    ///   bind groups at indices `n` or higher are cleared.
1436    ///
1437    /// - If the first `m` bind group layouts of `layout` are equal to those of
1438    ///   the previously passed layout, but no more, then any previously set
1439    ///   bind groups at indices `m` or higher are cleared.
1440    ///
1441    /// It follows from the above that passing the same layout as before doesn't
1442    /// clear any bind groups.
1443    ///
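    /// For example (a hedged sketch; `layout_ab`, `layout_ac`, and the groups
    /// are hypothetical, with `layout_ab` holding bind group layouts `[A, B]`
    /// and `layout_ac` holding `[A, C]`):
    ///
    /// ```ignore
    /// unsafe {
    ///     encoder.set_bind_group(&layout_ab, 0, &group_a, &[]);
    ///     encoder.set_bind_group(&layout_ab, 1, &group_b, &[]);
    ///     // Only the first bind group layout of `layout_ac` matches `layout_ab`,
    ///     // so this call clears the group previously set at index 1 and binds
    ///     // `group_c` there; the group at index 0 stays bound.
    ///     encoder.set_bind_group(&layout_ac, 1, &group_c, &[]);
    /// }
    /// ```
    ///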
1444    /// # Safety
1445    ///
1446    /// - This [`CommandEncoder`] must be within a render or compute pass.
1447    ///
1448    /// - `index` must be the valid index of some bind group layout in `layout`.
1449    ///   Call this the "relevant bind group layout".
1450    ///
1451    /// - The layout of `group` must be equal to the relevant bind group layout.
1452    ///
1453    /// - The length of `dynamic_offsets` must match the number of buffer
1454    ///   bindings [with dynamic offsets][hdo] in the relevant bind group
1455    ///   layout.
1456    ///
1457    /// - If those buffer bindings are ordered by increasing [`binding` number]
1458    ///   and paired with elements from `dynamic_offsets`, then each offset must
1459    ///   be a valid offset for the binding's corresponding buffer in `group`.
1460    ///
1461    /// [hdo]: wgt::BindingType::Buffer::has_dynamic_offset
1462    /// [`binding` number]: wgt::BindGroupLayoutEntry::binding
1463    unsafe fn set_bind_group(
1464        &mut self,
1465        layout: &<Self::A as Api>::PipelineLayout,
1466        index: u32,
1467        group: &<Self::A as Api>::BindGroup,
1468        dynamic_offsets: &[wgt::DynamicOffset],
1469    );
1470
1471    /// Sets a range of immediate data.
1472    ///
1473    /// IMPORTANT: while the data is passed as words, the offset is in bytes!
1474    ///
1475    /// # Safety
1476    ///
1477    /// - `offset_bytes` must be a multiple of 4.
1478    /// - The range of immediates written must be valid for the pipeline layout at draw time.
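    ///
    /// For example, a sketch that writes two 32-bit words of immediate data
    /// starting at byte offset 8 (`pipeline_layout` is hypothetical):
    ///
    /// ```ignore
    /// unsafe { encoder.set_immediates(&pipeline_layout, 8, &[0x1234_5678, 0x9abc_def0]) };
    /// ```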
1479    unsafe fn set_immediates(
1480        &mut self,
1481        layout: &<Self::A as Api>::PipelineLayout,
1482        offset_bytes: u32,
1483        data: &[u32],
1484    );
1485
1486    unsafe fn insert_debug_marker(&mut self, label: &str);
1487    unsafe fn begin_debug_marker(&mut self, group_label: &str);
1488    unsafe fn end_debug_marker(&mut self);
1489
1490    // queries
1491
1492    /// # Safety
1493    ///
1494    /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1495    unsafe fn begin_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1496    /// # Safety
1497    ///
1498    /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1499    unsafe fn end_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1500    unsafe fn write_timestamp(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1501    unsafe fn reset_queries(&mut self, set: &<Self::A as Api>::QuerySet, range: Range<u32>);
1502    unsafe fn copy_query_results(
1503        &mut self,
1504        set: &<Self::A as Api>::QuerySet,
1505        range: Range<u32>,
1506        buffer: &<Self::A as Api>::Buffer,
1507        offset: wgt::BufferAddress,
1508        stride: wgt::BufferSize,
1509    );
1510
1511    // render passes
1512
1513    /// Begin a new render pass, clearing all active bindings.
1514    ///
1515    /// This clears any bindings established by the following calls:
1516    ///
1517    /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1518    /// - [`set_immediates`](CommandEncoder::set_immediates)
1519    /// - [`begin_query`](CommandEncoder::begin_query)
1520    /// - [`set_render_pipeline`](CommandEncoder::set_render_pipeline)
1521    /// - [`set_index_buffer`](CommandEncoder::set_index_buffer)
1522    /// - [`set_vertex_buffer`](CommandEncoder::set_vertex_buffer)
1523    ///
1524    /// # Safety
1525    ///
1526    /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1527    ///   by a call to [`end_render_pass`].
1528    ///
1529    /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1530    ///   by a call to [`end_compute_pass`].
1531    ///
1532    /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1533    /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1534    /// [`end_render_pass`]: CommandEncoder::end_render_pass
1535    /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1536    unsafe fn begin_render_pass(
1537        &mut self,
1538        desc: &RenderPassDescriptor<<Self::A as Api>::QuerySet, <Self::A as Api>::TextureView>,
1539    ) -> Result<(), DeviceError>;
1540
1541    /// End the current render pass.
1542    ///
1543    /// # Safety
1544    ///
1545    /// - There must have been a prior call to [`begin_render_pass`] on this [`CommandEncoder`]
1546    ///   that has not been followed by a call to [`end_render_pass`].
1547    ///
1548    /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1549    /// [`end_render_pass`]: CommandEncoder::end_render_pass
1550    unsafe fn end_render_pass(&mut self);
1551
1552    unsafe fn set_render_pipeline(&mut self, pipeline: &<Self::A as Api>::RenderPipeline);
1553
1554    unsafe fn set_index_buffer<'a>(
1555        &mut self,
1556        binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1557        format: wgt::IndexFormat,
1558    );
1559    unsafe fn set_vertex_buffer<'a>(
1560        &mut self,
1561        index: u32,
1562        binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1563    );
1564    unsafe fn set_viewport(&mut self, rect: &Rect<f32>, depth_range: Range<f32>);
1565    unsafe fn set_scissor_rect(&mut self, rect: &Rect<u32>);
1566    unsafe fn set_stencil_reference(&mut self, value: u32);
1567    unsafe fn set_blend_constants(&mut self, color: &[f32; 4]);
1568
1569    unsafe fn draw(
1570        &mut self,
1571        first_vertex: u32,
1572        vertex_count: u32,
1573        first_instance: u32,
1574        instance_count: u32,
1575    );
1576    unsafe fn draw_indexed(
1577        &mut self,
1578        first_index: u32,
1579        index_count: u32,
1580        base_vertex: i32,
1581        first_instance: u32,
1582        instance_count: u32,
1583    );
1584    unsafe fn draw_indirect(
1585        &mut self,
1586        buffer: &<Self::A as Api>::Buffer,
1587        offset: wgt::BufferAddress,
1588        draw_count: u32,
1589    );
1590    unsafe fn draw_indexed_indirect(
1591        &mut self,
1592        buffer: &<Self::A as Api>::Buffer,
1593        offset: wgt::BufferAddress,
1594        draw_count: u32,
1595    );
1596    unsafe fn draw_indirect_count(
1597        &mut self,
1598        buffer: &<Self::A as Api>::Buffer,
1599        offset: wgt::BufferAddress,
1600        count_buffer: &<Self::A as Api>::Buffer,
1601        count_offset: wgt::BufferAddress,
1602        max_count: u32,
1603    );
1604    unsafe fn draw_indexed_indirect_count(
1605        &mut self,
1606        buffer: &<Self::A as Api>::Buffer,
1607        offset: wgt::BufferAddress,
1608        count_buffer: &<Self::A as Api>::Buffer,
1609        count_offset: wgt::BufferAddress,
1610        max_count: u32,
1611    );
1612    unsafe fn draw_mesh_tasks(
1613        &mut self,
1614        group_count_x: u32,
1615        group_count_y: u32,
1616        group_count_z: u32,
1617    );
1618    unsafe fn draw_mesh_tasks_indirect(
1619        &mut self,
1620        buffer: &<Self::A as Api>::Buffer,
1621        offset: wgt::BufferAddress,
1622        draw_count: u32,
1623    );
1624    unsafe fn draw_mesh_tasks_indirect_count(
1625        &mut self,
1626        buffer: &<Self::A as Api>::Buffer,
1627        offset: wgt::BufferAddress,
1628        count_buffer: &<Self::A as Api>::Buffer,
1629        count_offset: wgt::BufferAddress,
1630        max_count: u32,
1631    );
1632
1633    // compute passes
1634
1635    /// Begin a new compute pass, clearing all active bindings.
1636    ///
1637    /// This clears any bindings established by the following calls:
1638    ///
1639    /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1640    /// - [`set_immediates`](CommandEncoder::set_immediates)
1641    /// - [`begin_query`](CommandEncoder::begin_query)
1642    /// - [`set_compute_pipeline`](CommandEncoder::set_compute_pipeline)
1643    ///
1644    /// # Safety
1645    ///
1646    /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1647    ///   by a call to [`end_render_pass`].
1648    ///
1649    /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1650    ///   by a call to [`end_compute_pass`].
1651    ///
1652    /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1653    /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1654    /// [`end_render_pass`]: CommandEncoder::end_render_pass
1655    /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1656    unsafe fn begin_compute_pass(
1657        &mut self,
1658        desc: &ComputePassDescriptor<<Self::A as Api>::QuerySet>,
1659    );
1660
1661    /// End the current compute pass.
1662    ///
1663    /// # Safety
1664    ///
1665    /// - There must have been a prior call to [`begin_compute_pass`] on this [`CommandEncoder`]
1666    ///   that has not been followed by a call to [`end_compute_pass`].
1667    ///
1668    /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1669    /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1670    unsafe fn end_compute_pass(&mut self);
1671
1672    unsafe fn set_compute_pipeline(&mut self, pipeline: &<Self::A as Api>::ComputePipeline);
1673
1674    unsafe fn dispatch(&mut self, count: [u32; 3]);
1675    unsafe fn dispatch_indirect(
1676        &mut self,
1677        buffer: &<Self::A as Api>::Buffer,
1678        offset: wgt::BufferAddress,
1679    );
1680
1681    /// To get the required sizes for the buffer allocations, use `get_acceleration_structure_build_sizes` per descriptor.
1682    /// All buffers must be synchronized externally.
1683    /// Any buffer region that is written to may only be passed once per function call,
1684    /// with the exception of updates in the same descriptor.
1685    /// Consequences of this limitation:
1686    /// - scratch buffers need to be unique
1687    /// - a TLAS can't be built in the same call as a BLAS it contains
1688    unsafe fn build_acceleration_structures<'a, T>(
1689        &mut self,
1690        descriptor_count: u32,
1691        descriptors: T,
1692    ) where
1693        Self::A: 'a,
1694        T: IntoIterator<
1695            Item = BuildAccelerationStructureDescriptor<
1696                'a,
1697                <Self::A as Api>::Buffer,
1698                <Self::A as Api>::AccelerationStructure,
1699            >,
1700        >;
1701
1702    unsafe fn place_acceleration_structure_barrier(
1703        &mut self,
1704        barrier: AccelerationStructureBarrier,
1705    );
1706    // Modeled on DX12, because DX12's behavior can be polyfilled in Vulkan, but not the other way round.
1707    unsafe fn read_acceleration_structure_compact_size(
1708        &mut self,
1709        acceleration_structure: &<Self::A as Api>::AccelerationStructure,
1710        buf: &<Self::A as Api>::Buffer,
1711    );
1712}
1713
1714bitflags!(
1715    /// Pipeline layout creation flags.
1716    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1717    pub struct PipelineLayoutFlags: u32 {
1718        /// D3D12: Add support for `first_vertex` and `first_instance` builtins
1719        /// via immediates for direct execution.
1720        const FIRST_VERTEX_INSTANCE = 1 << 0;
1721        /// D3D12: Add support for `num_workgroups` builtins via immediates
1722        /// for direct execution.
1723        const NUM_WORK_GROUPS = 1 << 1;
1724        /// D3D12: Add support for the builtins that the other flags enable for
1725        /// indirect execution.
1726        const INDIRECT_BUILTIN_UPDATE = 1 << 2;
1727    }
1728);
1729
1730bitflags!(
1731    /// Bind group layout creation flags.
1732    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1733    pub struct BindGroupLayoutFlags: u32 {
1734        /// Allows for bind group binding arrays to be shorter than the array in the BGL.
1735        const PARTIALLY_BOUND = 1 << 0;
1736    }
1737);
1738
1739bitflags!(
1740    /// Texture format capability flags.
1741    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1742    pub struct TextureFormatCapabilities: u32 {
1743        /// Format can be sampled.
1744        const SAMPLED = 1 << 0;
1745        /// Format can be sampled with a linear sampler.
1746        const SAMPLED_LINEAR = 1 << 1;
1747        /// Format can be sampled with a min/max reduction sampler.
1748        const SAMPLED_MINMAX = 1 << 2;
1749
1750        /// Format can be used as storage with read-only access.
1751        const STORAGE_READ_ONLY = 1 << 3;
1752        /// Format can be used as storage with write-only access.
1753        const STORAGE_WRITE_ONLY = 1 << 4;
1754        /// Format can be used as storage with both read and write access.
1755        const STORAGE_READ_WRITE = 1 << 5;
1756        /// Format can be used as storage with atomics.
1757        const STORAGE_ATOMIC = 1 << 6;
1758
1759        /// Format can be used as color and input attachment.
1760        const COLOR_ATTACHMENT = 1 << 7;
1761        /// Format can be used as color (with blending) and input attachment.
1762        const COLOR_ATTACHMENT_BLEND = 1 << 8;
1763        /// Format can be used as depth-stencil and input attachment.
1764        const DEPTH_STENCIL_ATTACHMENT = 1 << 9;
1765
1766        /// Format can be multisampled by x2.
1767        const MULTISAMPLE_X2   = 1 << 10;
1768        /// Format can be multisampled by x4.
1769        const MULTISAMPLE_X4   = 1 << 11;
1770        /// Format can be multisampled by x8.
1771        const MULTISAMPLE_X8   = 1 << 12;
1772        /// Format can be multisampled by x16.
1773        const MULTISAMPLE_X16  = 1 << 13;
1774
1775        /// Format can be used for render pass resolve targets.
1776        const MULTISAMPLE_RESOLVE = 1 << 14;
1777
1778        /// Format can be copied from.
1779        const COPY_SRC = 1 << 15;
1780        /// Format can be copied to.
1781        const COPY_DST = 1 << 16;
1782    }
1783);
1784
1785bitflags!(
1786    /// Texture format aspect flags.
1787    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1788    pub struct FormatAspects: u8 {
1789        const COLOR = 1 << 0;
1790        const DEPTH = 1 << 1;
1791        const STENCIL = 1 << 2;
1792        const PLANE_0 = 1 << 3;
1793        const PLANE_1 = 1 << 4;
1794        const PLANE_2 = 1 << 5;
1795
1796        const DEPTH_STENCIL = Self::DEPTH.bits() | Self::STENCIL.bits();
1797    }
1798);
1799
1800impl FormatAspects {
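    /// Returns the aspects of `format` selected by `aspect`.
    ///
    /// For example, `FormatAspects::new(wgt::TextureFormat::Depth24PlusStencil8,
    /// wgt::TextureAspect::DepthOnly)` is `FormatAspects::DEPTH`.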
1801    pub fn new(format: wgt::TextureFormat, aspect: wgt::TextureAspect) -> Self {
1802        let aspect_mask = match aspect {
1803            wgt::TextureAspect::All => Self::all(),
1804            wgt::TextureAspect::DepthOnly => Self::DEPTH,
1805            wgt::TextureAspect::StencilOnly => Self::STENCIL,
1806            wgt::TextureAspect::Plane0 => Self::PLANE_0,
1807            wgt::TextureAspect::Plane1 => Self::PLANE_1,
1808            wgt::TextureAspect::Plane2 => Self::PLANE_2,
1809        };
1810        Self::from(format) & aspect_mask
1811    }
1812
1813    /// Returns `true` if exactly one flag is set.
1814    pub fn is_one(&self) -> bool {
1815        self.bits().is_power_of_two()
1816    }
1817
1818    pub fn map(&self) -> wgt::TextureAspect {
1819        match *self {
1820            Self::COLOR => wgt::TextureAspect::All,
1821            Self::DEPTH => wgt::TextureAspect::DepthOnly,
1822            Self::STENCIL => wgt::TextureAspect::StencilOnly,
1823            Self::PLANE_0 => wgt::TextureAspect::Plane0,
1824            Self::PLANE_1 => wgt::TextureAspect::Plane1,
1825            Self::PLANE_2 => wgt::TextureAspect::Plane2,
1826            _ => unreachable!(),
1827        }
1828    }
1829}
1830
1831impl From<wgt::TextureFormat> for FormatAspects {
1832    fn from(format: wgt::TextureFormat) -> Self {
1833        match format {
1834            wgt::TextureFormat::Stencil8 => Self::STENCIL,
1835            wgt::TextureFormat::Depth16Unorm
1836            | wgt::TextureFormat::Depth32Float
1837            | wgt::TextureFormat::Depth24Plus => Self::DEPTH,
1838            wgt::TextureFormat::Depth32FloatStencil8 | wgt::TextureFormat::Depth24PlusStencil8 => {
1839                Self::DEPTH_STENCIL
1840            }
1841            wgt::TextureFormat::NV12 | wgt::TextureFormat::P010 => Self::PLANE_0 | Self::PLANE_1,
1842            _ => Self::COLOR,
1843        }
1844    }
1845}
1846
1847bitflags!(
1848    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1849    pub struct MemoryFlags: u32 {
1850        const TRANSIENT = 1 << 0;
1851        const PREFER_COHERENT = 1 << 1;
1852    }
1853);
1854
1855bitflags!(
1856    /// Attachment load and store operations.
1857    ///
1858    /// There must be at least one flag from the LOAD group and one from the STORE group set.
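    ///
    /// For example, an attachment that is cleared at the start of the pass and
    /// whose contents are kept afterwards would use
    /// `AttachmentOps::LOAD_CLEAR | AttachmentOps::STORE`.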
1859    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1860    pub struct AttachmentOps: u8 {
1861        /// Load the existing contents of the attachment.
1862        const LOAD = 1 << 0;
1863        /// Clear the attachment to a specified value.
1864        const LOAD_CLEAR = 1 << 1;
1865        /// The contents of the attachment are undefined.
1866        const LOAD_DONT_CARE = 1 << 2;
1867        /// Store the contents of the attachment.
1868        const STORE = 1 << 3;
1869        /// The contents of the attachment are undefined after the pass.
1870        const STORE_DISCARD = 1 << 4;
1871    }
1872);
1873
1874#[derive(Debug)]
1875pub struct InstanceDescriptor<'a> {
1876    pub name: &'a str,
1877    pub flags: wgt::InstanceFlags,
1878    pub memory_budget_thresholds: wgt::MemoryBudgetThresholds,
1879    pub backend_options: wgt::BackendOptions,
1880    pub telemetry: Option<Telemetry>,
1881    /// This is a borrow because the surrounding `core::Instance` keeps the owned display handle
1882    /// alive already.
1883    pub display: Option<DisplayHandle<'a>>,
1884}
1885
1886#[derive(Clone, Debug)]
1887pub struct Alignments {
1888    /// The alignment of the start of the buffer used as a GPU copy source.
1889    pub buffer_copy_offset: wgt::BufferSize,
1890
1891    /// The alignment of the row pitch of the texture data stored in a buffer that is
1892    /// used in a GPU copy operation.
1893    pub buffer_copy_pitch: wgt::BufferSize,
1894
1895    /// The finest alignment of bound range checking for uniform buffers.
1896    ///
1897    /// When `wgpu_hal` restricts shader references to the [accessible
1898    /// region][ar] of a [`Uniform`] buffer, the size of the accessible region
1899    /// is the bind group binding's stated [size], rounded up to the next
1900    /// multiple of this value.
1901    ///
1902    /// We don't need an analogous field for storage buffer bindings, because
1903    /// all our backends promise to enforce the size at least to a four-byte
1904    /// alignment, and `wgpu_hal` requires bound range lengths to be a multiple
1905    /// of four anyway.
1906    ///
1907    /// [ar]: struct.BufferBinding.html#accessible-region
1908    /// [`Uniform`]: wgt::BufferBindingType::Uniform
1909    /// [size]: BufferBinding::size
1910    pub uniform_bounds_check_alignment: wgt::BufferSize,
1911
1912    /// The size of a raw TLAS instance.
1913    pub raw_tlas_instance_size: usize,
1914
1915    /// The required alignment of the scratch buffer used to build an acceleration structure.
1916    pub ray_tracing_scratch_buffer_alignment: u32,
1917}
1918
1919#[derive(Clone, Debug)]
1920pub struct Capabilities {
1921    pub limits: wgt::Limits,
1922    pub alignments: Alignments,
1923    pub downlevel: wgt::DownlevelCapabilities,
1924    /// Supported cooperative matrix configurations.
1925    ///
1926    /// Empty if cooperative matrices are not supported.
1927    pub cooperative_matrix_properties: Vec<wgt::CooperativeMatrixProperties>,
1928}
1929
1930/// An adapter with all the information needed to reason about its capabilities.
1931///
1932/// These are either made by [`Instance::enumerate_adapters`] or by backend specific
1933/// methods on the backend [`Instance`] or [`Adapter`].
1934#[derive(Debug)]
1935pub struct ExposedAdapter<A: Api> {
1936    pub adapter: A::Adapter,
1937    pub info: wgt::AdapterInfo,
1938    pub features: wgt::Features,
1939    pub capabilities: Capabilities,
1940}
1941
1942/// Describes a `Surface`'s presentation capabilities.
1943/// Fetch this with [Adapter::surface_capabilities].
1944#[derive(Debug, Clone)]
1945pub struct SurfaceCapabilities {
1946    /// List of supported texture formats.
1947    ///
1948    /// Must contain at least one format.
1949    pub formats: Vec<wgt::TextureFormat>,
1950
1951    /// Range for the number of queued frames.
1952    ///
1953    /// Depending on the backend, this either sets the swapchain frame count to `value + 1`, sets `SetMaximumFrameLatency` to the given value,
1954    /// or uses a wait-for-present in the acquire method so that rendering behaves as if the swapchain had `value + 1` frames.
1955    ///
1956    /// - `maximum_frame_latency.start` must be at least 1.
1957    /// - `maximum_frame_latency.end` must be greater than or equal to `maximum_frame_latency.start`.
1958    pub maximum_frame_latency: RangeInclusive<u32>,
1959
1960    /// Current extent of the surface, if known.
1961    pub current_extent: Option<wgt::Extent3d>,
1962
1963    /// Supported texture usage flags.
1964    ///
1965    /// Must include at least `wgt::TextureUses::COLOR_TARGET`.
1966    pub usage: wgt::TextureUses,
1967
1968    /// List of supported V-sync modes.
1969    ///
1970    /// Must contain at least one mode.
1971    pub present_modes: Vec<wgt::PresentMode>,
1972
1973    /// List of supported alpha composition modes.
1974    ///
1975    /// Must contain at least one mode.
1976    pub composite_alpha_modes: Vec<wgt::CompositeAlphaMode>,
1977}
1978
1979#[derive(Debug)]
1980pub struct AcquiredSurfaceTexture<A: Api> {
1981    pub texture: A::SurfaceTexture,
1982    /// The presentation configuration no longer matches
1983    /// the surface properties exactly, but can still be used to present
1984    /// to the surface successfully.
1985    pub suboptimal: bool,
1986}
1987
1988/// An open connection to a device and a queue.
1989///
1990/// This can be created from [`Adapter::open`] or backend
1991/// specific methods on the backend's [`Instance`] or [`Adapter`].
1992#[derive(Debug)]
1993pub struct OpenDevice<A: Api> {
1994    pub device: A::Device,
1995    pub queue: A::Queue,
1996}
1997
1998#[derive(Clone, Debug)]
1999pub struct BufferMapping {
2000    pub ptr: NonNull<u8>,
2001    pub is_coherent: bool,
2002}
2003
2004#[derive(Clone, Debug)]
2005pub struct BufferDescriptor<'a> {
2006    pub label: Label<'a>,
2007    pub size: wgt::BufferAddress,
2008    pub usage: wgt::BufferUses,
2009    pub memory_flags: MemoryFlags,
2010}
2011
2012#[derive(Clone, Debug)]
2013pub struct TextureDescriptor<'a> {
2014    pub label: Label<'a>,
2015    pub size: wgt::Extent3d,
2016    pub mip_level_count: u32,
2017    pub sample_count: u32,
2018    pub dimension: wgt::TextureDimension,
2019    pub format: wgt::TextureFormat,
2020    pub usage: wgt::TextureUses,
2021    pub memory_flags: MemoryFlags,
2022    /// Allows views of this texture to have a different format
2023    /// than the texture does.
2024    pub view_formats: Vec<wgt::TextureFormat>,
2025}
2026
2027impl TextureDescriptor<'_> {
2028    pub fn copy_extent(&self) -> CopyExtent {
2029        CopyExtent::map_extent_to_copy_size(&self.size, self.dimension)
2030    }
2031
2032    pub fn is_cube_compatible(&self) -> bool {
2033        self.dimension == wgt::TextureDimension::D2
2034            && self.size.depth_or_array_layers % 6 == 0
2035            && self.sample_count == 1
2036            && self.size.width == self.size.height
2037    }
2038
2039    pub fn array_layer_count(&self) -> u32 {
2040        match self.dimension {
2041            wgt::TextureDimension::D1 | wgt::TextureDimension::D3 => 1,
2042            wgt::TextureDimension::D2 => self.size.depth_or_array_layers,
2043        }
2044    }
2045}
2046
2047/// TextureView descriptor.
2048///
2049/// Valid usage:
2050/// - `format` has to be the same as `TextureDescriptor::format`
2051/// - `dimension` has to be compatible with `TextureDescriptor::dimension`
2052/// - `usage` has to be a subset of `TextureDescriptor::usage`
2053/// - `range` has to be a subset of the parent texture
2054#[derive(Clone, Debug)]
2055pub struct TextureViewDescriptor<'a> {
2056    pub label: Label<'a>,
2057    pub format: wgt::TextureFormat,
2058    pub dimension: wgt::TextureViewDimension,
2059    pub usage: wgt::TextureUses,
2060    pub range: wgt::ImageSubresourceRange,
2061}
2062
2063#[derive(Clone, Debug)]
2064pub struct SamplerDescriptor<'a> {
2065    pub label: Label<'a>,
2066    pub address_modes: [wgt::AddressMode; 3],
2067    pub mag_filter: wgt::FilterMode,
2068    pub min_filter: wgt::FilterMode,
2069    pub mipmap_filter: wgt::MipmapFilterMode,
2070    pub lod_clamp: Range<f32>,
2071    pub compare: Option<wgt::CompareFunction>,
2072    // Must be in the range [1, 16].
2073    //
2074    // Anisotropic filtering must be supported if this is not 1.
2075    pub anisotropy_clamp: u16,
2076    pub border_color: Option<wgt::SamplerBorderColor>,
2077}
2078
2079/// BindGroupLayout descriptor.
2080///
2081/// Valid usage:
2082/// - `entries` are sorted by ascending `wgt::BindGroupLayoutEntry::binding`
2083#[derive(Clone, Debug)]
2084pub struct BindGroupLayoutDescriptor<'a> {
2085    pub label: Label<'a>,
2086    pub flags: BindGroupLayoutFlags,
2087    pub entries: &'a [wgt::BindGroupLayoutEntry],
2088}
2089
2090#[derive(Clone, Debug)]
2091pub struct PipelineLayoutDescriptor<'a, B: DynBindGroupLayout + ?Sized> {
2092    pub label: Label<'a>,
2093    pub flags: PipelineLayoutFlags,
2094    pub bind_group_layouts: &'a [&'a B],
2095    pub immediate_size: u32,
2096}
2097
2098/// A region of a buffer made visible to shaders via a [`BindGroup`].
2099///
2100/// [`BindGroup`]: Api::BindGroup
2101///
2102/// ## Construction
2103///
2104/// The recommended way to construct a `BufferBinding` is using the `binding`
2105/// method on a wgpu-core `Buffer`, which will validate the binding size
2106/// against the buffer size. A `new_unchecked` constructor is also provided for
2107/// cases where direct construction is necessary.
2108///
2109/// ## Accessible region
2110///
2111/// `wgpu_hal` guarantees that shaders compiled with
2112/// [`ShaderModuleDescriptor::runtime_checks`] set to `true` cannot read or
2113/// write data via this binding outside the *accessible region* of a buffer:
2114///
2115/// - The accessible region starts at [`offset`].
2116///
2117/// - For [`Storage`] bindings, the size of the accessible region is [`size`],
2118///   which must be a multiple of 4.
2119///
2120/// - For [`Uniform`] bindings, the size of the accessible region is [`size`]
2121///   rounded up to the next multiple of
2122///   [`Alignments::uniform_bounds_check_alignment`].
2123///
2124/// Note that this guarantee is stricter than WGSL's requirements for
2125/// [out-of-bounds accesses][woob], as WGSL allows them to return values from
2126/// elsewhere in the buffer. But this guarantee is necessary anyway, to permit
2127/// `wgpu-core` to avoid clearing uninitialized regions of buffers that will
2128/// never be read by the application before they are overwritten. This
2129/// optimization consults bind group buffer binding regions to determine which
2130/// parts of which buffers shaders might observe. This optimization is only
2131/// sound if shader access is bounds-checked.
2132///
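/// As a worked example, suppose [`Alignments::uniform_bounds_check_alignment`]
/// were 32 bytes: a [`Uniform`] binding with `offset = 256` and `size = 18`
/// would have an accessible region of 32 bytes starting at offset 256, whereas
/// a [`Storage`] binding with `size = 18` would be invalid, since 18 is not a
/// multiple of 4.
///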
2133/// ## Zero-length bindings
2134///
2135/// Some back ends cannot tolerate zero-length regions; for example, see
2136/// [VUID-VkDescriptorBufferInfo-offset-00340][340] and
2137/// [VUID-VkDescriptorBufferInfo-range-00341][341], or the
2138/// documentation for GLES's [glBindBufferRange][bbr]. This documentation
2139/// previously stated that a `BufferBinding` must have `offset` strictly less
2140/// than the size of the buffer, but this restriction was not honored elsewhere
2141/// in the code, so has been removed. However, it remains the case that
2142/// some backends do not support zero-length bindings, so additional
2143/// logic is needed somewhere to handle this properly. See
2144/// [#3170](https://github.com/gfx-rs/wgpu/issues/3170).
2145///
2146/// [`offset`]: BufferBinding::offset
2147/// [`size`]: BufferBinding::size
2148/// [`Storage`]: wgt::BufferBindingType::Storage
2149/// [`Uniform`]: wgt::BufferBindingType::Uniform
2150/// [340]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-offset-00340
2151/// [341]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-range-00341
2152/// [bbr]: https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glBindBufferRange.xhtml
2153/// [woob]: https://gpuweb.github.io/gpuweb/wgsl/#out-of-bounds-access-sec
2154#[derive(Debug)]
2155pub struct BufferBinding<'a, B: DynBuffer + ?Sized> {
2156    /// The buffer being bound.
2157    ///
2158    /// This is not fully `pub` to prevent direct construction of
2159    /// `BufferBinding`s, while still allowing public read access to the `offset`
2160    /// and `size` properties.
2161    pub(crate) buffer: &'a B,
2162
2163    /// The offset at which the bound region starts.
2164    ///
2165    /// This must be less than or equal to the size of the buffer.
2166    pub offset: wgt::BufferAddress,
2167
2168    /// The size of the region bound, in bytes.
2169    ///
2170    /// If `None`, the region extends from `offset` to the end of the
2171    /// buffer. Given the restrictions on `offset`, this means that
2172    /// the size is always greater than zero.
2173    pub size: Option<wgt::BufferSize>,
2174}
2175
2176// We must implement this manually because `B` is not necessarily `Clone`.
2177impl<B: DynBuffer + ?Sized> Clone for BufferBinding<'_, B> {
2178    fn clone(&self) -> Self {
2179        BufferBinding {
2180            buffer: self.buffer,
2181            offset: self.offset,
2182            size: self.size,
2183        }
2184    }
2185}
2186
2187/// Temporary convenience trait to let us call `.get()` on `u64`s in code that
2188/// really wants to be using `NonZeroU64`.
2189/// TODO(<https://github.com/gfx-rs/wgpu/issues/3170>): remove this
2190pub trait ShouldBeNonZeroExt {
2191    fn get(&self) -> u64;
2192}
2193
2194impl ShouldBeNonZeroExt for NonZeroU64 {
2195    fn get(&self) -> u64 {
2196        NonZeroU64::get(*self)
2197    }
2198}
2199
2200impl ShouldBeNonZeroExt for u64 {
2201    fn get(&self) -> u64 {
2202        *self
2203    }
2204}
2205
2206impl ShouldBeNonZeroExt for Option<NonZeroU64> {
2207    fn get(&self) -> u64 {
2208        match *self {
2209            Some(non_zero) => non_zero.get(),
2210            None => 0,
2211        }
2212    }
2213}
2214
2215impl<'a, B: DynBuffer + ?Sized> BufferBinding<'a, B> {
2216    /// Construct a `BufferBinding` with the given contents.
2217    ///
2218    /// When possible, use the `binding` method on a wgpu-core `Buffer` instead
2219    /// of this method. `Buffer::binding` validates the size of the binding
2220    /// against the size of the buffer.
2221    ///
2222    /// It is more difficult to provide a validating constructor here, due to
2223    /// not having direct access to the size of a `DynBuffer`.
2224    ///
2225    /// SAFETY: The caller is responsible for ensuring that a binding of `size`
2226    /// bytes starting at `offset` is contained within the buffer.
2227    ///
2228    /// The `S` type parameter is a temporary convenience to allow callers to
2229    /// pass a zero size. When the zero-size binding issue is resolved, the
2230    /// argument should just match the type of the member.
2231    /// TODO(<https://github.com/gfx-rs/wgpu/issues/3170>): remove the parameter
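    ///
    /// A minimal usage sketch, assuming the caller has already verified that a
    /// 256-byte binding starting at offset 0 fits within `buffer`:
    ///
    /// ```ignore
    /// let binding = BufferBinding::new_unchecked(&buffer, 0, NonZeroU64::new(256));
    /// ```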
2232    pub fn new_unchecked<S: Into<Option<NonZeroU64>>>(
2233        buffer: &'a B,
2234        offset: wgt::BufferAddress,
2235        size: S,
2236    ) -> Self {
2237        Self {
2238            buffer,
2239            offset,
2240            size: size.into(),
2241        }
2242    }
2243}
2244
2245#[derive(Debug)]
2246pub struct TextureBinding<'a, T: DynTextureView + ?Sized> {
2247    pub view: &'a T,
2248    pub usage: wgt::TextureUses,
2249}
2250
2251impl<'a, T: DynTextureView + ?Sized> Clone for TextureBinding<'a, T> {
2252    fn clone(&self) -> Self {
2253        TextureBinding {
2254            view: self.view,
2255            usage: self.usage,
2256        }
2257    }
2258}
2259
2260#[derive(Debug)]
2261pub struct ExternalTextureBinding<'a, B: DynBuffer + ?Sized, T: DynTextureView + ?Sized> {
2262    pub planes: [TextureBinding<'a, T>; 3],
2263    pub params: BufferBinding<'a, B>,
2264}
2265
2266impl<'a, B: DynBuffer + ?Sized, T: DynTextureView + ?Sized> Clone
2267    for ExternalTextureBinding<'a, B, T>
2268{
2269    fn clone(&self) -> Self {
2270        ExternalTextureBinding {
2271            planes: self.planes.clone(),
2272            params: self.params.clone(),
2273        }
2274    }
2275}
2276
2277/// cbindgen:ignore
2278#[derive(Clone, Debug)]
2279pub struct BindGroupEntry {
2280    pub binding: u32,
2281    pub resource_index: u32,
2282    pub count: u32,
2283}
2284
2285/// BindGroup descriptor.
2286///
2287/// Valid usage:
2288/// - `entries` has to be sorted by ascending `BindGroupEntry::binding`
2289/// - `entries` has to have the same set of `BindGroupEntry::binding` as `layout`
2290/// - each entry has to be compatible with the `layout`
2291/// - each entry's `BindGroupEntry::resource_index` is within range
2292///    of the corresponding resource array, selected by the relevant
2293///    `BindGroupLayoutEntry`.
2294#[derive(Clone, Debug)]
2295pub struct BindGroupDescriptor<
2296    'a,
2297    Bgl: DynBindGroupLayout + ?Sized,
2298    B: DynBuffer + ?Sized,
2299    S: DynSampler + ?Sized,
2300    T: DynTextureView + ?Sized,
2301    A: DynAccelerationStructure + ?Sized,
2302> {
2303    pub label: Label<'a>,
2304    pub layout: &'a Bgl,
2305    pub buffers: &'a [BufferBinding<'a, B>],
2306    pub samplers: &'a [&'a S],
2307    pub textures: &'a [TextureBinding<'a, T>],
2308    pub entries: &'a [BindGroupEntry],
2309    pub acceleration_structures: &'a [&'a A],
2310    pub external_textures: &'a [ExternalTextureBinding<'a, B, T>],
2311}
2312
2313#[derive(Clone, Debug)]
2314pub struct CommandEncoderDescriptor<'a, Q: DynQueue + ?Sized> {
2315    pub label: Label<'a>,
2316    pub queue: &'a Q,
2317}
2318
2319/// Naga shader module.
2320#[derive(Default)]
2321pub struct NagaShader {
2322    /// Shader module IR.
2323    pub module: Cow<'static, naga::Module>,
2324    /// Analysis information of the module.
2325    pub info: naga::valid::ModuleInfo,
2326    /// Source code for debugging.
2327    pub debug_source: Option<DebugSource>,
2328}
2329
2330// Custom implementation avoids the need to generate Debug impl code
2331// for the whole Naga module and info.
2332impl fmt::Debug for NagaShader {
2333    fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
2334        write!(formatter, "Naga shader")
2335    }
2336}
2337
2338/// Shader input.
2339#[allow(clippy::large_enum_variant)]
2340pub enum ShaderInput<'a> {
2341    Naga(NagaShader),
2342    Msl {
2343        shader: &'a str,
2344        entry_point: String,
2345        num_workgroups: (u32, u32, u32),
2346    },
2347    SpirV(&'a [u32]),
2348    Dxil {
2349        shader: &'a [u8],
2350        entry_point: String,
2351        num_workgroups: (u32, u32, u32),
2352    },
2353    Hlsl {
2354        shader: &'a str,
2355        entry_point: String,
2356        num_workgroups: (u32, u32, u32),
2357    },
2358    Glsl {
2359        shader: &'a str,
2360        entry_point: String,
2361        num_workgroups: (u32, u32, u32),
2362    },
2363}
2364
2365pub struct ShaderModuleDescriptor<'a> {
2366    pub label: Label<'a>,
2367
2368    /// # Safety
2369    ///
2370    /// See the documentation for each flag in [`ShaderRuntimeChecks`][src].
2371    ///
2372    /// [src]: wgt::ShaderRuntimeChecks
2373    pub runtime_checks: wgt::ShaderRuntimeChecks,
2374}
2375
2376#[derive(Debug, Clone)]
2377pub struct DebugSource {
2378    pub file_name: Cow<'static, str>,
2379    pub source_code: Cow<'static, str>,
2380}
2381
2382/// Describes a programmable pipeline stage.
2383#[derive(Debug)]
2384pub struct ProgrammableStage<'a, M: DynShaderModule + ?Sized> {
2385    /// The compiled shader module for this stage.
2386    pub module: &'a M,
2387    /// The name of the entry point in the compiled shader. There must be a function with this name
2388    ///  in the shader.
2389    pub entry_point: &'a str,
2390    /// Pipeline constants
2391    pub constants: &'a naga::back::PipelineConstants,
2392    /// Whether workgroup scoped memory will be initialized with zero values for this stage.
2393    ///
2394    /// This is required by the WebGPU spec, but may have overhead which can be avoided
2395    /// for cross-platform applications.
2396    pub zero_initialize_workgroup_memory: bool,
2397}
2398
2399impl<M: DynShaderModule + ?Sized> Clone for ProgrammableStage<'_, M> {
2400    fn clone(&self) -> Self {
2401        Self {
2402            module: self.module,
2403            entry_point: self.entry_point,
2404            constants: self.constants,
2405            zero_initialize_workgroup_memory: self.zero_initialize_workgroup_memory,
2406        }
2407    }
2408}
2409
2410/// Describes a compute pipeline.
2411#[derive(Clone, Debug)]
2412pub struct ComputePipelineDescriptor<
2413    'a,
2414    Pl: DynPipelineLayout + ?Sized,
2415    M: DynShaderModule + ?Sized,
2416    Pc: DynPipelineCache + ?Sized,
2417> {
2418    pub label: Label<'a>,
2419    /// The layout of bind groups for this pipeline.
2420    pub layout: &'a Pl,
2421    /// The compiled compute stage and its entry point.
2422    pub stage: ProgrammableStage<'a, M>,
2423    /// The cache which will be used and filled when compiling this pipeline
2424    pub cache: Option<&'a Pc>,
2425}
2426
2427pub struct PipelineCacheDescriptor<'a> {
2428    pub label: Label<'a>,
2429    pub data: Option<&'a [u8]>,
2430}
2431
2432/// Describes how the vertex buffer is interpreted.
2433#[derive(Clone, Debug)]
2434pub struct VertexBufferLayout<'a> {
2435    /// The stride, in bytes, between elements of this buffer.
2436    pub array_stride: wgt::BufferAddress,
2437    /// How often this vertex buffer is "stepped" forward.
2438    pub step_mode: wgt::VertexStepMode,
2439    /// The list of attributes which comprise a single vertex.
2440    pub attributes: &'a [wgt::VertexAttribute],
2441}
2442
2443#[derive(Clone, Debug)]
2444pub enum VertexProcessor<'a, M: DynShaderModule + ?Sized> {
2445    Standard {
2446        /// The format of any vertex buffers used with this pipeline.
2447        vertex_buffers: &'a [VertexBufferLayout<'a>],
2448        /// The vertex stage for this pipeline.
2449        vertex_stage: ProgrammableStage<'a, M>,
2450    },
2451    Mesh {
2452        task_stage: Option<ProgrammableStage<'a, M>>,
2453        mesh_stage: ProgrammableStage<'a, M>,
2454    },
2455}
2456
2457/// Describes a render (graphics) pipeline.
2458#[derive(Clone, Debug)]
2459pub struct RenderPipelineDescriptor<
2460    'a,
2461    Pl: DynPipelineLayout + ?Sized,
2462    M: DynShaderModule + ?Sized,
2463    Pc: DynPipelineCache + ?Sized,
2464> {
2465    pub label: Label<'a>,
2466    /// The layout of bind groups for this pipeline.
2467    pub layout: &'a Pl,
2468    /// The vertex processing state (vertex shader + buffers, or task + mesh shaders).
2469    pub vertex_processor: VertexProcessor<'a, M>,
2470    /// The properties of the pipeline at the primitive assembly and rasterization level.
2471    pub primitive: wgt::PrimitiveState,
2472    /// The effect of draw calls on the depth and stencil aspects of the output target, if any.
2473    pub depth_stencil: Option<wgt::DepthStencilState>,
2474    /// The multi-sampling properties of the pipeline.
2475    pub multisample: wgt::MultisampleState,
2476    /// The fragment stage for this pipeline.
2477    pub fragment_stage: Option<ProgrammableStage<'a, M>>,
2478    /// The effect of draw calls on the color aspect of the output target.
2479    pub color_targets: &'a [Option<wgt::ColorTargetState>],
2480    /// If the pipeline will be used with a multiview render pass, this indicates how many array
2481    /// layers the attachments will have.
2482    pub multiview_mask: Option<NonZeroU32>,
2483    /// The cache which will be used and filled when compiling this pipeline
2484    pub cache: Option<&'a Pc>,
2485}
2486
2487#[derive(Debug, Clone)]
2488pub struct SurfaceConfiguration {
2489    /// Maximum number of queued frames. Must be in
2490    /// `SurfaceCapabilities::maximum_frame_latency` range.
2491    pub maximum_frame_latency: u32,
2492    /// Vertical synchronization mode.
2493    pub present_mode: wgt::PresentMode,
2494    /// Alpha composition mode.
2495    pub composite_alpha_mode: wgt::CompositeAlphaMode,
2496    /// Format of the surface textures.
2497    pub format: wgt::TextureFormat,
2498    /// Requested texture extent. Must be in
2499    /// `SurfaceCapabilities::extents` range.
2500    pub extent: wgt::Extent3d,
2501    /// Allowed usage of surface textures.
2502    pub usage: wgt::TextureUses,
2503    /// Allows views of swapchain texture to have a different format
2504    /// than the texture does.
2505    pub view_formats: Vec<wgt::TextureFormat>,
2506}
2507
2508#[derive(Debug, Clone)]
2509pub struct Rect<T> {
2510    pub x: T,
2511    pub y: T,
2512    pub w: T,
2513    pub h: T,
2514}
2515
2516#[derive(Debug, Clone, PartialEq)]
2517pub struct StateTransition<T> {
2518    pub from: T,
2519    pub to: T,
2520}
2521
2522#[derive(Debug, Clone)]
2523pub struct BufferBarrier<'a, B: DynBuffer + ?Sized> {
2524    pub buffer: &'a B,
2525    pub usage: StateTransition<wgt::BufferUses>,
2526}
2527
2528#[derive(Debug, Clone)]
2529pub struct TextureBarrier<'a, T: DynTexture + ?Sized> {
2530    pub texture: &'a T,
2531    pub range: wgt::ImageSubresourceRange,
2532    pub usage: StateTransition<wgt::TextureUses>,
2533}
2534
2535#[derive(Clone, Copy, Debug)]
2536pub struct BufferCopy {
2537    pub src_offset: wgt::BufferAddress,
2538    pub dst_offset: wgt::BufferAddress,
2539    pub size: wgt::BufferSize,
2540}
2541
2542#[derive(Clone, Debug)]
2543pub struct TextureCopyBase {
2544    pub mip_level: u32,
2545    pub array_layer: u32,
2546    /// Origin within a texture.
2547    /// Note: for 1D and 2D textures, Z must be 0.
2548    pub origin: wgt::Origin3d,
2549    pub aspect: FormatAspects,
2550}
2551
2552#[derive(Clone, Copy, Debug)]
2553pub struct CopyExtent {
2554    pub width: u32,
2555    pub height: u32,
2556    pub depth: u32,
2557}
2558
2559impl From<wgt::Extent3d> for CopyExtent {
2560    fn from(value: wgt::Extent3d) -> Self {
2561        let wgt::Extent3d {
2562            width,
2563            height,
2564            depth_or_array_layers,
2565        } = value;
2566        Self {
2567            width,
2568            height,
2569            depth: depth_or_array_layers,
2570        }
2571    }
2572}
2573
2574impl From<CopyExtent> for wgt::Extent3d {
2575    fn from(value: CopyExtent) -> Self {
2576        let CopyExtent {
2577            width,
2578            height,
2579            depth,
2580        } = value;
2581        Self {
2582            width,
2583            height,
2584            depth_or_array_layers: depth,
2585        }
2586    }
2587}
2588
2589#[derive(Clone, Debug)]
2590pub struct TextureCopy {
2591    pub src_base: TextureCopyBase,
2592    pub dst_base: TextureCopyBase,
2593    pub size: CopyExtent,
2594}
2595
2596#[derive(Clone, Debug)]
2597pub struct BufferTextureCopy {
2598    pub buffer_layout: wgt::TexelCopyBufferLayout,
2599    pub texture_base: TextureCopyBase,
2600    pub size: CopyExtent,
2601}
2602
2603#[derive(Clone, Debug)]
2604pub struct Attachment<'a, T: DynTextureView + ?Sized> {
2605    pub view: &'a T,
2606    /// Contains either a single mutating usage as a target,
2607    /// or a valid combination of read-only usages.
2608    pub usage: wgt::TextureUses,
2609}
2610
2611#[derive(Clone, Debug)]
2612pub struct ColorAttachment<'a, T: DynTextureView + ?Sized> {
2613    pub target: Attachment<'a, T>,
2614    pub depth_slice: Option<u32>,
2615    pub resolve_target: Option<Attachment<'a, T>>,
2616    pub ops: AttachmentOps,
2617    pub clear_value: wgt::Color,
2618}
2619
2620#[derive(Clone, Debug)]
2621pub struct DepthStencilAttachment<'a, T: DynTextureView + ?Sized> {
2622    pub target: Attachment<'a, T>,
2623    pub depth_ops: AttachmentOps,
2624    pub stencil_ops: AttachmentOps,
2625    pub clear_value: (f32, u32),
2626}
2627
2628#[derive(Clone, Debug)]
2629pub struct PassTimestampWrites<'a, Q: DynQuerySet + ?Sized> {
2630    pub query_set: &'a Q,
2631    pub beginning_of_pass_write_index: Option<u32>,
2632    pub end_of_pass_write_index: Option<u32>,
2633}
2634
2635#[derive(Clone, Debug)]
2636pub struct RenderPassDescriptor<'a, Q: DynQuerySet + ?Sized, T: DynTextureView + ?Sized> {
2637    pub label: Label<'a>,
2638    pub extent: wgt::Extent3d,
2639    pub sample_count: u32,
2640    pub color_attachments: &'a [Option<ColorAttachment<'a, T>>],
2641    pub depth_stencil_attachment: Option<DepthStencilAttachment<'a, T>>,
2642    pub multiview_mask: Option<NonZeroU32>,
2643    pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2644    pub occlusion_query_set: Option<&'a Q>,
2645}
2646
2647#[derive(Clone, Debug)]
2648pub struct ComputePassDescriptor<'a, Q: DynQuerySet + ?Sized> {
2649    pub label: Label<'a>,
2650    pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2651}
2652
2653#[test]
2654fn test_default_limits() {
2655    let limits = wgt::Limits::default();
2656    assert!(limits.max_bind_groups <= MAX_BIND_GROUPS as u32);
2657}
2658
2659#[derive(Clone, Debug)]
2660pub struct AccelerationStructureDescriptor<'a> {
2661    pub label: Label<'a>,
2662    pub size: wgt::BufferAddress,
2663    pub format: AccelerationStructureFormat,
2664    pub allow_compaction: bool,
2665}
2666
2667#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2668pub enum AccelerationStructureFormat {
2669    TopLevel,
2670    BottomLevel,
2671}
2672
2673#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2674pub enum AccelerationStructureBuildMode {
2675    Build,
2676    Update,
2677}
2678
2679/// Information about the required sizes for a corresponding entries struct (plus flags).
2680#[derive(Copy, Clone, Debug, Default, Eq, PartialEq)]
2681pub struct AccelerationStructureBuildSizes {
2682    pub acceleration_structure_size: wgt::BufferAddress,
2683    pub update_scratch_size: wgt::BufferAddress,
2684    pub build_scratch_size: wgt::BufferAddress,
2685}
2686
2687/// Updates use `source_acceleration_structure` if present; otherwise the update is performed in place.
2688/// For updates, only the data is allowed to change (not the metadata or sizes).
2689#[derive(Clone, Debug)]
2690pub struct BuildAccelerationStructureDescriptor<
2691    'a,
2692    B: DynBuffer + ?Sized,
2693    A: DynAccelerationStructure + ?Sized,
2694> {
2695    pub entries: &'a AccelerationStructureEntries<'a, B>,
2696    pub mode: AccelerationStructureBuildMode,
2697    pub flags: AccelerationStructureBuildFlags,
2698    pub source_acceleration_structure: Option<&'a A>,
2699    pub destination_acceleration_structure: &'a A,
2700    pub scratch_buffer: &'a B,
2701    pub scratch_buffer_offset: wgt::BufferAddress,
2702}
2703
2704/// - All buffers, buffer addresses and offsets will be ignored.
2705/// - The build mode will be ignored.
2706/// - Reducing the number of Instances, Triangle groups or AABB groups (or the number of Triangles/AABBs in corresponding groups)
2707///   may result in reduced size requirements.
2708/// - Any other change may result in a bigger or smaller size requirement.
2709#[derive(Clone, Debug)]
2710pub struct GetAccelerationStructureBuildSizesDescriptor<'a, B: DynBuffer + ?Sized> {
2711    pub entries: &'a AccelerationStructureEntries<'a, B>,
2712    pub flags: AccelerationStructureBuildFlags,
2713}
2714
2715/// Entries for a single descriptor.
2716/// * `Instances` - Multiple instances for a top level acceleration structure
2717/// * `Triangles` - Multiple triangle meshes for a bottom level acceleration structure
2718/// * `AABBs` - List of lists of axis-aligned bounding boxes for a bottom level acceleration structure
2719#[derive(Debug)]
2720pub enum AccelerationStructureEntries<'a, B: DynBuffer + ?Sized> {
2721    Instances(AccelerationStructureInstances<'a, B>),
2722    Triangles(Vec<AccelerationStructureTriangles<'a, B>>),
2723    AABBs(Vec<AccelerationStructureAABBs<'a, B>>),
2724}
2725
2726/// * `first_vertex` - offset in the vertex buffer (as number of vertices)
2727/// * `indices` - optional index buffer with attributes
2728/// * `transform` - optional transform
2729#[derive(Clone, Debug)]
2730pub struct AccelerationStructureTriangles<'a, B: DynBuffer + ?Sized> {
2731    pub vertex_buffer: Option<&'a B>,
2732    pub vertex_format: wgt::VertexFormat,
2733    pub first_vertex: u32,
2734    pub vertex_count: u32,
2735    pub vertex_stride: wgt::BufferAddress,
2736    pub indices: Option<AccelerationStructureTriangleIndices<'a, B>>,
2737    pub transform: Option<AccelerationStructureTriangleTransform<'a, B>>,
2738    pub flags: AccelerationStructureGeometryFlags,
2739}
2740
2741/// * `offset` - offset in bytes
2742#[derive(Clone, Debug)]
2743pub struct AccelerationStructureAABBs<'a, B: DynBuffer + ?Sized> {
2744    pub buffer: Option<&'a B>,
2745    pub offset: u32,
2746    pub count: u32,
2747    pub stride: wgt::BufferAddress,
2748    pub flags: AccelerationStructureGeometryFlags,
2749}
2750
2751pub struct AccelerationStructureCopy {
2752    pub copy_flags: wgt::AccelerationStructureCopy,
2753    pub type_flags: wgt::AccelerationStructureType,
2754}
2755
2756/// * `offset` - offset in bytes
2757#[derive(Clone, Debug)]
2758pub struct AccelerationStructureInstances<'a, B: DynBuffer + ?Sized> {
2759    pub buffer: Option<&'a B>,
2760    pub offset: u32,
2761    pub count: u32,
2762}
2763
2764/// * `offset` - offset in bytes
2765#[derive(Clone, Debug)]
2766pub struct AccelerationStructureTriangleIndices<'a, B: DynBuffer + ?Sized> {
2767    pub format: wgt::IndexFormat,
2768    pub buffer: Option<&'a B>,
2769    pub offset: u32,
2770    pub count: u32,
2771}
2772
2773/// * `offset` - offset in bytes
2774#[derive(Clone, Debug)]
2775pub struct AccelerationStructureTriangleTransform<'a, B: DynBuffer + ?Sized> {
2776    pub buffer: &'a B,
2777    pub offset: u32,
2778}
2779
2780pub use wgt::AccelerationStructureFlags as AccelerationStructureBuildFlags;
2781pub use wgt::AccelerationStructureGeometryFlags;
2782
2783bitflags::bitflags! {
2784    #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
2785    pub struct AccelerationStructureUses: u8 {
2786        // For blas used as input for tlas
2787        const BUILD_INPUT = 1 << 0;
2788        // Target for acceleration structure build
2789        const BUILD_OUTPUT = 1 << 1;
2790        // Tlas used in a shader
2791        const SHADER_INPUT = 1 << 2;
2792        // Blas used to query compacted size
2793        const QUERY_INPUT = 1 << 3;
2794        // BLAS used as a src for a copy operation
2795        const COPY_SRC = 1 << 4;
2796        // BLAS used as a dst for a copy operation
2797        const COPY_DST = 1 << 5;
2798    }
2799}
2800
2801#[derive(Debug, Clone)]
2802pub struct AccelerationStructureBarrier {
2803    pub usage: StateTransition<AccelerationStructureUses>,
2804}
2805
2806#[derive(Debug, Copy, Clone)]
2807pub struct TlasInstance {
2808    pub transform: [f32; 12],
2809    pub custom_data: u32,
2810    pub mask: u8,
2811    pub blas_address: u64,
2812}
2813
2814#[cfg(dx12)]
2815pub enum D3D12ExposeAdapterResult {
2816    CreateDeviceError(dx12::CreateDeviceError),
2817    ResourceBindingTier2Requirement,
2818    ShaderModel6Requirement,
2819    Success(dx12::FeatureLevel, dx12::ShaderModel),
2820}
2821
2822/// Pluggable telemetry, mainly to be used by Firefox.
2823#[derive(Debug, Clone, Copy)]
2824pub struct Telemetry {
2825    #[cfg(dx12)]
2826    pub d3d12_expose_adapter: fn(
2827        desc: &windows::Win32::Graphics::Dxgi::DXGI_ADAPTER_DESC2,
2828        driver_version: [u16; 4],
2829        result: D3D12ExposeAdapterResult,
2830    ),
2831}