wgpu_hal/
lib.rs

1//! A cross-platform unsafe graphics abstraction.
2//!
3//! This crate defines a set of traits abstracting over modern graphics APIs,
4//! with implementations ("backends") for Vulkan, Metal, Direct3D, and GL.
5//!
6//! `wgpu-hal` is a spiritual successor to
7//! [gfx-hal](https://github.com/gfx-rs/gfx), but with reduced scope, and
8//! oriented towards WebGPU implementation goals. It has no overhead for
9//! validation or tracking, and the API translation overhead is kept to the bare
10//! minimum by the design of WebGPU. This API can be used for resource-demanding
11//! applications and engines.
12//!
13//! The `wgpu-hal` crate's main design choices:
14//!
15//! - Our traits are meant to be *portable*: proper use
16//!   should get equivalent results regardless of the backend.
17//!
18//! - Our traits' contracts are *unsafe*: implementations perform minimal
19//!   validation, if any, and incorrect use will often cause undefined behavior.
20//!   This allows us to minimize the overhead we impose over the underlying
21//!   graphics system. If you need safety, the [`wgpu-core`] crate provides a
22//!   safe API for driving `wgpu-hal`, implementing all necessary validation,
23//!   resource state tracking, and so on. (Note that `wgpu-core` is designed for
24//!   use via FFI; the [`wgpu`] crate provides more idiomatic Rust bindings for
25//!   `wgpu-core`.) Or, you can do your own validation.
26//!
27//! - In the same vein, returned errors *only cover cases the user can't
28//!   anticipate*, like running out of memory or losing the device. Any errors
29//!   that the user could reasonably anticipate are their responsibility to
30//!   avoid. For example, `wgpu-hal` returns no error for mapping a buffer that's
31//!   not mappable: as the buffer creator, the user should already know if they
32//!   can map it.
33//!
34//! - We use *static dispatch*. The traits are not
35//!   generally object-safe. You must select a specific backend type
36//!   like [`vulkan::Api`] or [`metal::Api`], and then use that
37//!   according to the main traits, or call backend-specific methods.
38//!
39//! - We use *idiomatic Rust parameter passing*,
40//!   taking objects by reference, returning them by value, and so on,
41//!   unlike `wgpu-core`, which refers to objects by ID.
42//!
43//! - We map buffer contents *persistently*. This means that the buffer can
44//!   remain mapped on the CPU while the GPU reads or writes to it. You must
45//!   explicitly indicate when data might need to be transferred between CPU and
46//!   GPU, if [`Device::map_buffer`] indicates that this is necessary.
47//!
48//! - You must record *explicit barriers* between different usages of a
49//!   resource. For example, if a buffer is written to by a compute
50//!   shader, and then used as and index buffer to a draw call, you
51//!   must use [`CommandEncoder::transition_buffers`] between those two
52//!   operations.
53//!
54//! - Pipeline layouts are *explicitly specified* when setting bind groups.
55//!   Incompatible layouts disturb groups bound at higher indices.
56//!
57//! - The API *accepts collections as iterators*, to avoid forcing the user to
58//!   store data in particular containers. The implementation doesn't guarantee
59//!   that any of the iterators are drained, unless stated otherwise by the
60//!   function documentation. For this reason, we recommend that iterators don't
61//!   do any mutating work.
62//!
63//! Unfortunately, `wgpu-hal`'s safety requirements are not fully documented.
64//! Ideally, all trait methods would have doc comments setting out the
65//! requirements users must meet to ensure correct and portable behavior. If you
66//! are aware of a specific requirement that a backend imposes that is not
67//! ensured by the traits' documented rules, please file an issue. Or, if you are
68//! a capable technical writer, please file a pull request!
69//!
70//! [`wgpu-core`]: https://crates.io/crates/wgpu-core
71//! [`wgpu`]: https://crates.io/crates/wgpu
72//! [`vulkan::Api`]: vulkan/struct.Api.html
73//! [`metal::Api`]: metal/struct.Api.html
74//!
75//! ## Primary backends
76//!
77//! The `wgpu-hal` crate has full-featured backends implemented on the following
78//! platform graphics APIs:
79//!
80//! - Vulkan, available on Linux, Android, and Windows, using the [`ash`] crate's
81//!   Vulkan bindings. It's also available on macOS, if you install [MoltenVK].
82//!
83//! - Metal on macOS, using the [`metal`] crate's bindings.
84//!
85//! - Direct3D 12 on Windows, using the [`windows`] crate's bindings.
86//!
87//! [`ash`]: https://crates.io/crates/ash
88//! [MoltenVK]: https://github.com/KhronosGroup/MoltenVK
89//! [`metal`]: https://crates.io/crates/metal
90//! [`windows`]: https://crates.io/crates/windows
91//!
92//! ## Secondary backends
93//!
94//! The `wgpu-hal` crate has a partial implementation based on the following
95//! platform graphics API:
96//!
97//! - The GL backend is available anywhere OpenGL, OpenGL ES, or WebGL are
98//!   available. See the [`gles`] module documentation for details.
99//!
100//! [`gles`]: gles/index.html
101//!
102//! You can see what capabilities an adapter is missing by checking the
103//! [`DownlevelCapabilities`][tdc] in [`ExposedAdapter::capabilities`], available
104//! from [`Instance::enumerate_adapters`].
105//!
106//! The API is generally designed to fit the primary backends better than the
107//! secondary backends, so the latter may impose more overhead.
108//!
109//! [tdc]: wgt::DownlevelCapabilities
110//!
111//! ## Traits
112//!
113//! The `wgpu-hal` crate defines a handful of traits that together
114//! represent a cross-platform abstraction for modern GPU APIs.
115//!
116//! - The [`Api`] trait represents a `wgpu-hal` backend. It has no methods of its
117//!   own, only a collection of associated types.
118//!
119//! - [`Api::Instance`] implements the [`Instance`] trait. [`Instance::init`]
120//!   creates an instance value, which you can use to enumerate the adapters
121//!   available on the system. For example, [`vulkan::Api::Instance::init`][Ii]
122//!   returns an instance that can enumerate the Vulkan physical devices on your
123//!   system.
124//!
125//! - [`Api::Adapter`] implements the [`Adapter`] trait, representing a
126//!   particular device from a particular backend. For example, a Vulkan instance
127//!   might have a Lavapipe software adapter and a GPU-based adapter.
128//!
129//! - [`Api::Device`] implements the [`Device`] trait, representing an active
130//!   link to a device. You get a device value by calling [`Adapter::open`], and
131//!   then use it to create buffers, textures, shader modules, and so on.
132//!
133//! - [`Api::Queue`] implements the [`Queue`] trait, which you use to submit
134//!   command buffers to a given device.
135//!
136//! - [`Api::CommandEncoder`] implements the [`CommandEncoder`] trait, which you
137//!   use to build buffers of commands to submit to a queue. This has all the
138//!   methods for drawing and running compute shaders, which is presumably what
139//!   you're here for.
140//!
141//! - [`Api::Surface`] implements the [`Surface`] trait, which represents a
142//!   swapchain for presenting images on the screen, via interaction with the
143//!   system's window manager.
144//!
145//! The [`Api`] trait has various other associated types like [`Api::Buffer`] and
146//! [`Api::Texture`] that represent resources the rest of the interface can
147//! operate on, but these generally do not have their own traits.
148//!
149//! [Ii]: Instance::init
150//!
151//! ## Validation is the calling code's responsibility, not `wgpu-hal`'s
152//!
153//! As much as possible, `wgpu-hal` traits place the burden of validation,
154//! resource tracking, and state tracking on the caller, not on the trait
155//! implementations themselves. Anything which can reasonably be handled in
156//! backend-independent code should be. A `wgpu_hal` backend's sole obligation is
157//! to provide portable behavior, and report conditions that the calling code
158//! can't reasonably anticipate, like device loss or running out of memory.
159//!
160//! The `wgpu` crate collection is intended for use in security-sensitive
161//! applications, like web browsers, where the API is available to untrusted
162//! code. This means that `wgpu-core`'s validation is not simply a service to
163//! developers, to be provided opportunistically when the performance costs are
164//! acceptable and the necessary data is ready at hand. Rather, `wgpu-core`'s
165//! validation must be exhaustive, to ensure that even malicious content cannot
166//! provoke and exploit undefined behavior in the platform's graphics API.
167//!
168//! Because graphics APIs' requirements are complex, the only practical way for
169//! `wgpu` to provide exhaustive validation is to comprehensively track the
170//! lifetime and state of all the resources in the system. Implementing this
171//! separately for each backend is infeasible; effort would be better spent
172//! making the cross-platform validation in `wgpu-core` legible and trustworthy.
173//! Fortunately, the requirements are largely similar across the various
174//! platforms, so cross-platform validation is practical.
175//!
176//! Some backends have specific requirements that aren't practical to foist off
177//! on the `wgpu-hal` user. For example, properly managing macOS Objective-C or
178//! Microsoft COM reference counts is best handled by using appropriate pointer
179//! types within the backend.
180//!
181//! A desire for "defense in depth" may suggest performing additional validation
182//! in `wgpu-hal` when the opportunity arises, but this must be done with
183//! caution. Even experienced contributors infer the expectations their changes
184//! must meet by considering not just requirements made explicit in types, tests,
185//! assertions, and comments, but also those implicit in the surrounding code.
186//! When one sees validation or state-tracking code in `wgpu-hal`, it is tempting
187//! to conclude, "Oh, `wgpu-hal` checks for this, so `wgpu-core` needn't worry
188//! about it - that would be redundant!" The responsibility for exhaustive
189//! validation always rests with `wgpu-core`, regardless of what may or may not
190//! be checked in `wgpu-hal`.
191//!
192//! To this end, any "defense in depth" validation that does appear in `wgpu-hal`
193//! for requirements that `wgpu-core` should have enforced should report failure
194//! via the `unreachable!` macro, because problems detected at this stage always
195//! indicate a bug in `wgpu-core`.
196//!
197//! ## Debugging
198//!
199//! Most of the information on the wiki [Debugging wgpu Applications][wiki-debug]
200//! page still applies to this API, with the exception of API tracing/replay
201//! functionality, which is only available in `wgpu-core`.
202//!
203//! [wiki-debug]: https://github.com/gfx-rs/wgpu/wiki/Debugging-wgpu-Applications
204
205#![no_std]
206#![cfg_attr(docsrs, feature(doc_cfg))]
207#![allow(
208    // this happens on the GL backend, where it is both thread safe and non-thread safe in the same code.
209    clippy::arc_with_non_send_sync,
210    // We don't use syntax sugar where it's not necessary.
211    clippy::match_like_matches_macro,
212    // Redundant matching is more explicit.
213    clippy::redundant_pattern_matching,
214    // Explicit lifetimes are often easier to reason about.
215    clippy::needless_lifetimes,
216    // No need for defaults in the internal types.
217    clippy::new_without_default,
218    // Matches are good and extendable, no need to make an exception here.
219    clippy::single_match,
220    // Push commands are more regular than macros.
221    clippy::vec_init_then_push,
222    // We unsafe impl `Send` for a reason.
223    clippy::non_send_fields_in_send_ty,
224    // TODO!
225    clippy::missing_safety_doc,
226    // It gets in the way a lot and does not prevent bugs in practice.
227    clippy::pattern_type_mismatch,
228    // We should investigate these.
229    clippy::large_enum_variant
230)]
231#![warn(
232    clippy::alloc_instead_of_core,
233    clippy::ptr_as_ptr,
234    clippy::std_instead_of_alloc,
235    clippy::std_instead_of_core,
236    trivial_casts,
237    trivial_numeric_casts,
238    unsafe_op_in_unsafe_fn,
239    unused_extern_crates,
240    unused_qualifications
241)]
242
243extern crate alloc;
244extern crate wgpu_types as wgt;
245// Each of these backends needs `std` in some fashion; usually `std::thread` functions.
246#[cfg(any(dx12, gles_with_std, metal, vulkan))]
247#[macro_use]
248extern crate std;
249
250/// DirectX12 API internals.
251#[cfg(dx12)]
252pub mod dx12;
253/// GLES API internals.
254#[cfg(gles)]
255pub mod gles;
256/// Metal API internals.
257#[cfg(metal)]
258pub mod metal;
259/// A dummy API implementation.
260// TODO(https://github.com/gfx-rs/wgpu/issues/7120): this should have a cfg
261pub mod noop;
262/// Vulkan API internals.
263#[cfg(vulkan)]
264pub mod vulkan;
265
266pub mod auxil;
267pub mod api {
268    #[cfg(dx12)]
269    pub use super::dx12::Api as Dx12;
270    #[cfg(gles)]
271    pub use super::gles::Api as Gles;
272    #[cfg(metal)]
273    pub use super::metal::Api as Metal;
274    pub use super::noop::Api as Noop;
275    #[cfg(vulkan)]
276    pub use super::vulkan::Api as Vulkan;
277}
278
279mod dynamic;
280#[cfg(feature = "validation_canary")]
281mod validation_canary;
282
283#[cfg(feature = "validation_canary")]
284pub use validation_canary::{ValidationCanary, VALIDATION_CANARY};
285
286pub(crate) use dynamic::impl_dyn_resource;
287pub use dynamic::{
288    DynAccelerationStructure, DynAcquiredSurfaceTexture, DynAdapter, DynBindGroup,
289    DynBindGroupLayout, DynBuffer, DynCommandBuffer, DynCommandEncoder, DynComputePipeline,
290    DynDevice, DynExposedAdapter, DynFence, DynInstance, DynOpenDevice, DynPipelineCache,
291    DynPipelineLayout, DynQuerySet, DynQueue, DynRenderPipeline, DynResource, DynSampler,
292    DynShaderModule, DynSurface, DynSurfaceTexture, DynTexture, DynTextureView,
293};
294
295#[allow(unused)]
296use alloc::boxed::Box;
297use alloc::{borrow::Cow, string::String, vec::Vec};
298use core::{
299    borrow::Borrow,
300    error::Error,
301    fmt,
302    num::{NonZeroU32, NonZeroU64},
303    ops::{Range, RangeInclusive},
304    ptr::NonNull,
305};
306
307use bitflags::bitflags;
308use raw_window_handle::DisplayHandle;
309use thiserror::Error;
310use wgt::WasmNotSendSync;
311
312cfg_if::cfg_if! {
313    if #[cfg(supports_ptr_atomics)] {
314        use alloc::sync::Arc;
315    } else if #[cfg(feature = "portable-atomic")] {
316        use portable_atomic_util::Arc;
317    }
318}
319
320// - Vertex + Fragment
321// - Compute
322// Task + Mesh + Fragment
323pub const MAX_CONCURRENT_SHADER_STAGES: usize = 3;
324pub const MAX_ANISOTROPY: u8 = 16;
325pub const MAX_BIND_GROUPS: usize = 8;
326pub const MAX_VERTEX_BUFFERS: usize = 16;
327pub const MAX_COLOR_ATTACHMENTS: usize = 8;
328pub const MAX_MIP_LEVELS: u32 = 16;
329/// Size of a single occlusion/timestamp query, when copied into a buffer, in bytes.
330/// cbindgen:ignore
331pub const QUERY_SIZE: wgt::BufferAddress = 8;
332
333pub type Label<'a> = Option<&'a str>;
334pub type MemoryRange = Range<wgt::BufferAddress>;
335pub type FenceValue = u64;
336#[cfg(supports_64bit_atomics)]
337pub type AtomicFenceValue = core::sync::atomic::AtomicU64;
338#[cfg(not(supports_64bit_atomics))]
339pub type AtomicFenceValue = portable_atomic::AtomicU64;
340
341/// A callback to signal that wgpu is no longer using a resource.
342#[cfg(any(gles, vulkan))]
343pub type DropCallback = Box<dyn FnOnce() + Send + Sync + 'static>;
344
345#[cfg(any(gles, vulkan))]
346pub struct DropGuard {
347    callback: Option<DropCallback>,
348}
349
350#[cfg(all(any(gles, vulkan), any(native, Emscripten)))]
351impl DropGuard {
352    fn from_option(callback: Option<DropCallback>) -> Option<Self> {
353        callback.map(Self::new)
354    }
355
356    fn new(callback: DropCallback) -> Self {
357        Self {
358            callback: Some(callback),
359        }
360    }
361}
362
363#[cfg(any(gles, vulkan))]
364impl Drop for DropGuard {
365    fn drop(&mut self) {
366        if let Some(cb) = self.callback.take() {
367            (cb)();
368        }
369    }
370}
371
372#[cfg(any(gles, vulkan))]
373impl fmt::Debug for DropGuard {
374    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
375        f.debug_struct("DropGuard").finish()
376    }
377}
378
379#[derive(Clone, Debug, PartialEq, Eq, Error)]
380pub enum DeviceError {
381    #[error("Out of memory")]
382    OutOfMemory,
383    #[error("Device is lost")]
384    Lost,
385    #[error("Unexpected error variant (driver implementation is at fault)")]
386    Unexpected,
387}
388
389#[cfg(any(dx12, vulkan))]
390impl From<gpu_allocator::AllocationError> for DeviceError {
391    fn from(result: gpu_allocator::AllocationError) -> Self {
392        match result {
393            gpu_allocator::AllocationError::OutOfMemory => Self::OutOfMemory,
394            gpu_allocator::AllocationError::FailedToMap(e) => {
395                log::error!("gpu-allocator: Failed to map: {e}");
396                Self::Lost
397            }
398            gpu_allocator::AllocationError::NoCompatibleMemoryTypeFound => {
399                log::error!("gpu-allocator: No Compatible Memory Type Found");
400                Self::Lost
401            }
402            gpu_allocator::AllocationError::InvalidAllocationCreateDesc => {
403                log::error!("gpu-allocator: Invalid Allocation Creation Description");
404                Self::Lost
405            }
406            gpu_allocator::AllocationError::InvalidAllocatorCreateDesc(e) => {
407                log::error!("gpu-allocator: Invalid Allocator Creation Description: {e}");
408                Self::Lost
409            }
410
411            gpu_allocator::AllocationError::Internal(e) => {
412                log::error!("gpu-allocator: Internal Error: {e}");
413                Self::Lost
414            }
415            gpu_allocator::AllocationError::BarrierLayoutNeedsDevice10
416            | gpu_allocator::AllocationError::CastableFormatsRequiresEnhancedBarriers
417            | gpu_allocator::AllocationError::CastableFormatsRequiresAtLeastDevice12 => {
418                unreachable!()
419            }
420        }
421    }
422}
423
424// A copy of gpu_allocator::AllocationSizes, allowing to read the configured value for
425// the dx12 backend, we should instead add getters to gpu_allocator::AllocationSizes
426// and remove this type.
427// https://github.com/Traverse-Research/gpu-allocator/issues/295
428#[cfg_attr(not(any(dx12, vulkan)), expect(dead_code))]
429pub(crate) struct AllocationSizes {
430    pub(crate) min_device_memblock_size: u64,
431    pub(crate) max_device_memblock_size: u64,
432    pub(crate) min_host_memblock_size: u64,
433    pub(crate) max_host_memblock_size: u64,
434}
435
436impl AllocationSizes {
437    #[allow(dead_code, reason = "may be unused on some platforms")]
438    pub(crate) fn from_memory_hints(memory_hints: &wgt::MemoryHints) -> Self {
439        // TODO: the allocator's configuration should take hardware capability into
440        // account.
441        const MB: u64 = 1024 * 1024;
442
443        match memory_hints {
444            wgt::MemoryHints::Performance => Self {
445                min_device_memblock_size: 128 * MB,
446                max_device_memblock_size: 256 * MB,
447                min_host_memblock_size: 64 * MB,
448                max_host_memblock_size: 128 * MB,
449            },
450            wgt::MemoryHints::MemoryUsage => Self {
451                min_device_memblock_size: 8 * MB,
452                max_device_memblock_size: 64 * MB,
453                min_host_memblock_size: 4 * MB,
454                max_host_memblock_size: 32 * MB,
455            },
456            wgt::MemoryHints::Manual {
457                suballocated_device_memory_block_size,
458            } => {
459                // TODO: https://github.com/gfx-rs/wgpu/issues/8625
460                // Would it be useful to expose the host size in memory hints
461                // instead of always using half of the device size?
462                let device_size = suballocated_device_memory_block_size;
463                let host_size = device_size.start / 2..device_size.end / 2;
464
465                // gpu_allocator clamps the sizes between 4MiB and 256MiB, but we clamp them ourselves since we use
466                // the sizes when detecting high memory pressure and there is no way to query the values otherwise.
467                Self {
468                    min_device_memblock_size: device_size.start.clamp(4 * MB, 256 * MB),
469                    max_device_memblock_size: device_size.end.clamp(4 * MB, 256 * MB),
470                    min_host_memblock_size: host_size.start.clamp(4 * MB, 256 * MB),
471                    max_host_memblock_size: host_size.end.clamp(4 * MB, 256 * MB),
472                }
473            }
474        }
475    }
476}
477
478#[cfg(any(dx12, vulkan))]
479impl From<AllocationSizes> for gpu_allocator::AllocationSizes {
480    fn from(value: AllocationSizes) -> gpu_allocator::AllocationSizes {
481        gpu_allocator::AllocationSizes::new(
482            value.min_device_memblock_size,
483            value.min_host_memblock_size,
484        )
485        .with_max_device_memblock_size(value.max_device_memblock_size)
486        .with_max_host_memblock_size(value.max_host_memblock_size)
487    }
488}
489
490#[allow(dead_code, reason = "may be unused on some platforms")]
491#[cold]
492fn hal_usage_error<T: fmt::Display>(txt: T) -> ! {
493    panic!("wgpu-hal invariant was violated (usage error): {txt}")
494}
495
496#[allow(dead_code, reason = "may be unused on some platforms")]
497#[cold]
498fn hal_internal_error<T: fmt::Display>(txt: T) -> ! {
499    panic!("wgpu-hal ran into a preventable internal error: {txt}")
500}
501
502#[derive(Clone, Debug, Eq, PartialEq, Error)]
503pub enum ShaderError {
504    #[error("Compilation failed: {0:?}")]
505    Compilation(String),
506    #[error(transparent)]
507    Device(#[from] DeviceError),
508}
509
510#[derive(Clone, Debug, Eq, PartialEq, Error)]
511pub enum PipelineError {
512    #[error("Linkage failed for stage {0:?}: {1}")]
513    Linkage(wgt::ShaderStages, String),
514    #[error("Entry point for stage {0:?} is invalid")]
515    EntryPoint(naga::ShaderStage),
516    #[error(transparent)]
517    Device(#[from] DeviceError),
518    #[error("Pipeline constant error for stage {0:?}: {1}")]
519    PipelineConstants(wgt::ShaderStages, String),
520}
521
522#[derive(Clone, Debug, Eq, PartialEq, Error)]
523pub enum PipelineCacheError {
524    #[error(transparent)]
525    Device(#[from] DeviceError),
526}
527
528#[derive(Clone, Debug, Eq, PartialEq, Error)]
529pub enum SurfaceError {
530    #[error("Surface is lost")]
531    Lost,
532    #[error("Surface is outdated, needs to be re-created")]
533    Outdated,
534    #[error("Timed out waiting for a surface texture")]
535    Timeout,
536    #[error("The window is occluded (e.g. minimized or behind another window). Try again once the window is no longer occluded.")]
537    Occluded,
538    #[error(transparent)]
539    Device(#[from] DeviceError),
540    #[error("Other reason: {0}")]
541    Other(&'static str),
542}
543
544/// Error occurring while trying to create an instance, or create a surface from an instance;
545/// typically relating to the state of the underlying graphics API or hardware.
546#[derive(Clone, Debug, Error)]
547#[error("{message}")]
548pub struct InstanceError {
549    /// These errors are very platform specific, so do not attempt to encode them as an enum.
550    ///
551    /// This message should describe the problem in sufficient detail to be useful for a
552    /// user-to-developer “why won't this work on my machine” bug report, and otherwise follow
553    /// <https://rust-lang.github.io/api-guidelines/interoperability.html#error-types-are-meaningful-and-well-behaved-c-good-err>.
554    message: String,
555
556    /// Underlying error value, if any is available.
557    #[source]
558    source: Option<Arc<dyn Error + Send + Sync + 'static>>,
559}
560
561impl InstanceError {
562    #[allow(dead_code, reason = "may be unused on some platforms")]
563    pub(crate) fn new(message: String) -> Self {
564        Self {
565            message,
566            source: None,
567        }
568    }
569    #[allow(dead_code, reason = "may be unused on some platforms")]
570    pub(crate) fn with_source(message: String, source: impl Error + Send + Sync + 'static) -> Self {
571        cfg_if::cfg_if! {
572            if #[cfg(supports_ptr_atomics)] {
573                let source = Arc::new(source);
574            } else {
575                // TODO(https://github.com/rust-lang/rust/issues/18598): avoid indirection via Box once arbitrary types support unsized coercion
576                let source: Box<dyn Error + Send + Sync + 'static> = Box::new(source);
577                let source = Arc::from(source);
578            }
579        }
580        Self {
581            message,
582            source: Some(source),
583        }
584    }
585}
586
587/// All the types and methods that make up a implementation on top of a backend.
588///
589/// Only the types that have non-dyn trait bounds have methods on them. Most methods
590/// are either on [`CommandEncoder`] or [`Device`].
591///
592/// The api can either be used through generics (through use of this trait and associated
593/// types) or dynamically through using the `Dyn*` traits.
594pub trait Api: Clone + fmt::Debug + Sized + WasmNotSendSync + 'static {
595    const VARIANT: wgt::Backend;
596
597    type Instance: DynInstance + Instance<A = Self>;
598    type Surface: DynSurface + Surface<A = Self>;
599    type Adapter: DynAdapter + Adapter<A = Self>;
600    type Device: DynDevice + Device<A = Self>;
601
602    type Queue: DynQueue + Queue<A = Self>;
603    type CommandEncoder: DynCommandEncoder + CommandEncoder<A = Self>;
604
605    /// This API's command buffer type.
606    ///
607    /// The only thing you can do with `CommandBuffer`s is build them
608    /// with a [`CommandEncoder`] and then pass them to
609    /// [`Queue::submit`] for execution, or destroy them by passing
610    /// them to [`CommandEncoder::reset_all`].
611    ///
612    /// [`CommandEncoder`]: Api::CommandEncoder
613    type CommandBuffer: DynCommandBuffer;
614
615    type Buffer: DynBuffer;
616    type Texture: DynTexture;
617    type SurfaceTexture: DynSurfaceTexture + Borrow<Self::Texture>;
618    type TextureView: DynTextureView;
619    type Sampler: DynSampler;
620    type QuerySet: DynQuerySet;
621
622    /// A value you can block on to wait for something to finish.
623    ///
624    /// A `Fence` holds a monotonically increasing [`FenceValue`]. You can call
625    /// [`Device::wait`] to block until a fence reaches or passes a value you
626    /// choose. [`Queue::submit`] can take a `Fence` and a [`FenceValue`] to
627    /// store in it when the submitted work is complete.
628    ///
629    /// Attempting to set a fence to a value less than its current value has no
630    /// effect.
631    ///
632    /// Waiting on a fence returns as soon as the fence reaches *or passes* the
633    /// requested value. This implies that, in order to reliably determine when
634    /// an operation has completed, operations must finish in order of
635    /// increasing fence values: if a higher-valued operation were to finish
636    /// before a lower-valued operation, then waiting for the fence to reach the
637    /// lower value could return before the lower-valued operation has actually
638    /// finished.
639    type Fence: DynFence;
640
641    type BindGroupLayout: DynBindGroupLayout;
642    type BindGroup: DynBindGroup;
643    type PipelineLayout: DynPipelineLayout;
644    type ShaderModule: DynShaderModule;
645    type RenderPipeline: DynRenderPipeline;
646    type ComputePipeline: DynComputePipeline;
647    type PipelineCache: DynPipelineCache;
648
649    type AccelerationStructure: DynAccelerationStructure + 'static;
650}
651
652pub trait Instance: Sized + WasmNotSendSync {
653    type A: Api;
654
655    unsafe fn init(desc: &InstanceDescriptor<'_>) -> Result<Self, InstanceError>;
656    unsafe fn create_surface(
657        &self,
658        display_handle: raw_window_handle::RawDisplayHandle,
659        window_handle: raw_window_handle::RawWindowHandle,
660    ) -> Result<<Self::A as Api>::Surface, InstanceError>;
661    /// `surface_hint` is only used by the GLES backend targeting WebGL2
662    unsafe fn enumerate_adapters(
663        &self,
664        surface_hint: Option<&<Self::A as Api>::Surface>,
665    ) -> Vec<ExposedAdapter<Self::A>>;
666}
667
668pub trait Surface: WasmNotSendSync {
669    type A: Api;
670
671    /// Configure `self` to use `device`.
672    ///
673    /// # Safety
674    ///
675    /// - All GPU work using `self` must have been completed.
676    /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
677    /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
678    /// - The surface `self` must not currently be configured to use any other [`Device`].
679    unsafe fn configure(
680        &self,
681        device: &<Self::A as Api>::Device,
682        config: &SurfaceConfiguration,
683    ) -> Result<(), SurfaceError>;
684
685    /// Unconfigure `self` on `device`.
686    ///
687    /// # Safety
688    ///
689    /// - All GPU work that uses `surface` must have been completed.
690    /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
691    /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
692    /// - The surface `self` must have been configured on `device`.
693    unsafe fn unconfigure(&self, device: &<Self::A as Api>::Device);
694
695    /// Return the next texture to be presented by `self`, for the caller to draw on.
696    ///
697    /// On success, return an [`AcquiredSurfaceTexture`] representing the
698    /// texture into which the caller should draw the image to be displayed on
699    /// `self`.
700    ///
701    /// If `timeout` elapses before `self` has a texture ready to be acquired,
702    /// return `Err(SurfaceError::Timeout)`. If `timeout` is `None`, wait
703    /// indefinitely, with no timeout.
704    ///
705    /// # Using an [`AcquiredSurfaceTexture`]
706    ///
707    /// On success, this function returns an [`AcquiredSurfaceTexture`] whose
708    /// [`texture`] field is a [`SurfaceTexture`] from which the caller can
709    /// [`borrow`] a [`Texture`] to draw on. The [`AcquiredSurfaceTexture`] also
710    /// carries some metadata about that [`SurfaceTexture`].
711    ///
712    /// All calls to [`Queue::submit`] that draw on that [`Texture`] must also
713    /// include the [`SurfaceTexture`] in the `surface_textures` argument.
714    ///
715    /// When you are done drawing on the texture, you can display it on `self`
716    /// by passing the [`SurfaceTexture`] and `self` to [`Queue::present`].
717    ///
718    /// If you do not wish to display the texture, you must pass the
719    /// [`SurfaceTexture`] to [`self.discard_texture`], so that it can be reused
720    /// by future acquisitions.
721    ///
722    /// # Portability
723    ///
724    /// Some backends can't support a timeout when acquiring a texture. On these
725    /// backends, `timeout` is ignored.
726    ///
727    /// On macOS, this returns `Err(SurfaceError::Timeout)` when the window is
728    /// not visible (minimized, fully occluded, or on another virtual desktop)
729    /// to avoid blocking in `CAMetalLayer.nextDrawable()`.
730    ///
731    /// # Safety
732    ///
733    /// - The surface `self` must currently be configured on some [`Device`].
734    ///
735    /// - The `fence` argument must be the same [`Fence`] passed to all calls to
736    ///   [`Queue::submit`] that used [`Texture`]s acquired from this surface.
737    ///
738    /// - You may only have one texture acquired from `self` at a time. When
739    ///   `acquire_texture` returns `Ok(ast)`, you must pass the returned
740    ///   [`SurfaceTexture`] `ast.texture` to either [`Queue::present`] or
741    ///   [`Surface::discard_texture`] before calling `acquire_texture` again.
742    ///
743    /// [`texture`]: AcquiredSurfaceTexture::texture
744    /// [`SurfaceTexture`]: Api::SurfaceTexture
745    /// [`borrow`]: alloc::borrow::Borrow::borrow
746    /// [`Texture`]: Api::Texture
747    /// [`Fence`]: Api::Fence
748    /// [`self.discard_texture`]: Surface::discard_texture
749    unsafe fn acquire_texture(
750        &self,
751        timeout: Option<core::time::Duration>,
752        fence: &<Self::A as Api>::Fence,
753    ) -> Result<AcquiredSurfaceTexture<Self::A>, SurfaceError>;
754
755    /// Relinquish an acquired texture without presenting it.
756    ///
757    /// After this call, the texture underlying [`SurfaceTexture`] may be
758    /// returned by subsequent calls to [`self.acquire_texture`].
759    ///
760    /// # Safety
761    ///
762    /// - The surface `self` must currently be configured on some [`Device`].
763    ///
764    /// - `texture` must be a [`SurfaceTexture`] returned by a call to
765    ///   [`self.acquire_texture`] that has not yet been passed to
766    ///   [`Queue::present`].
767    ///
768    /// [`SurfaceTexture`]: Api::SurfaceTexture
769    /// [`self.acquire_texture`]: Surface::acquire_texture
770    unsafe fn discard_texture(&self, texture: <Self::A as Api>::SurfaceTexture);
771}
772
773pub trait Adapter: WasmNotSendSync {
774    type A: Api;
775
776    unsafe fn open(
777        &self,
778        features: wgt::Features,
779        limits: &wgt::Limits,
780        memory_hints: &wgt::MemoryHints,
781    ) -> Result<OpenDevice<Self::A>, DeviceError>;
782
783    /// Return the set of supported capabilities for a texture format.
784    unsafe fn texture_format_capabilities(
785        &self,
786        format: wgt::TextureFormat,
787    ) -> TextureFormatCapabilities;
788
789    /// Returns the capabilities of working with a specified surface.
790    ///
791    /// `None` means presentation is not supported for it.
792    unsafe fn surface_capabilities(
793        &self,
794        surface: &<Self::A as Api>::Surface,
795    ) -> Option<SurfaceCapabilities>;
796
797    /// Creates a [`PresentationTimestamp`] using the adapter's WSI.
798    ///
799    /// [`PresentationTimestamp`]: wgt::PresentationTimestamp
800    unsafe fn get_presentation_timestamp(&self) -> wgt::PresentationTimestamp;
801
802    /// The combination of all usages that the are guaranteed to be be ordered by the hardware.
803    /// If a usage is ordered, then if the buffer state doesn't change between draw calls,
804    /// there are no barriers needed for synchronization.
805    fn get_ordered_buffer_usages(&self) -> wgt::BufferUses;
806
807    /// The combination of all usages that the are guaranteed to be be ordered by the hardware.
808    /// If a usage is ordered, then if the buffer state doesn't change between draw calls,
809    /// there are no barriers needed for synchronization.
810    fn get_ordered_texture_usages(&self) -> wgt::TextureUses;
811}
812
813/// A connection to a GPU and a pool of resources to use with it.
814///
815/// A `wgpu-hal` `Device` represents an open connection to a specific graphics
816/// processor, controlled via the backend [`Device::A`]. A `Device` is mostly
817/// used for creating resources. Each `Device` has an associated [`Queue`] used
818/// for command submission.
819///
820/// On Vulkan a `Device` corresponds to a logical device ([`VkDevice`]). Other
821/// backends don't have an exact analog: for example, [`ID3D12Device`]s and
822/// [`MTLDevice`]s are owned by the backends' [`wgpu_hal::Adapter`]
823/// implementations, and shared by all [`wgpu_hal::Device`]s created from that
824/// `Adapter`.
825///
826/// A `Device`'s life cycle is generally:
827///
828/// 1)  Obtain a `Device` and its associated [`Queue`] by calling
829///     [`Adapter::open`].
830///
831///     Alternatively, the backend-specific types that implement [`Adapter`] often
832///     have methods for creating a `wgpu-hal` `Device` from a platform-specific
833///     handle. For example, [`vulkan::Adapter::device_from_raw`] can create a
834///     [`vulkan::Device`] from an [`ash::Device`].
835///
836/// 1)  Create resources to use on the device by calling methods like
837///     [`Device::create_texture`] or [`Device::create_shader_module`].
838///
839/// 1)  Call [`Device::create_command_encoder`] to obtain a [`CommandEncoder`],
840///     which you can use to build [`CommandBuffer`]s holding commands to be
841///     executed on the GPU.
842///
843/// 1)  Call [`Queue::submit`] on the `Device`'s associated [`Queue`] to submit
844///     [`CommandBuffer`]s for execution on the GPU. If needed, call
845///     [`Device::wait`] to wait for them to finish execution.
846///
847/// 1)  Free resources with methods like [`Device::destroy_texture`] or
848///     [`Device::destroy_shader_module`].
849///
850/// 1)  Drop the device.
851///
852/// [`vkDevice`]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VkDevice
853/// [`ID3D12Device`]: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/nn-d3d12-id3d12device
854/// [`MTLDevice`]: https://developer.apple.com/documentation/metal/mtldevice
855/// [`wgpu_hal::Adapter`]: Adapter
856/// [`wgpu_hal::Device`]: Device
857/// [`vulkan::Adapter::device_from_raw`]: vulkan/struct.Adapter.html#method.device_from_raw
858/// [`vulkan::Device`]: vulkan/struct.Device.html
859/// [`ash::Device`]: https://docs.rs/ash/latest/ash/struct.Device.html
860/// [`CommandBuffer`]: Api::CommandBuffer
861///
862/// # Safety
863///
864/// As with other `wgpu-hal` APIs, [validation] is the caller's
865/// responsibility. Here are the general requirements for all `Device`
866/// methods:
867///
868/// - Any resource passed to a `Device` method must have been created by that
869///   `Device`. For example, a [`Texture`] passed to [`Device::destroy_texture`] must
870///   have been created with the `Device` passed as `self`.
871///
872/// - Resources may not be destroyed if they are used by any submitted command
873///   buffers that have not yet finished execution.
874///
875/// [validation]: index.html#validation-is-the-calling-codes-responsibility-not-wgpu-hals
876/// [`Texture`]: Api::Texture
877pub trait Device: WasmNotSendSync {
878    type A: Api;
879
880    /// Creates a new buffer.
881    ///
882    /// The initial usage is `wgt::BufferUses::empty()`.
883    unsafe fn create_buffer(
884        &self,
885        desc: &BufferDescriptor,
886    ) -> Result<<Self::A as Api>::Buffer, DeviceError>;
887
888    /// Free `buffer` and any GPU resources it owns.
889    ///
890    /// Note that backends are allowed to allocate GPU memory for buffers from
891    /// allocation pools, and this call is permitted to simply return `buffer`'s
892    /// storage to that pool, without making it available to other applications.
893    ///
894    /// # Safety
895    ///
896    /// - The given `buffer` must not currently be mapped.
897    unsafe fn destroy_buffer(&self, buffer: <Self::A as Api>::Buffer);
898
899    /// A hook for when a wgpu-core buffer is created from a raw wgpu-hal buffer.
900    unsafe fn add_raw_buffer(&self, buffer: &<Self::A as Api>::Buffer);
901
902    /// Return a pointer to CPU memory mapping the contents of `buffer`.
903    ///
904    /// Buffer mappings are persistent: the buffer may remain mapped on the CPU
905    /// while the GPU reads or writes to it. (Note that `wgpu_core` does not use
906    /// this feature: when a `wgpu_core::Buffer` is unmapped, the underlying
907    /// `wgpu_hal` buffer is also unmapped.)
908    ///
909    /// If this function returns `Ok(mapping)`, then:
910    ///
911    /// - `mapping.ptr` is the CPU address of the start of the mapped memory.
912    ///
913    /// - If `mapping.is_coherent` is `true`, then CPU writes to the mapped
914    ///   memory are immediately visible on the GPU, and vice versa.
915    ///
916    /// # Safety
917    ///
918    /// - The given `buffer` must have been created with the [`MAP_READ`] or
919    ///   [`MAP_WRITE`] flags set in [`BufferDescriptor::usage`].
920    ///
921    /// - The given `range` must fall within the size of `buffer`.
922    ///
923    /// - The caller must avoid data races between the CPU and the GPU. A data
924    ///   race is any pair of accesses to a particular byte, one of which is a
925    ///   write, that are not ordered with respect to each other by some sort of
926    ///   synchronization operation.
927    ///
928    /// - If this function returns `Ok(mapping)` and `mapping.is_coherent` is
929    ///   `false`, then:
930    ///
931    ///   - Every CPU write to a mapped byte followed by a GPU read of that byte
932    ///     must have at least one call to [`Device::flush_mapped_ranges`]
933    ///     covering that byte that occurs between those two accesses.
934    ///
935    ///   - Every GPU write to a mapped byte followed by a CPU read of that byte
936    ///     must have at least one call to [`Device::invalidate_mapped_ranges`]
937    ///     covering that byte that occurs between those two accesses.
938    ///
939    ///   Note that the data race rule above requires that all such access pairs
940    ///   be ordered, so it is meaningful to talk about what must occur
941    ///   "between" them.
942    ///
943    /// - Zero-sized mappings are not allowed.
944    ///
945    /// - The returned [`BufferMapping::ptr`] must not be used after a call to
946    ///   [`Device::unmap_buffer`].
947    ///
948    /// [`MAP_READ`]: wgt::BufferUses::MAP_READ
949    /// [`MAP_WRITE`]: wgt::BufferUses::MAP_WRITE
950    unsafe fn map_buffer(
951        &self,
952        buffer: &<Self::A as Api>::Buffer,
953        range: MemoryRange,
954    ) -> Result<BufferMapping, DeviceError>;
955
956    /// Remove the mapping established by the last call to [`Device::map_buffer`].
957    ///
958    /// # Safety
959    ///
960    /// - The given `buffer` must be currently mapped.
961    unsafe fn unmap_buffer(&self, buffer: &<Self::A as Api>::Buffer);
962
963    /// Indicate that CPU writes to mapped buffer memory should be made visible to the GPU.
964    ///
965    /// # Safety
966    ///
967    /// - The given `buffer` must be currently mapped.
968    ///
969    /// - All ranges produced by `ranges` must fall within `buffer`'s size.
970    unsafe fn flush_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
971    where
972        I: Iterator<Item = MemoryRange>;
973
974    /// Indicate that GPU writes to mapped buffer memory should be made visible to the CPU.
975    ///
976    /// # Safety
977    ///
978    /// - The given `buffer` must be currently mapped.
979    ///
980    /// - All ranges produced by `ranges` must fall within `buffer`'s size.
981    unsafe fn invalidate_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
982    where
983        I: Iterator<Item = MemoryRange>;
984
985    /// Creates a new texture.
986    ///
987    /// The initial usage for all subresources is `wgt::TextureUses::UNINITIALIZED`.
988    unsafe fn create_texture(
989        &self,
990        desc: &TextureDescriptor,
991    ) -> Result<<Self::A as Api>::Texture, DeviceError>;
992    unsafe fn destroy_texture(&self, texture: <Self::A as Api>::Texture);
993
994    /// A hook for when a wgpu-core texture is created from a raw wgpu-hal texture.
995    unsafe fn add_raw_texture(&self, texture: &<Self::A as Api>::Texture);
996
997    unsafe fn create_texture_view(
998        &self,
999        texture: &<Self::A as Api>::Texture,
1000        desc: &TextureViewDescriptor,
1001    ) -> Result<<Self::A as Api>::TextureView, DeviceError>;
1002    unsafe fn destroy_texture_view(&self, view: <Self::A as Api>::TextureView);
1003    unsafe fn create_sampler(
1004        &self,
1005        desc: &SamplerDescriptor,
1006    ) -> Result<<Self::A as Api>::Sampler, DeviceError>;
1007    unsafe fn destroy_sampler(&self, sampler: <Self::A as Api>::Sampler);
1008
1009    /// Create a fresh [`CommandEncoder`].
1010    ///
1011    /// The new `CommandEncoder` is in the "closed" state.
1012    unsafe fn create_command_encoder(
1013        &self,
1014        desc: &CommandEncoderDescriptor<<Self::A as Api>::Queue>,
1015    ) -> Result<<Self::A as Api>::CommandEncoder, DeviceError>;
1016
1017    /// Creates a bind group layout.
1018    unsafe fn create_bind_group_layout(
1019        &self,
1020        desc: &BindGroupLayoutDescriptor,
1021    ) -> Result<<Self::A as Api>::BindGroupLayout, DeviceError>;
1022    unsafe fn destroy_bind_group_layout(&self, bg_layout: <Self::A as Api>::BindGroupLayout);
1023    unsafe fn create_pipeline_layout(
1024        &self,
1025        desc: &PipelineLayoutDescriptor<<Self::A as Api>::BindGroupLayout>,
1026    ) -> Result<<Self::A as Api>::PipelineLayout, DeviceError>;
1027    unsafe fn destroy_pipeline_layout(&self, pipeline_layout: <Self::A as Api>::PipelineLayout);
1028
1029    #[allow(clippy::type_complexity)]
1030    unsafe fn create_bind_group(
1031        &self,
1032        desc: &BindGroupDescriptor<
1033            <Self::A as Api>::BindGroupLayout,
1034            <Self::A as Api>::Buffer,
1035            <Self::A as Api>::Sampler,
1036            <Self::A as Api>::TextureView,
1037            <Self::A as Api>::AccelerationStructure,
1038        >,
1039    ) -> Result<<Self::A as Api>::BindGroup, DeviceError>;
1040    unsafe fn destroy_bind_group(&self, group: <Self::A as Api>::BindGroup);
1041
1042    unsafe fn create_shader_module(
1043        &self,
1044        desc: &ShaderModuleDescriptor,
1045        shader: ShaderInput,
1046    ) -> Result<<Self::A as Api>::ShaderModule, ShaderError>;
1047    unsafe fn destroy_shader_module(&self, module: <Self::A as Api>::ShaderModule);
1048
1049    #[allow(clippy::type_complexity)]
1050    unsafe fn create_render_pipeline(
1051        &self,
1052        desc: &RenderPipelineDescriptor<
1053            <Self::A as Api>::PipelineLayout,
1054            <Self::A as Api>::ShaderModule,
1055            <Self::A as Api>::PipelineCache,
1056        >,
1057    ) -> Result<<Self::A as Api>::RenderPipeline, PipelineError>;
1058    unsafe fn destroy_render_pipeline(&self, pipeline: <Self::A as Api>::RenderPipeline);
1059
1060    #[allow(clippy::type_complexity)]
1061    unsafe fn create_compute_pipeline(
1062        &self,
1063        desc: &ComputePipelineDescriptor<
1064            <Self::A as Api>::PipelineLayout,
1065            <Self::A as Api>::ShaderModule,
1066            <Self::A as Api>::PipelineCache,
1067        >,
1068    ) -> Result<<Self::A as Api>::ComputePipeline, PipelineError>;
1069    unsafe fn destroy_compute_pipeline(&self, pipeline: <Self::A as Api>::ComputePipeline);
1070
1071    unsafe fn create_pipeline_cache(
1072        &self,
1073        desc: &PipelineCacheDescriptor<'_>,
1074    ) -> Result<<Self::A as Api>::PipelineCache, PipelineCacheError>;
1075    fn pipeline_cache_validation_key(&self) -> Option<[u8; 16]> {
1076        None
1077    }
1078    unsafe fn destroy_pipeline_cache(&self, cache: <Self::A as Api>::PipelineCache);
1079
1080    unsafe fn create_query_set(
1081        &self,
1082        desc: &wgt::QuerySetDescriptor<Label>,
1083    ) -> Result<<Self::A as Api>::QuerySet, DeviceError>;
1084    unsafe fn destroy_query_set(&self, set: <Self::A as Api>::QuerySet);
1085    unsafe fn create_fence(&self) -> Result<<Self::A as Api>::Fence, DeviceError>;
1086    unsafe fn destroy_fence(&self, fence: <Self::A as Api>::Fence);
1087    unsafe fn get_fence_value(
1088        &self,
1089        fence: &<Self::A as Api>::Fence,
1090    ) -> Result<FenceValue, DeviceError>;
1091
1092    /// Wait for `fence` to reach `value`.
1093    ///
1094    /// Operations like [`Queue::submit`] can accept a [`Fence`] and a
1095    /// [`FenceValue`] to store in it, so you can use this `wait` function
1096    /// to wait for a given queue submission to finish execution.
1097    ///
1098    /// The `value` argument must be a value that some actual operation you have
1099    /// already presented to the device is going to store in `fence`. You cannot
1100    /// wait for values yet to be submitted. (This restriction accommodates
1101    /// implementations like the `vulkan` backend's [`FencePool`] that must
1102    /// allocate a distinct synchronization object for each fence value one is
1103    /// able to wait for.)
1104    ///
1105    /// Calling `wait` with a lower [`FenceValue`] than `fence`'s current value
1106    /// returns immediately.
1107    ///
1108    /// If `timeout` is not provided, the function will block indefinitely or until
1109    /// an error is encountered.
1110    ///
1111    /// Returns `Ok(true)` on success and `Ok(false)` on timeout.
1112    ///
1113    /// [`Fence`]: Api::Fence
1114    /// [`FencePool`]: vulkan/enum.Fence.html#variant.FencePool
1115    unsafe fn wait(
1116        &self,
1117        fence: &<Self::A as Api>::Fence,
1118        value: FenceValue,
1119        timeout: Option<core::time::Duration>,
1120    ) -> Result<bool, DeviceError>;
1121
1122    /// Start a graphics debugger capture.
1123    ///
1124    /// # Safety
1125    ///
1126    /// See [`wgpu::Device::start_graphics_debugger_capture`][api] for more details.
1127    ///
1128    /// [api]: ../wgpu/struct.Device.html#method.start_graphics_debugger_capture
1129    unsafe fn start_graphics_debugger_capture(&self) -> bool;
1130
1131    /// Stop a graphics debugger capture.
1132    ///
1133    /// # Safety
1134    ///
1135    /// See [`wgpu::Device::stop_graphics_debugger_capture`][api] for more details.
1136    ///
1137    /// [api]: ../wgpu/struct.Device.html#method.stop_graphics_debugger_capture
1138    unsafe fn stop_graphics_debugger_capture(&self);
1139
1140    #[allow(unused_variables)]
1141    unsafe fn pipeline_cache_get_data(
1142        &self,
1143        cache: &<Self::A as Api>::PipelineCache,
1144    ) -> Option<Vec<u8>> {
1145        None
1146    }
1147
1148    unsafe fn create_acceleration_structure(
1149        &self,
1150        desc: &AccelerationStructureDescriptor,
1151    ) -> Result<<Self::A as Api>::AccelerationStructure, DeviceError>;
1152    unsafe fn get_acceleration_structure_build_sizes(
1153        &self,
1154        desc: &GetAccelerationStructureBuildSizesDescriptor<<Self::A as Api>::Buffer>,
1155    ) -> AccelerationStructureBuildSizes;
1156    unsafe fn get_acceleration_structure_device_address(
1157        &self,
1158        acceleration_structure: &<Self::A as Api>::AccelerationStructure,
1159    ) -> wgt::BufferAddress;
1160    unsafe fn destroy_acceleration_structure(
1161        &self,
1162        acceleration_structure: <Self::A as Api>::AccelerationStructure,
1163    );
1164    fn tlas_instance_to_bytes(&self, instance: TlasInstance) -> Vec<u8>;
1165
1166    fn get_internal_counters(&self) -> wgt::HalCounters;
1167
1168    fn generate_allocator_report(&self) -> Option<wgt::AllocatorReport> {
1169        None
1170    }
1171
1172    fn check_if_oom(&self) -> Result<(), DeviceError>;
1173}
1174
1175pub trait Queue: WasmNotSendSync {
1176    type A: Api;
1177
1178    /// Submit `command_buffers` for execution on GPU.
1179    ///
1180    /// Update `fence` to `value` when the operation is complete. See
1181    /// [`Fence`] for details.
1182    ///
1183    /// All command buffers submitted to a `wgpu_hal` queue are executed in the
1184    /// order they're submitted, with each buffer able to observe the effects of
1185    /// previous buffers' execution. Specifically:
1186    ///
1187    /// - If two calls to `submit` on a single `Queue` occur in a particular
1188    ///   order (that is, they happen on the same thread, or on two threads that
1189    ///   have synchronized to establish an ordering), then the first
1190    ///   submission's commands all complete execution before any of the second
1191    ///   submission's commands begin. All results produced by one submission
1192    ///   are visible to the next.
1193    ///
1194    /// - Within a submission, command buffers execute in the order in which they
1195    ///   appear in `command_buffers`. All results produced by one buffer are
1196    ///   visible to the next.
1197    ///
1198    /// If two calls to `submit` on a single `Queue` from different threads are
1199    /// not synchronized to occur in a particular order, they must pass distinct
1200    /// [`Fence`]s. As explained in the [`Fence`] documentation, waiting for
1201    /// operations to complete is only trustworthy when operations finish in
1202    /// order of increasing fence value, but submissions from different threads
1203    /// cannot determine how to order the fence values if the submissions
1204    /// themselves are unordered. If each thread uses a separate [`Fence`], this
1205    /// problem does not arise.
1206    ///
1207    /// # Safety
1208    ///
1209    /// - Each [`CommandBuffer`][cb] in `command_buffers` must have been created
1210    ///   from a [`CommandEncoder`][ce] that was constructed from the
1211    ///   [`Device`][d] associated with this [`Queue`].
1212    ///
1213    /// - Each [`CommandBuffer`][cb] must remain alive until the submitted
1214    ///   commands have finished execution. Since command buffers must not
1215    ///   outlive their encoders, this implies that the encoders must remain
1216    ///   alive as well.
1217    ///
1218    /// - All resources used by a submitted [`CommandBuffer`][cb]
1219    ///   ([`Texture`][t]s, [`BindGroup`][bg]s, [`RenderPipeline`][rp]s, and so
1220    ///   on) must remain alive until the command buffer finishes execution.
1221    ///
1222    /// - Every [`SurfaceTexture`][st] that any command in `command_buffers`
1223    ///   writes to must appear in the `surface_textures` argument.
1224    ///
1225    /// - No [`SurfaceTexture`][st] may appear in the `surface_textures`
1226    ///   argument more than once.
1227    ///
1228    /// - Each [`SurfaceTexture`][st] in `surface_textures` must be configured
1229    ///   for use with the [`Device`][d] associated with this [`Queue`],
1230    ///   typically by calling [`Surface::configure`].
1231    ///
1232    /// - All calls to this function that include a given [`SurfaceTexture`][st]
1233    ///   in `surface_textures` must use the same [`Fence`].
1234    ///
1235    /// - The [`Fence`] passed as `signal_fence.0` must remain alive until
1236    ///   all submissions that will signal it have completed.
1237    ///
1238    /// [`Fence`]: Api::Fence
1239    /// [cb]: Api::CommandBuffer
1240    /// [ce]: Api::CommandEncoder
1241    /// [d]: Api::Device
1242    /// [t]: Api::Texture
1243    /// [bg]: Api::BindGroup
1244    /// [rp]: Api::RenderPipeline
1245    /// [st]: Api::SurfaceTexture
1246    unsafe fn submit(
1247        &self,
1248        command_buffers: &[&<Self::A as Api>::CommandBuffer],
1249        surface_textures: &[&<Self::A as Api>::SurfaceTexture],
1250        signal_fence: (&mut <Self::A as Api>::Fence, FenceValue),
1251    ) -> Result<(), DeviceError>;
1252    /// Present a surface texture to the screen.
1253    ///
1254    /// This consumes the surface texture, returning it to the swapchain.
1255    ///
1256    /// # Safety
1257    ///
1258    /// - `texture` must have been acquired from `surface` via
1259    ///   [`Surface::acquire_texture`] and not yet presented or discarded.
1260    /// - `surface` must be configured for use with the [`Device`][d] associated
1261    ///   with this [`Queue`].
1262    /// - `texture` must be in the "present" state. Either:
1263    ///   - It was passed in [`submit`][s]'s `surface_textures` argument
1264    ///     (which transitions it to the present state), or
1265    ///   - The caller has otherwise transitioned it (e.g. via a clear +
1266    ///     barrier to `PRESENT` for textures that were never rendered to).
1267    /// - Any command buffers that write to `texture` must have been submitted
1268    ///   via [`submit`][s] before this call. The submissions do not need to
1269    ///   have completed on the GPU; platform-level synchronization handles the
1270    ///   ordering between rendering and display.
1271    /// - Must be externally synchronized with all other queue operations
1272    ///   ([`submit`][s], [`present`][Queue::present],
1273    ///   [`wait_for_idle`][Queue::wait_for_idle]) on the same queue.
1274    ///
1275    /// [d]: Api::Device
1276    /// [s]: Queue::submit
1277    unsafe fn present(
1278        &self,
1279        surface: &<Self::A as Api>::Surface,
1280        texture: <Self::A as Api>::SurfaceTexture,
1281    ) -> Result<(), SurfaceError>;
1282    /// Block until all previously submitted work on this queue has completed,
1283    /// including any pending presentations.
1284    ///
1285    /// # Safety
1286    ///
1287    /// - Must be externally synchronized with all other queue operations
1288    ///   ([`submit`][Queue::submit], [`present`][Queue::present],
1289    ///   [`wait_for_idle`][Queue::wait_for_idle]) on the same queue.
1290    unsafe fn wait_for_idle(&self) -> Result<(), DeviceError>;
1291    unsafe fn get_timestamp_period(&self) -> f32;
1292}
1293
1294/// Encoder and allocation pool for `CommandBuffer`s.
1295///
1296/// A `CommandEncoder` not only constructs `CommandBuffer`s but also
1297/// acts as the allocation pool that owns the buffers' underlying
1298/// storage. Thus, `CommandBuffer`s must not outlive the
1299/// `CommandEncoder` that created them.
1300///
1301/// The life cycle of a `CommandBuffer` is as follows:
1302///
1303/// - Call [`Device::create_command_encoder`] to create a new
1304///   `CommandEncoder`, in the "closed" state.
1305///
1306/// - Call `begin_encoding` on a closed `CommandEncoder` to begin
1307///   recording commands. This puts the `CommandEncoder` in the
1308///   "recording" state.
1309///
1310/// - Call methods like `copy_buffer_to_buffer`, `begin_render_pass`,
1311///   etc. on a "recording" `CommandEncoder` to add commands to the
1312///   list. (If an error occurs, you must call `discard_encoding`; see
1313///   below.)
1314///
1315/// - Call `end_encoding` on a recording `CommandEncoder` to close the
1316///   encoder and construct a fresh `CommandBuffer` consisting of the
1317///   list of commands recorded up to that point.
1318///
1319/// - Call `discard_encoding` on a recording `CommandEncoder` to drop
1320///   the commands recorded thus far and close the encoder. This is
1321///   the only safe thing to do on a `CommandEncoder` if an error has
1322///   occurred while recording commands.
1323///
1324/// - Call `reset_all` on a closed `CommandEncoder`, passing all the
1325///   live `CommandBuffers` built from it. All the `CommandBuffer`s
1326///   are destroyed, and their resources are freed.
1327///
1328/// # Safety
1329///
1330/// - The `CommandEncoder` must be in the states described above to
1331///   make the given calls.
1332///
1333/// - A `CommandBuffer` that has been submitted for execution on the
1334///   GPU must live until its execution is complete.
1335///
1336/// - A `CommandBuffer` must not outlive the `CommandEncoder` that
1337///   built it.
1338///
1339/// It is the user's responsibility to meet this requirements. This
1340/// allows `CommandEncoder` implementations to keep their state
1341/// tracking to a minimum.
1342pub trait CommandEncoder: WasmNotSendSync + fmt::Debug {
1343    type A: Api;
1344
1345    /// Begin encoding a new command buffer.
1346    ///
1347    /// This puts this `CommandEncoder` in the "recording" state.
1348    ///
1349    /// # Safety
1350    ///
1351    /// This `CommandEncoder` must be in the "closed" state.
1352    unsafe fn begin_encoding(&mut self, label: Label) -> Result<(), DeviceError>;
1353
1354    /// Discard the command list under construction.
1355    ///
1356    /// If an error has occurred while recording commands, this
1357    /// is the only safe thing to do with the encoder.
1358    ///
1359    /// This puts this `CommandEncoder` in the "closed" state.
1360    ///
1361    /// # Safety
1362    ///
1363    /// This `CommandEncoder` must be in the "recording" state.
1364    ///
1365    /// Callers must not assume that implementations of this
1366    /// function are idempotent, and thus should not call it
1367    /// multiple times in a row.
1368    unsafe fn discard_encoding(&mut self);
1369
1370    /// Return a fresh [`CommandBuffer`] holding the recorded commands.
1371    ///
1372    /// The returned [`CommandBuffer`] holds all the commands recorded
1373    /// on this `CommandEncoder` since the last call to
1374    /// [`begin_encoding`].
1375    ///
1376    /// This puts this `CommandEncoder` in the "closed" state.
1377    ///
1378    /// # Safety
1379    ///
1380    /// This `CommandEncoder` must be in the "recording" state.
1381    ///
1382    /// The returned [`CommandBuffer`] must not outlive this
1383    /// `CommandEncoder`. Implementations are allowed to build
1384    /// `CommandBuffer`s that depend on storage owned by this
1385    /// `CommandEncoder`.
1386    ///
1387    /// [`CommandBuffer`]: Api::CommandBuffer
1388    /// [`begin_encoding`]: CommandEncoder::begin_encoding
1389    unsafe fn end_encoding(&mut self) -> Result<<Self::A as Api>::CommandBuffer, DeviceError>;
1390
1391    /// Reclaim all resources belonging to this `CommandEncoder`.
1392    ///
1393    /// # Safety
1394    ///
1395    /// This `CommandEncoder` must be in the "closed" state.
1396    ///
1397    /// The `command_buffers` iterator must produce all the live
1398    /// [`CommandBuffer`]s built using this `CommandEncoder` --- that
1399    /// is, every extant `CommandBuffer` returned from `end_encoding`.
1400    ///
1401    /// [`CommandBuffer`]: Api::CommandBuffer
1402    unsafe fn reset_all<I>(&mut self, command_buffers: I)
1403    where
1404        I: Iterator<Item = <Self::A as Api>::CommandBuffer>;
1405
1406    unsafe fn transition_buffers<'a, T>(&mut self, barriers: T)
1407    where
1408        T: Iterator<Item = BufferBarrier<'a, <Self::A as Api>::Buffer>>;
1409
1410    unsafe fn transition_textures<'a, T>(&mut self, barriers: T)
1411    where
1412        T: Iterator<Item = TextureBarrier<'a, <Self::A as Api>::Texture>>;
1413
1414    // copy operations
1415
1416    unsafe fn clear_buffer(&mut self, buffer: &<Self::A as Api>::Buffer, range: MemoryRange);
1417
1418    unsafe fn copy_buffer_to_buffer<T>(
1419        &mut self,
1420        src: &<Self::A as Api>::Buffer,
1421        dst: &<Self::A as Api>::Buffer,
1422        regions: T,
1423    ) where
1424        T: Iterator<Item = BufferCopy>;
1425
1426    /// Copy from an external image to an internal texture.
1427    /// Works with a single array layer.
1428    /// Note: `dst` current usage has to be `wgt::TextureUses::COPY_DST`.
1429    /// Note: the copy extent is in physical size (rounded to the block size)
1430    #[cfg(webgl)]
1431    unsafe fn copy_external_image_to_texture<T>(
1432        &mut self,
1433        src: &wgt::CopyExternalImageSourceInfo,
1434        dst: &<Self::A as Api>::Texture,
1435        dst_premultiplication: bool,
1436        regions: T,
1437    ) where
1438        T: Iterator<Item = TextureCopy>;
1439
1440    /// Copy from one texture to another.
1441    /// Works with a single array layer.
1442    /// Note: `dst` current usage has to be `wgt::TextureUses::COPY_DST`.
1443    /// Note: the copy extent is in physical size (rounded to the block size)
1444    unsafe fn copy_texture_to_texture<T>(
1445        &mut self,
1446        src: &<Self::A as Api>::Texture,
1447        src_usage: wgt::TextureUses,
1448        dst: &<Self::A as Api>::Texture,
1449        regions: T,
1450    ) where
1451        T: Iterator<Item = TextureCopy>;
1452
1453    /// Copy from buffer to texture.
1454    /// Works with a single array layer.
1455    /// Note: `dst` current usage has to be `wgt::TextureUses::COPY_DST`.
1456    /// Note: the copy extent is in physical size (rounded to the block size)
1457    unsafe fn copy_buffer_to_texture<T>(
1458        &mut self,
1459        src: &<Self::A as Api>::Buffer,
1460        dst: &<Self::A as Api>::Texture,
1461        regions: T,
1462    ) where
1463        T: Iterator<Item = BufferTextureCopy>;
1464
1465    /// Copy from texture to buffer.
1466    /// Works with a single array layer.
1467    /// Note: the copy extent is in physical size (rounded to the block size)
1468    unsafe fn copy_texture_to_buffer<T>(
1469        &mut self,
1470        src: &<Self::A as Api>::Texture,
1471        src_usage: wgt::TextureUses,
1472        dst: &<Self::A as Api>::Buffer,
1473        regions: T,
1474    ) where
1475        T: Iterator<Item = BufferTextureCopy>;
1476
1477    unsafe fn copy_acceleration_structure_to_acceleration_structure(
1478        &mut self,
1479        src: &<Self::A as Api>::AccelerationStructure,
1480        dst: &<Self::A as Api>::AccelerationStructure,
1481        copy: wgt::AccelerationStructureCopy,
1482    );
1483    // pass common
1484
1485    /// Sets the bind group at `index` to `group`.
1486    ///
1487    /// If this is not the first call to `set_bind_group` within the current
1488    /// render or compute pass:
1489    ///
1490    /// - If `layout` contains `n` bind group layouts, then any previously set
1491    ///   bind groups at indices `n` or higher are cleared.
1492    ///
1493    /// - If the first `m` bind group layouts of `layout` are equal to those of
1494    ///   the previously passed layout, but no more, then any previously set
1495    ///   bind groups at indices `m` or higher are cleared.
1496    ///
1497    /// It follows from the above that passing the same layout as before doesn't
1498    /// clear any bind groups.
1499    ///
1500    /// # Safety
1501    ///
1502    /// - This [`CommandEncoder`] must be within a render or compute pass.
1503    ///
1504    /// - `index` must be the valid index of some bind group layout in `layout`.
1505    ///   Call this the "relevant bind group layout".
1506    ///
1507    /// - The layout of `group` must be equal to the relevant bind group layout.
1508    ///
1509    /// - The length of `dynamic_offsets` must match the number of buffer
1510    ///   bindings [with dynamic offsets][hdo] in the relevant bind group
1511    ///   layout.
1512    ///
1513    /// - If those buffer bindings are ordered by increasing [`binding` number]
1514    ///   and paired with elements from `dynamic_offsets`, then each offset must
1515    ///   be a valid offset for the binding's corresponding buffer in `group`.
1516    ///
1517    /// [hdo]: wgt::BindingType::Buffer::has_dynamic_offset
1518    /// [`binding` number]: wgt::BindGroupLayoutEntry::binding
1519    unsafe fn set_bind_group(
1520        &mut self,
1521        layout: &<Self::A as Api>::PipelineLayout,
1522        index: u32,
1523        group: &<Self::A as Api>::BindGroup,
1524        dynamic_offsets: &[wgt::DynamicOffset],
1525    );
1526
1527    /// Sets a range in immediate data.
1528    ///
1529    /// IMPORTANT: while the data is passed as words, the offset is in bytes!
1530    ///
1531    /// # Safety
1532    ///
1533    /// - `offset_bytes` must be a multiple of 4.
1534    /// - The range of immediates written must be valid for the pipeline layout at draw time.
1535    unsafe fn set_immediates(
1536        &mut self,
1537        layout: &<Self::A as Api>::PipelineLayout,
1538        offset_bytes: u32,
1539        data: &[u32],
1540    );
1541
1542    unsafe fn insert_debug_marker(&mut self, label: &str);
1543    unsafe fn begin_debug_marker(&mut self, group_label: &str);
1544    unsafe fn end_debug_marker(&mut self);
1545
1546    // queries
1547
1548    /// # Safety:
1549    ///
1550    /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1551    unsafe fn begin_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1552    /// # Safety:
1553    ///
1554    /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1555    unsafe fn end_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1556    unsafe fn write_timestamp(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1557    unsafe fn reset_queries(&mut self, set: &<Self::A as Api>::QuerySet, range: Range<u32>);
1558    unsafe fn copy_query_results(
1559        &mut self,
1560        set: &<Self::A as Api>::QuerySet,
1561        range: Range<u32>,
1562        buffer: &<Self::A as Api>::Buffer,
1563        offset: wgt::BufferAddress,
1564        stride: wgt::BufferSize,
1565    );
1566
1567    // render passes
1568
1569    /// Begin a new render pass, clearing all active bindings.
1570    ///
1571    /// This clears any bindings established by the following calls:
1572    ///
1573    /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1574    /// - [`set_immediates`](CommandEncoder::set_immediates)
1575    /// - [`begin_query`](CommandEncoder::begin_query)
1576    /// - [`set_render_pipeline`](CommandEncoder::set_render_pipeline)
1577    /// - [`set_index_buffer`](CommandEncoder::set_index_buffer)
1578    /// - [`set_vertex_buffer`](CommandEncoder::set_vertex_buffer)
1579    ///
1580    /// # Safety
1581    ///
1582    /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1583    ///   by a call to [`end_render_pass`].
1584    ///
1585    /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1586    ///   by a call to [`end_compute_pass`].
1587    ///
1588    /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1589    /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1590    /// [`end_render_pass`]: CommandEncoder::end_render_pass
1591    /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1592    unsafe fn begin_render_pass(
1593        &mut self,
1594        desc: &RenderPassDescriptor<<Self::A as Api>::QuerySet, <Self::A as Api>::TextureView>,
1595    ) -> Result<(), DeviceError>;
1596
1597    /// End the current render pass.
1598    ///
1599    /// # Safety
1600    ///
1601    /// - There must have been a prior call to [`begin_render_pass`] on this [`CommandEncoder`]
1602    ///   that has not been followed by a call to [`end_render_pass`].
1603    ///
1604    /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1605    /// [`end_render_pass`]: CommandEncoder::end_render_pass
1606    unsafe fn end_render_pass(&mut self);
1607
1608    unsafe fn set_render_pipeline(&mut self, pipeline: &<Self::A as Api>::RenderPipeline);
1609
1610    unsafe fn set_index_buffer<'a>(
1611        &mut self,
1612        binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1613        format: wgt::IndexFormat,
1614    );
1615    unsafe fn set_vertex_buffer<'a>(
1616        &mut self,
1617        index: u32,
1618        binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1619    );
1620    unsafe fn set_viewport(&mut self, rect: &Rect<f32>, depth_range: Range<f32>);
1621    unsafe fn set_scissor_rect(&mut self, rect: &Rect<u32>);
1622    unsafe fn set_stencil_reference(&mut self, value: u32);
1623    unsafe fn set_blend_constants(&mut self, color: &[f32; 4]);
1624
1625    unsafe fn draw(
1626        &mut self,
1627        first_vertex: u32,
1628        vertex_count: u32,
1629        first_instance: u32,
1630        instance_count: u32,
1631    );
1632    unsafe fn draw_indexed(
1633        &mut self,
1634        first_index: u32,
1635        index_count: u32,
1636        base_vertex: i32,
1637        first_instance: u32,
1638        instance_count: u32,
1639    );
1640    unsafe fn draw_indirect(
1641        &mut self,
1642        buffer: &<Self::A as Api>::Buffer,
1643        offset: wgt::BufferAddress,
1644        draw_count: u32,
1645    );
1646    unsafe fn draw_indexed_indirect(
1647        &mut self,
1648        buffer: &<Self::A as Api>::Buffer,
1649        offset: wgt::BufferAddress,
1650        draw_count: u32,
1651    );
1652    unsafe fn draw_indirect_count(
1653        &mut self,
1654        buffer: &<Self::A as Api>::Buffer,
1655        offset: wgt::BufferAddress,
1656        count_buffer: &<Self::A as Api>::Buffer,
1657        count_offset: wgt::BufferAddress,
1658        max_count: u32,
1659    );
1660    unsafe fn draw_indexed_indirect_count(
1661        &mut self,
1662        buffer: &<Self::A as Api>::Buffer,
1663        offset: wgt::BufferAddress,
1664        count_buffer: &<Self::A as Api>::Buffer,
1665        count_offset: wgt::BufferAddress,
1666        max_count: u32,
1667    );
1668    unsafe fn draw_mesh_tasks(
1669        &mut self,
1670        group_count_x: u32,
1671        group_count_y: u32,
1672        group_count_z: u32,
1673    );
1674    unsafe fn draw_mesh_tasks_indirect(
1675        &mut self,
1676        buffer: &<Self::A as Api>::Buffer,
1677        offset: wgt::BufferAddress,
1678        draw_count: u32,
1679    );
1680    unsafe fn draw_mesh_tasks_indirect_count(
1681        &mut self,
1682        buffer: &<Self::A as Api>::Buffer,
1683        offset: wgt::BufferAddress,
1684        count_buffer: &<Self::A as Api>::Buffer,
1685        count_offset: wgt::BufferAddress,
1686        max_count: u32,
1687    );
1688
1689    // compute passes
1690
1691    /// Begin a new compute pass, clearing all active bindings.
1692    ///
1693    /// This clears any bindings established by the following calls:
1694    ///
1695    /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1696    /// - [`set_immediates`](CommandEncoder::set_immediates)
1697    /// - [`begin_query`](CommandEncoder::begin_query)
1698    /// - [`set_compute_pipeline`](CommandEncoder::set_compute_pipeline)
1699    ///
1700    /// # Safety
1701    ///
1702    /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1703    ///   by a call to [`end_render_pass`].
1704    ///
1705    /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1706    ///   by a call to [`end_compute_pass`].
1707    ///
1708    /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1709    /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1710    /// [`end_render_pass`]: CommandEncoder::end_render_pass
1711    /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1712    unsafe fn begin_compute_pass(
1713        &mut self,
1714        desc: &ComputePassDescriptor<<Self::A as Api>::QuerySet>,
1715    );
1716
1717    /// End the current compute pass.
1718    ///
1719    /// # Safety
1720    ///
1721    /// - There must have been a prior call to [`begin_compute_pass`] on this [`CommandEncoder`]
1722    ///   that has not been followed by a call to [`end_compute_pass`].
1723    ///
1724    /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1725    /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1726    unsafe fn end_compute_pass(&mut self);
1727
1728    unsafe fn set_compute_pipeline(&mut self, pipeline: &<Self::A as Api>::ComputePipeline);
1729
1730    unsafe fn dispatch_workgroups(&mut self, count: [u32; 3]);
1731    unsafe fn dispatch_workgroups_indirect(
1732        &mut self,
1733        buffer: &<Self::A as Api>::Buffer,
1734        offset: wgt::BufferAddress,
1735    );
1736
1737    /// To get the required sizes for the buffer allocations use `get_acceleration_structure_build_sizes` per descriptor
1738    /// All buffers must be synchronized externally
1739    /// All buffer regions, which are written to may only be passed once per function call,
1740    /// with the exception of updates in the same descriptor.
1741    /// Consequences of this limitation:
1742    /// - scratch buffers need to be unique
1743    /// - a tlas can't be build in the same call with a blas it contains
1744    unsafe fn build_acceleration_structures<'a, T>(
1745        &mut self,
1746        descriptor_count: u32,
1747        descriptors: T,
1748    ) where
1749        Self::A: 'a,
1750        T: IntoIterator<
1751            Item = BuildAccelerationStructureDescriptor<
1752                'a,
1753                <Self::A as Api>::Buffer,
1754                <Self::A as Api>::AccelerationStructure,
1755            >,
1756        >;
1757    unsafe fn place_acceleration_structure_barrier(
1758        &mut self,
1759        barrier: AccelerationStructureBarrier,
1760    );
1761    // modeled off dx12, because this is able to be polyfilled in vulkan as opposed to the other way round
1762    unsafe fn read_acceleration_structure_compact_size(
1763        &mut self,
1764        acceleration_structure: &<Self::A as Api>::AccelerationStructure,
1765        buf: &<Self::A as Api>::Buffer,
1766    );
1767    unsafe fn set_acceleration_structure_dependencies(
1768        command_buffers: &[&<Self::A as Api>::CommandBuffer],
1769        dependencies: &[&<Self::A as Api>::AccelerationStructure],
1770    );
1771}
1772
1773bitflags!(
1774    /// Pipeline layout creation flags.
1775    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1776    pub struct PipelineLayoutFlags: u32 {
1777        /// D3D12: Add support for `first_vertex` and `first_instance` builtins
1778        /// via immediates for direct execution.
1779        const FIRST_VERTEX_INSTANCE = 1 << 0;
1780        /// D3D12: Add support for `num_workgroups` builtins via immediates
1781        /// for direct execution.
1782        const NUM_WORK_GROUPS = 1 << 1;
1783        /// D3D12: Add support for the builtins that the other flags enable for
1784        /// indirect execution.
1785        const INDIRECT_BUILTIN_UPDATE = 1 << 2;
1786    }
1787);
1788
1789bitflags!(
1790    /// Pipeline layout creation flags.
1791    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1792    pub struct BindGroupLayoutFlags: u32 {
1793        /// Allows for bind group binding arrays to be shorter than the array in the BGL.
1794        const PARTIALLY_BOUND = 1 << 0;
1795    }
1796);
1797
1798bitflags!(
1799    /// Texture format capability flags.
1800    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1801    pub struct TextureFormatCapabilities: u32 {
1802        /// Format can be sampled.
1803        const SAMPLED = 1 << 0;
1804        /// Format can be sampled with a linear sampler.
1805        const SAMPLED_LINEAR = 1 << 1;
1806        /// Format can be sampled with a min/max reduction sampler.
1807        const SAMPLED_MINMAX = 1 << 2;
1808
1809        /// Format can be used as storage with read-only access.
1810        const STORAGE_READ_ONLY = 1 << 3;
1811        /// Format can be used as storage with write-only access.
1812        const STORAGE_WRITE_ONLY = 1 << 4;
1813        /// Format can be used as storage with both read and write access.
1814        const STORAGE_READ_WRITE = 1 << 5;
1815        /// Format can be used as storage with atomics.
1816        const STORAGE_ATOMIC = 1 << 6;
1817
1818        /// Format can be used as color and input attachment.
1819        const COLOR_ATTACHMENT = 1 << 7;
1820        /// Format can be used as color (with blending) and input attachment.
1821        const COLOR_ATTACHMENT_BLEND = 1 << 8;
1822        /// Format can be used as depth-stencil and input attachment.
1823        const DEPTH_STENCIL_ATTACHMENT = 1 << 9;
1824
1825        /// Format can be multisampled by x2.
1826        const MULTISAMPLE_X2   = 1 << 10;
1827        /// Format can be multisampled by x4.
1828        const MULTISAMPLE_X4   = 1 << 11;
1829        /// Format can be multisampled by x8.
1830        const MULTISAMPLE_X8   = 1 << 12;
1831        /// Format can be multisampled by x16.
1832        const MULTISAMPLE_X16  = 1 << 13;
1833
1834        /// Format can be used for render pass resolve targets.
1835        const MULTISAMPLE_RESOLVE = 1 << 14;
1836
1837        /// Format can be copied from.
1838        const COPY_SRC = 1 << 15;
1839        /// Format can be copied to.
1840        const COPY_DST = 1 << 16;
1841    }
1842);
1843
1844bitflags!(
1845    /// Texture format capability flags.
1846    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1847    pub struct FormatAspects: u8 {
1848        const COLOR = 1 << 0;
1849        const DEPTH = 1 << 1;
1850        const STENCIL = 1 << 2;
1851        const PLANE_0 = 1 << 3;
1852        const PLANE_1 = 1 << 4;
1853        const PLANE_2 = 1 << 5;
1854
1855        const DEPTH_STENCIL = Self::DEPTH.bits() | Self::STENCIL.bits();
1856    }
1857);
1858
1859impl FormatAspects {
1860    pub fn new(format: wgt::TextureFormat, aspect: wgt::TextureAspect) -> Self {
1861        let aspect_mask = match aspect {
1862            wgt::TextureAspect::All => Self::all(),
1863            wgt::TextureAspect::DepthOnly => Self::DEPTH,
1864            wgt::TextureAspect::StencilOnly => Self::STENCIL,
1865            wgt::TextureAspect::Plane0 => Self::PLANE_0,
1866            wgt::TextureAspect::Plane1 => Self::PLANE_1,
1867            wgt::TextureAspect::Plane2 => Self::PLANE_2,
1868        };
1869        Self::from(format) & aspect_mask
1870    }
1871
1872    /// Returns `true` if only one flag is set
1873    pub fn is_one(&self) -> bool {
1874        self.bits().is_power_of_two()
1875    }
1876
1877    pub fn map(&self) -> wgt::TextureAspect {
1878        match *self {
1879            Self::COLOR => wgt::TextureAspect::All,
1880            Self::DEPTH => wgt::TextureAspect::DepthOnly,
1881            Self::STENCIL => wgt::TextureAspect::StencilOnly,
1882            Self::PLANE_0 => wgt::TextureAspect::Plane0,
1883            Self::PLANE_1 => wgt::TextureAspect::Plane1,
1884            Self::PLANE_2 => wgt::TextureAspect::Plane2,
1885            _ => unreachable!(),
1886        }
1887    }
1888}
1889
1890impl From<wgt::TextureFormat> for FormatAspects {
1891    fn from(format: wgt::TextureFormat) -> Self {
1892        match format {
1893            wgt::TextureFormat::Stencil8 => Self::STENCIL,
1894            wgt::TextureFormat::Depth16Unorm
1895            | wgt::TextureFormat::Depth32Float
1896            | wgt::TextureFormat::Depth24Plus => Self::DEPTH,
1897            wgt::TextureFormat::Depth32FloatStencil8 | wgt::TextureFormat::Depth24PlusStencil8 => {
1898                Self::DEPTH_STENCIL
1899            }
1900            wgt::TextureFormat::NV12 | wgt::TextureFormat::P010 => Self::PLANE_0 | Self::PLANE_1,
1901            _ => Self::COLOR,
1902        }
1903    }
1904}
1905
1906bitflags!(
1907    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1908    pub struct MemoryFlags: u32 {
1909        const TRANSIENT = 1 << 0;
1910        const PREFER_COHERENT = 1 << 1;
1911    }
1912);
1913
1914bitflags!(
1915    /// Attachment load and store operations.
1916    ///
1917    /// There must be at least one flag from the LOAD group and one from the STORE group set.
1918    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1919    pub struct AttachmentOps: u8 {
1920        /// Load the existing contents of the attachment.
1921        const LOAD = 1 << 0;
1922        /// Clear the attachment to a specified value.
1923        const LOAD_CLEAR = 1 << 1;
1924        /// The contents of the attachment are undefined.
1925        const LOAD_DONT_CARE = 1 << 2;
1926        /// Store the contents of the attachment.
1927        const STORE = 1 << 3;
1928        /// The contents of the attachment are undefined after the pass.
1929        const STORE_DISCARD = 1 << 4;
1930    }
1931);
1932
1933#[derive(Debug)]
1934pub struct InstanceDescriptor<'a> {
1935    pub name: &'a str,
1936    pub flags: wgt::InstanceFlags,
1937    pub memory_budget_thresholds: wgt::MemoryBudgetThresholds,
1938    pub backend_options: wgt::BackendOptions,
1939    pub telemetry: Option<Telemetry>,
1940    /// This is a borrow because the surrounding `core::Instance` keeps the the owned display handle
1941    /// alive already.
1942    pub display: Option<DisplayHandle<'a>>,
1943}
1944
1945#[derive(Clone, Debug)]
1946pub struct Alignments {
1947    /// The alignment of the start of the buffer used as a GPU copy source.
1948    pub buffer_copy_offset: wgt::BufferSize,
1949
1950    /// The alignment of the row pitch of the texture data stored in a buffer that is
1951    /// used in a GPU copy operation.
1952    pub buffer_copy_pitch: wgt::BufferSize,
1953
1954    /// The finest alignment of bound range checking for uniform buffers.
1955    ///
1956    /// When `wgpu_hal` restricts shader references to the [accessible
1957    /// region][ar] of a [`Uniform`] buffer, the size of the accessible region
1958    /// is the bind group binding's stated [size], rounded up to the next
1959    /// multiple of this value.
1960    ///
1961    /// We don't need an analogous field for storage buffer bindings, because
1962    /// all our backends promise to enforce the size at least to a four-byte
1963    /// alignment, and `wgpu_hal` requires bound range lengths to be a multiple
1964    /// of four anyway.
1965    ///
1966    /// [ar]: struct.BufferBinding.html#accessible-region
1967    /// [`Uniform`]: wgt::BufferBindingType::Uniform
1968    /// [size]: BufferBinding::size
1969    pub uniform_bounds_check_alignment: wgt::BufferSize,
1970
1971    /// The size of the raw TLAS instance
1972    pub raw_tlas_instance_size: u32,
1973
1974    /// What the scratch buffer for building an acceleration structure must be aligned to
1975    pub ray_tracing_scratch_buffer_alignment: u32,
1976}
1977
1978#[derive(Clone, Debug)]
1979pub struct Capabilities {
1980    pub limits: wgt::Limits,
1981    pub alignments: Alignments,
1982    pub downlevel: wgt::DownlevelCapabilities,
1983    /// Supported cooperative matrix configurations.
1984    ///
1985    /// Empty if cooperative matrices are not supported.
1986    pub cooperative_matrix_properties: Vec<wgt::CooperativeMatrixProperties>,
1987}
1988
1989/// An adapter with all the information needed to reason about its capabilities.
1990///
1991/// These are either made by [`Instance::enumerate_adapters`] or by backend specific
1992/// methods on the backend [`Instance`] or [`Adapter`].
1993#[derive(Debug)]
1994pub struct ExposedAdapter<A: Api> {
1995    pub adapter: A::Adapter,
1996    pub info: wgt::AdapterInfo,
1997    pub features: wgt::Features,
1998    pub capabilities: Capabilities,
1999}
2000
2001/// Describes information about what a `Surface`'s presentation capabilities are.
2002/// Fetch this with [Adapter::surface_capabilities].
2003#[derive(Debug, Clone)]
2004pub struct SurfaceCapabilities {
2005    /// List of supported texture formats.
2006    ///
2007    /// Must be at least one.
2008    pub formats: Vec<wgt::TextureFormat>,
2009
2010    /// Range for the number of queued frames.
2011    ///
2012    /// This adjusts either the swapchain frame count to value + 1 - or sets SetMaximumFrameLatency to the value given,
2013    /// or uses a wait-for-present in the acquire method to limit rendering such that it acts like it's a value + 1 swapchain frame set.
2014    ///
2015    /// - `maximum_frame_latency.start` must be at least 1.
2016    /// - `maximum_frame_latency.end` must be larger or equal to `maximum_frame_latency.start`.
2017    pub maximum_frame_latency: RangeInclusive<u32>,
2018
2019    /// Current extent of the surface, if known.
2020    pub current_extent: Option<wgt::Extent3d>,
2021
2022    /// Supported texture usage flags.
2023    ///
2024    /// Must have at least `wgt::TextureUses::COLOR_TARGET`
2025    pub usage: wgt::TextureUses,
2026
2027    /// List of supported V-sync modes.
2028    ///
2029    /// Must be at least one.
2030    pub present_modes: Vec<wgt::PresentMode>,
2031
2032    /// List of supported alpha composition modes.
2033    ///
2034    /// Must be at least one.
2035    pub composite_alpha_modes: Vec<wgt::CompositeAlphaMode>,
2036}
2037
2038#[derive(Debug)]
2039pub struct AcquiredSurfaceTexture<A: Api> {
2040    pub texture: A::SurfaceTexture,
2041    /// The presentation configuration no longer matches
2042    /// the surface properties exactly, but can still be used to present
2043    /// to the surface successfully.
2044    pub suboptimal: bool,
2045}
2046
2047/// An open connection to a device and a queue.
2048///
2049/// This can be created from [`Adapter::open`] or backend
2050/// specific methods on the backend's [`Instance`] or [`Adapter`].
2051#[derive(Debug)]
2052pub struct OpenDevice<A: Api> {
2053    pub device: A::Device,
2054    pub queue: A::Queue,
2055}
2056
2057#[derive(Clone, Debug)]
2058pub struct BufferMapping {
2059    pub ptr: NonNull<u8>,
2060    pub is_coherent: bool,
2061}
2062
2063#[derive(Clone, Debug)]
2064pub struct BufferDescriptor<'a> {
2065    pub label: Label<'a>,
2066    pub size: wgt::BufferAddress,
2067    pub usage: wgt::BufferUses,
2068    pub memory_flags: MemoryFlags,
2069}
2070
2071#[derive(Clone, Debug)]
2072pub struct TextureDescriptor<'a> {
2073    pub label: Label<'a>,
2074    pub size: wgt::Extent3d,
2075    pub mip_level_count: u32,
2076    pub sample_count: u32,
2077    pub dimension: wgt::TextureDimension,
2078    pub format: wgt::TextureFormat,
2079    pub usage: wgt::TextureUses,
2080    pub memory_flags: MemoryFlags,
2081    /// Allows views of this texture to have a different format
2082    /// than the texture does.
2083    pub view_formats: Vec<wgt::TextureFormat>,
2084}
2085
2086impl TextureDescriptor<'_> {
2087    pub fn copy_extent(&self) -> CopyExtent {
2088        CopyExtent::map_extent_to_copy_size(&self.size, self.dimension)
2089    }
2090
2091    pub fn is_cube_compatible(&self) -> bool {
2092        self.dimension == wgt::TextureDimension::D2
2093            && self.size.depth_or_array_layers.is_multiple_of(6)
2094            && self.sample_count == 1
2095            && self.size.width == self.size.height
2096    }
2097
2098    pub fn array_layer_count(&self) -> u32 {
2099        match self.dimension {
2100            wgt::TextureDimension::D1 | wgt::TextureDimension::D3 => 1,
2101            wgt::TextureDimension::D2 => self.size.depth_or_array_layers,
2102        }
2103    }
2104}
2105
2106/// TextureView descriptor.
2107///
2108/// Valid usage:
2109///. - `format` has to be the same as `TextureDescriptor::format`
2110///. - `dimension` has to be compatible with `TextureDescriptor::dimension`
2111///. - `usage` has to be a subset of `TextureDescriptor::usage`
2112///. - `range` has to be a subset of parent texture
2113#[derive(Clone, Debug)]
2114pub struct TextureViewDescriptor<'a> {
2115    pub label: Label<'a>,
2116    pub format: wgt::TextureFormat,
2117    pub dimension: wgt::TextureViewDimension,
2118    pub usage: wgt::TextureUses,
2119    pub range: wgt::ImageSubresourceRange,
2120}
2121
2122#[derive(Clone, Debug)]
2123pub struct SamplerDescriptor<'a> {
2124    pub label: Label<'a>,
2125    pub address_modes: [wgt::AddressMode; 3],
2126    pub mag_filter: wgt::FilterMode,
2127    pub min_filter: wgt::FilterMode,
2128    pub mipmap_filter: wgt::MipmapFilterMode,
2129    pub lod_clamp: Range<f32>,
2130    pub compare: Option<wgt::CompareFunction>,
2131    // Must in the range [1, 16].
2132    //
2133    // Anisotropic filtering must be supported if this is not 1.
2134    pub anisotropy_clamp: u16,
2135    pub border_color: Option<wgt::SamplerBorderColor>,
2136}
2137
2138/// BindGroupLayout descriptor.
2139///
2140/// Valid usage:
2141/// - `entries` are sorted by ascending `wgt::BindGroupLayoutEntry::binding`
2142#[derive(Clone, Debug)]
2143pub struct BindGroupLayoutDescriptor<'a> {
2144    pub label: Label<'a>,
2145    pub flags: BindGroupLayoutFlags,
2146    pub entries: &'a [wgt::BindGroupLayoutEntry],
2147}
2148
2149#[derive(Clone, Debug)]
2150pub struct PipelineLayoutDescriptor<'a, B: DynBindGroupLayout + ?Sized> {
2151    pub label: Label<'a>,
2152    pub flags: PipelineLayoutFlags,
2153    pub bind_group_layouts: &'a [Option<&'a B>],
2154    pub immediate_size: u32,
2155}
2156
2157/// A region of a buffer made visible to shaders via a [`BindGroup`].
2158///
2159/// [`BindGroup`]: Api::BindGroup
2160///
2161/// ## Construction
2162///
2163/// The recommended way to construct a `BufferBinding` is using the `binding`
2164/// method on a wgpu-core `Buffer`, which will validate the binding size
2165/// against the buffer size. A `new_unchecked` constructor is also provided for
2166/// cases where direct construction is necessary.
2167///
2168/// ## Accessible region
2169///
2170/// `wgpu_hal` guarantees that shaders compiled with
2171/// [`ShaderModuleDescriptor::runtime_checks`] set to `true` cannot read or
2172/// write data via this binding outside the *accessible region* of a buffer:
2173///
2174/// - The accessible region starts at [`offset`].
2175///
2176/// - For [`Storage`] bindings, the size of the accessible region is [`size`],
2177///   which must be a multiple of 4.
2178///
2179/// - For [`Uniform`] bindings, the size of the accessible region is [`size`]
2180///   rounded up to the next multiple of
2181///   [`Alignments::uniform_bounds_check_alignment`].
2182///
2183/// Note that this guarantee is stricter than WGSL's requirements for
2184/// [out-of-bounds accesses][woob], as WGSL allows them to return values from
2185/// elsewhere in the buffer. But this guarantee is necessary anyway, to permit
2186/// `wgpu-core` to avoid clearing uninitialized regions of buffers that will
2187/// never be read by the application before they are overwritten. This
2188/// optimization consults bind group buffer binding regions to determine which
2189/// parts of which buffers shaders might observe. This optimization is only
2190/// sound if shader access is bounds-checked.
2191///
2192/// ## Zero-length bindings
2193///
2194/// Some back ends cannot tolerate zero-length regions; for example, see
2195/// [VUID-VkDescriptorBufferInfo-offset-00340][340] and
2196/// [VUID-VkDescriptorBufferInfo-range-00341][341], or the
2197/// documentation for GLES's [glBindBufferRange][bbr]. This documentation
2198/// previously stated that a `BufferBinding` must have `offset` strictly less
2199/// than the size of the buffer, but this restriction was not honored elsewhere
2200/// in the code, so has been removed. However, it remains the case that
2201/// some backends do not support zero-length bindings, so additional
2202/// logic is needed somewhere to handle this properly. See
2203/// [#3170](https://github.com/gfx-rs/wgpu/issues/3170).
2204///
2205/// [`offset`]: BufferBinding::offset
2206/// [`size`]: BufferBinding::size
2207/// [`Storage`]: wgt::BufferBindingType::Storage
2208/// [`Uniform`]: wgt::BufferBindingType::Uniform
2209/// [340]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-offset-00340
2210/// [341]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-range-00341
2211/// [bbr]: https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glBindBufferRange.xhtml
2212/// [woob]: https://gpuweb.github.io/gpuweb/wgsl/#out-of-bounds-access-sec
2213#[derive(Debug)]
2214pub struct BufferBinding<'a, B: DynBuffer + ?Sized> {
2215    /// The buffer being bound.
2216    ///
2217    /// This is not fully `pub` to prevent direct construction of
2218    /// `BufferBinding`s, while still allowing public read access to the `offset`
2219    /// and `size` properties.
2220    pub(crate) buffer: &'a B,
2221
2222    /// The offset at which the bound region starts.
2223    ///
2224    /// This must be less or equal to the size of the buffer.
2225    pub offset: wgt::BufferAddress,
2226
2227    /// The size of the region bound, in bytes.
2228    ///
2229    /// If `None`, the region extends from `offset` to the end of the
2230    /// buffer. Given the restrictions on `offset`, this means that
2231    /// the size is always greater than zero.
2232    pub size: Option<wgt::BufferSize>,
2233}
2234
2235// We must implement this manually because `B` is not necessarily `Clone`.
2236impl<B: DynBuffer + ?Sized> Clone for BufferBinding<'_, B> {
2237    fn clone(&self) -> Self {
2238        BufferBinding {
2239            buffer: self.buffer,
2240            offset: self.offset,
2241            size: self.size,
2242        }
2243    }
2244}
2245
2246/// Temporary convenience trait to let us call `.get()` on `u64`s in code that
2247/// really wants to be using `NonZeroU64`.
2248/// TODO(<https://github.com/gfx-rs/wgpu/issues/3170>): remove this
2249pub trait ShouldBeNonZeroExt {
2250    fn get(&self) -> u64;
2251}
2252
2253impl ShouldBeNonZeroExt for NonZeroU64 {
2254    fn get(&self) -> u64 {
2255        NonZeroU64::get(*self)
2256    }
2257}
2258
2259impl ShouldBeNonZeroExt for u64 {
2260    fn get(&self) -> u64 {
2261        *self
2262    }
2263}
2264
2265impl ShouldBeNonZeroExt for Option<NonZeroU64> {
2266    fn get(&self) -> u64 {
2267        match *self {
2268            Some(non_zero) => non_zero.get(),
2269            None => 0,
2270        }
2271    }
2272}
2273
2274impl<'a, B: DynBuffer + ?Sized> BufferBinding<'a, B> {
2275    /// Construct a `BufferBinding` with the given contents.
2276    ///
2277    /// When possible, use the `binding` method on a wgpu-core `Buffer` instead
2278    /// of this method. `Buffer::binding` validates the size of the binding
2279    /// against the size of the buffer.
2280    ///
2281    /// It is more difficult to provide a validating constructor here, due to
2282    /// not having direct access to the size of a `DynBuffer`.
2283    ///
2284    /// SAFETY: The caller is responsible for ensuring that a binding of `size`
2285    /// bytes starting at `offset` is contained within the buffer.
2286    ///
2287    /// The `S` type parameter is a temporary convenience to allow callers to
2288    /// pass a zero size. When the zero-size binding issue is resolved, the
2289    /// argument should just match the type of the member.
2290    /// TODO(<https://github.com/gfx-rs/wgpu/issues/3170>): remove the parameter
2291    pub fn new_unchecked<S: Into<Option<NonZeroU64>>>(
2292        buffer: &'a B,
2293        offset: wgt::BufferAddress,
2294        size: S,
2295    ) -> Self {
2296        Self {
2297            buffer,
2298            offset,
2299            size: size.into(),
2300        }
2301    }
2302}
2303
2304#[derive(Debug)]
2305pub struct TextureBinding<'a, T: DynTextureView + ?Sized> {
2306    pub view: &'a T,
2307    pub usage: wgt::TextureUses,
2308}
2309
2310impl<'a, T: DynTextureView + ?Sized> Clone for TextureBinding<'a, T> {
2311    fn clone(&self) -> Self {
2312        TextureBinding {
2313            view: self.view,
2314            usage: self.usage,
2315        }
2316    }
2317}
2318
2319#[derive(Debug)]
2320pub struct ExternalTextureBinding<'a, B: DynBuffer + ?Sized, T: DynTextureView + ?Sized> {
2321    pub planes: [TextureBinding<'a, T>; 3],
2322    pub params: BufferBinding<'a, B>,
2323}
2324
2325impl<'a, B: DynBuffer + ?Sized, T: DynTextureView + ?Sized> Clone
2326    for ExternalTextureBinding<'a, B, T>
2327{
2328    fn clone(&self) -> Self {
2329        ExternalTextureBinding {
2330            planes: self.planes.clone(),
2331            params: self.params.clone(),
2332        }
2333    }
2334}
2335
2336/// cbindgen:ignore
2337#[derive(Clone, Debug)]
2338pub struct BindGroupEntry {
2339    pub binding: u32,
2340    pub resource_index: u32,
2341    pub count: u32,
2342}
2343
2344/// BindGroup descriptor.
2345///
2346/// Valid usage:
2347///. - `entries` has to be sorted by ascending `BindGroupEntry::binding`
2348///. - `entries` has to have the same set of `BindGroupEntry::binding` as `layout`
2349///. - each entry has to be compatible with the `layout`
2350///. - each entry's `BindGroupEntry::resource_index` is within range
2351///    of the corresponding resource array, selected by the relevant
2352///    `BindGroupLayoutEntry`.
2353#[derive(Clone, Debug)]
2354pub struct BindGroupDescriptor<
2355    'a,
2356    Bgl: DynBindGroupLayout + ?Sized,
2357    B: DynBuffer + ?Sized,
2358    S: DynSampler + ?Sized,
2359    T: DynTextureView + ?Sized,
2360    A: DynAccelerationStructure + ?Sized,
2361> {
2362    pub label: Label<'a>,
2363    pub layout: &'a Bgl,
2364    pub buffers: &'a [BufferBinding<'a, B>],
2365    pub samplers: &'a [&'a S],
2366    pub textures: &'a [TextureBinding<'a, T>],
2367    pub entries: &'a [BindGroupEntry],
2368    pub acceleration_structures: &'a [&'a A],
2369    pub external_textures: &'a [ExternalTextureBinding<'a, B, T>],
2370}
2371
2372#[derive(Clone, Debug)]
2373pub struct CommandEncoderDescriptor<'a, Q: DynQueue + ?Sized> {
2374    pub label: Label<'a>,
2375    pub queue: &'a Q,
2376}
2377
2378/// Naga shader module.
2379#[derive(Default)]
2380pub struct NagaShader {
2381    /// Shader module IR.
2382    pub module: Cow<'static, naga::Module>,
2383    /// Analysis information of the module.
2384    pub info: naga::valid::ModuleInfo,
2385    /// Source codes for debug
2386    pub debug_source: Option<DebugSource>,
2387}
2388
2389// Custom implementation avoids the need to generate Debug impl code
2390// for the whole Naga module and info.
2391impl fmt::Debug for NagaShader {
2392    fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
2393        write!(formatter, "Naga shader")
2394    }
2395}
2396
2397/// Shader input.
2398pub enum ShaderInput<'a> {
2399    Naga(NagaShader),
2400    MetalLib {
2401        file: &'a [u8],
2402        num_workgroups: hashbrown::HashMap<String, (u32, u32, u32)>,
2403    },
2404    Msl {
2405        shader: &'a str,
2406        num_workgroups: hashbrown::HashMap<String, (u32, u32, u32)>,
2407    },
2408    SpirV(&'a [u32]),
2409    Dxil {
2410        shader: &'a [u8],
2411    },
2412    Hlsl {
2413        shader: &'a str,
2414    },
2415    Glsl {
2416        shader: &'a str,
2417    },
2418}
2419
2420pub struct ShaderModuleDescriptor<'a> {
2421    pub label: Label<'a>,
2422
2423    /// # Safety
2424    ///
2425    /// See the documentation for each flag in [`ShaderRuntimeChecks`][src].
2426    ///
2427    /// [src]: wgt::ShaderRuntimeChecks
2428    pub runtime_checks: wgt::ShaderRuntimeChecks,
2429}
2430
2431#[derive(Debug, Clone)]
2432pub struct DebugSource {
2433    pub file_name: Cow<'static, str>,
2434    pub source_code: Cow<'static, str>,
2435}
2436
2437/// Describes a programmable pipeline stage.
2438#[derive(Debug)]
2439pub struct ProgrammableStage<'a, M: DynShaderModule + ?Sized> {
2440    /// The compiled shader module for this stage.
2441    pub module: &'a M,
2442    /// The name of the entry point in the compiled shader. There must be a function with this name
2443    ///  in the shader.
2444    pub entry_point: &'a str,
2445    /// Pipeline constants
2446    pub constants: &'a naga::back::PipelineConstants,
2447    /// Whether workgroup scoped memory will be initialized with zero values for this stage.
2448    ///
2449    /// This is required by the WebGPU spec, but may have overhead which can be avoided
2450    /// for cross-platform applications
2451    pub zero_initialize_workgroup_memory: bool,
2452}
2453
2454impl<M: DynShaderModule + ?Sized> Clone for ProgrammableStage<'_, M> {
2455    fn clone(&self) -> Self {
2456        Self {
2457            module: self.module,
2458            entry_point: self.entry_point,
2459            constants: self.constants,
2460            zero_initialize_workgroup_memory: self.zero_initialize_workgroup_memory,
2461        }
2462    }
2463}
2464
2465/// Describes a compute pipeline.
2466#[derive(Clone, Debug)]
2467pub struct ComputePipelineDescriptor<
2468    'a,
2469    Pl: DynPipelineLayout + ?Sized,
2470    M: DynShaderModule + ?Sized,
2471    Pc: DynPipelineCache + ?Sized,
2472> {
2473    pub label: Label<'a>,
2474    /// The layout of bind groups for this pipeline.
2475    pub layout: &'a Pl,
2476    /// The compiled compute stage and its entry point.
2477    pub stage: ProgrammableStage<'a, M>,
2478    /// The cache which will be used and filled when compiling this pipeline
2479    pub cache: Option<&'a Pc>,
2480}
2481
2482pub struct PipelineCacheDescriptor<'a> {
2483    pub label: Label<'a>,
2484    pub data: Option<&'a [u8]>,
2485}
2486
2487/// Describes how the vertex buffer is interpreted.
2488#[derive(Clone, Debug)]
2489pub struct VertexBufferLayout<'a> {
2490    /// The stride, in bytes, between elements of this buffer.
2491    pub array_stride: wgt::BufferAddress,
2492    /// How often this vertex buffer is "stepped" forward.
2493    pub step_mode: wgt::VertexStepMode,
2494    /// The list of attributes which comprise a single vertex.
2495    pub attributes: &'a [wgt::VertexAttribute],
2496}
2497
2498#[derive(Clone, Debug)]
2499pub enum VertexProcessor<'a, M: DynShaderModule + ?Sized> {
2500    Standard {
2501        /// The format of any vertex buffers used with this pipeline.
2502        vertex_buffers: &'a [Option<VertexBufferLayout<'a>>],
2503        /// The vertex stage for this pipeline.
2504        vertex_stage: ProgrammableStage<'a, M>,
2505    },
2506    Mesh {
2507        task_stage: Option<ProgrammableStage<'a, M>>,
2508        mesh_stage: ProgrammableStage<'a, M>,
2509    },
2510}
2511
2512/// Describes a render (graphics) pipeline.
2513#[derive(Clone, Debug)]
2514pub struct RenderPipelineDescriptor<
2515    'a,
2516    Pl: DynPipelineLayout + ?Sized,
2517    M: DynShaderModule + ?Sized,
2518    Pc: DynPipelineCache + ?Sized,
2519> {
2520    pub label: Label<'a>,
2521    /// The layout of bind groups for this pipeline.
2522    pub layout: &'a Pl,
2523    /// The vertex processing state(vertex shader + buffers or task + mesh shaders)
2524    pub vertex_processor: VertexProcessor<'a, M>,
2525    /// The properties of the pipeline at the primitive assembly and rasterization level.
2526    pub primitive: wgt::PrimitiveState,
2527    /// The effect of draw calls on the depth and stencil aspects of the output target, if any.
2528    pub depth_stencil: Option<wgt::DepthStencilState>,
2529    /// The multi-sampling properties of the pipeline.
2530    pub multisample: wgt::MultisampleState,
2531    /// The fragment stage for this pipeline.
2532    pub fragment_stage: Option<ProgrammableStage<'a, M>>,
2533    /// The effect of draw calls on the color aspect of the output target.
2534    pub color_targets: &'a [Option<wgt::ColorTargetState>],
2535    /// If the pipeline will be used with a multiview render pass, this indicates how many array
2536    /// layers the attachments will have.
2537    pub multiview_mask: Option<NonZeroU32>,
2538    /// The cache which will be used and filled when compiling this pipeline
2539    pub cache: Option<&'a Pc>,
2540}
2541
2542#[derive(Debug, Clone)]
2543pub struct SurfaceConfiguration {
2544    /// Maximum number of queued frames. Must be in
2545    /// `SurfaceCapabilities::maximum_frame_latency` range.
2546    pub maximum_frame_latency: u32,
2547    /// Vertical synchronization mode.
2548    pub present_mode: wgt::PresentMode,
2549    /// Alpha composition mode.
2550    pub composite_alpha_mode: wgt::CompositeAlphaMode,
2551    /// Format of the surface textures.
2552    pub format: wgt::TextureFormat,
2553    /// Requested texture extent. Must be in
2554    /// `SurfaceCapabilities::extents` range.
2555    pub extent: wgt::Extent3d,
2556    /// Allowed usage of surface textures,
2557    pub usage: wgt::TextureUses,
2558    /// Allows views of swapchain texture to have a different format
2559    /// than the texture does.
2560    pub view_formats: Vec<wgt::TextureFormat>,
2561}
2562
2563#[derive(Debug, Clone)]
2564pub struct Rect<T> {
2565    pub x: T,
2566    pub y: T,
2567    pub w: T,
2568    pub h: T,
2569}
2570
2571#[derive(Debug, Clone, PartialEq)]
2572pub struct StateTransition<T> {
2573    pub from: T,
2574    pub to: T,
2575}
2576
2577#[derive(Debug, Clone)]
2578pub struct BufferBarrier<'a, B: DynBuffer + ?Sized> {
2579    pub buffer: &'a B,
2580    pub usage: StateTransition<wgt::BufferUses>,
2581}
2582
2583#[derive(Debug, Clone)]
2584pub struct TextureBarrier<'a, T: DynTexture + ?Sized> {
2585    pub texture: &'a T,
2586    pub range: wgt::ImageSubresourceRange,
2587    pub usage: StateTransition<wgt::TextureUses>,
2588}
2589
2590#[derive(Clone, Copy, Debug)]
2591pub struct BufferCopy {
2592    pub src_offset: wgt::BufferAddress,
2593    pub dst_offset: wgt::BufferAddress,
2594    pub size: wgt::BufferSize,
2595}
2596
2597#[derive(Clone, Debug)]
2598pub struct TextureCopyBase {
2599    pub mip_level: u32,
2600    pub array_layer: u32,
2601    /// Origin within a texture.
2602    /// Note: for 1D and 2D textures, Z must be 0.
2603    pub origin: wgt::Origin3d,
2604    pub aspect: FormatAspects,
2605}
2606
2607#[derive(Clone, Copy, Debug)]
2608pub struct CopyExtent {
2609    pub width: u32,
2610    pub height: u32,
2611    pub depth: u32,
2612}
2613
2614impl From<wgt::Extent3d> for CopyExtent {
2615    fn from(value: wgt::Extent3d) -> Self {
2616        let wgt::Extent3d {
2617            width,
2618            height,
2619            depth_or_array_layers,
2620        } = value;
2621        Self {
2622            width,
2623            height,
2624            depth: depth_or_array_layers,
2625        }
2626    }
2627}
2628
2629impl From<CopyExtent> for wgt::Extent3d {
2630    fn from(value: CopyExtent) -> Self {
2631        let CopyExtent {
2632            width,
2633            height,
2634            depth,
2635        } = value;
2636        Self {
2637            width,
2638            height,
2639            depth_or_array_layers: depth,
2640        }
2641    }
2642}
2643
2644#[derive(Clone, Debug)]
2645pub struct TextureCopy {
2646    pub src_base: TextureCopyBase,
2647    pub dst_base: TextureCopyBase,
2648    pub size: CopyExtent,
2649}
2650
2651#[derive(Clone, Debug)]
2652pub struct BufferTextureCopy {
2653    pub buffer_layout: wgt::TexelCopyBufferLayout,
2654    pub texture_base: TextureCopyBase,
2655    pub size: CopyExtent,
2656}
2657
2658#[derive(Clone, Debug)]
2659pub struct Attachment<'a, T: DynTextureView + ?Sized> {
2660    pub view: &'a T,
2661    /// Contains either a single mutating usage as a target,
2662    /// or a valid combination of read-only usages.
2663    pub usage: wgt::TextureUses,
2664}
2665
2666#[derive(Clone, Debug)]
2667pub struct ColorAttachment<'a, T: DynTextureView + ?Sized> {
2668    pub target: Attachment<'a, T>,
2669    pub depth_slice: Option<u32>,
2670    pub resolve_target: Option<Attachment<'a, T>>,
2671    pub ops: AttachmentOps,
2672    pub clear_value: wgt::Color,
2673}
2674
2675#[derive(Clone, Debug)]
2676pub struct DepthStencilAttachment<'a, T: DynTextureView + ?Sized> {
2677    pub target: Attachment<'a, T>,
2678    pub depth_ops: AttachmentOps,
2679    pub stencil_ops: AttachmentOps,
2680    pub clear_value: (f32, u32),
2681}
2682
2683#[derive(Clone, Debug)]
2684pub struct PassTimestampWrites<'a, Q: DynQuerySet + ?Sized> {
2685    pub query_set: &'a Q,
2686    pub beginning_of_pass_write_index: Option<u32>,
2687    pub end_of_pass_write_index: Option<u32>,
2688}
2689
2690#[derive(Clone, Debug)]
2691pub struct RenderPassDescriptor<'a, Q: DynQuerySet + ?Sized, T: DynTextureView + ?Sized> {
2692    pub label: Label<'a>,
2693    pub extent: wgt::Extent3d,
2694    pub sample_count: u32,
2695    pub color_attachments: &'a [Option<ColorAttachment<'a, T>>],
2696    pub depth_stencil_attachment: Option<DepthStencilAttachment<'a, T>>,
2697    pub multiview_mask: Option<NonZeroU32>,
2698    pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2699    pub occlusion_query_set: Option<&'a Q>,
2700}
2701
2702#[derive(Clone, Debug)]
2703pub struct ComputePassDescriptor<'a, Q: DynQuerySet + ?Sized> {
2704    pub label: Label<'a>,
2705    pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2706}
2707
2708#[test]
2709fn test_default_limits() {
2710    let limits = wgt::Limits::default();
2711    assert!(limits.max_bind_groups <= MAX_BIND_GROUPS as u32);
2712}
2713
2714#[derive(Clone, Debug)]
2715pub struct AccelerationStructureDescriptor<'a> {
2716    pub label: Label<'a>,
2717    pub size: wgt::BufferAddress,
2718    pub format: AccelerationStructureFormat,
2719    pub allow_compaction: bool,
2720}
2721
2722#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2723pub enum AccelerationStructureFormat {
2724    TopLevel,
2725    BottomLevel,
2726}
2727
2728#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2729pub enum AccelerationStructureBuildMode {
2730    Build,
2731    Update,
2732}
2733
2734/// Information of the required size for a corresponding entries struct (+ flags)
2735#[derive(Copy, Clone, Debug, Default, Eq, PartialEq)]
2736pub struct AccelerationStructureBuildSizes {
2737    pub acceleration_structure_size: wgt::BufferAddress,
2738    pub update_scratch_size: wgt::BufferAddress,
2739    pub build_scratch_size: wgt::BufferAddress,
2740}
2741
2742/// Updates use source_acceleration_structure if present, else the update will be performed in place.
2743/// For updates, only the data is allowed to change (not the meta data or sizes).
2744#[derive(Clone, Debug)]
2745pub struct BuildAccelerationStructureDescriptor<
2746    'a,
2747    B: DynBuffer + ?Sized,
2748    A: DynAccelerationStructure + ?Sized,
2749> {
2750    pub entries: &'a AccelerationStructureEntries<'a, B>,
2751    pub mode: AccelerationStructureBuildMode,
2752    pub flags: AccelerationStructureBuildFlags,
2753    pub source_acceleration_structure: Option<&'a A>,
2754    pub destination_acceleration_structure: &'a A,
2755    pub scratch_buffer: &'a B,
2756    pub scratch_buffer_offset: wgt::BufferAddress,
2757}
2758
2759/// - All buffers, buffer addresses and offsets will be ignored.
2760/// - The build mode will be ignored.
2761/// - Reducing the amount of Instances, Triangle groups or AABB groups (or the number of Triangles/AABBs in corresponding groups),
2762///   may result in reduced size requirements.
2763/// - Any other change may result in a bigger or smaller size requirement.
2764#[derive(Clone, Debug)]
2765pub struct GetAccelerationStructureBuildSizesDescriptor<'a, B: DynBuffer + ?Sized> {
2766    pub entries: &'a AccelerationStructureEntries<'a, B>,
2767    pub flags: AccelerationStructureBuildFlags,
2768}
2769
2770/// Entries for a single descriptor
2771/// * `Instances` - Multiple instances for a top level acceleration structure
2772/// * `Triangles` - Multiple triangle meshes for a bottom level acceleration structure
2773/// * `AABBs` - List of list of axis aligned bounding boxes for a bottom level acceleration structure
2774#[derive(Debug)]
2775pub enum AccelerationStructureEntries<'a, B: DynBuffer + ?Sized> {
2776    Instances(AccelerationStructureInstances<'a, B>),
2777    Triangles(Vec<AccelerationStructureTriangles<'a, B>>),
2778    AABBs(Vec<AccelerationStructureAABBs<'a, B>>),
2779}
2780
2781/// * `first_vertex` - offset in the vertex buffer (as number of vertices)
2782/// * `indices` - optional index buffer with attributes
2783/// * `transform` - optional transform
2784#[derive(Clone, Debug)]
2785pub struct AccelerationStructureTriangles<'a, B: DynBuffer + ?Sized> {
2786    pub vertex_buffer: Option<&'a B>,
2787    pub vertex_format: wgt::VertexFormat,
2788    pub first_vertex: u32,
2789    pub vertex_count: u32,
2790    pub vertex_stride: wgt::BufferAddress,
2791    pub indices: Option<AccelerationStructureTriangleIndices<'a, B>>,
2792    pub transform: Option<AccelerationStructureTriangleTransform<'a, B>>,
2793    pub flags: AccelerationStructureGeometryFlags,
2794}
2795
2796/// * `offset` - offset in bytes
2797#[derive(Clone, Debug)]
2798pub struct AccelerationStructureAABBs<'a, B: DynBuffer + ?Sized> {
2799    pub buffer: Option<&'a B>,
2800    pub offset: u32,
2801    pub count: u32,
2802    pub stride: wgt::BufferAddress,
2803    pub flags: AccelerationStructureGeometryFlags,
2804}
2805
2806pub struct AccelerationStructureCopy {
2807    pub copy_flags: wgt::AccelerationStructureCopy,
2808    pub type_flags: wgt::AccelerationStructureType,
2809}
2810
2811/// * `offset` - offset in bytes
2812#[derive(Clone, Debug)]
2813pub struct AccelerationStructureInstances<'a, B: DynBuffer + ?Sized> {
2814    pub buffer: Option<&'a B>,
2815    pub offset: u32,
2816    pub count: u32,
2817}
2818
2819/// * `offset` - offset in bytes
2820#[derive(Clone, Debug)]
2821pub struct AccelerationStructureTriangleIndices<'a, B: DynBuffer + ?Sized> {
2822    pub format: wgt::IndexFormat,
2823    pub buffer: Option<&'a B>,
2824    pub offset: u32,
2825    pub count: u32,
2826}
2827
2828/// * `offset` - offset in bytes
2829#[derive(Clone, Debug)]
2830pub struct AccelerationStructureTriangleTransform<'a, B: DynBuffer + ?Sized> {
2831    pub buffer: &'a B,
2832    pub offset: u32,
2833}
2834
2835pub use wgt::AccelerationStructureFlags as AccelerationStructureBuildFlags;
2836pub use wgt::AccelerationStructureGeometryFlags;
2837
2838bitflags::bitflags! {
2839    #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
2840    pub struct AccelerationStructureUses: u8 {
2841        // For blas used as input for tlas
2842        const BUILD_INPUT = 1 << 0;
2843        // Target for acceleration structure build
2844        const BUILD_OUTPUT = 1 << 1;
2845        // Tlas used in a shader
2846        const SHADER_INPUT = 1 << 2;
2847        // Blas used to query compacted size
2848        const QUERY_INPUT = 1 << 3;
2849        // BLAS used as a src for a copy operation
2850        const COPY_SRC = 1 << 4;
2851        // BLAS used as a dst for a copy operation
2852        const COPY_DST = 1 << 5;
2853    }
2854}
2855
2856#[derive(Debug, Clone)]
2857pub struct AccelerationStructureBarrier {
2858    pub usage: StateTransition<AccelerationStructureUses>,
2859}
2860
2861#[derive(Debug, Copy, Clone)]
2862pub struct TlasInstance {
2863    pub transform: [f32; 12],
2864    pub custom_data: u32,
2865    pub mask: u8,
2866    pub blas_address: u64,
2867}
2868
2869#[cfg(dx12)]
2870pub enum D3D12ExposeAdapterResult {
2871    CreateDeviceError(dx12::CreateDeviceError),
2872    UnknownFeatureLevel(i32),
2873    ResourceBindingTier2Requirement,
2874    ShaderModel6Requirement,
2875    Success(dx12::FeatureLevel, dx12::ShaderModel),
2876}
2877
2878/// Pluggable telemetry, mainly to be used by Firefox.
2879#[derive(Debug, Clone, Copy)]
2880pub struct Telemetry {
2881    #[cfg(dx12)]
2882    pub d3d12_expose_adapter: fn(
2883        desc: &windows::Win32::Graphics::Dxgi::DXGI_ADAPTER_DESC2,
2884        driver_version: Result<[u16; 4], windows_core::HRESULT>,
2885        result: D3D12ExposeAdapterResult,
2886    ),
2887}