// wgpu_hal/lib.rs

//! A cross-platform unsafe graphics abstraction.
//!
//! This crate defines a set of traits abstracting over modern graphics APIs,
//! with implementations ("backends") for Vulkan, Metal, Direct3D, and GL.
//!
//! `wgpu-hal` is a spiritual successor to
//! [gfx-hal](https://github.com/gfx-rs/gfx), but with reduced scope, and
//! oriented towards WebGPU implementation goals. It has no overhead for
//! validation or tracking, and the API translation overhead is kept to the bare
//! minimum by the design of WebGPU. This API can be used for resource-demanding
//! applications and engines.
//!
//! The `wgpu-hal` crate's main design choices:
//!
//! - Our traits are meant to be *portable*: proper use
//!   should get equivalent results regardless of the backend.
//!
//! - Our traits' contracts are *unsafe*: implementations perform minimal
//!   validation, if any, and incorrect use will often cause undefined behavior.
//!   This allows us to minimize the overhead we impose over the underlying
//!   graphics system. If you need safety, the [`wgpu-core`] crate provides a
//!   safe API for driving `wgpu-hal`, implementing all necessary validation,
//!   resource state tracking, and so on. (Note that `wgpu-core` is designed for
//!   use via FFI; the [`wgpu`] crate provides more idiomatic Rust bindings for
//!   `wgpu-core`.) Or, you can do your own validation.
//!
//! - In the same vein, returned errors *only cover cases the user can't
//!   anticipate*, like running out of memory or losing the device. Any errors
//!   that the user could reasonably anticipate are their responsibility to
//!   avoid. For example, `wgpu-hal` returns no error for mapping a buffer that's
//!   not mappable: as the buffer creator, the user should already know if they
//!   can map it.
//!
//! - We use *static dispatch*. The traits are not
//!   generally object-safe. You must select a specific backend type
//!   like [`vulkan::Api`] or [`metal::Api`], and then use that
//!   according to the main traits, or call backend-specific methods.
//!
//! - We use *idiomatic Rust parameter passing*,
//!   taking objects by reference, returning them by value, and so on,
//!   unlike `wgpu-core`, which refers to objects by ID.
//!
//! - We map buffer contents *persistently*. This means that the buffer can
//!   remain mapped on the CPU while the GPU reads or writes to it. You must
//!   explicitly indicate when data might need to be transferred between CPU and
//!   GPU, if [`Device::map_buffer`] indicates that this is necessary.
//!
//! - You must record *explicit barriers* between different usages of a
//!   resource. For example, if a buffer is written to by a compute
//!   shader, and then used as an index buffer for a draw call, you
//!   must use [`CommandEncoder::transition_buffers`] between those two
//!   operations.
//!
//! - Pipeline layouts are *explicitly specified* when setting bind groups.
//!   Binding a group with an incompatible layout invalidates ("disturbs") any
//!   groups bound at higher indices.
//!
//! - The API *accepts collections as iterators*, to avoid forcing the user to
//!   store data in particular containers. The implementation doesn't guarantee
//!   that any of the iterators are drained, unless stated otherwise by the
//!   function documentation. For this reason, we recommend that iterators don't
//!   do any mutating work.
//!
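//! As a hedged illustration of the iterator point above (toy code, not actual
//! `wgpu-hal` API; `first_barrier` is a hypothetical function):
//!
//! ```
//! /// A hypothetical API that accepts items as an iterator but happens to need
//! /// only the first one, so the rest are never drained.
//! fn first_barrier<I: Iterator<Item = u32>>(mut barriers: I) -> Option<u32> {
//!     barriers.next()
//! }
//!
//! let mut observed = 0;
//! let first = first_barrier([1u32, 2, 3].into_iter().inspect(|_| observed += 1));
//! assert_eq!(first, Some(1));
//! // Side effects inside the iterator may never happen for later items.
//! assert_eq!(observed, 1);
//! ```
//!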
//! Unfortunately, `wgpu-hal`'s safety requirements are not fully documented.
//! Ideally, all trait methods would have doc comments setting out the
//! requirements users must meet to ensure correct and portable behavior. If you
//! are aware of a specific requirement that a backend imposes that is not
//! ensured by the traits' documented rules, please file an issue. Or, if you are
//! a capable technical writer, please file a pull request!
//!
//! [`wgpu-core`]: https://crates.io/crates/wgpu-core
//! [`wgpu`]: https://crates.io/crates/wgpu
//! [`vulkan::Api`]: vulkan/struct.Api.html
//! [`metal::Api`]: metal/struct.Api.html
//!
//! ## Primary backends
//!
//! The `wgpu-hal` crate has full-featured backends implemented on the following
//! platform graphics APIs:
//!
//! - Vulkan, available on Linux, Android, and Windows, using the [`ash`] crate's
//!   Vulkan bindings. It's also available on macOS, if you install [MoltenVK].
//!
//! - Metal on macOS, using the [`metal`] crate's bindings.
//!
//! - Direct3D 12 on Windows, using the [`windows`] crate's bindings.
//!
//! [`ash`]: https://crates.io/crates/ash
//! [MoltenVK]: https://github.com/KhronosGroup/MoltenVK
//! [`metal`]: https://crates.io/crates/metal
//! [`windows`]: https://crates.io/crates/windows
//!
//! ## Secondary backends
//!
//! The `wgpu-hal` crate has a partial implementation based on the following
//! platform graphics API:
//!
//! - The GL backend is available anywhere OpenGL, OpenGL ES, or WebGL are
//!   available. See the [`gles`] module documentation for details.
//!
//! [`gles`]: gles/index.html
//!
//! You can see what capabilities an adapter is missing by checking the
//! [`DownlevelCapabilities`][tdc] in [`ExposedAdapter::capabilities`], available
//! from [`Instance::enumerate_adapters`].
//!
//! The API is generally designed to fit the primary backends better than the
//! secondary backends, so the latter may impose more overhead.
//!
//! [tdc]: wgt::DownlevelCapabilities
//!
//! ## Traits
//!
//! The `wgpu-hal` crate defines a handful of traits that together
//! represent a cross-platform abstraction for modern GPU APIs.
//!
//! - The [`Api`] trait represents a `wgpu-hal` backend. It has no methods of its
//!   own, only a collection of associated types.
//!
//! - [`Api::Instance`] implements the [`Instance`] trait. [`Instance::init`]
//!   creates an instance value, which you can use to enumerate the adapters
//!   available on the system. For example, [`vulkan::Api::Instance::init`][Ii]
//!   returns an instance that can enumerate the Vulkan physical devices on your
//!   system.
//!
//! - [`Api::Adapter`] implements the [`Adapter`] trait, representing a
//!   particular device from a particular backend. For example, a Vulkan instance
//!   might have a Lavapipe software adapter and a GPU-based adapter.
//!
//! - [`Api::Device`] implements the [`Device`] trait, representing an active
//!   link to a device. You get a device value by calling [`Adapter::open`], and
//!   then use it to create buffers, textures, shader modules, and so on.
//!
//! - [`Api::Queue`] implements the [`Queue`] trait, which you use to submit
//!   command buffers to a given device.
//!
//! - [`Api::CommandEncoder`] implements the [`CommandEncoder`] trait, which you
//!   use to build buffers of commands to submit to a queue. This has all the
//!   methods for drawing and running compute shaders, which is presumably what
//!   you're here for.
//!
//! - [`Api::Surface`] implements the [`Surface`] trait, which represents a
//!   swapchain for presenting images on the screen, via interaction with the
//!   system's window manager.
//!
//! The [`Api`] trait has various other associated types like [`Api::Buffer`] and
//! [`Api::Texture`] that represent resources the rest of the interface can
//! operate on, but these generally do not have their own traits.
//!
//! [Ii]: Instance::init
//!
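//! To make the shape of this arrangement concrete, here is a hedged miniature
//! of the pattern (toy names, not the real traits):
//!
//! ```
//! // A backend trait with only associated types, selected statically.
//! trait ToyApi: 'static {
//!     type Device: ToyDevice;
//! }
//! trait ToyDevice {
//!     fn create_buffer(&self, size: u64) -> u64;
//! }
//!
//! struct NoopApi;
//! struct NoopDevice;
//! impl ToyApi for NoopApi {
//!     type Device = NoopDevice;
//! }
//! impl ToyDevice for NoopDevice {
//!     fn create_buffer(&self, size: u64) -> u64 {
//!         size // a real backend would return a buffer handle
//!     }
//! }
//!
//! // Generic code is monomorphized per backend: static dispatch, no vtables.
//! fn make_buffer<A: ToyApi>(device: &A::Device) -> u64 {
//!     device.create_buffer(256)
//! }
//! assert_eq!(make_buffer::<NoopApi>(&NoopDevice), 256);
//! ```
//!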
//! ## Validation is the calling code's responsibility, not `wgpu-hal`'s
//!
//! As much as possible, `wgpu-hal` traits place the burden of validation,
//! resource tracking, and state tracking on the caller, not on the trait
//! implementations themselves. Anything which can reasonably be handled in
//! backend-independent code should be. A `wgpu_hal` backend's sole obligation is
//! to provide portable behavior, and report conditions that the calling code
//! can't reasonably anticipate, like device loss or running out of memory.
//!
//! The `wgpu` crate collection is intended for use in security-sensitive
//! applications, like web browsers, where the API is available to untrusted
//! code. This means that `wgpu-core`'s validation is not simply a service to
//! developers, to be provided opportunistically when the performance costs are
//! acceptable and the necessary data is ready at hand. Rather, `wgpu-core`'s
//! validation must be exhaustive, to ensure that even malicious content cannot
//! provoke and exploit undefined behavior in the platform's graphics API.
//!
//! Because graphics APIs' requirements are complex, the only practical way for
//! `wgpu` to provide exhaustive validation is to comprehensively track the
//! lifetime and state of all the resources in the system. Implementing this
//! separately for each backend is infeasible; effort would be better spent
//! making the cross-platform validation in `wgpu-core` legible and trustworthy.
//! Fortunately, the requirements are largely similar across the various
//! platforms, so cross-platform validation is practical.
//!
//! Some backends have specific requirements that aren't practical to foist off
//! on the `wgpu-hal` user. For example, properly managing macOS Objective-C or
//! Microsoft COM reference counts is best handled by using appropriate pointer
//! types within the backend.
//!
//! A desire for "defense in depth" may suggest performing additional validation
//! in `wgpu-hal` when the opportunity arises, but this must be done with
//! caution. Even experienced contributors infer the expectations their changes
//! must meet by considering not just requirements made explicit in types, tests,
//! assertions, and comments, but also those implicit in the surrounding code.
//! When one sees validation or state-tracking code in `wgpu-hal`, it is tempting
//! to conclude, "Oh, `wgpu-hal` checks for this, so `wgpu-core` needn't worry
//! about it - that would be redundant!" The responsibility for exhaustive
//! validation always rests with `wgpu-core`, regardless of what may or may not
//! be checked in `wgpu-hal`.
//!
//! To this end, any "defense in depth" validation that does appear in `wgpu-hal`
//! for requirements that `wgpu-core` should have enforced should report failure
//! via the `unreachable!` macro, because problems detected at this stage always
//! indicate a bug in `wgpu-core`.
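//!
//! As a hedged sketch of such a check (illustrative helper, not actual backend
//! code):
//!
//! ```
//! /// Hypothetical backend-side check: `wgpu-core` must already have rejected
//! /// out-of-range indices, so tripping this indicates a `wgpu-core` bug.
//! fn debug_check_group_index(group_index: usize, max_bind_groups: usize) {
//!     if group_index >= max_bind_groups {
//!         unreachable!("wgpu-core should have rejected bind group index {group_index}");
//!     }
//! }
//! debug_check_group_index(3, 8); // in range: no panic
//! ```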
//!
//! ## Debugging
//!
//! Most of the information on the wiki [Debugging wgpu Applications][wiki-debug]
//! page still applies to this API, with the exception of API tracing/replay
//! functionality, which is only available in `wgpu-core`.
//!
//! [wiki-debug]: https://github.com/gfx-rs/wgpu/wiki/Debugging-wgpu-Applications

#![no_std]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![allow(
    // this happens on the GL backend, where it is both thread safe and non-thread safe in the same code.
    clippy::arc_with_non_send_sync,
    // We don't use syntax sugar where it's not necessary.
    clippy::match_like_matches_macro,
    // Redundant matching is more explicit.
    clippy::redundant_pattern_matching,
    // Explicit lifetimes are often easier to reason about.
    clippy::needless_lifetimes,
    // No need for defaults in the internal types.
    clippy::new_without_default,
    // Matches are good and extendable, no need to make an exception here.
    clippy::single_match,
    // Push commands are more regular than macros.
    clippy::vec_init_then_push,
    // We unsafe impl `Send` for a reason.
    clippy::non_send_fields_in_send_ty,
    // TODO!
    clippy::missing_safety_doc,
    // It gets in the way a lot and does not prevent bugs in practice.
    clippy::pattern_type_mismatch,
    // We should investigate these.
    clippy::large_enum_variant
)]
#![warn(
    clippy::alloc_instead_of_core,
    clippy::ptr_as_ptr,
    clippy::std_instead_of_alloc,
    clippy::std_instead_of_core,
    trivial_casts,
    trivial_numeric_casts,
    unsafe_op_in_unsafe_fn,
    unused_extern_crates,
    unused_qualifications
)]

extern crate alloc;
extern crate wgpu_types as wgt;
// Each of these backends needs `std` in some fashion; usually `std::thread` functions.
#[cfg(any(dx12, gles_with_std, metal, vulkan))]
#[macro_use]
extern crate std;

/// DirectX12 API internals.
#[cfg(dx12)]
pub mod dx12;
/// GLES API internals.
#[cfg(gles)]
pub mod gles;
/// Metal API internals.
#[cfg(metal)]
pub mod metal;
/// A dummy API implementation.
// TODO(https://github.com/gfx-rs/wgpu/issues/7120): this should have a cfg
pub mod noop;
/// Vulkan API internals.
#[cfg(vulkan)]
pub mod vulkan;

pub mod auxil;
pub mod api {
    #[cfg(dx12)]
    pub use super::dx12::Api as Dx12;
    #[cfg(gles)]
    pub use super::gles::Api as Gles;
    #[cfg(metal)]
    pub use super::metal::Api as Metal;
    pub use super::noop::Api as Noop;
    #[cfg(vulkan)]
    pub use super::vulkan::Api as Vulkan;
}

mod dynamic;
#[cfg(feature = "validation_canary")]
mod validation_canary;

#[cfg(feature = "validation_canary")]
pub use validation_canary::{ValidationCanary, VALIDATION_CANARY};

pub(crate) use dynamic::impl_dyn_resource;
pub use dynamic::{
    DynAccelerationStructure, DynAcquiredSurfaceTexture, DynAdapter, DynBindGroup,
    DynBindGroupLayout, DynBuffer, DynCommandBuffer, DynCommandEncoder, DynComputePipeline,
    DynDevice, DynExposedAdapter, DynFence, DynInstance, DynOpenDevice, DynPipelineCache,
    DynPipelineLayout, DynQuerySet, DynQueue, DynRenderPipeline, DynResource, DynSampler,
    DynShaderModule, DynSurface, DynSurfaceTexture, DynTexture, DynTextureView,
};

#[allow(unused)]
use alloc::boxed::Box;
use alloc::{borrow::Cow, string::String, vec::Vec};
use core::{
    borrow::Borrow,
    error::Error,
    fmt,
    num::{NonZeroU32, NonZeroU64},
    ops::{Range, RangeInclusive},
    ptr::NonNull,
};

use bitflags::bitflags;
use raw_window_handle::DisplayHandle;
use thiserror::Error;
use wgt::WasmNotSendSync;

cfg_if::cfg_if! {
    if #[cfg(supports_ptr_atomics)] {
        use alloc::sync::Arc;
    } else if #[cfg(feature = "portable-atomic")] {
        use portable_atomic_util::Arc;
    }
}

// - Vertex + Fragment
// - Compute
// - Task + Mesh + Fragment
pub const MAX_CONCURRENT_SHADER_STAGES: usize = 3;
pub const MAX_ANISOTROPY: u8 = 16;
pub const MAX_BIND_GROUPS: usize = 8;
pub const MAX_VERTEX_BUFFERS: usize = 16;
pub const MAX_COLOR_ATTACHMENTS: usize = 8;
pub const MAX_MIP_LEVELS: u32 = 16;
/// Size of a single occlusion/timestamp query, when copied into a buffer, in bytes.
/// cbindgen:ignore
pub const QUERY_SIZE: wgt::BufferAddress = 8;

pub type Label<'a> = Option<&'a str>;
pub type MemoryRange = Range<wgt::BufferAddress>;
pub type FenceValue = u64;
#[cfg(supports_64bit_atomics)]
pub type AtomicFenceValue = core::sync::atomic::AtomicU64;
#[cfg(not(supports_64bit_atomics))]
pub type AtomicFenceValue = portable_atomic::AtomicU64;

/// A callback to signal that wgpu is no longer using a resource.
#[cfg(any(gles, vulkan))]
pub type DropCallback = Box<dyn FnOnce() + Send + Sync + 'static>;

#[cfg(any(gles, vulkan))]
pub struct DropGuard {
    callback: Option<DropCallback>,
}

#[cfg(all(any(gles, vulkan), any(native, Emscripten)))]
impl DropGuard {
    fn from_option(callback: Option<DropCallback>) -> Option<Self> {
        callback.map(|callback| Self {
            callback: Some(callback),
        })
    }
}

#[cfg(any(gles, vulkan))]
impl Drop for DropGuard {
    fn drop(&mut self) {
        if let Some(cb) = self.callback.take() {
            (cb)();
        }
    }
}

#[cfg(any(gles, vulkan))]
impl fmt::Debug for DropGuard {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("DropGuard").finish()
    }
}

#[derive(Clone, Debug, PartialEq, Eq, Error)]
pub enum DeviceError {
    #[error("Out of memory")]
    OutOfMemory,
    #[error("Device is lost")]
    Lost,
    #[error("Unexpected error variant (driver implementation is at fault)")]
    Unexpected,
}

#[cfg(any(dx12, vulkan))]
impl From<gpu_allocator::AllocationError> for DeviceError {
    fn from(result: gpu_allocator::AllocationError) -> Self {
        match result {
            gpu_allocator::AllocationError::OutOfMemory => Self::OutOfMemory,
            gpu_allocator::AllocationError::FailedToMap(e) => {
                log::error!("gpu-allocator: Failed to map: {e}");
                Self::Lost
            }
            gpu_allocator::AllocationError::NoCompatibleMemoryTypeFound => {
                log::error!("gpu-allocator: No Compatible Memory Type Found");
                Self::Lost
            }
            gpu_allocator::AllocationError::InvalidAllocationCreateDesc => {
                log::error!("gpu-allocator: Invalid Allocation Creation Description");
                Self::Lost
            }
            gpu_allocator::AllocationError::InvalidAllocatorCreateDesc(e) => {
                log::error!("gpu-allocator: Invalid Allocator Creation Description: {e}");
                Self::Lost
            }

            gpu_allocator::AllocationError::Internal(e) => {
                log::error!("gpu-allocator: Internal Error: {e}");
                Self::Lost
            }
            gpu_allocator::AllocationError::BarrierLayoutNeedsDevice10
            | gpu_allocator::AllocationError::CastableFormatsRequiresEnhancedBarriers
            | gpu_allocator::AllocationError::CastableFormatsRequiresAtLeastDevice12 => {
                unreachable!()
            }
        }
    }
}

// A copy of `gpu_allocator::AllocationSizes` that lets us read the configured
// values (the dx12 backend needs them). We should instead add getters to
// `gpu_allocator::AllocationSizes` and remove this type.
// https://github.com/Traverse-Research/gpu-allocator/issues/295
#[cfg_attr(not(any(dx12, vulkan)), expect(dead_code))]
pub(crate) struct AllocationSizes {
    pub(crate) min_device_memblock_size: u64,
    pub(crate) max_device_memblock_size: u64,
    pub(crate) min_host_memblock_size: u64,
    pub(crate) max_host_memblock_size: u64,
}

impl AllocationSizes {
    #[allow(dead_code, reason = "may be unused on some platforms")]
    pub(crate) fn from_memory_hints(memory_hints: &wgt::MemoryHints) -> Self {
        // TODO: the allocator's configuration should take hardware capability into
        // account.
        const MB: u64 = 1024 * 1024;

        match memory_hints {
            wgt::MemoryHints::Performance => Self {
                min_device_memblock_size: 128 * MB,
                max_device_memblock_size: 256 * MB,
                min_host_memblock_size: 64 * MB,
                max_host_memblock_size: 128 * MB,
            },
            wgt::MemoryHints::MemoryUsage => Self {
                min_device_memblock_size: 8 * MB,
                max_device_memblock_size: 64 * MB,
                min_host_memblock_size: 4 * MB,
                max_host_memblock_size: 32 * MB,
            },
            wgt::MemoryHints::Manual {
                suballocated_device_memory_block_size,
            } => {
                // TODO: https://github.com/gfx-rs/wgpu/issues/8625
                // Would it be useful to expose the host size in memory hints
                // instead of always using half of the device size?
                let device_size = suballocated_device_memory_block_size;
                let host_size = device_size.start / 2..device_size.end / 2;

                // gpu_allocator clamps the sizes between 4MiB and 256MiB, but we clamp them ourselves since we use
                // the sizes when detecting high memory pressure and there is no way to query the values otherwise.
                Self {
                    min_device_memblock_size: device_size.start.clamp(4 * MB, 256 * MB),
                    max_device_memblock_size: device_size.end.clamp(4 * MB, 256 * MB),
                    min_host_memblock_size: host_size.start.clamp(4 * MB, 256 * MB),
                    max_host_memblock_size: host_size.end.clamp(4 * MB, 256 * MB),
                }
            }
        }
    }
}

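// A hedged sanity check of the clamping above (test-only sketch): with
// `MemoryHints::Manual`, the host sizes are half the device sizes, and
// everything is clamped to the 4..=256 MiB window to mirror gpu_allocator's
// own clamping.
#[cfg(test)]
mod allocation_sizes_tests {
    use super::{wgt, AllocationSizes};

    #[test]
    fn manual_hints_are_clamped() {
        const MB: u64 = 1024 * 1024;
        let sizes = AllocationSizes::from_memory_hints(&wgt::MemoryHints::Manual {
            suballocated_device_memory_block_size: MB..1024 * MB,
        });
        assert_eq!(sizes.min_device_memblock_size, 4 * MB); // 1 MiB clamped up
        assert_eq!(sizes.max_device_memblock_size, 256 * MB); // 1 GiB clamped down
        assert_eq!(sizes.min_host_memblock_size, 4 * MB); // 0.5 MiB clamped up
        assert_eq!(sizes.max_host_memblock_size, 256 * MB); // 512 MiB clamped down
    }
}
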
#[cfg(any(dx12, vulkan))]
impl From<AllocationSizes> for gpu_allocator::AllocationSizes {
    fn from(value: AllocationSizes) -> gpu_allocator::AllocationSizes {
        gpu_allocator::AllocationSizes::new(
            value.min_device_memblock_size,
            value.min_host_memblock_size,
        )
        .with_max_device_memblock_size(value.max_device_memblock_size)
        .with_max_host_memblock_size(value.max_host_memblock_size)
    }
}

#[allow(dead_code, reason = "may be unused on some platforms")]
#[cold]
fn hal_usage_error<T: fmt::Display>(txt: T) -> ! {
    panic!("wgpu-hal invariant was violated (usage error): {txt}")
}

#[allow(dead_code, reason = "may be unused on some platforms")]
#[cold]
fn hal_internal_error<T: fmt::Display>(txt: T) -> ! {
    panic!("wgpu-hal ran into a preventable internal error: {txt}")
}

#[derive(Clone, Debug, Eq, PartialEq, Error)]
pub enum ShaderError {
    #[error("Compilation failed: {0:?}")]
    Compilation(String),
    #[error(transparent)]
    Device(#[from] DeviceError),
}

#[derive(Clone, Debug, Eq, PartialEq, Error)]
pub enum PipelineError {
    #[error("Linkage failed for stage {0:?}: {1}")]
    Linkage(wgt::ShaderStages, String),
    #[error("Entry point for stage {0:?} is invalid")]
    EntryPoint(naga::ShaderStage),
    #[error(transparent)]
    Device(#[from] DeviceError),
    #[error("Pipeline constant error for stage {0:?}: {1}")]
    PipelineConstants(wgt::ShaderStages, String),
}

#[derive(Clone, Debug, Eq, PartialEq, Error)]
pub enum PipelineCacheError {
    #[error(transparent)]
    Device(#[from] DeviceError),
}

#[derive(Clone, Debug, Eq, PartialEq, Error)]
pub enum SurfaceError {
    #[error("Surface is lost")]
    Lost,
    #[error("Surface is outdated, needs to be re-created")]
    Outdated,
    #[error("Timed out waiting for a surface texture")]
    Timeout,
    #[error("The window is occluded (e.g. minimized or behind another window). Try again once the window is no longer occluded.")]
    Occluded,
    #[error(transparent)]
    Device(#[from] DeviceError),
    #[error("Other reason: {0}")]
    Other(&'static str),
}

/// Error occurring while trying to create an instance, or create a surface from an instance;
/// typically relating to the state of the underlying graphics API or hardware.
#[derive(Clone, Debug, Error)]
#[error("{message}")]
pub struct InstanceError {
    /// These errors are very platform specific, so do not attempt to encode them as an enum.
    ///
    /// This message should describe the problem in sufficient detail to be useful for a
    /// user-to-developer “why won't this work on my machine” bug report, and otherwise follow
    /// <https://rust-lang.github.io/api-guidelines/interoperability.html#error-types-are-meaningful-and-well-behaved-c-good-err>.
    message: String,

    /// Underlying error value, if any is available.
    #[source]
    source: Option<Arc<dyn Error + Send + Sync + 'static>>,
}

impl InstanceError {
    #[allow(dead_code, reason = "may be unused on some platforms")]
    pub(crate) fn new(message: String) -> Self {
        Self {
            message,
            source: None,
        }
    }
    #[allow(dead_code, reason = "may be unused on some platforms")]
    pub(crate) fn with_source(message: String, source: impl Error + Send + Sync + 'static) -> Self {
        cfg_if::cfg_if! {
            if #[cfg(supports_ptr_atomics)] {
                let source = Arc::new(source);
            } else {
                // TODO(https://github.com/rust-lang/rust/issues/18598): avoid indirection via Box once arbitrary types support unsized coercion
                let source: Box<dyn Error + Send + Sync + 'static> = Box::new(source);
                let source = Arc::from(source);
            }
        }
        Self {
            message,
            source: Some(source),
        }
    }
}

/// All the types and methods that make up an implementation on top of a backend.
///
/// Only the types that have non-dyn trait bounds have methods on them. Most methods
/// are either on [`CommandEncoder`] or [`Device`].
///
/// The API can be used either through generics (via this trait and its
/// associated types) or dynamically through the `Dyn*` traits.
pub trait Api: Clone + fmt::Debug + Sized + WasmNotSendSync + 'static {
    const VARIANT: wgt::Backend;

    type Instance: DynInstance + Instance<A = Self>;
    type Surface: DynSurface + Surface<A = Self>;
    type Adapter: DynAdapter + Adapter<A = Self>;
    type Device: DynDevice + Device<A = Self>;

    type Queue: DynQueue + Queue<A = Self>;
    type CommandEncoder: DynCommandEncoder + CommandEncoder<A = Self>;

    /// This API's command buffer type.
    ///
    /// The only thing you can do with `CommandBuffer`s is build them
    /// with a [`CommandEncoder`] and then pass them to
    /// [`Queue::submit`] for execution, or destroy them by passing
    /// them to [`CommandEncoder::reset_all`].
    ///
    /// [`CommandEncoder`]: Api::CommandEncoder
    type CommandBuffer: DynCommandBuffer;

    type Buffer: DynBuffer;
    type Texture: DynTexture;
    type SurfaceTexture: DynSurfaceTexture + Borrow<Self::Texture>;
    type TextureView: DynTextureView;
    type Sampler: DynSampler;
    type QuerySet: DynQuerySet;

    /// A value you can block on to wait for something to finish.
    ///
    /// A `Fence` holds a monotonically increasing [`FenceValue`]. You can call
    /// [`Device::wait`] to block until a fence reaches or passes a value you
    /// choose. [`Queue::submit`] can take a `Fence` and a [`FenceValue`] to
    /// store in it when the submitted work is complete.
    ///
    /// Attempting to set a fence to a value less than its current value has no
    /// effect.
    ///
    /// Waiting on a fence returns as soon as the fence reaches *or passes* the
    /// requested value. This implies that, in order to reliably determine when
    /// an operation has completed, operations must finish in order of
    /// increasing fence values: if a higher-valued operation were to finish
    /// before a lower-valued operation, then waiting for the fence to reach the
    /// lower value could return before the lower-valued operation has actually
    /// finished.
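    ///
    /// A hedged, self-contained model of these semantics (toy code, not a real
    /// backend fence):
    ///
    /// ```
    /// use core::sync::atomic::{AtomicU64, Ordering};
    ///
    /// struct ToyFence(AtomicU64);
    ///
    /// impl ToyFence {
    ///     /// Like a real fence, setting a lower value has no effect.
    ///     fn signal(&self, value: u64) {
    ///         self.0.fetch_max(value, Ordering::Release);
    ///     }
    ///     /// A wait would return once the fence reaches *or passes* `value`.
    ///     fn is_reached(&self, value: u64) -> bool {
    ///         self.0.load(Ordering::Acquire) >= value
    ///     }
    /// }
    ///
    /// let fence = ToyFence(AtomicU64::new(0));
    /// fence.signal(5);
    /// fence.signal(3); // no effect: 3 < 5
    /// assert!(fence.is_reached(4)); // 5 >= 4, so a wait for 4 returns
    /// assert!(!fence.is_reached(6));
    /// ```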
635    type Fence: DynFence;
636
637    type BindGroupLayout: DynBindGroupLayout;
638    type BindGroup: DynBindGroup;
639    type PipelineLayout: DynPipelineLayout;
640    type ShaderModule: DynShaderModule;
641    type RenderPipeline: DynRenderPipeline;
642    type ComputePipeline: DynComputePipeline;
643    type PipelineCache: DynPipelineCache;
644
645    type AccelerationStructure: DynAccelerationStructure + 'static;
646}
647
648pub trait Instance: Sized + WasmNotSendSync {
649    type A: Api;
650
651    unsafe fn init(desc: &InstanceDescriptor<'_>) -> Result<Self, InstanceError>;
652    unsafe fn create_surface(
653        &self,
654        display_handle: raw_window_handle::RawDisplayHandle,
655        window_handle: raw_window_handle::RawWindowHandle,
656    ) -> Result<<Self::A as Api>::Surface, InstanceError>;
657    /// `surface_hint` is only used by the GLES backend targeting WebGL2
658    unsafe fn enumerate_adapters(
659        &self,
660        surface_hint: Option<&<Self::A as Api>::Surface>,
661    ) -> Vec<ExposedAdapter<Self::A>>;
662}
663
664pub trait Surface: WasmNotSendSync {
665    type A: Api;
666
667    /// Configure `self` to use `device`.
668    ///
669    /// # Safety
670    ///
671    /// - All GPU work using `self` must have been completed.
672    /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
673    /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
674    /// - The surface `self` must not currently be configured to use any other [`Device`].
675    unsafe fn configure(
676        &self,
677        device: &<Self::A as Api>::Device,
678        config: &SurfaceConfiguration,
679    ) -> Result<(), SurfaceError>;
680
681    /// Unconfigure `self` on `device`.
682    ///
683    /// # Safety
684    ///
685    /// - All GPU work that uses `surface` must have been completed.
686    /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
687    /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
688    /// - The surface `self` must have been configured on `device`.
689    unsafe fn unconfigure(&self, device: &<Self::A as Api>::Device);
690
691    /// Return the next texture to be presented by `self`, for the caller to draw on.
692    ///
693    /// On success, return an [`AcquiredSurfaceTexture`] representing the
694    /// texture into which the caller should draw the image to be displayed on
695    /// `self`.
696    ///
697    /// If `timeout` elapses before `self` has a texture ready to be acquired,
698    /// return `Err(SurfaceError::Timeout)`. If `timeout` is `None`, wait
699    /// indefinitely, with no timeout.
700    ///
701    /// # Using an [`AcquiredSurfaceTexture`]
702    ///
703    /// On success, this function returns an [`AcquiredSurfaceTexture`] whose
704    /// [`texture`] field is a [`SurfaceTexture`] from which the caller can
705    /// [`borrow`] a [`Texture`] to draw on. The [`AcquiredSurfaceTexture`] also
706    /// carries some metadata about that [`SurfaceTexture`].
707    ///
708    /// All calls to [`Queue::submit`] that draw on that [`Texture`] must also
709    /// include the [`SurfaceTexture`] in the `surface_textures` argument.
710    ///
711    /// When you are done drawing on the texture, you can display it on `self`
712    /// by passing the [`SurfaceTexture`] and `self` to [`Queue::present`].
713    ///
714    /// If you do not wish to display the texture, you must pass the
715    /// [`SurfaceTexture`] to [`self.discard_texture`], so that it can be reused
716    /// by future acquisitions.
717    ///
718    /// # Portability
719    ///
720    /// Some backends can't support a timeout when acquiring a texture. On these
721    /// backends, `timeout` is ignored.
722    ///
723    /// On macOS, this returns `Err(SurfaceError::Timeout)` when the window is
724    /// not visible (minimized, fully occluded, or on another virtual desktop)
725    /// to avoid blocking in `CAMetalLayer.nextDrawable()`.
726    ///
727    /// # Safety
728    ///
729    /// - The surface `self` must currently be configured on some [`Device`].
730    ///
731    /// - The `fence` argument must be the same [`Fence`] passed to all calls to
732    ///   [`Queue::submit`] that used [`Texture`]s acquired from this surface.
733    ///
734    /// - You may only have one texture acquired from `self` at a time. When
735    ///   `acquire_texture` returns `Ok(ast)`, you must pass the returned
736    ///   [`SurfaceTexture`] `ast.texture` to either [`Queue::present`] or
737    ///   [`Surface::discard_texture`] before calling `acquire_texture` again.
738    ///
739    /// [`texture`]: AcquiredSurfaceTexture::texture
740    /// [`SurfaceTexture`]: Api::SurfaceTexture
741    /// [`borrow`]: alloc::borrow::Borrow::borrow
742    /// [`Texture`]: Api::Texture
743    /// [`Fence`]: Api::Fence
744    /// [`self.discard_texture`]: Surface::discard_texture
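    ///
    /// # Example
    ///
    /// A minimal sketch of a frame loop, assuming `surface`, `queue`, `fence`,
    /// `fence_value`, and a recorded `cmd_buf` already exist (hypothetical
    /// names), and eliding error handling:
    ///
    /// ```ignore
    /// use core::time::Duration;
    ///
    /// // Acquire the next texture, waiting up to 100 ms for one to be ready.
    /// let ast = unsafe { surface.acquire_texture(Some(Duration::from_millis(100)), &fence)? };
    /// // ... record commands drawing on `ast.texture.borrow()` into `cmd_buf` ...
    /// fence_value += 1;
    /// unsafe { queue.submit(&[&cmd_buf], &[&ast.texture], (&mut fence, fence_value))? };
    /// unsafe { queue.present(&surface, ast.texture)? };
    /// ```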
745    unsafe fn acquire_texture(
746        &self,
747        timeout: Option<core::time::Duration>,
748        fence: &<Self::A as Api>::Fence,
749    ) -> Result<AcquiredSurfaceTexture<Self::A>, SurfaceError>;
750
751    /// Relinquish an acquired texture without presenting it.
752    ///
753    /// After this call, the texture underlying [`SurfaceTexture`] may be
754    /// returned by subsequent calls to [`self.acquire_texture`].
755    ///
756    /// # Safety
757    ///
758    /// - The surface `self` must currently be configured on some [`Device`].
759    ///
760    /// - `texture` must be a [`SurfaceTexture`] returned by a call to
761    ///   [`self.acquire_texture`] that has not yet been passed to
762    ///   [`Queue::present`].
763    ///
764    /// [`SurfaceTexture`]: Api::SurfaceTexture
765    /// [`self.acquire_texture`]: Surface::acquire_texture
766    unsafe fn discard_texture(&self, texture: <Self::A as Api>::SurfaceTexture);
767}
768
769pub trait Adapter: WasmNotSendSync {
770    type A: Api;
771
772    unsafe fn open(
773        &self,
774        features: wgt::Features,
775        limits: &wgt::Limits,
776        memory_hints: &wgt::MemoryHints,
777    ) -> Result<OpenDevice<Self::A>, DeviceError>;
778
779    /// Return the set of supported capabilities for a texture format.
780    unsafe fn texture_format_capabilities(
781        &self,
782        format: wgt::TextureFormat,
783    ) -> TextureFormatCapabilities;
784
785    /// Returns the capabilities of working with a specified surface.
786    ///
787    /// `None` means presentation to that surface is not supported.
788    unsafe fn surface_capabilities(
789        &self,
790        surface: &<Self::A as Api>::Surface,
791    ) -> Option<SurfaceCapabilities>;
792
793    /// Creates a [`PresentationTimestamp`] using the adapter's WSI.
794    ///
795    /// [`PresentationTimestamp`]: wgt::PresentationTimestamp
796    unsafe fn get_presentation_timestamp(&self) -> wgt::PresentationTimestamp;
797
798    /// The combination of all usages that are guaranteed to be ordered by the hardware.
799    /// If a usage is ordered, then if the buffer state doesn't change between draw calls,
800    /// no barriers are needed for synchronization.
801    fn get_ordered_buffer_usages(&self) -> wgt::BufferUses;
802
803    /// The combination of all usages that are guaranteed to be ordered by the hardware.
804    /// If a usage is ordered, then if the texture state doesn't change between draw calls,
805    /// no barriers are needed for synchronization.
806    fn get_ordered_texture_usages(&self) -> wgt::TextureUses;
807}
808
809/// A connection to a GPU and a pool of resources to use with it.
810///
811/// A `wgpu-hal` `Device` represents an open connection to a specific graphics
812/// processor, controlled via the backend [`Device::A`]. A `Device` is mostly
813/// used for creating resources. Each `Device` has an associated [`Queue`] used
814/// for command submission.
815///
816/// On Vulkan a `Device` corresponds to a logical device ([`VkDevice`]). Other
817/// backends don't have an exact analog: for example, [`ID3D12Device`]s and
818/// [`MTLDevice`]s are owned by the backends' [`wgpu_hal::Adapter`]
819/// implementations, and shared by all [`wgpu_hal::Device`]s created from that
820/// `Adapter`.
821///
822/// A `Device`'s life cycle is generally:
823///
824/// 1)  Obtain a `Device` and its associated [`Queue`] by calling
825///     [`Adapter::open`].
826///
827///     Alternatively, the backend-specific types that implement [`Adapter`] often
828///     have methods for creating a `wgpu-hal` `Device` from a platform-specific
829///     handle. For example, [`vulkan::Adapter::device_from_raw`] can create a
830///     [`vulkan::Device`] from an [`ash::Device`].
831///
832/// 1)  Create resources to use on the device by calling methods like
833///     [`Device::create_texture`] or [`Device::create_shader_module`].
834///
835/// 1)  Call [`Device::create_command_encoder`] to obtain a [`CommandEncoder`],
836///     which you can use to build [`CommandBuffer`]s holding commands to be
837///     executed on the GPU.
838///
839/// 1)  Call [`Queue::submit`] on the `Device`'s associated [`Queue`] to submit
840///     [`CommandBuffer`]s for execution on the GPU. If needed, call
841///     [`Device::wait`] to wait for them to finish execution.
842///
843/// 1)  Free resources with methods like [`Device::destroy_texture`] or
844///     [`Device::destroy_shader_module`].
845///
846/// 1)  Drop the device.
847///
848/// [`VkDevice`]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VkDevice
849/// [`ID3D12Device`]: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/nn-d3d12-id3d12device
850/// [`MTLDevice`]: https://developer.apple.com/documentation/metal/mtldevice
851/// [`wgpu_hal::Adapter`]: Adapter
852/// [`wgpu_hal::Device`]: Device
853/// [`vulkan::Adapter::device_from_raw`]: vulkan/struct.Adapter.html#method.device_from_raw
854/// [`vulkan::Device`]: vulkan/struct.Device.html
855/// [`ash::Device`]: https://docs.rs/ash/latest/ash/struct.Device.html
856/// [`CommandBuffer`]: Api::CommandBuffer
857///
858/// # Safety
859///
860/// As with other `wgpu-hal` APIs, [validation] is the caller's
861/// responsibility. Here are the general requirements for all `Device`
862/// methods:
863///
864/// - Any resource passed to a `Device` method must have been created by that
865///   `Device`. For example, a [`Texture`] passed to [`Device::destroy_texture`] must
866///   have been created with the `Device` passed as `self`.
867///
868/// - Resources may not be destroyed if they are used by any submitted command
869///   buffers that have not yet finished execution.
870///
871/// [validation]: index.html#validation-is-the-calling-codes-responsibility-not-wgpu-hals
872/// [`Texture`]: Api::Texture
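///
/// # Example
///
/// A minimal sketch of the life cycle above, assuming an `adapter` and the
/// various descriptors already exist (hypothetical names), and eliding error
/// handling:
///
/// ```ignore
/// let OpenDevice { device, queue } = unsafe { adapter.open(features, &limits, &memory_hints)? };
/// let texture = unsafe { device.create_texture(&texture_desc)? };
/// let mut encoder = unsafe { device.create_command_encoder(&encoder_desc)? };
/// // ... record commands, submit them on `queue`, wait with `device.wait` ...
/// unsafe { device.destroy_texture(texture) };
/// drop(device);
/// ```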
873pub trait Device: WasmNotSendSync {
874    type A: Api;
875
876    /// Creates a new buffer.
877    ///
878    /// The initial usage is `wgt::BufferUses::empty()`.
879    unsafe fn create_buffer(
880        &self,
881        desc: &BufferDescriptor,
882    ) -> Result<<Self::A as Api>::Buffer, DeviceError>;
883
884    /// Free `buffer` and any GPU resources it owns.
885    ///
886    /// Note that backends are allowed to allocate GPU memory for buffers from
887    /// allocation pools, and this call is permitted to simply return `buffer`'s
888    /// storage to that pool, without making it available to other applications.
889    ///
890    /// # Safety
891    ///
892    /// - The given `buffer` must not currently be mapped.
893    unsafe fn destroy_buffer(&self, buffer: <Self::A as Api>::Buffer);
894
895    /// A hook for when a wgpu-core buffer is created from a raw wgpu-hal buffer.
896    unsafe fn add_raw_buffer(&self, buffer: &<Self::A as Api>::Buffer);
897
898    /// Return a pointer to CPU memory mapping the contents of `buffer`.
899    ///
900    /// Buffer mappings are persistent: the buffer may remain mapped on the CPU
901    /// while the GPU reads or writes to it. (Note that `wgpu_core` does not use
902    /// this feature: when a `wgpu_core::Buffer` is unmapped, the underlying
903    /// `wgpu_hal` buffer is also unmapped.)
904    ///
905    /// If this function returns `Ok(mapping)`, then:
906    ///
907    /// - `mapping.ptr` is the CPU address of the start of the mapped memory.
908    ///
909    /// - If `mapping.is_coherent` is `true`, then CPU writes to the mapped
910    ///   memory are immediately visible on the GPU, and vice versa.
911    ///
912    /// # Safety
913    ///
914    /// - The given `buffer` must have been created with the [`MAP_READ`] or
915    ///   [`MAP_WRITE`] flags set in [`BufferDescriptor::usage`].
916    ///
917    /// - The given `range` must fall within the size of `buffer`.
918    ///
919    /// - The caller must avoid data races between the CPU and the GPU. A data
920    ///   race is any pair of accesses to a particular byte, one of which is a
921    ///   write, that are not ordered with respect to each other by some sort of
922    ///   synchronization operation.
923    ///
924    /// - If this function returns `Ok(mapping)` and `mapping.is_coherent` is
925    ///   `false`, then:
926    ///
927    ///   - Every CPU write to a mapped byte followed by a GPU read of that byte
928    ///     must have at least one call to [`Device::flush_mapped_ranges`]
929    ///     covering that byte that occurs between those two accesses.
930    ///
931    ///   - Every GPU write to a mapped byte followed by a CPU read of that byte
932    ///     must have at least one call to [`Device::invalidate_mapped_ranges`]
933    ///     covering that byte that occurs between those two accesses.
934    ///
935    ///   Note that the data race rule above requires that all such access pairs
936    ///   be ordered, so it is meaningful to talk about what must occur
937    ///   "between" them.
938    ///
939    /// - Zero-sized mappings are not allowed.
940    ///
941    /// - The returned [`BufferMapping::ptr`] must not be used after a call to
942    ///   [`Device::unmap_buffer`].
943    ///
944    /// [`MAP_READ`]: wgt::BufferUses::MAP_READ
945    /// [`MAP_WRITE`]: wgt::BufferUses::MAP_WRITE
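    ///
    /// # Example
    ///
    /// A minimal sketch of writing through a possibly non-coherent mapping,
    /// assuming `device`, a mappable `buffer`, and a byte slice `data` already
    /// exist (hypothetical names), and eliding error handling:
    ///
    /// ```ignore
    /// let range = 0..data.len() as wgt::BufferAddress;
    /// let mapping = unsafe { device.map_buffer(&buffer, range.clone())? };
    /// unsafe {
    ///     core::ptr::copy_nonoverlapping(data.as_ptr(), mapping.ptr.as_ptr(), data.len());
    ///     if !mapping.is_coherent {
    ///         // Make the CPU writes visible to subsequent GPU reads.
    ///         device.flush_mapped_ranges(&buffer, core::iter::once(range));
    ///     }
    ///     device.unmap_buffer(&buffer);
    /// }
    /// ```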
946    unsafe fn map_buffer(
947        &self,
948        buffer: &<Self::A as Api>::Buffer,
949        range: MemoryRange,
950    ) -> Result<BufferMapping, DeviceError>;
951
952    /// Remove the mapping established by the last call to [`Device::map_buffer`].
953    ///
954    /// # Safety
955    ///
956    /// - The given `buffer` must be currently mapped.
957    unsafe fn unmap_buffer(&self, buffer: &<Self::A as Api>::Buffer);
958
959    /// Indicate that CPU writes to mapped buffer memory should be made visible to the GPU.
960    ///
961    /// # Safety
962    ///
963    /// - The given `buffer` must be currently mapped.
964    ///
965    /// - All ranges produced by `ranges` must fall within `buffer`'s size.
966    unsafe fn flush_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
967    where
968        I: Iterator<Item = MemoryRange>;
969
970    /// Indicate that GPU writes to mapped buffer memory should be made visible to the CPU.
971    ///
972    /// # Safety
973    ///
974    /// - The given `buffer` must be currently mapped.
975    ///
976    /// - All ranges produced by `ranges` must fall within `buffer`'s size.
977    unsafe fn invalidate_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
978    where
979        I: Iterator<Item = MemoryRange>;
980
981    /// Creates a new texture.
982    ///
983    /// The initial usage for all subresources is `wgt::TextureUses::UNINITIALIZED`.
984    unsafe fn create_texture(
985        &self,
986        desc: &TextureDescriptor,
987    ) -> Result<<Self::A as Api>::Texture, DeviceError>;
988    unsafe fn destroy_texture(&self, texture: <Self::A as Api>::Texture);
989
990    /// A hook for when a wgpu-core texture is created from a raw wgpu-hal texture.
991    unsafe fn add_raw_texture(&self, texture: &<Self::A as Api>::Texture);
992
993    unsafe fn create_texture_view(
994        &self,
995        texture: &<Self::A as Api>::Texture,
996        desc: &TextureViewDescriptor,
997    ) -> Result<<Self::A as Api>::TextureView, DeviceError>;
998    unsafe fn destroy_texture_view(&self, view: <Self::A as Api>::TextureView);
999    unsafe fn create_sampler(
1000        &self,
1001        desc: &SamplerDescriptor,
1002    ) -> Result<<Self::A as Api>::Sampler, DeviceError>;
1003    unsafe fn destroy_sampler(&self, sampler: <Self::A as Api>::Sampler);
1004
1005    /// Create a fresh [`CommandEncoder`].
1006    ///
1007    /// The new `CommandEncoder` is in the "closed" state.
1008    unsafe fn create_command_encoder(
1009        &self,
1010        desc: &CommandEncoderDescriptor<<Self::A as Api>::Queue>,
1011    ) -> Result<<Self::A as Api>::CommandEncoder, DeviceError>;
1012
1013    /// Creates a bind group layout.
1014    unsafe fn create_bind_group_layout(
1015        &self,
1016        desc: &BindGroupLayoutDescriptor,
1017    ) -> Result<<Self::A as Api>::BindGroupLayout, DeviceError>;
1018    unsafe fn destroy_bind_group_layout(&self, bg_layout: <Self::A as Api>::BindGroupLayout);
1019    unsafe fn create_pipeline_layout(
1020        &self,
1021        desc: &PipelineLayoutDescriptor<<Self::A as Api>::BindGroupLayout>,
1022    ) -> Result<<Self::A as Api>::PipelineLayout, DeviceError>;
1023    unsafe fn destroy_pipeline_layout(&self, pipeline_layout: <Self::A as Api>::PipelineLayout);
1024
1025    #[allow(clippy::type_complexity)]
1026    unsafe fn create_bind_group(
1027        &self,
1028        desc: &BindGroupDescriptor<
1029            <Self::A as Api>::BindGroupLayout,
1030            <Self::A as Api>::Buffer,
1031            <Self::A as Api>::Sampler,
1032            <Self::A as Api>::TextureView,
1033            <Self::A as Api>::AccelerationStructure,
1034        >,
1035    ) -> Result<<Self::A as Api>::BindGroup, DeviceError>;
1036    unsafe fn destroy_bind_group(&self, group: <Self::A as Api>::BindGroup);
1037
1038    unsafe fn create_shader_module(
1039        &self,
1040        desc: &ShaderModuleDescriptor,
1041        shader: ShaderInput,
1042    ) -> Result<<Self::A as Api>::ShaderModule, ShaderError>;
1043    unsafe fn destroy_shader_module(&self, module: <Self::A as Api>::ShaderModule);
1044
1045    #[allow(clippy::type_complexity)]
1046    unsafe fn create_render_pipeline(
1047        &self,
1048        desc: &RenderPipelineDescriptor<
1049            <Self::A as Api>::PipelineLayout,
1050            <Self::A as Api>::ShaderModule,
1051            <Self::A as Api>::PipelineCache,
1052        >,
1053    ) -> Result<<Self::A as Api>::RenderPipeline, PipelineError>;
1054    unsafe fn destroy_render_pipeline(&self, pipeline: <Self::A as Api>::RenderPipeline);
1055
1056    #[allow(clippy::type_complexity)]
1057    unsafe fn create_compute_pipeline(
1058        &self,
1059        desc: &ComputePipelineDescriptor<
1060            <Self::A as Api>::PipelineLayout,
1061            <Self::A as Api>::ShaderModule,
1062            <Self::A as Api>::PipelineCache,
1063        >,
1064    ) -> Result<<Self::A as Api>::ComputePipeline, PipelineError>;
1065    unsafe fn destroy_compute_pipeline(&self, pipeline: <Self::A as Api>::ComputePipeline);
1066
1067    unsafe fn create_pipeline_cache(
1068        &self,
1069        desc: &PipelineCacheDescriptor<'_>,
1070    ) -> Result<<Self::A as Api>::PipelineCache, PipelineCacheError>;
1071    fn pipeline_cache_validation_key(&self) -> Option<[u8; 16]> {
1072        None
1073    }
1074    unsafe fn destroy_pipeline_cache(&self, cache: <Self::A as Api>::PipelineCache);
1075
1076    unsafe fn create_query_set(
1077        &self,
1078        desc: &wgt::QuerySetDescriptor<Label>,
1079    ) -> Result<<Self::A as Api>::QuerySet, DeviceError>;
1080    unsafe fn destroy_query_set(&self, set: <Self::A as Api>::QuerySet);
1081    unsafe fn create_fence(&self) -> Result<<Self::A as Api>::Fence, DeviceError>;
1082    unsafe fn destroy_fence(&self, fence: <Self::A as Api>::Fence);
1083    unsafe fn get_fence_value(
1084        &self,
1085        fence: &<Self::A as Api>::Fence,
1086    ) -> Result<FenceValue, DeviceError>;
1087
1088    /// Wait for `fence` to reach `value`.
1089    ///
1090    /// Operations like [`Queue::submit`] can accept a [`Fence`] and a
1091    /// [`FenceValue`] to store in it, so you can use this `wait` function
1092    /// to wait for a given queue submission to finish execution.
1093    ///
1094    /// The `value` argument must be a value that some actual operation you have
1095    /// already presented to the device is going to store in `fence`. You cannot
1096    /// wait for values yet to be submitted. (This restriction accommodates
1097    /// implementations like the `vulkan` backend's [`FencePool`] that must
1098    /// allocate a distinct synchronization object for each fence value one is
1099    /// able to wait for.)
1100    ///
1101    /// Calling `wait` with a lower [`FenceValue`] than `fence`'s current value
1102    /// returns immediately.
1103    ///
1104    /// If `timeout` is not provided, the function will block indefinitely or until
1105    /// an error is encountered.
1106    ///
1107    /// Returns `Ok(true)` on success and `Ok(false)` on timeout.
1108    ///
1109    /// [`Fence`]: Api::Fence
1110    /// [`FencePool`]: vulkan/enum.Fence.html#variant.FencePool
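    ///
    /// # Example
    ///
    /// A minimal sketch of waiting for one submission to finish, assuming
    /// `device`, `queue`, `fence`, and a recorded `cmd_buf` already exist
    /// (hypothetical names), and eliding error handling:
    ///
    /// ```ignore
    /// use core::time::Duration;
    ///
    /// let value = unsafe { device.get_fence_value(&fence)? } + 1;
    /// unsafe { queue.submit(&[&cmd_buf], &[], (&mut fence, value))? };
    /// // Block for up to one second; `Ok(false)` means the timeout elapsed.
    /// let finished = unsafe { device.wait(&fence, value, Some(Duration::from_secs(1)))? };
    /// ```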
1111    unsafe fn wait(
1112        &self,
1113        fence: &<Self::A as Api>::Fence,
1114        value: FenceValue,
1115        timeout: Option<core::time::Duration>,
1116    ) -> Result<bool, DeviceError>;
1117
1118    /// Start a graphics debugger capture.
1119    ///
1120    /// # Safety
1121    ///
1122    /// See [`wgpu::Device::start_graphics_debugger_capture`][api] for more details.
1123    ///
1124    /// [api]: ../wgpu/struct.Device.html#method.start_graphics_debugger_capture
1125    unsafe fn start_graphics_debugger_capture(&self) -> bool;
1126
1127    /// Stop a graphics debugger capture.
1128    ///
1129    /// # Safety
1130    ///
1131    /// See [`wgpu::Device::stop_graphics_debugger_capture`][api] for more details.
1132    ///
1133    /// [api]: ../wgpu/struct.Device.html#method.stop_graphics_debugger_capture
1134    unsafe fn stop_graphics_debugger_capture(&self);
1135
1136    #[allow(unused_variables)]
1137    unsafe fn pipeline_cache_get_data(
1138        &self,
1139        cache: &<Self::A as Api>::PipelineCache,
1140    ) -> Option<Vec<u8>> {
1141        None
1142    }
1143
1144    unsafe fn create_acceleration_structure(
1145        &self,
1146        desc: &AccelerationStructureDescriptor,
1147    ) -> Result<<Self::A as Api>::AccelerationStructure, DeviceError>;
1148    unsafe fn get_acceleration_structure_build_sizes(
1149        &self,
1150        desc: &GetAccelerationStructureBuildSizesDescriptor<<Self::A as Api>::Buffer>,
1151    ) -> AccelerationStructureBuildSizes;
1152    unsafe fn get_acceleration_structure_device_address(
1153        &self,
1154        acceleration_structure: &<Self::A as Api>::AccelerationStructure,
1155    ) -> wgt::BufferAddress;
1156    unsafe fn destroy_acceleration_structure(
1157        &self,
1158        acceleration_structure: <Self::A as Api>::AccelerationStructure,
1159    );
1160    fn tlas_instance_to_bytes(&self, instance: TlasInstance) -> Vec<u8>;
1161
1162    fn get_internal_counters(&self) -> wgt::HalCounters;
1163
1164    fn generate_allocator_report(&self) -> Option<wgt::AllocatorReport> {
1165        None
1166    }
1167
1168    fn check_if_oom(&self) -> Result<(), DeviceError>;
1169}
1170
1171pub trait Queue: WasmNotSendSync {
1172    type A: Api;
1173
1174    /// Submit `command_buffers` for execution on GPU.
1175    ///
1176    /// Update `fence` to `value` when the operation is complete. See
1177    /// [`Fence`] for details.
1178    ///
1179    /// All command buffers submitted to a `wgpu_hal` queue are executed in the
1180    /// order they're submitted, with each buffer able to observe the effects of
1181    /// previous buffers' execution. Specifically:
1182    ///
1183    /// - If two calls to `submit` on a single `Queue` occur in a particular
1184    ///   order (that is, they happen on the same thread, or on two threads that
1185    ///   have synchronized to establish an ordering), then the first
1186    ///   submission's commands all complete execution before any of the second
1187    ///   submission's commands begin. All results produced by one submission
1188    ///   are visible to the next.
1189    ///
1190    /// - Within a submission, command buffers execute in the order in which they
1191    ///   appear in `command_buffers`. All results produced by one buffer are
1192    ///   visible to the next.
1193    ///
1194    /// If two calls to `submit` on a single `Queue` from different threads are
1195    /// not synchronized to occur in a particular order, they must pass distinct
1196    /// [`Fence`]s. As explained in the [`Fence`] documentation, waiting for
1197    /// operations to complete is only trustworthy when operations finish in
1198    /// order of increasing fence value, but submissions from different threads
1199    /// cannot determine how to order the fence values if the submissions
1200    /// themselves are unordered. If each thread uses a separate [`Fence`], this
1201    /// problem does not arise.
1202    ///
1203    /// # Safety
1204    ///
1205    /// - Each [`CommandBuffer`][cb] in `command_buffers` must have been created
1206    ///   from a [`CommandEncoder`][ce] that was constructed from the
1207    ///   [`Device`][d] associated with this [`Queue`].
1208    ///
1209    /// - Each [`CommandBuffer`][cb] must remain alive until the submitted
1210    ///   commands have finished execution. Since command buffers must not
1211    ///   outlive their encoders, this implies that the encoders must remain
1212    ///   alive as well.
1213    ///
1214    /// - All resources used by a submitted [`CommandBuffer`][cb]
1215    ///   ([`Texture`][t]s, [`BindGroup`][bg]s, [`RenderPipeline`][rp]s, and so
1216    ///   on) must remain alive until the command buffer finishes execution.
1217    ///
1218    /// - Every [`SurfaceTexture`][st] that any command in `command_buffers`
1219    ///   writes to must appear in the `surface_textures` argument.
1220    ///
1221    /// - No [`SurfaceTexture`][st] may appear in the `surface_textures`
1222    ///   argument more than once.
1223    ///
1224    /// - Each [`SurfaceTexture`][st] in `surface_textures` must be configured
1225    ///   for use with the [`Device`][d] associated with this [`Queue`],
1226    ///   typically by calling [`Surface::configure`].
1227    ///
1228    /// - All calls to this function that include a given [`SurfaceTexture`][st]
1229    ///   in `surface_textures` must use the same [`Fence`].
1230    ///
1231    /// - The [`Fence`] passed as `signal_fence.0` must remain alive until
1232    ///   all submissions that will signal it have completed.
1233    ///
1234    /// [`Fence`]: Api::Fence
1235    /// [cb]: Api::CommandBuffer
1236    /// [ce]: Api::CommandEncoder
1237    /// [d]: Api::Device
1238    /// [t]: Api::Texture
1239    /// [bg]: Api::BindGroup
1240    /// [rp]: Api::RenderPipeline
1241    /// [st]: Api::SurfaceTexture
1242    unsafe fn submit(
1243        &self,
1244        command_buffers: &[&<Self::A as Api>::CommandBuffer],
1245        surface_textures: &[&<Self::A as Api>::SurfaceTexture],
1246        signal_fence: (&mut <Self::A as Api>::Fence, FenceValue),
1247    ) -> Result<(), DeviceError>;
1248    /// Present a surface texture to the screen.
1249    ///
1250    /// This consumes the surface texture, returning it to the swapchain.
1251    ///
1252    /// # Safety
1253    ///
1254    /// - `texture` must have been acquired from `surface` via
1255    ///   [`Surface::acquire_texture`] and not yet presented or discarded.
1256    /// - `surface` must be configured for use with the [`Device`][d] associated
1257    ///   with this [`Queue`].
1258    /// - `texture` must be in the "present" state. Either:
1259    ///   - It was passed in [`submit`][s]'s `surface_textures` argument
1260    ///     (which transitions it to the present state), or
1261    ///   - The caller has otherwise transitioned it (e.g. via a clear +
1262    ///     barrier to `PRESENT` for textures that were never rendered to).
1263    /// - Any command buffers that write to `texture` must have been submitted
1264    ///   via [`submit`][s] before this call. The submissions do not need to
1265    ///   have completed on the GPU; platform-level synchronization handles the
1266    ///   ordering between rendering and display.
1267    /// - Must be externally synchronized with all other queue operations
1268    ///   ([`submit`][s], [`present`][Queue::present],
1269    ///   [`wait_for_idle`][Queue::wait_for_idle]) on the same queue.
1270    ///
1271    /// [d]: Api::Device
1272    /// [s]: Queue::submit
1273    unsafe fn present(
1274        &self,
1275        surface: &<Self::A as Api>::Surface,
1276        texture: <Self::A as Api>::SurfaceTexture,
1277    ) -> Result<(), SurfaceError>;
1278    /// Block until all previously submitted work on this queue has completed,
1279    /// including any pending presentations.
1280    ///
1281    /// # Safety
1282    ///
1283    /// - Must be externally synchronized with all other queue operations
1284    ///   ([`submit`][Queue::submit], [`present`][Queue::present],
1285    ///   [`wait_for_idle`][Queue::wait_for_idle]) on the same queue.
1286    unsafe fn wait_for_idle(&self) -> Result<(), DeviceError>;
1287    unsafe fn get_timestamp_period(&self) -> f32;
1288}
1289
1290/// Encoder and allocation pool for `CommandBuffer`s.
1291///
1292/// A `CommandEncoder` not only constructs `CommandBuffer`s but also
1293/// acts as the allocation pool that owns the buffers' underlying
1294/// storage. Thus, `CommandBuffer`s must not outlive the
1295/// `CommandEncoder` that created them.
1296///
1297/// The life cycle of a `CommandBuffer` is as follows:
1298///
1299/// - Call [`Device::create_command_encoder`] to create a new
1300///   `CommandEncoder`, in the "closed" state.
1301///
1302/// - Call `begin_encoding` on a closed `CommandEncoder` to begin
1303///   recording commands. This puts the `CommandEncoder` in the
1304///   "recording" state.
1305///
1306/// - Call methods like `copy_buffer_to_buffer`, `begin_render_pass`,
1307///   etc. on a "recording" `CommandEncoder` to add commands to the
1308///   list. (If an error occurs, you must call `discard_encoding`; see
1309///   below.)
1310///
1311/// - Call `end_encoding` on a recording `CommandEncoder` to close the
1312///   encoder and construct a fresh `CommandBuffer` consisting of the
1313///   list of commands recorded up to that point.
1314///
1315/// - Call `discard_encoding` on a recording `CommandEncoder` to drop
1316///   the commands recorded thus far and close the encoder. This is
1317///   the only safe thing to do on a `CommandEncoder` if an error has
1318///   occurred while recording commands.
1319///
1320/// - Call `reset_all` on a closed `CommandEncoder`, passing all the
1321///   live `CommandBuffers` built from it. All the `CommandBuffer`s
1322///   are destroyed, and their resources are freed.
1323///
1324/// # Safety
1325///
1326/// - The `CommandEncoder` must be in the states described above to
1327///   make the given calls.
1328///
1329/// - A `CommandBuffer` that has been submitted for execution on the
1330///   GPU must live until its execution is complete.
1331///
1332/// - A `CommandBuffer` must not outlive the `CommandEncoder` that
1333///   built it.
1334///
1335/// It is the user's responsibility to meet these requirements. This
1336/// allows `CommandEncoder` implementations to keep their state
1337/// tracking to a minimum.
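///
/// # Example
///
/// A minimal sketch of the state transitions above, assuming `device` and the
/// relevant descriptors and arguments already exist (hypothetical names), and
/// eliding error handling:
///
/// ```ignore
/// let mut encoder = unsafe { device.create_command_encoder(&desc)? }; // "closed"
/// unsafe { encoder.begin_encoding(Some("example"))? };                // "recording"
/// unsafe { encoder.copy_buffer_to_buffer(&src, &dst, regions) };
/// let cmd_buf = unsafe { encoder.end_encoding()? };                   // "closed" again
/// // ... submit `cmd_buf` and wait for it to finish execution ...
/// unsafe { encoder.reset_all(core::iter::once(cmd_buf)) };
/// ```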
1338pub trait CommandEncoder: WasmNotSendSync + fmt::Debug {
1339    type A: Api;
1340
1341    /// Begin encoding a new command buffer.
1342    ///
1343    /// This puts this `CommandEncoder` in the "recording" state.
1344    ///
1345    /// # Safety
1346    ///
1347    /// This `CommandEncoder` must be in the "closed" state.
1348    unsafe fn begin_encoding(&mut self, label: Label) -> Result<(), DeviceError>;
1349
1350    /// Discard the command list under construction.
1351    ///
1352    /// If an error has occurred while recording commands, this
1353    /// is the only safe thing to do with the encoder.
1354    ///
1355    /// This puts this `CommandEncoder` in the "closed" state.
1356    ///
1357    /// # Safety
1358    ///
1359    /// This `CommandEncoder` must be in the "recording" state.
1360    ///
1361    /// Callers must not assume that implementations of this
1362    /// function are idempotent, and thus should not call it
1363    /// multiple times in a row.
1364    unsafe fn discard_encoding(&mut self);
1365
1366    /// Return a fresh [`CommandBuffer`] holding the recorded commands.
1367    ///
1368    /// The returned [`CommandBuffer`] holds all the commands recorded
1369    /// on this `CommandEncoder` since the last call to
1370    /// [`begin_encoding`].
1371    ///
1372    /// This puts this `CommandEncoder` in the "closed" state.
1373    ///
1374    /// # Safety
1375    ///
1376    /// This `CommandEncoder` must be in the "recording" state.
1377    ///
1378    /// The returned [`CommandBuffer`] must not outlive this
1379    /// `CommandEncoder`. Implementations are allowed to build
1380    /// `CommandBuffer`s that depend on storage owned by this
1381    /// `CommandEncoder`.
1382    ///
1383    /// [`CommandBuffer`]: Api::CommandBuffer
1384    /// [`begin_encoding`]: CommandEncoder::begin_encoding
1385    unsafe fn end_encoding(&mut self) -> Result<<Self::A as Api>::CommandBuffer, DeviceError>;
1386
1387    /// Reclaim all resources belonging to this `CommandEncoder`.
1388    ///
1389    /// # Safety
1390    ///
1391    /// This `CommandEncoder` must be in the "closed" state.
1392    ///
1393    /// The `command_buffers` iterator must produce all the live
1394    /// [`CommandBuffer`]s built using this `CommandEncoder` --- that
1395    /// is, every extant `CommandBuffer` returned from `end_encoding`.
1396    ///
1397    /// [`CommandBuffer`]: Api::CommandBuffer
1398    unsafe fn reset_all<I>(&mut self, command_buffers: I)
1399    where
1400        I: Iterator<Item = <Self::A as Api>::CommandBuffer>;
1401
1402    unsafe fn transition_buffers<'a, T>(&mut self, barriers: T)
1403    where
1404        T: Iterator<Item = BufferBarrier<'a, <Self::A as Api>::Buffer>>;
1405
1406    unsafe fn transition_textures<'a, T>(&mut self, barriers: T)
1407    where
1408        T: Iterator<Item = TextureBarrier<'a, <Self::A as Api>::Texture>>;
1409
1410    // copy operations
1411
1412    unsafe fn clear_buffer(&mut self, buffer: &<Self::A as Api>::Buffer, range: MemoryRange);
1413
1414    unsafe fn copy_buffer_to_buffer<T>(
1415        &mut self,
1416        src: &<Self::A as Api>::Buffer,
1417        dst: &<Self::A as Api>::Buffer,
1418        regions: T,
1419    ) where
1420        T: Iterator<Item = BufferCopy>;
1421
1422    /// Copy from an external image to an internal texture.
1423    /// Works with a single array layer.
1424    /// Note: `dst`'s current usage must be `wgt::TextureUses::COPY_DST`.
1425    /// Note: the copy extent is given in physical size (rounded up to the block size).
1426    #[cfg(webgl)]
1427    unsafe fn copy_external_image_to_texture<T>(
1428        &mut self,
1429        src: &wgt::CopyExternalImageSourceInfo,
1430        dst: &<Self::A as Api>::Texture,
1431        dst_premultiplication: bool,
1432        regions: T,
1433    ) where
1434        T: Iterator<Item = TextureCopy>;
1435
1436    /// Copy from one texture to another.
1437    /// Works with a single array layer.
1438    /// Note: `dst`'s current usage must be `wgt::TextureUses::COPY_DST`.
1439    /// Note: the copy extent is given in physical size (rounded up to the block size).
1440    unsafe fn copy_texture_to_texture<T>(
1441        &mut self,
1442        src: &<Self::A as Api>::Texture,
1443        src_usage: wgt::TextureUses,
1444        dst: &<Self::A as Api>::Texture,
1445        regions: T,
1446    ) where
1447        T: Iterator<Item = TextureCopy>;
1448
1449    /// Copy from buffer to texture.
1450    /// Works with a single array layer.
1451    /// Note: the current usage of `dst` must be `wgt::TextureUses::COPY_DST`.
1452    /// Note: the copy extent is in physical size (rounded up to the block size).
1453    unsafe fn copy_buffer_to_texture<T>(
1454        &mut self,
1455        src: &<Self::A as Api>::Buffer,
1456        dst: &<Self::A as Api>::Texture,
1457        regions: T,
1458    ) where
1459        T: Iterator<Item = BufferTextureCopy>;
1460
1461    /// Copy from texture to buffer.
1462    /// Works with a single array layer.
1463    /// Note: the copy extent is in physical size (rounded up to the block size).
1464    unsafe fn copy_texture_to_buffer<T>(
1465        &mut self,
1466        src: &<Self::A as Api>::Texture,
1467        src_usage: wgt::TextureUses,
1468        dst: &<Self::A as Api>::Buffer,
1469        regions: T,
1470    ) where
1471        T: Iterator<Item = BufferTextureCopy>;
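The "physical size" notes above refer to rounding each copy dimension up to the format's compression block size. A standalone sketch of that rounding, assuming a 4×4 block (as used by BC, ETC2, and ASTC-4x4 formats); this helper is illustrative and not part of `wgpu-hal`:

```rust
/// Illustrative only: round a copy extent up to the compression block
/// size, as the "physical size" notes above describe.
fn physical_extent(width: u32, height: u32, block: u32) -> (u32, u32) {
    (width.div_ceil(block) * block, height.div_ceil(block) * block)
}

fn main() {
    // A 13x6 mip level of a 4x4 block-compressed format occupies 16x8 texels.
    assert_eq!(physical_extent(13, 6, 4), (16, 8));
    // Already block-aligned extents are unchanged.
    assert_eq!(physical_extent(16, 8, 4), (16, 8));
}
```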
1472
1473    unsafe fn copy_acceleration_structure_to_acceleration_structure(
1474        &mut self,
1475        src: &<Self::A as Api>::AccelerationStructure,
1476        dst: &<Self::A as Api>::AccelerationStructure,
1477        copy: wgt::AccelerationStructureCopy,
1478    );
1479    // pass common
1480
1481    /// Sets the bind group at `index` to `group`.
1482    ///
1483    /// If this is not the first call to `set_bind_group` within the current
1484    /// render or compute pass:
1485    ///
1486    /// - If `layout` contains `n` bind group layouts, then any previously set
1487    ///   bind groups at indices `n` or higher are cleared.
1488    ///
1489    /// - If the first `m` bind group layouts of `layout` are equal to those of
1490    ///   the previously passed layout, but no more, then any previously set
1491    ///   bind groups at indices `m` or higher are cleared.
1492    ///
1493    /// It follows from the above that passing the same layout as before doesn't
1494    /// clear any bind groups.
1495    ///
1496    /// # Safety
1497    ///
1498    /// - This [`CommandEncoder`] must be within a render or compute pass.
1499    ///
1500    /// - `index` must be the valid index of some bind group layout in `layout`.
1501    ///   Call this the "relevant bind group layout".
1502    ///
1503    /// - The layout of `group` must be equal to the relevant bind group layout.
1504    ///
1505    /// - The length of `dynamic_offsets` must match the number of buffer
1506    ///   bindings [with dynamic offsets][hdo] in the relevant bind group
1507    ///   layout.
1508    ///
1509    /// - If those buffer bindings are ordered by increasing [`binding` number]
1510    ///   and paired with elements from `dynamic_offsets`, then each offset must
1511    ///   be a valid offset for the binding's corresponding buffer in `group`.
1512    ///
1513    /// [hdo]: wgt::BindingType::Buffer::has_dynamic_offset
1514    /// [`binding` number]: wgt::BindGroupLayoutEntry::binding
1515    unsafe fn set_bind_group(
1516        &mut self,
1517        layout: &<Self::A as Api>::PipelineLayout,
1518        index: u32,
1519        group: &<Self::A as Api>::BindGroup,
1520        dynamic_offsets: &[wgt::DynamicOffset],
1521    );
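The clearing rules above reduce to "bind groups survive up to the common prefix of the old and new pipeline layouts' bind group layouts". A sketch outside the trait, modeling bind group layout identity with plain IDs (real backends compare the layouts themselves); illustrative only:

```rust
/// Illustrative only: how many previously set bind group slots survive a
/// `set_bind_group` call that switches to `new_layout`.
fn surviving_bind_groups(prev_layout: &[u32], new_layout: &[u32]) -> usize {
    prev_layout
        .iter()
        .zip(new_layout)
        .take_while(|(a, b)| a == b)
        .count()
}

fn main() {
    // Same layout: no bind groups are cleared.
    assert_eq!(surviving_bind_groups(&[1, 2, 3], &[1, 2, 3]), 3);
    // Prefix of length 2 matches: groups at indices 2 and higher are cleared.
    assert_eq!(surviving_bind_groups(&[1, 2, 3], &[1, 2, 9]), 2);
    // Shorter new layout: groups at indices >= its length are cleared.
    assert_eq!(surviving_bind_groups(&[1, 2, 3], &[1, 2]), 2);
}
```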
1522
1523    /// Sets a range in immediate data.
1524    ///
1525    /// IMPORTANT: while the data is passed as words, the offset is in bytes!
1526    ///
1527    /// # Safety
1528    ///
1529    /// - `offset_bytes` must be a multiple of 4.
1530    /// - The range of immediates written must be valid for the pipeline layout at draw time.
1531    unsafe fn set_immediates(
1532        &mut self,
1533        layout: &<Self::A as Api>::PipelineLayout,
1534        offset_bytes: u32,
1535        data: &[u32],
1536    );
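The words/bytes mismatch noted above is easy to get wrong, so here is a minimal sketch of computing the offset; the actual `set_immediates` call is shown only as a comment, since `encoder` and `layout` are hypothetical names requiring a live device:

```rust
fn main() {
    // `data` is passed as 32-bit words, but `offset_bytes` is in bytes:
    // writing starting at word index 3 needs offset 3 * 4 = 12.
    let word_index: u32 = 3;
    let offset_bytes = word_index * 4;
    assert_eq!(offset_bytes, 12);
    assert_eq!(offset_bytes % 4, 0); // required: a multiple of 4
    // A real call (hypothetical `encoder` and `layout` in scope):
    // unsafe { encoder.set_immediates(&layout, offset_bytes, &[0xdead_beef, 42]) };
}
```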
1537
1538    unsafe fn insert_debug_marker(&mut self, label: &str);
1539    unsafe fn begin_debug_marker(&mut self, group_label: &str);
1540    unsafe fn end_debug_marker(&mut self);
1541
1542    // queries
1543
1544    /// # Safety
1545    ///
1546    /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1547    unsafe fn begin_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1548    /// # Safety
1549    ///
1550    /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1551    unsafe fn end_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1552    unsafe fn write_timestamp(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1553    unsafe fn reset_queries(&mut self, set: &<Self::A as Api>::QuerySet, range: Range<u32>);
1554    unsafe fn copy_query_results(
1555        &mut self,
1556        set: &<Self::A as Api>::QuerySet,
1557        range: Range<u32>,
1558        buffer: &<Self::A as Api>::Buffer,
1559        offset: wgt::BufferAddress,
1560        stride: wgt::BufferSize,
1561    );
1562
1563    // render passes
1564
1565    /// Begin a new render pass, clearing all active bindings.
1566    ///
1567    /// This clears any bindings established by the following calls:
1568    ///
1569    /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1570    /// - [`set_immediates`](CommandEncoder::set_immediates)
1571    /// - [`begin_query`](CommandEncoder::begin_query)
1572    /// - [`set_render_pipeline`](CommandEncoder::set_render_pipeline)
1573    /// - [`set_index_buffer`](CommandEncoder::set_index_buffer)
1574    /// - [`set_vertex_buffer`](CommandEncoder::set_vertex_buffer)
1575    ///
1576    /// # Safety
1577    ///
1578    /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1579    ///   by a call to [`end_render_pass`].
1580    ///
1581    /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1582    ///   by a call to [`end_compute_pass`].
1583    ///
1584    /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1585    /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1586    /// [`end_render_pass`]: CommandEncoder::end_render_pass
1587    /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1588    unsafe fn begin_render_pass(
1589        &mut self,
1590        desc: &RenderPassDescriptor<<Self::A as Api>::QuerySet, <Self::A as Api>::TextureView>,
1591    ) -> Result<(), DeviceError>;
1592
1593    /// End the current render pass.
1594    ///
1595    /// # Safety
1596    ///
1597    /// - There must have been a prior call to [`begin_render_pass`] on this [`CommandEncoder`]
1598    ///   that has not been followed by a call to [`end_render_pass`].
1599    ///
1600    /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1601    /// [`end_render_pass`]: CommandEncoder::end_render_pass
1602    unsafe fn end_render_pass(&mut self);
1603
1604    unsafe fn set_render_pipeline(&mut self, pipeline: &<Self::A as Api>::RenderPipeline);
1605
1606    unsafe fn set_index_buffer<'a>(
1607        &mut self,
1608        binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1609        format: wgt::IndexFormat,
1610    );
1611    unsafe fn set_vertex_buffer<'a>(
1612        &mut self,
1613        index: u32,
1614        binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1615    );
1616    unsafe fn set_viewport(&mut self, rect: &Rect<f32>, depth_range: Range<f32>);
1617    unsafe fn set_scissor_rect(&mut self, rect: &Rect<u32>);
1618    unsafe fn set_stencil_reference(&mut self, value: u32);
1619    unsafe fn set_blend_constants(&mut self, color: &[f32; 4]);
1620
1621    unsafe fn draw(
1622        &mut self,
1623        first_vertex: u32,
1624        vertex_count: u32,
1625        first_instance: u32,
1626        instance_count: u32,
1627    );
1628    unsafe fn draw_indexed(
1629        &mut self,
1630        first_index: u32,
1631        index_count: u32,
1632        base_vertex: i32,
1633        first_instance: u32,
1634        instance_count: u32,
1635    );
1636    unsafe fn draw_indirect(
1637        &mut self,
1638        buffer: &<Self::A as Api>::Buffer,
1639        offset: wgt::BufferAddress,
1640        draw_count: u32,
1641    );
1642    unsafe fn draw_indexed_indirect(
1643        &mut self,
1644        buffer: &<Self::A as Api>::Buffer,
1645        offset: wgt::BufferAddress,
1646        draw_count: u32,
1647    );
1648    unsafe fn draw_indirect_count(
1649        &mut self,
1650        buffer: &<Self::A as Api>::Buffer,
1651        offset: wgt::BufferAddress,
1652        count_buffer: &<Self::A as Api>::Buffer,
1653        count_offset: wgt::BufferAddress,
1654        max_count: u32,
1655    );
1656    unsafe fn draw_indexed_indirect_count(
1657        &mut self,
1658        buffer: &<Self::A as Api>::Buffer,
1659        offset: wgt::BufferAddress,
1660        count_buffer: &<Self::A as Api>::Buffer,
1661        count_offset: wgt::BufferAddress,
1662        max_count: u32,
1663    );
1664    unsafe fn draw_mesh_tasks(
1665        &mut self,
1666        group_count_x: u32,
1667        group_count_y: u32,
1668        group_count_z: u32,
1669    );
1670    unsafe fn draw_mesh_tasks_indirect(
1671        &mut self,
1672        buffer: &<Self::A as Api>::Buffer,
1673        offset: wgt::BufferAddress,
1674        draw_count: u32,
1675    );
1676    unsafe fn draw_mesh_tasks_indirect_count(
1677        &mut self,
1678        buffer: &<Self::A as Api>::Buffer,
1679        offset: wgt::BufferAddress,
1680        count_buffer: &<Self::A as Api>::Buffer,
1681        count_offset: wgt::BufferAddress,
1682        max_count: u32,
1683    );
1684
1685    // compute passes
1686
1687    /// Begin a new compute pass, clearing all active bindings.
1688    ///
1689    /// This clears any bindings established by the following calls:
1690    ///
1691    /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1692    /// - [`set_immediates`](CommandEncoder::set_immediates)
1693    /// - [`begin_query`](CommandEncoder::begin_query)
1694    /// - [`set_compute_pipeline`](CommandEncoder::set_compute_pipeline)
1695    ///
1696    /// # Safety
1697    ///
1698    /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1699    ///   by a call to [`end_render_pass`].
1700    ///
1701    /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1702    ///   by a call to [`end_compute_pass`].
1703    ///
1704    /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1705    /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1706    /// [`end_render_pass`]: CommandEncoder::end_render_pass
1707    /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1708    unsafe fn begin_compute_pass(
1709        &mut self,
1710        desc: &ComputePassDescriptor<<Self::A as Api>::QuerySet>,
1711    );
1712
1713    /// End the current compute pass.
1714    ///
1715    /// # Safety
1716    ///
1717    /// - There must have been a prior call to [`begin_compute_pass`] on this [`CommandEncoder`]
1718    ///   that has not been followed by a call to [`end_compute_pass`].
1719    ///
1720    /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1721    /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1722    unsafe fn end_compute_pass(&mut self);
1723
1724    unsafe fn set_compute_pipeline(&mut self, pipeline: &<Self::A as Api>::ComputePipeline);
1725
1726    unsafe fn dispatch_workgroups(&mut self, count: [u32; 3]);
1727    unsafe fn dispatch_workgroups_indirect(
1728        &mut self,
1729        buffer: &<Self::A as Api>::Buffer,
1730        offset: wgt::BufferAddress,
1731    );
1732
1733    /// To get the required sizes for the buffer allocations, use `get_acceleration_structure_build_sizes` per descriptor.
1734    /// All buffers must be synchronized externally.
1735    /// Each buffer region that is written to may only be passed once per function call,
1736    /// with the exception of updates in the same descriptor.
1737    /// Consequences of this limitation:
1738    /// - scratch buffers need to be unique
1739    /// - a TLAS can't be built in the same call as a BLAS it contains
1740    unsafe fn build_acceleration_structures<'a, T>(
1741        &mut self,
1742        descriptor_count: u32,
1743        descriptors: T,
1744    ) where
1745        Self::A: 'a,
1746        T: IntoIterator<
1747            Item = BuildAccelerationStructureDescriptor<
1748                'a,
1749                <Self::A as Api>::Buffer,
1750                <Self::A as Api>::AccelerationStructure,
1751            >,
1752        >;
1753    unsafe fn place_acceleration_structure_barrier(
1754        &mut self,
1755        barrier: AccelerationStructureBarrier,
1756    );
1757    // Modeled on DX12, whose behavior can be polyfilled in Vulkan, as opposed to the other way round.
1758    unsafe fn read_acceleration_structure_compact_size(
1759        &mut self,
1760        acceleration_structure: &<Self::A as Api>::AccelerationStructure,
1761        buf: &<Self::A as Api>::Buffer,
1762    );
1763    unsafe fn set_acceleration_structure_dependencies(
1764        command_buffers: &[&<Self::A as Api>::CommandBuffer],
1765        dependencies: &[&<Self::A as Api>::AccelerationStructure],
1766    );
1767}
1768
1769bitflags!(
1770    /// Pipeline layout creation flags.
1771    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1772    pub struct PipelineLayoutFlags: u32 {
1773        /// D3D12: Add support for `first_vertex` and `first_instance` builtins
1774        /// via immediates for direct execution.
1775        const FIRST_VERTEX_INSTANCE = 1 << 0;
1776        /// D3D12: Add support for `num_workgroups` builtins via immediates
1777        /// for direct execution.
1778        const NUM_WORK_GROUPS = 1 << 1;
1779        /// D3D12: Add support for the builtins that the other flags enable for
1780        /// indirect execution.
1781        const INDIRECT_BUILTIN_UPDATE = 1 << 2;
1782    }
1783);
1784
1785bitflags!(
1786    /// Bind group layout creation flags.
1787    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1788    pub struct BindGroupLayoutFlags: u32 {
1789        /// Allows for bind group binding arrays to be shorter than the array in the BGL.
1790        const PARTIALLY_BOUND = 1 << 0;
1791    }
1792);
1793
1794bitflags!(
1795    /// Texture format capability flags.
1796    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1797    pub struct TextureFormatCapabilities: u32 {
1798        /// Format can be sampled.
1799        const SAMPLED = 1 << 0;
1800        /// Format can be sampled with a linear sampler.
1801        const SAMPLED_LINEAR = 1 << 1;
1802        /// Format can be sampled with a min/max reduction sampler.
1803        const SAMPLED_MINMAX = 1 << 2;
1804
1805        /// Format can be used as storage with read-only access.
1806        const STORAGE_READ_ONLY = 1 << 3;
1807        /// Format can be used as storage with write-only access.
1808        const STORAGE_WRITE_ONLY = 1 << 4;
1809        /// Format can be used as storage with both read and write access.
1810        const STORAGE_READ_WRITE = 1 << 5;
1811        /// Format can be used as storage with atomics.
1812        const STORAGE_ATOMIC = 1 << 6;
1813
1814        /// Format can be used as color and input attachment.
1815        const COLOR_ATTACHMENT = 1 << 7;
1816        /// Format can be used as color (with blending) and input attachment.
1817        const COLOR_ATTACHMENT_BLEND = 1 << 8;
1818        /// Format can be used as depth-stencil and input attachment.
1819        const DEPTH_STENCIL_ATTACHMENT = 1 << 9;
1820
1821        /// Format can be multisampled by x2.
1822        const MULTISAMPLE_X2   = 1 << 10;
1823        /// Format can be multisampled by x4.
1824        const MULTISAMPLE_X4   = 1 << 11;
1825        /// Format can be multisampled by x8.
1826        const MULTISAMPLE_X8   = 1 << 12;
1827        /// Format can be multisampled by x16.
1828        const MULTISAMPLE_X16  = 1 << 13;
1829
1830        /// Format can be used for render pass resolve targets.
1831        const MULTISAMPLE_RESOLVE = 1 << 14;
1832
1833        /// Format can be copied from.
1834        const COPY_SRC = 1 << 15;
1835        /// Format can be copied to.
1836        const COPY_DST = 1 << 16;
1837    }
1838);
1839
1840bitflags!(
1841    /// Texture format aspect flags.
1842    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1843    pub struct FormatAspects: u8 {
1844        const COLOR = 1 << 0;
1845        const DEPTH = 1 << 1;
1846        const STENCIL = 1 << 2;
1847        const PLANE_0 = 1 << 3;
1848        const PLANE_1 = 1 << 4;
1849        const PLANE_2 = 1 << 5;
1850
1851        const DEPTH_STENCIL = Self::DEPTH.bits() | Self::STENCIL.bits();
1852    }
1853);
1854
1855impl FormatAspects {
1856    pub fn new(format: wgt::TextureFormat, aspect: wgt::TextureAspect) -> Self {
1857        let aspect_mask = match aspect {
1858            wgt::TextureAspect::All => Self::all(),
1859            wgt::TextureAspect::DepthOnly => Self::DEPTH,
1860            wgt::TextureAspect::StencilOnly => Self::STENCIL,
1861            wgt::TextureAspect::Plane0 => Self::PLANE_0,
1862            wgt::TextureAspect::Plane1 => Self::PLANE_1,
1863            wgt::TextureAspect::Plane2 => Self::PLANE_2,
1864        };
1865        Self::from(format) & aspect_mask
1866    }
1867
1868    /// Returns `true` if exactly one flag is set.
1869    pub fn is_one(&self) -> bool {
1870        self.bits().is_power_of_two()
1871    }
1872
1873    pub fn map(&self) -> wgt::TextureAspect {
1874        match *self {
1875            Self::COLOR => wgt::TextureAspect::All,
1876            Self::DEPTH => wgt::TextureAspect::DepthOnly,
1877            Self::STENCIL => wgt::TextureAspect::StencilOnly,
1878            Self::PLANE_0 => wgt::TextureAspect::Plane0,
1879            Self::PLANE_1 => wgt::TextureAspect::Plane1,
1880            Self::PLANE_2 => wgt::TextureAspect::Plane2,
1881            _ => unreachable!(),
1882        }
1883    }
1884}
1885
1886impl From<wgt::TextureFormat> for FormatAspects {
1887    fn from(format: wgt::TextureFormat) -> Self {
1888        match format {
1889            wgt::TextureFormat::Stencil8 => Self::STENCIL,
1890            wgt::TextureFormat::Depth16Unorm
1891            | wgt::TextureFormat::Depth32Float
1892            | wgt::TextureFormat::Depth24Plus => Self::DEPTH,
1893            wgt::TextureFormat::Depth32FloatStencil8 | wgt::TextureFormat::Depth24PlusStencil8 => {
1894                Self::DEPTH_STENCIL
1895            }
1896            wgt::TextureFormat::NV12 | wgt::TextureFormat::P010 => Self::PLANE_0 | Self::PLANE_1,
1897            _ => Self::COLOR,
1898        }
1899    }
1900}
1901
1902bitflags!(
1903    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1904    pub struct MemoryFlags: u32 {
1905        const TRANSIENT = 1 << 0;
1906        const PREFER_COHERENT = 1 << 1;
1907    }
1908);
1909
1910bitflags!(
1911    /// Attachment load and store operations.
1912    ///
1913    /// There must be at least one flag from the LOAD group and one from the STORE group set.
1914    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1915    pub struct AttachmentOps: u8 {
1916        /// Load the existing contents of the attachment.
1917        const LOAD = 1 << 0;
1918        /// Clear the attachment to a specified value.
1919        const LOAD_CLEAR = 1 << 1;
1920        /// The contents of the attachment are undefined.
1921        const LOAD_DONT_CARE = 1 << 2;
1922        /// Store the contents of the attachment.
1923        const STORE = 1 << 3;
1924        /// The contents of the attachment are undefined after the pass.
1925        const STORE_DISCARD = 1 << 4;
1926    }
1927);
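The invariant stated in the `AttachmentOps` doc comment can be checked against the raw bits; a sketch, with the LOAD group at bits 0..=2 and the STORE group at bits 3..=4, mirroring the declaration above (illustrative only, not part of `wgpu-hal`):

```rust
/// Illustrative only: true if at least one LOAD-group flag and at least
/// one STORE-group flag are set, per the bit layout declared above.
fn attachment_ops_valid(bits: u8) -> bool {
    let load = bits & 0b0000_0111; // LOAD | LOAD_CLEAR | LOAD_DONT_CARE
    let store = bits & 0b0001_1000; // STORE | STORE_DISCARD
    load != 0 && store != 0
}

fn main() {
    assert!(attachment_ops_valid(0b0000_1001)); // LOAD + STORE
    assert!(attachment_ops_valid(0b0001_0010)); // LOAD_CLEAR + STORE_DISCARD
    assert!(!attachment_ops_valid(0b0000_0001)); // LOAD alone: no store op
}
```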
1928
1929#[derive(Debug)]
1930pub struct InstanceDescriptor<'a> {
1931    pub name: &'a str,
1932    pub flags: wgt::InstanceFlags,
1933    pub memory_budget_thresholds: wgt::MemoryBudgetThresholds,
1934    pub backend_options: wgt::BackendOptions,
1935    pub telemetry: Option<Telemetry>,
1936    /// This is a borrow because the surrounding `core::Instance` already keeps the owned
1937    /// display handle alive.
1938    pub display: Option<DisplayHandle<'a>>,
1939}
1940
1941#[derive(Clone, Debug)]
1942pub struct Alignments {
1943    /// The alignment of the start of the buffer used as a GPU copy source.
1944    pub buffer_copy_offset: wgt::BufferSize,
1945
1946    /// The alignment of the row pitch of the texture data stored in a buffer that is
1947    /// used in a GPU copy operation.
1948    pub buffer_copy_pitch: wgt::BufferSize,
1949
1950    /// The finest alignment of bound range checking for uniform buffers.
1951    ///
1952    /// When `wgpu_hal` restricts shader references to the [accessible
1953    /// region][ar] of a [`Uniform`] buffer, the size of the accessible region
1954    /// is the bind group binding's stated [size], rounded up to the next
1955    /// multiple of this value.
1956    ///
1957    /// We don't need an analogous field for storage buffer bindings, because
1958    /// all our backends promise to enforce the size at least to a four-byte
1959    /// alignment, and `wgpu_hal` requires bound range lengths to be a multiple
1960    /// of four anyway.
1961    ///
1962    /// [ar]: struct.BufferBinding.html#accessible-region
1963    /// [`Uniform`]: wgt::BufferBindingType::Uniform
1964    /// [size]: BufferBinding::size
1965    pub uniform_bounds_check_alignment: wgt::BufferSize,
1966
1967    /// The size of the raw TLAS instance.
1968    pub raw_tlas_instance_size: u32,
1969
1970    /// What the scratch buffer for building an acceleration structure must be aligned to.
1971    pub ray_tracing_scratch_buffer_alignment: u32,
1972}
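The rounding described for `uniform_bounds_check_alignment` can be sketched as follows; the alignment value 16 here is only an example, a real adapter reports its own (illustrative only):

```rust
/// Illustrative only: size of the accessible region of a Uniform binding,
/// i.e. the stated binding size rounded up to the alignment.
fn accessible_uniform_size(binding_size: u64, alignment: u64) -> u64 {
    binding_size.div_ceil(alignment) * alignment
}

fn main() {
    // A 20-byte binding with 16-byte alignment exposes 32 accessible bytes.
    assert_eq!(accessible_uniform_size(20, 16), 32);
    // An already-aligned size is unchanged.
    assert_eq!(accessible_uniform_size(32, 16), 32);
    assert_eq!(accessible_uniform_size(1, 16), 16);
}
```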
1973
1974#[derive(Clone, Debug)]
1975pub struct Capabilities {
1976    pub limits: wgt::Limits,
1977    pub alignments: Alignments,
1978    pub downlevel: wgt::DownlevelCapabilities,
1979    /// Supported cooperative matrix configurations.
1980    ///
1981    /// Empty if cooperative matrices are not supported.
1982    pub cooperative_matrix_properties: Vec<wgt::CooperativeMatrixProperties>,
1983}
1984
1985/// An adapter with all the information needed to reason about its capabilities.
1986///
1987/// These are created either by [`Instance::enumerate_adapters`] or by backend-specific
1988/// methods on the backend [`Instance`] or [`Adapter`].
1989#[derive(Debug)]
1990pub struct ExposedAdapter<A: Api> {
1991    pub adapter: A::Adapter,
1992    pub info: wgt::AdapterInfo,
1993    pub features: wgt::Features,
1994    pub capabilities: Capabilities,
1995}
1996
1997/// Describes a `Surface`'s presentation capabilities.
1998/// Fetch this with [Adapter::surface_capabilities].
1999#[derive(Debug, Clone)]
2000pub struct SurfaceCapabilities {
2001    /// List of supported texture formats.
2002    ///
2003    /// Must contain at least one format.
2004    pub formats: Vec<wgt::TextureFormat>,
2005
2006    /// Range for the number of queued frames.
2007    ///
2008    /// Depending on the backend, this either sets the swapchain frame count to `value + 1`, calls `SetMaximumFrameLatency` with the given value,
2009    /// or uses a wait-for-present in the acquire method to limit rendering as if there were a swapchain with `value + 1` frames.
2010    ///
2011    /// - `maximum_frame_latency.start` must be at least 1.
2012    /// - `maximum_frame_latency.end` must be larger or equal to `maximum_frame_latency.start`.
2013    pub maximum_frame_latency: RangeInclusive<u32>,
2014
2015    /// Current extent of the surface, if known.
2016    pub current_extent: Option<wgt::Extent3d>,
2017
2018    /// Supported texture usage flags.
2019    ///
2020    /// Must include at least `wgt::TextureUses::COLOR_TARGET`.
2021    pub usage: wgt::TextureUses,
2022
2023    /// List of supported V-sync modes.
2024    ///
2025    /// Must contain at least one mode.
2026    pub present_modes: Vec<wgt::PresentMode>,
2027
2028    /// List of supported alpha composition modes.
2029    ///
2030    /// Must contain at least one mode.
2031    pub composite_alpha_modes: Vec<wgt::CompositeAlphaMode>,
2032}
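A sketch of how a caller might clamp a desired frame latency into the range an adapter reports in `maximum_frame_latency` (names here are illustrative, not part of the API):

```rust
use std::ops::RangeInclusive;

/// Illustrative only: clamp a requested frame latency into the range an
/// adapter reports in `SurfaceCapabilities::maximum_frame_latency`.
fn clamp_frame_latency(desired: u32, supported: &RangeInclusive<u32>) -> u32 {
    desired.clamp(*supported.start(), *supported.end())
}

fn main() {
    let supported = 1..=3; // e.g. what a backend might report
    assert_eq!(clamp_frame_latency(5, &supported), 3);
    assert_eq!(clamp_frame_latency(0, &supported), 1);
    assert_eq!(clamp_frame_latency(2, &supported), 2);
}
```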
2033
2034#[derive(Debug)]
2035pub struct AcquiredSurfaceTexture<A: Api> {
2036    pub texture: A::SurfaceTexture,
2037    /// The presentation configuration no longer matches
2038    /// the surface properties exactly, but can still be used to present
2039    /// to the surface successfully.
2040    pub suboptimal: bool,
2041}
2042
2043/// An open connection to a device and a queue.
2044///
2045/// This can be created from [`Adapter::open`] or backend-specific
2046/// methods on the backend's [`Instance`] or [`Adapter`].
2047#[derive(Debug)]
2048pub struct OpenDevice<A: Api> {
2049    pub device: A::Device,
2050    pub queue: A::Queue,
2051}
2052
2053#[derive(Clone, Debug)]
2054pub struct BufferMapping {
2055    pub ptr: NonNull<u8>,
2056    pub is_coherent: bool,
2057}
2058
2059#[derive(Clone, Debug)]
2060pub struct BufferDescriptor<'a> {
2061    pub label: Label<'a>,
2062    pub size: wgt::BufferAddress,
2063    pub usage: wgt::BufferUses,
2064    pub memory_flags: MemoryFlags,
2065}
2066
2067#[derive(Clone, Debug)]
2068pub struct TextureDescriptor<'a> {
2069    pub label: Label<'a>,
2070    pub size: wgt::Extent3d,
2071    pub mip_level_count: u32,
2072    pub sample_count: u32,
2073    pub dimension: wgt::TextureDimension,
2074    pub format: wgt::TextureFormat,
2075    pub usage: wgt::TextureUses,
2076    pub memory_flags: MemoryFlags,
2077    /// Allows views of this texture to have a different format
2078    /// than the texture does.
2079    pub view_formats: Vec<wgt::TextureFormat>,
2080}
2081
2082impl TextureDescriptor<'_> {
2083    pub fn copy_extent(&self) -> CopyExtent {
2084        CopyExtent::map_extent_to_copy_size(&self.size, self.dimension)
2085    }
2086
2087    pub fn is_cube_compatible(&self) -> bool {
2088        self.dimension == wgt::TextureDimension::D2
2089            && self.size.depth_or_array_layers.is_multiple_of(6)
2090            && self.sample_count == 1
2091            && self.size.width == self.size.height
2092    }
2093
2094    pub fn array_layer_count(&self) -> u32 {
2095        match self.dimension {
2096            wgt::TextureDimension::D1 | wgt::TextureDimension::D3 => 1,
2097            wgt::TextureDimension::D2 => self.size.depth_or_array_layers,
2098        }
2099    }
2100}
2101
2102/// TextureView descriptor.
2103///
2104/// Valid usage:
2105/// - `format` has to be the same as `TextureDescriptor::format`
2106/// - `dimension` has to be compatible with `TextureDescriptor::dimension`
2107/// - `usage` has to be a subset of `TextureDescriptor::usage`
2108/// - `range` has to be a subset of the parent texture
2109#[derive(Clone, Debug)]
2110pub struct TextureViewDescriptor<'a> {
2111    pub label: Label<'a>,
2112    pub format: wgt::TextureFormat,
2113    pub dimension: wgt::TextureViewDimension,
2114    pub usage: wgt::TextureUses,
2115    pub range: wgt::ImageSubresourceRange,
2116}
2117
2118#[derive(Clone, Debug)]
2119pub struct SamplerDescriptor<'a> {
2120    pub label: Label<'a>,
2121    pub address_modes: [wgt::AddressMode; 3],
2122    pub mag_filter: wgt::FilterMode,
2123    pub min_filter: wgt::FilterMode,
2124    pub mipmap_filter: wgt::MipmapFilterMode,
2125    pub lod_clamp: Range<f32>,
2126    pub compare: Option<wgt::CompareFunction>,
2127    // Must be in the range [1, 16].
2128    //
2129    // Anisotropic filtering must be supported if this is not 1.
2130    pub anisotropy_clamp: u16,
2131    pub border_color: Option<wgt::SamplerBorderColor>,
2132}
2133
2134/// BindGroupLayout descriptor.
2135///
2136/// Valid usage:
2137/// - `entries` are sorted by ascending `wgt::BindGroupLayoutEntry::binding`
2138#[derive(Clone, Debug)]
2139pub struct BindGroupLayoutDescriptor<'a> {
2140    pub label: Label<'a>,
2141    pub flags: BindGroupLayoutFlags,
2142    pub entries: &'a [wgt::BindGroupLayoutEntry],
2143}
2144
2145#[derive(Clone, Debug)]
2146pub struct PipelineLayoutDescriptor<'a, B: DynBindGroupLayout + ?Sized> {
2147    pub label: Label<'a>,
2148    pub flags: PipelineLayoutFlags,
2149    pub bind_group_layouts: &'a [Option<&'a B>],
2150    pub immediate_size: u32,
2151}
2152
2153/// A region of a buffer made visible to shaders via a [`BindGroup`].
2154///
2155/// [`BindGroup`]: Api::BindGroup
2156///
2157/// ## Construction
2158///
2159/// The recommended way to construct a `BufferBinding` is using the `binding`
2160/// method on a wgpu-core `Buffer`, which will validate the binding size
2161/// against the buffer size. A `new_unchecked` constructor is also provided for
2162/// cases where direct construction is necessary.
2163///
2164/// ## Accessible region
2165///
2166/// `wgpu_hal` guarantees that shaders compiled with
2167/// [`ShaderModuleDescriptor::runtime_checks`] set to `true` cannot read or
2168/// write data via this binding outside the *accessible region* of a buffer:
2169///
2170/// - The accessible region starts at [`offset`].
2171///
2172/// - For [`Storage`] bindings, the size of the accessible region is [`size`],
2173///   which must be a multiple of 4.
2174///
2175/// - For [`Uniform`] bindings, the size of the accessible region is [`size`]
2176///   rounded up to the next multiple of
2177///   [`Alignments::uniform_bounds_check_alignment`].
2178///
2179/// Note that this guarantee is stricter than WGSL's requirements for
2180/// [out-of-bounds accesses][woob], as WGSL allows them to return values from
2181/// elsewhere in the buffer. But this guarantee is necessary anyway, to permit
2182/// `wgpu-core` to avoid clearing uninitialized regions of buffers that will
2183/// never be read by the application before they are overwritten. This
2184/// optimization consults bind group buffer binding regions to determine which
2185/// parts of which buffers shaders might observe. This optimization is only
2186/// sound if shader access is bounds-checked.
2187///
2188/// ## Zero-length bindings
2189///
2190/// Some backends cannot tolerate zero-length regions; for example, see
2191/// [VUID-VkDescriptorBufferInfo-offset-00340][340] and
2192/// [VUID-VkDescriptorBufferInfo-range-00341][341], or the
2193/// documentation for GLES's [glBindBufferRange][bbr]. This documentation
2194/// previously stated that a `BufferBinding` must have `offset` strictly less
2195/// than the size of the buffer, but this restriction was not honored elsewhere
2196/// in the code, so has been removed. However, it remains the case that
2197/// some backends do not support zero-length bindings, so additional
2198/// logic is needed somewhere to handle this properly. See
2199/// [#3170](https://github.com/gfx-rs/wgpu/issues/3170).
2200///
2201/// [`offset`]: BufferBinding::offset
2202/// [`size`]: BufferBinding::size
2203/// [`Storage`]: wgt::BufferBindingType::Storage
2204/// [`Uniform`]: wgt::BufferBindingType::Uniform
2205/// [340]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-offset-00340
2206/// [341]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-range-00341
2207/// [bbr]: https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glBindBufferRange.xhtml
2208/// [woob]: https://gpuweb.github.io/gpuweb/wgsl/#out-of-bounds-access-sec
2209#[derive(Debug)]
2210pub struct BufferBinding<'a, B: DynBuffer + ?Sized> {
2211    /// The buffer being bound.
2212    ///
2213    /// This is not fully `pub` to prevent direct construction of
2214    /// `BufferBinding`s, while still allowing public read access to the `offset`
2215    /// and `size` properties.
2216    pub(crate) buffer: &'a B,
2217
2218    /// The offset at which the bound region starts.
2219    ///
    /// This must be less than or equal to the size of the buffer.
2221    pub offset: wgt::BufferAddress,
2222
2223    /// The size of the region bound, in bytes.
2224    ///
    /// If `None`, the region extends from `offset` to the end of the
    /// buffer. Since `offset` may equal the size of the buffer, such a
    /// region may be zero-length; see the note on zero-length bindings
    /// in the type-level documentation.
2228    pub size: Option<wgt::BufferSize>,
2229}
2230
2231// We must implement this manually because `B` is not necessarily `Clone`.
2232impl<B: DynBuffer + ?Sized> Clone for BufferBinding<'_, B> {
2233    fn clone(&self) -> Self {
2234        BufferBinding {
2235            buffer: self.buffer,
2236            offset: self.offset,
2237            size: self.size,
2238        }
2239    }
2240}
2241
2242/// Temporary convenience trait to let us call `.get()` on `u64`s in code that
2243/// really wants to be using `NonZeroU64`.
2244/// TODO(<https://github.com/gfx-rs/wgpu/issues/3170>): remove this
2245pub trait ShouldBeNonZeroExt {
2246    fn get(&self) -> u64;
2247}
2248
2249impl ShouldBeNonZeroExt for NonZeroU64 {
2250    fn get(&self) -> u64 {
2251        NonZeroU64::get(*self)
2252    }
2253}
2254
2255impl ShouldBeNonZeroExt for u64 {
2256    fn get(&self) -> u64 {
2257        *self
2258    }
2259}
2260
2261impl ShouldBeNonZeroExt for Option<NonZeroU64> {
2262    fn get(&self) -> u64 {
2263        match *self {
2264            Some(non_zero) => non_zero.get(),
2265            None => 0,
2266        }
2267    }
2268}
2269
2270impl<'a, B: DynBuffer + ?Sized> BufferBinding<'a, B> {
2271    /// Construct a `BufferBinding` with the given contents.
2272    ///
2273    /// When possible, use the `binding` method on a wgpu-core `Buffer` instead
2274    /// of this method. `Buffer::binding` validates the size of the binding
2275    /// against the size of the buffer.
2276    ///
    /// Providing a validating constructor here is more difficult, because this
    /// code has no direct access to the size of a `DynBuffer`.
2279    ///
2280    /// SAFETY: The caller is responsible for ensuring that a binding of `size`
2281    /// bytes starting at `offset` is contained within the buffer.
2282    ///
2283    /// The `S` type parameter is a temporary convenience to allow callers to
2284    /// pass a zero size. When the zero-size binding issue is resolved, the
2285    /// argument should just match the type of the member.
2286    /// TODO(<https://github.com/gfx-rs/wgpu/issues/3170>): remove the parameter
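    ///
    /// A hypothetical, non-compiled sketch (`my_buffer` here stands for any
    /// `DynBuffer` implementor; the names are illustrative only):
    ///
    /// ```ignore
    /// // Bind 256 bytes starting at offset 0 of `my_buffer`. The caller must
    /// // ensure the buffer is at least 256 bytes long.
    /// let binding = BufferBinding::new_unchecked(&my_buffer, 0, NonZeroU64::new(256));
    /// assert_eq!(binding.offset, 0);
    /// ```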
2287    pub fn new_unchecked<S: Into<Option<NonZeroU64>>>(
2288        buffer: &'a B,
2289        offset: wgt::BufferAddress,
2290        size: S,
2291    ) -> Self {
2292        Self {
2293            buffer,
2294            offset,
2295            size: size.into(),
2296        }
2297    }
2298}
2299
2300#[derive(Debug)]
2301pub struct TextureBinding<'a, T: DynTextureView + ?Sized> {
2302    pub view: &'a T,
2303    pub usage: wgt::TextureUses,
2304}
2305
2306impl<'a, T: DynTextureView + ?Sized> Clone for TextureBinding<'a, T> {
2307    fn clone(&self) -> Self {
2308        TextureBinding {
2309            view: self.view,
2310            usage: self.usage,
2311        }
2312    }
2313}
2314
2315#[derive(Debug)]
2316pub struct ExternalTextureBinding<'a, B: DynBuffer + ?Sized, T: DynTextureView + ?Sized> {
2317    pub planes: [TextureBinding<'a, T>; 3],
2318    pub params: BufferBinding<'a, B>,
2319}
2320
2321impl<'a, B: DynBuffer + ?Sized, T: DynTextureView + ?Sized> Clone
2322    for ExternalTextureBinding<'a, B, T>
2323{
2324    fn clone(&self) -> Self {
2325        ExternalTextureBinding {
2326            planes: self.planes.clone(),
2327            params: self.params.clone(),
2328        }
2329    }
2330}
2331
2332/// cbindgen:ignore
2333#[derive(Clone, Debug)]
2334pub struct BindGroupEntry {
2335    pub binding: u32,
2336    pub resource_index: u32,
2337    pub count: u32,
2338}
2339
2340/// BindGroup descriptor.
2341///
2342/// Valid usage:
/// - `entries` must be sorted by ascending `BindGroupEntry::binding`
/// - `entries` must have the same set of `BindGroupEntry::binding` values as `layout`
/// - each entry must be compatible with the `layout`
/// - each entry's `BindGroupEntry::resource_index` must be within range
///   of the corresponding resource array, selected by the relevant
///   `BindGroupLayoutEntry`.
2349#[derive(Clone, Debug)]
2350pub struct BindGroupDescriptor<
2351    'a,
2352    Bgl: DynBindGroupLayout + ?Sized,
2353    B: DynBuffer + ?Sized,
2354    S: DynSampler + ?Sized,
2355    T: DynTextureView + ?Sized,
2356    A: DynAccelerationStructure + ?Sized,
2357> {
2358    pub label: Label<'a>,
2359    pub layout: &'a Bgl,
2360    pub buffers: &'a [BufferBinding<'a, B>],
2361    pub samplers: &'a [&'a S],
2362    pub textures: &'a [TextureBinding<'a, T>],
2363    pub entries: &'a [BindGroupEntry],
2364    pub acceleration_structures: &'a [&'a A],
2365    pub external_textures: &'a [ExternalTextureBinding<'a, B, T>],
2366}
2367
2368#[derive(Clone, Debug)]
2369pub struct CommandEncoderDescriptor<'a, Q: DynQueue + ?Sized> {
2370    pub label: Label<'a>,
2371    pub queue: &'a Q,
2372}
2373
2374/// Naga shader module.
2375#[derive(Default)]
2376pub struct NagaShader {
2377    /// Shader module IR.
2378    pub module: Cow<'static, naga::Module>,
2379    /// Analysis information of the module.
2380    pub info: naga::valid::ModuleInfo,
    /// Source code for debugging.
2382    pub debug_source: Option<DebugSource>,
2383}
2384
2385// Custom implementation avoids the need to generate Debug impl code
2386// for the whole Naga module and info.
2387impl fmt::Debug for NagaShader {
2388    fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
2389        write!(formatter, "Naga shader")
2390    }
2391}
2392
2393/// Shader input.
2394pub enum ShaderInput<'a> {
2395    Naga(NagaShader),
2396    MetalLib {
2397        file: &'a [u8],
2398        num_workgroups: hashbrown::HashMap<String, (u32, u32, u32)>,
2399    },
2400    Msl {
2401        shader: &'a str,
2402        num_workgroups: hashbrown::HashMap<String, (u32, u32, u32)>,
2403    },
2404    SpirV(&'a [u32]),
2405    Dxil {
2406        shader: &'a [u8],
2407    },
2408    Hlsl {
2409        shader: &'a str,
2410    },
2411    Glsl {
2412        shader: &'a str,
2413    },
2414}
2415
2416pub struct ShaderModuleDescriptor<'a> {
2417    pub label: Label<'a>,
2418
2419    /// # Safety
2420    ///
2421    /// See the documentation for each flag in [`ShaderRuntimeChecks`][src].
2422    ///
2423    /// [src]: wgt::ShaderRuntimeChecks
2424    pub runtime_checks: wgt::ShaderRuntimeChecks,
2425}
2426
2427#[derive(Debug, Clone)]
2428pub struct DebugSource {
2429    pub file_name: Cow<'static, str>,
2430    pub source_code: Cow<'static, str>,
2431}
2432
2433/// Describes a programmable pipeline stage.
2434#[derive(Debug)]
2435pub struct ProgrammableStage<'a, M: DynShaderModule + ?Sized> {
2436    /// The compiled shader module for this stage.
2437    pub module: &'a M,
    /// The name of the entry point in the compiled shader. There must be a
    /// function with this name in the shader.
2440    pub entry_point: &'a str,
2441    /// Pipeline constants
2442    pub constants: &'a naga::back::PipelineConstants,
2443    /// Whether workgroup scoped memory will be initialized with zero values for this stage.
2444    ///
    /// This is required by the WebGPU spec, but may have overhead which can be
    /// avoided for cross-platform applications.
2447    pub zero_initialize_workgroup_memory: bool,
2448}
2449
2450impl<M: DynShaderModule + ?Sized> Clone for ProgrammableStage<'_, M> {
2451    fn clone(&self) -> Self {
2452        Self {
2453            module: self.module,
2454            entry_point: self.entry_point,
2455            constants: self.constants,
2456            zero_initialize_workgroup_memory: self.zero_initialize_workgroup_memory,
2457        }
2458    }
2459}
2460
2461/// Describes a compute pipeline.
2462#[derive(Clone, Debug)]
2463pub struct ComputePipelineDescriptor<
2464    'a,
2465    Pl: DynPipelineLayout + ?Sized,
2466    M: DynShaderModule + ?Sized,
2467    Pc: DynPipelineCache + ?Sized,
2468> {
2469    pub label: Label<'a>,
2470    /// The layout of bind groups for this pipeline.
2471    pub layout: &'a Pl,
2472    /// The compiled compute stage and its entry point.
2473    pub stage: ProgrammableStage<'a, M>,
    /// The cache which will be used and filled when compiling this pipeline.
2475    pub cache: Option<&'a Pc>,
2476}
2477
2478pub struct PipelineCacheDescriptor<'a> {
2479    pub label: Label<'a>,
2480    pub data: Option<&'a [u8]>,
2481}
2482
2483/// Describes how the vertex buffer is interpreted.
2484#[derive(Clone, Debug)]
2485pub struct VertexBufferLayout<'a> {
2486    /// The stride, in bytes, between elements of this buffer.
2487    pub array_stride: wgt::BufferAddress,
2488    /// How often this vertex buffer is "stepped" forward.
2489    pub step_mode: wgt::VertexStepMode,
2490    /// The list of attributes which comprise a single vertex.
2491    pub attributes: &'a [wgt::VertexAttribute],
2492}
2493
2494#[derive(Clone, Debug)]
2495pub enum VertexProcessor<'a, M: DynShaderModule + ?Sized> {
2496    Standard {
2497        /// The format of any vertex buffers used with this pipeline.
2498        vertex_buffers: &'a [Option<VertexBufferLayout<'a>>],
2499        /// The vertex stage for this pipeline.
2500        vertex_stage: ProgrammableStage<'a, M>,
2501    },
2502    Mesh {
2503        task_stage: Option<ProgrammableStage<'a, M>>,
2504        mesh_stage: ProgrammableStage<'a, M>,
2505    },
2506}
2507
2508/// Describes a render (graphics) pipeline.
2509#[derive(Clone, Debug)]
2510pub struct RenderPipelineDescriptor<
2511    'a,
2512    Pl: DynPipelineLayout + ?Sized,
2513    M: DynShaderModule + ?Sized,
2514    Pc: DynPipelineCache + ?Sized,
2515> {
2516    pub label: Label<'a>,
2517    /// The layout of bind groups for this pipeline.
2518    pub layout: &'a Pl,
    /// The vertex processing state (vertex shader + buffers, or task + mesh shaders).
2520    pub vertex_processor: VertexProcessor<'a, M>,
2521    /// The properties of the pipeline at the primitive assembly and rasterization level.
2522    pub primitive: wgt::PrimitiveState,
2523    /// The effect of draw calls on the depth and stencil aspects of the output target, if any.
2524    pub depth_stencil: Option<wgt::DepthStencilState>,
2525    /// The multi-sampling properties of the pipeline.
2526    pub multisample: wgt::MultisampleState,
2527    /// The fragment stage for this pipeline.
2528    pub fragment_stage: Option<ProgrammableStage<'a, M>>,
2529    /// The effect of draw calls on the color aspect of the output target.
2530    pub color_targets: &'a [Option<wgt::ColorTargetState>],
2531    /// If the pipeline will be used with a multiview render pass, this indicates how many array
2532    /// layers the attachments will have.
2533    pub multiview_mask: Option<NonZeroU32>,
    /// The cache which will be used and filled when compiling this pipeline.
2535    pub cache: Option<&'a Pc>,
2536}
2537
2538#[derive(Debug, Clone)]
2539pub struct SurfaceConfiguration {
2540    /// Maximum number of queued frames. Must be in
2541    /// `SurfaceCapabilities::maximum_frame_latency` range.
2542    pub maximum_frame_latency: u32,
2543    /// Vertical synchronization mode.
2544    pub present_mode: wgt::PresentMode,
2545    /// Alpha composition mode.
2546    pub composite_alpha_mode: wgt::CompositeAlphaMode,
2547    /// Format of the surface textures.
2548    pub format: wgt::TextureFormat,
2549    /// Requested texture extent. Must be in
2550    /// `SurfaceCapabilities::extents` range.
2551    pub extent: wgt::Extent3d,
    /// Allowed usage of surface textures.
2553    pub usage: wgt::TextureUses,
2554    /// Allows views of swapchain texture to have a different format
2555    /// than the texture does.
2556    pub view_formats: Vec<wgt::TextureFormat>,
2557}
2558
2559#[derive(Debug, Clone)]
2560pub struct Rect<T> {
2561    pub x: T,
2562    pub y: T,
2563    pub w: T,
2564    pub h: T,
2565}
2566
2567#[derive(Debug, Clone, PartialEq)]
2568pub struct StateTransition<T> {
2569    pub from: T,
2570    pub to: T,
2571}
2572
2573#[derive(Debug, Clone)]
2574pub struct BufferBarrier<'a, B: DynBuffer + ?Sized> {
2575    pub buffer: &'a B,
2576    pub usage: StateTransition<wgt::BufferUses>,
2577}
2578
2579#[derive(Debug, Clone)]
2580pub struct TextureBarrier<'a, T: DynTexture + ?Sized> {
2581    pub texture: &'a T,
2582    pub range: wgt::ImageSubresourceRange,
2583    pub usage: StateTransition<wgt::TextureUses>,
2584}
2585
2586#[derive(Clone, Copy, Debug)]
2587pub struct BufferCopy {
2588    pub src_offset: wgt::BufferAddress,
2589    pub dst_offset: wgt::BufferAddress,
2590    pub size: wgt::BufferSize,
2591}
2592
2593#[derive(Clone, Debug)]
2594pub struct TextureCopyBase {
2595    pub mip_level: u32,
2596    pub array_layer: u32,
2597    /// Origin within a texture.
2598    /// Note: for 1D and 2D textures, Z must be 0.
2599    pub origin: wgt::Origin3d,
2600    pub aspect: FormatAspects,
2601}
2602
2603#[derive(Clone, Copy, Debug)]
2604pub struct CopyExtent {
2605    pub width: u32,
2606    pub height: u32,
2607    pub depth: u32,
2608}
2609
2610impl From<wgt::Extent3d> for CopyExtent {
2611    fn from(value: wgt::Extent3d) -> Self {
2612        let wgt::Extent3d {
2613            width,
2614            height,
2615            depth_or_array_layers,
2616        } = value;
2617        Self {
2618            width,
2619            height,
2620            depth: depth_or_array_layers,
2621        }
2622    }
2623}
2624
2625impl From<CopyExtent> for wgt::Extent3d {
2626    fn from(value: CopyExtent) -> Self {
2627        let CopyExtent {
2628            width,
2629            height,
2630            depth,
2631        } = value;
2632        Self {
2633            width,
2634            height,
2635            depth_or_array_layers: depth,
2636        }
2637    }
2638}
2639
2640#[derive(Clone, Debug)]
2641pub struct TextureCopy {
2642    pub src_base: TextureCopyBase,
2643    pub dst_base: TextureCopyBase,
2644    pub size: CopyExtent,
2645}
2646
2647#[derive(Clone, Debug)]
2648pub struct BufferTextureCopy {
2649    pub buffer_layout: wgt::TexelCopyBufferLayout,
2650    pub texture_base: TextureCopyBase,
2651    pub size: CopyExtent,
2652}
2653
2654#[derive(Clone, Debug)]
2655pub struct Attachment<'a, T: DynTextureView + ?Sized> {
2656    pub view: &'a T,
2657    /// Contains either a single mutating usage as a target,
2658    /// or a valid combination of read-only usages.
2659    pub usage: wgt::TextureUses,
2660}
2661
2662#[derive(Clone, Debug)]
2663pub struct ColorAttachment<'a, T: DynTextureView + ?Sized> {
2664    pub target: Attachment<'a, T>,
2665    pub depth_slice: Option<u32>,
2666    pub resolve_target: Option<Attachment<'a, T>>,
2667    pub ops: AttachmentOps,
2668    pub clear_value: wgt::Color,
2669}
2670
2671#[derive(Clone, Debug)]
2672pub struct DepthStencilAttachment<'a, T: DynTextureView + ?Sized> {
2673    pub target: Attachment<'a, T>,
2674    pub depth_ops: AttachmentOps,
2675    pub stencil_ops: AttachmentOps,
2676    pub clear_value: (f32, u32),
2677}
2678
2679#[derive(Clone, Debug)]
2680pub struct PassTimestampWrites<'a, Q: DynQuerySet + ?Sized> {
2681    pub query_set: &'a Q,
2682    pub beginning_of_pass_write_index: Option<u32>,
2683    pub end_of_pass_write_index: Option<u32>,
2684}
2685
2686#[derive(Clone, Debug)]
2687pub struct RenderPassDescriptor<'a, Q: DynQuerySet + ?Sized, T: DynTextureView + ?Sized> {
2688    pub label: Label<'a>,
2689    pub extent: wgt::Extent3d,
2690    pub sample_count: u32,
2691    pub color_attachments: &'a [Option<ColorAttachment<'a, T>>],
2692    pub depth_stencil_attachment: Option<DepthStencilAttachment<'a, T>>,
2693    pub multiview_mask: Option<NonZeroU32>,
2694    pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2695    pub occlusion_query_set: Option<&'a Q>,
2696}
2697
2698#[derive(Clone, Debug)]
2699pub struct ComputePassDescriptor<'a, Q: DynQuerySet + ?Sized> {
2700    pub label: Label<'a>,
2701    pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2702}
2703
2704#[test]
2705fn test_default_limits() {
2706    let limits = wgt::Limits::default();
2707    assert!(limits.max_bind_groups <= MAX_BIND_GROUPS as u32);
2708}
2709
2710#[derive(Clone, Debug)]
2711pub struct AccelerationStructureDescriptor<'a> {
2712    pub label: Label<'a>,
2713    pub size: wgt::BufferAddress,
2714    pub format: AccelerationStructureFormat,
2715    pub allow_compaction: bool,
2716}
2717
2718#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2719pub enum AccelerationStructureFormat {
2720    TopLevel,
2721    BottomLevel,
2722}
2723
2724#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2725pub enum AccelerationStructureBuildMode {
2726    Build,
2727    Update,
2728}
2729
/// Information about the required sizes for a corresponding entries struct (+ flags).
2731#[derive(Copy, Clone, Debug, Default, Eq, PartialEq)]
2732pub struct AccelerationStructureBuildSizes {
2733    pub acceleration_structure_size: wgt::BufferAddress,
2734    pub update_scratch_size: wgt::BufferAddress,
2735    pub build_scratch_size: wgt::BufferAddress,
2736}
2737
/// Updates use `source_acceleration_structure` if present; otherwise the update is performed in place.
/// For updates, only the data is allowed to change (not the metadata or sizes).
2740#[derive(Clone, Debug)]
2741pub struct BuildAccelerationStructureDescriptor<
2742    'a,
2743    B: DynBuffer + ?Sized,
2744    A: DynAccelerationStructure + ?Sized,
2745> {
2746    pub entries: &'a AccelerationStructureEntries<'a, B>,
2747    pub mode: AccelerationStructureBuildMode,
2748    pub flags: AccelerationStructureBuildFlags,
2749    pub source_acceleration_structure: Option<&'a A>,
2750    pub destination_acceleration_structure: &'a A,
2751    pub scratch_buffer: &'a B,
2752    pub scratch_buffer_offset: wgt::BufferAddress,
2753}
2754
2755/// - All buffers, buffer addresses and offsets will be ignored.
2756/// - The build mode will be ignored.
/// - Reducing the number of instances, triangle groups, or AABB groups (or the
///   number of triangles/AABBs in the corresponding groups) may result in reduced
///   size requirements.
2759/// - Any other change may result in a bigger or smaller size requirement.
2760#[derive(Clone, Debug)]
2761pub struct GetAccelerationStructureBuildSizesDescriptor<'a, B: DynBuffer + ?Sized> {
2762    pub entries: &'a AccelerationStructureEntries<'a, B>,
2763    pub flags: AccelerationStructureBuildFlags,
2764}
2765
2766/// Entries for a single descriptor
2767/// * `Instances` - Multiple instances for a top level acceleration structure
2768/// * `Triangles` - Multiple triangle meshes for a bottom level acceleration structure
/// * `AABBs` - Lists of axis-aligned bounding boxes for a bottom level acceleration structure
2770#[derive(Debug)]
2771pub enum AccelerationStructureEntries<'a, B: DynBuffer + ?Sized> {
2772    Instances(AccelerationStructureInstances<'a, B>),
2773    Triangles(Vec<AccelerationStructureTriangles<'a, B>>),
2774    AABBs(Vec<AccelerationStructureAABBs<'a, B>>),
2775}
2776
2777/// * `first_vertex` - offset in the vertex buffer (as number of vertices)
2778/// * `indices` - optional index buffer with attributes
2779/// * `transform` - optional transform
2780#[derive(Clone, Debug)]
2781pub struct AccelerationStructureTriangles<'a, B: DynBuffer + ?Sized> {
2782    pub vertex_buffer: Option<&'a B>,
2783    pub vertex_format: wgt::VertexFormat,
2784    pub first_vertex: u32,
2785    pub vertex_count: u32,
2786    pub vertex_stride: wgt::BufferAddress,
2787    pub indices: Option<AccelerationStructureTriangleIndices<'a, B>>,
2788    pub transform: Option<AccelerationStructureTriangleTransform<'a, B>>,
2789    pub flags: AccelerationStructureGeometryFlags,
2790}
2791
2792/// * `offset` - offset in bytes
2793#[derive(Clone, Debug)]
2794pub struct AccelerationStructureAABBs<'a, B: DynBuffer + ?Sized> {
2795    pub buffer: Option<&'a B>,
2796    pub offset: u32,
2797    pub count: u32,
2798    pub stride: wgt::BufferAddress,
2799    pub flags: AccelerationStructureGeometryFlags,
2800}
2801
2802pub struct AccelerationStructureCopy {
2803    pub copy_flags: wgt::AccelerationStructureCopy,
2804    pub type_flags: wgt::AccelerationStructureType,
2805}
2806
2807/// * `offset` - offset in bytes
2808#[derive(Clone, Debug)]
2809pub struct AccelerationStructureInstances<'a, B: DynBuffer + ?Sized> {
2810    pub buffer: Option<&'a B>,
2811    pub offset: u32,
2812    pub count: u32,
2813}
2814
2815/// * `offset` - offset in bytes
2816#[derive(Clone, Debug)]
2817pub struct AccelerationStructureTriangleIndices<'a, B: DynBuffer + ?Sized> {
2818    pub format: wgt::IndexFormat,
2819    pub buffer: Option<&'a B>,
2820    pub offset: u32,
2821    pub count: u32,
2822}
2823
2824/// * `offset` - offset in bytes
2825#[derive(Clone, Debug)]
2826pub struct AccelerationStructureTriangleTransform<'a, B: DynBuffer + ?Sized> {
2827    pub buffer: &'a B,
2828    pub offset: u32,
2829}
2830
2831pub use wgt::AccelerationStructureFlags as AccelerationStructureBuildFlags;
2832pub use wgt::AccelerationStructureGeometryFlags;
2833
2834bitflags::bitflags! {
2835    #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
2836    pub struct AccelerationStructureUses: u8 {
        /// A BLAS used as input for a TLAS build.
        const BUILD_INPUT = 1 << 0;
        /// Target of an acceleration structure build.
        const BUILD_OUTPUT = 1 << 1;
        /// A TLAS used in a shader.
        const SHADER_INPUT = 1 << 2;
        /// A BLAS used to query its compacted size.
        const QUERY_INPUT = 1 << 3;
        /// A BLAS used as the source of a copy operation.
        const COPY_SRC = 1 << 4;
        /// A BLAS used as the destination of a copy operation.
        const COPY_DST = 1 << 5;
2849    }
2850}
2851
2852#[derive(Debug, Clone)]
2853pub struct AccelerationStructureBarrier {
2854    pub usage: StateTransition<AccelerationStructureUses>,
2855}
2856
2857#[derive(Debug, Copy, Clone)]
2858pub struct TlasInstance {
2859    pub transform: [f32; 12],
2860    pub custom_data: u32,
2861    pub mask: u8,
2862    pub blas_address: u64,
2863}
2864
2865#[cfg(dx12)]
2866pub enum D3D12ExposeAdapterResult {
2867    CreateDeviceError(dx12::CreateDeviceError),
2868    UnknownFeatureLevel(i32),
2869    ResourceBindingTier2Requirement,
2870    ShaderModel6Requirement,
2871    Success(dx12::FeatureLevel, dx12::ShaderModel),
2872}
2873
2874/// Pluggable telemetry, mainly to be used by Firefox.
2875#[derive(Debug, Clone, Copy)]
2876pub struct Telemetry {
2877    #[cfg(dx12)]
2878    pub d3d12_expose_adapter: fn(
2879        desc: &windows::Win32::Graphics::Dxgi::DXGI_ADAPTER_DESC2,
2880        driver_version: Result<[u16; 4], windows_core::HRESULT>,
2881        result: D3D12ExposeAdapterResult,
2882    ),
2883}