wgpu_hal/
lib.rs

1//! A cross-platform unsafe graphics abstraction.
2//!
3//! This crate defines a set of traits abstracting over modern graphics APIs,
4//! with implementations ("backends") for Vulkan, Metal, Direct3D, and GL.
5//!
6//! `wgpu-hal` is a spiritual successor to
7//! [gfx-hal](https://github.com/gfx-rs/gfx), but with reduced scope, and
8//! oriented towards WebGPU implementation goals. It has no overhead for
9//! validation or tracking, and the API translation overhead is kept to the bare
10//! minimum by the design of WebGPU. This API can be used for resource-demanding
11//! applications and engines.
12//!
13//! The `wgpu-hal` crate's main design choices:
14//!
15//! - Our traits are meant to be *portable*: proper use
16//!   should get equivalent results regardless of the backend.
17//!
18//! - Our traits' contracts are *unsafe*: implementations perform minimal
19//!   validation, if any, and incorrect use will often cause undefined behavior.
20//!   This allows us to minimize the overhead we impose over the underlying
21//!   graphics system. If you need safety, the [`wgpu-core`] crate provides a
22//!   safe API for driving `wgpu-hal`, implementing all necessary validation,
23//!   resource state tracking, and so on. (Note that `wgpu-core` is designed for
24//!   use via FFI; the [`wgpu`] crate provides more idiomatic Rust bindings for
25//!   `wgpu-core`.) Or, you can do your own validation.
26//!
27//! - In the same vein, returned errors *only cover cases the user can't
28//!   anticipate*, like running out of memory or losing the device. Any errors
29//!   that the user could reasonably anticipate are their responsibility to
30//!   avoid. For example, `wgpu-hal` returns no error for mapping a buffer that's
31//!   not mappable: as the buffer creator, the user should already know if they
32//!   can map it.
33//!
34//! - We use *static dispatch*. The traits are not
35//!   generally object-safe. You must select a specific backend type
36//!   like [`vulkan::Api`] or [`metal::Api`], and then use that
37//!   according to the main traits, or call backend-specific methods.
38//!
39//! - We use *idiomatic Rust parameter passing*,
40//!   taking objects by reference, returning them by value, and so on,
41//!   unlike `wgpu-core`, which refers to objects by ID.
42//!
43//! - We map buffer contents *persistently*. This means that the buffer can
44//!   remain mapped on the CPU while the GPU reads or writes to it. You must
45//!   explicitly indicate when data might need to be transferred between CPU and
46//!   GPU, if [`Device::map_buffer`] indicates that this is necessary.
47//!
48//! - You must record *explicit barriers* between different usages of a
49//!   resource. For example, if a buffer is written to by a compute
50//!   shader, and then used as and index buffer to a draw call, you
51//!   must use [`CommandEncoder::transition_buffers`] between those two
52//!   operations.
53//!
54//! - Pipeline layouts are *explicitly specified* when setting bind groups.
55//!   Incompatible layouts disturb groups bound at higher indices.
56//!
57//! - The API *accepts collections as iterators*, to avoid forcing the user to
58//!   store data in particular containers. The implementation doesn't guarantee
59//!   that any of the iterators are drained, unless stated otherwise by the
60//!   function documentation. For this reason, we recommend that iterators don't
61//!   do any mutating work.
62//!
63//! Unfortunately, `wgpu-hal`'s safety requirements are not fully documented.
64//! Ideally, all trait methods would have doc comments setting out the
65//! requirements users must meet to ensure correct and portable behavior. If you
66//! are aware of a specific requirement that a backend imposes that is not
67//! ensured by the traits' documented rules, please file an issue. Or, if you are
68//! a capable technical writer, please file a pull request!
69//!
70//! [`wgpu-core`]: https://crates.io/crates/wgpu-core
71//! [`wgpu`]: https://crates.io/crates/wgpu
72//! [`vulkan::Api`]: vulkan/struct.Api.html
73//! [`metal::Api`]: metal/struct.Api.html
74//!
75//! ## Primary backends
76//!
77//! The `wgpu-hal` crate has full-featured backends implemented on the following
78//! platform graphics APIs:
79//!
80//! - Vulkan, available on Linux, Android, and Windows, using the [`ash`] crate's
81//!   Vulkan bindings. It's also available on macOS, if you install [MoltenVK].
82//!
83//! - Metal on macOS, using the [`metal`] crate's bindings.
84//!
85//! - Direct3D 12 on Windows, using the [`windows`] crate's bindings.
86//!
87//! [`ash`]: https://crates.io/crates/ash
88//! [MoltenVK]: https://github.com/KhronosGroup/MoltenVK
89//! [`metal`]: https://crates.io/crates/metal
90//! [`windows`]: https://crates.io/crates/windows
91//!
92//! ## Secondary backends
93//!
94//! The `wgpu-hal` crate has a partial implementation based on the following
95//! platform graphics API:
96//!
97//! - The GL backend is available anywhere OpenGL, OpenGL ES, or WebGL are
98//!   available. See the [`gles`] module documentation for details.
99//!
100//! [`gles`]: gles/index.html
101//!
102//! You can see what capabilities an adapter is missing by checking the
103//! [`DownlevelCapabilities`][tdc] in [`ExposedAdapter::capabilities`], available
104//! from [`Instance::enumerate_adapters`].
105//!
106//! The API is generally designed to fit the primary backends better than the
107//! secondary backends, so the latter may impose more overhead.
108//!
109//! [tdc]: wgt::DownlevelCapabilities
110//!
111//! ## Traits
112//!
113//! The `wgpu-hal` crate defines a handful of traits that together
114//! represent a cross-platform abstraction for modern GPU APIs.
115//!
116//! - The [`Api`] trait represents a `wgpu-hal` backend. It has no methods of its
117//!   own, only a collection of associated types.
118//!
119//! - [`Api::Instance`] implements the [`Instance`] trait. [`Instance::init`]
120//!   creates an instance value, which you can use to enumerate the adapters
121//!   available on the system. For example, [`vulkan::Api::Instance::init`][Ii]
122//!   returns an instance that can enumerate the Vulkan physical devices on your
123//!   system.
124//!
125//! - [`Api::Adapter`] implements the [`Adapter`] trait, representing a
126//!   particular device from a particular backend. For example, a Vulkan instance
127//!   might have a Lavapipe software adapter and a GPU-based adapter.
128//!
129//! - [`Api::Device`] implements the [`Device`] trait, representing an active
130//!   link to a device. You get a device value by calling [`Adapter::open`], and
131//!   then use it to create buffers, textures, shader modules, and so on.
132//!
133//! - [`Api::Queue`] implements the [`Queue`] trait, which you use to submit
134//!   command buffers to a given device.
135//!
136//! - [`Api::CommandEncoder`] implements the [`CommandEncoder`] trait, which you
137//!   use to build buffers of commands to submit to a queue. This has all the
138//!   methods for drawing and running compute shaders, which is presumably what
139//!   you're here for.
140//!
141//! - [`Api::Surface`] implements the [`Surface`] trait, which represents a
142//!   swapchain for presenting images on the screen, via interaction with the
143//!   system's window manager.
144//!
145//! The [`Api`] trait has various other associated types like [`Api::Buffer`] and
146//! [`Api::Texture`] that represent resources the rest of the interface can
147//! operate on, but these generally do not have their own traits.
148//!
149//! [Ii]: Instance::init
150//!
151//! ## Validation is the calling code's responsibility, not `wgpu-hal`'s
152//!
153//! As much as possible, `wgpu-hal` traits place the burden of validation,
154//! resource tracking, and state tracking on the caller, not on the trait
155//! implementations themselves. Anything which can reasonably be handled in
156//! backend-independent code should be. A `wgpu_hal` backend's sole obligation is
157//! to provide portable behavior, and report conditions that the calling code
158//! can't reasonably anticipate, like device loss or running out of memory.
159//!
160//! The `wgpu` crate collection is intended for use in security-sensitive
161//! applications, like web browsers, where the API is available to untrusted
162//! code. This means that `wgpu-core`'s validation is not simply a service to
163//! developers, to be provided opportunistically when the performance costs are
164//! acceptable and the necessary data is ready at hand. Rather, `wgpu-core`'s
165//! validation must be exhaustive, to ensure that even malicious content cannot
166//! provoke and exploit undefined behavior in the platform's graphics API.
167//!
168//! Because graphics APIs' requirements are complex, the only practical way for
169//! `wgpu` to provide exhaustive validation is to comprehensively track the
170//! lifetime and state of all the resources in the system. Implementing this
171//! separately for each backend is infeasible; effort would be better spent
172//! making the cross-platform validation in `wgpu-core` legible and trustworthy.
173//! Fortunately, the requirements are largely similar across the various
174//! platforms, so cross-platform validation is practical.
175//!
176//! Some backends have specific requirements that aren't practical to foist off
177//! on the `wgpu-hal` user. For example, properly managing macOS Objective-C or
178//! Microsoft COM reference counts is best handled by using appropriate pointer
179//! types within the backend.
180//!
181//! A desire for "defense in depth" may suggest performing additional validation
182//! in `wgpu-hal` when the opportunity arises, but this must be done with
183//! caution. Even experienced contributors infer the expectations their changes
184//! must meet by considering not just requirements made explicit in types, tests,
185//! assertions, and comments, but also those implicit in the surrounding code.
186//! When one sees validation or state-tracking code in `wgpu-hal`, it is tempting
187//! to conclude, "Oh, `wgpu-hal` checks for this, so `wgpu-core` needn't worry
188//! about it - that would be redundant!" The responsibility for exhaustive
189//! validation always rests with `wgpu-core`, regardless of what may or may not
190//! be checked in `wgpu-hal`.
191//!
192//! To this end, any "defense in depth" validation that does appear in `wgpu-hal`
193//! for requirements that `wgpu-core` should have enforced should report failure
194//! via the `unreachable!` macro, because problems detected at this stage always
195//! indicate a bug in `wgpu-core`.
196//!
197//! ## Debugging
198//!
199//! Most of the information on the wiki [Debugging wgpu Applications][wiki-debug]
200//! page still applies to this API, with the exception of API tracing/replay
201//! functionality, which is only available in `wgpu-core`.
202//!
203//! [wiki-debug]: https://github.com/gfx-rs/wgpu/wiki/Debugging-wgpu-Applications
204
205#![no_std]
206#![cfg_attr(docsrs, feature(doc_cfg))]
207#![allow(
208    // this happens on the GL backend, where it is both thread safe and non-thread safe in the same code.
209    clippy::arc_with_non_send_sync,
210    // We don't use syntax sugar where it's not necessary.
211    clippy::match_like_matches_macro,
212    // Redundant matching is more explicit.
213    clippy::redundant_pattern_matching,
214    // Explicit lifetimes are often easier to reason about.
215    clippy::needless_lifetimes,
216    // No need for defaults in the internal types.
217    clippy::new_without_default,
218    // Matches are good and extendable, no need to make an exception here.
219    clippy::single_match,
220    // Push commands are more regular than macros.
221    clippy::vec_init_then_push,
222    // We unsafe impl `Send` for a reason.
223    clippy::non_send_fields_in_send_ty,
224    // TODO!
225    clippy::missing_safety_doc,
226    // It gets in the way a lot and does not prevent bugs in practice.
227    clippy::pattern_type_mismatch,
228    // We should investigate these.
229    clippy::large_enum_variant
230)]
231#![warn(
232    clippy::alloc_instead_of_core,
233    clippy::ptr_as_ptr,
234    clippy::std_instead_of_alloc,
235    clippy::std_instead_of_core,
236    trivial_casts,
237    trivial_numeric_casts,
238    unsafe_op_in_unsafe_fn,
239    unused_extern_crates,
240    unused_qualifications
241)]
242
243extern crate alloc;
244extern crate wgpu_types as wgt;
245// Each of these backends needs `std` in some fashion; usually `std::thread` functions.
246#[cfg(any(dx12, gles_with_std, metal, vulkan))]
247#[macro_use]
248extern crate std;
249
250/// DirectX12 API internals.
251#[cfg(dx12)]
252pub mod dx12;
253/// GLES API internals.
254#[cfg(gles)]
255pub mod gles;
256/// Metal API internals.
257#[cfg(metal)]
258pub mod metal;
259/// A dummy API implementation.
260// TODO(https://github.com/gfx-rs/wgpu/issues/7120): this should have a cfg
261pub mod noop;
262/// Vulkan API internals.
263#[cfg(vulkan)]
264pub mod vulkan;
265
266pub mod auxil;
267pub mod api {
268    #[cfg(dx12)]
269    pub use super::dx12::Api as Dx12;
270    #[cfg(gles)]
271    pub use super::gles::Api as Gles;
272    #[cfg(metal)]
273    pub use super::metal::Api as Metal;
274    pub use super::noop::Api as Noop;
275    #[cfg(vulkan)]
276    pub use super::vulkan::Api as Vulkan;
277}
278
279mod dynamic;
280#[cfg(feature = "validation_canary")]
281mod validation_canary;
282
283#[cfg(feature = "validation_canary")]
284pub use validation_canary::{ValidationCanary, VALIDATION_CANARY};
285
286pub(crate) use dynamic::impl_dyn_resource;
287pub use dynamic::{
288    DynAccelerationStructure, DynAcquiredSurfaceTexture, DynAdapter, DynBindGroup,
289    DynBindGroupLayout, DynBuffer, DynCommandBuffer, DynCommandEncoder, DynComputePipeline,
290    DynDevice, DynExposedAdapter, DynFence, DynInstance, DynOpenDevice, DynPipelineCache,
291    DynPipelineLayout, DynQuerySet, DynQueue, DynRenderPipeline, DynResource, DynSampler,
292    DynShaderModule, DynSurface, DynSurfaceTexture, DynTexture, DynTextureView,
293};
294
295#[allow(unused)]
296use alloc::boxed::Box;
297use alloc::{borrow::Cow, string::String, vec::Vec};
298use core::{
299    borrow::Borrow,
300    error::Error,
301    fmt,
302    num::{NonZeroU32, NonZeroU64},
303    ops::{Range, RangeInclusive},
304    ptr::NonNull,
305};
306
307use bitflags::bitflags;
308use raw_window_handle::DisplayHandle;
309use thiserror::Error;
310use wgt::WasmNotSendSync;
311
312cfg_if::cfg_if! {
313    if #[cfg(supports_ptr_atomics)] {
314        use alloc::sync::Arc;
315    } else if #[cfg(feature = "portable-atomic")] {
316        use portable_atomic_util::Arc;
317    }
318}
319
320// - Vertex + Fragment
321// - Compute
322// Task + Mesh + Fragment
323pub const MAX_CONCURRENT_SHADER_STAGES: usize = 3;
324pub const MAX_ANISOTROPY: u8 = 16;
325pub const MAX_BIND_GROUPS: usize = 8;
326pub const MAX_VERTEX_BUFFERS: usize = 16;
327pub const MAX_COLOR_ATTACHMENTS: usize = 8;
328pub const MAX_MIP_LEVELS: u32 = 16;
329/// Size of a single occlusion/timestamp query, when copied into a buffer, in bytes.
330/// cbindgen:ignore
331pub const QUERY_SIZE: wgt::BufferAddress = 8;
332
333pub type Label<'a> = Option<&'a str>;
334pub type MemoryRange = Range<wgt::BufferAddress>;
335pub type FenceValue = u64;
336#[cfg(supports_64bit_atomics)]
337pub type AtomicFenceValue = core::sync::atomic::AtomicU64;
338#[cfg(not(supports_64bit_atomics))]
339pub type AtomicFenceValue = portable_atomic::AtomicU64;
340
341/// A callback to signal that wgpu is no longer using a resource.
342#[cfg(any(gles, vulkan))]
343pub type DropCallback = Box<dyn FnOnce() + Send + Sync + 'static>;
344
345#[cfg(any(gles, vulkan))]
346pub struct DropGuard {
347    callback: Option<DropCallback>,
348}
349
350#[cfg(all(any(gles, vulkan), any(native, Emscripten)))]
351impl DropGuard {
352    fn from_option(callback: Option<DropCallback>) -> Option<Self> {
353        callback.map(|callback| Self {
354            callback: Some(callback),
355        })
356    }
357}
358
359#[cfg(any(gles, vulkan))]
360impl Drop for DropGuard {
361    fn drop(&mut self) {
362        if let Some(cb) = self.callback.take() {
363            (cb)();
364        }
365    }
366}
367
368#[cfg(any(gles, vulkan))]
369impl fmt::Debug for DropGuard {
370    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
371        f.debug_struct("DropGuard").finish()
372    }
373}
374
375#[derive(Clone, Debug, PartialEq, Eq, Error)]
376pub enum DeviceError {
377    #[error("Out of memory")]
378    OutOfMemory,
379    #[error("Device is lost")]
380    Lost,
381    #[error("Unexpected error variant (driver implementation is at fault)")]
382    Unexpected,
383}
384
385#[cfg(any(dx12, vulkan))]
386impl From<gpu_allocator::AllocationError> for DeviceError {
387    fn from(result: gpu_allocator::AllocationError) -> Self {
388        match result {
389            gpu_allocator::AllocationError::OutOfMemory => Self::OutOfMemory,
390            gpu_allocator::AllocationError::FailedToMap(e) => {
391                log::error!("gpu-allocator: Failed to map: {e}");
392                Self::Lost
393            }
394            gpu_allocator::AllocationError::NoCompatibleMemoryTypeFound => {
395                log::error!("gpu-allocator: No Compatible Memory Type Found");
396                Self::Lost
397            }
398            gpu_allocator::AllocationError::InvalidAllocationCreateDesc => {
399                log::error!("gpu-allocator: Invalid Allocation Creation Description");
400                Self::Lost
401            }
402            gpu_allocator::AllocationError::InvalidAllocatorCreateDesc(e) => {
403                log::error!("gpu-allocator: Invalid Allocator Creation Description: {e}");
404                Self::Lost
405            }
406
407            gpu_allocator::AllocationError::Internal(e) => {
408                log::error!("gpu-allocator: Internal Error: {e}");
409                Self::Lost
410            }
411            gpu_allocator::AllocationError::BarrierLayoutNeedsDevice10
412            | gpu_allocator::AllocationError::CastableFormatsRequiresEnhancedBarriers
413            | gpu_allocator::AllocationError::CastableFormatsRequiresAtLeastDevice12 => {
414                unreachable!()
415            }
416        }
417    }
418}
419
420// A copy of gpu_allocator::AllocationSizes, allowing to read the configured value for
421// the dx12 backend, we should instead add getters to gpu_allocator::AllocationSizes
422// and remove this type.
423// https://github.com/Traverse-Research/gpu-allocator/issues/295
424#[cfg_attr(not(any(dx12, vulkan)), expect(dead_code))]
425pub(crate) struct AllocationSizes {
426    pub(crate) min_device_memblock_size: u64,
427    pub(crate) max_device_memblock_size: u64,
428    pub(crate) min_host_memblock_size: u64,
429    pub(crate) max_host_memblock_size: u64,
430}
431
432impl AllocationSizes {
433    #[allow(dead_code)] // may be unused on some platforms
434    pub(crate) fn from_memory_hints(memory_hints: &wgt::MemoryHints) -> Self {
435        // TODO: the allocator's configuration should take hardware capability into
436        // account.
437        const MB: u64 = 1024 * 1024;
438
439        match memory_hints {
440            wgt::MemoryHints::Performance => Self {
441                min_device_memblock_size: 128 * MB,
442                max_device_memblock_size: 256 * MB,
443                min_host_memblock_size: 64 * MB,
444                max_host_memblock_size: 128 * MB,
445            },
446            wgt::MemoryHints::MemoryUsage => Self {
447                min_device_memblock_size: 8 * MB,
448                max_device_memblock_size: 64 * MB,
449                min_host_memblock_size: 4 * MB,
450                max_host_memblock_size: 32 * MB,
451            },
452            wgt::MemoryHints::Manual {
453                suballocated_device_memory_block_size,
454            } => {
455                // TODO: https://github.com/gfx-rs/wgpu/issues/8625
456                // Would it be useful to expose the host size in memory hints
457                // instead of always using half of the device size?
458                let device_size = suballocated_device_memory_block_size;
459                let host_size = device_size.start / 2..device_size.end / 2;
460
461                // gpu_allocator clamps the sizes between 4MiB and 256MiB, but we clamp them ourselves since we use
462                // the sizes when detecting high memory pressure and there is no way to query the values otherwise.
463                Self {
464                    min_device_memblock_size: device_size.start.clamp(4 * MB, 256 * MB),
465                    max_device_memblock_size: device_size.end.clamp(4 * MB, 256 * MB),
466                    min_host_memblock_size: host_size.start.clamp(4 * MB, 256 * MB),
467                    max_host_memblock_size: host_size.end.clamp(4 * MB, 256 * MB),
468                }
469            }
470        }
471    }
472}
473
474#[cfg(any(dx12, vulkan))]
475impl From<AllocationSizes> for gpu_allocator::AllocationSizes {
476    fn from(value: AllocationSizes) -> gpu_allocator::AllocationSizes {
477        gpu_allocator::AllocationSizes::new(
478            value.min_device_memblock_size,
479            value.min_host_memblock_size,
480        )
481        .with_max_device_memblock_size(value.max_device_memblock_size)
482        .with_max_host_memblock_size(value.max_host_memblock_size)
483    }
484}
485
486#[allow(dead_code)] // may be unused on some platforms
487#[cold]
488fn hal_usage_error<T: fmt::Display>(txt: T) -> ! {
489    panic!("wgpu-hal invariant was violated (usage error): {txt}")
490}
491
492#[allow(dead_code)] // may be unused on some platforms
493#[cold]
494fn hal_internal_error<T: fmt::Display>(txt: T) -> ! {
495    panic!("wgpu-hal ran into a preventable internal error: {txt}")
496}
497
498#[derive(Clone, Debug, Eq, PartialEq, Error)]
499pub enum ShaderError {
500    #[error("Compilation failed: {0:?}")]
501    Compilation(String),
502    #[error(transparent)]
503    Device(#[from] DeviceError),
504}
505
506#[derive(Clone, Debug, Eq, PartialEq, Error)]
507pub enum PipelineError {
508    #[error("Linkage failed for stage {0:?}: {1}")]
509    Linkage(wgt::ShaderStages, String),
510    #[error("Entry point for stage {0:?} is invalid")]
511    EntryPoint(naga::ShaderStage),
512    #[error(transparent)]
513    Device(#[from] DeviceError),
514    #[error("Pipeline constant error for stage {0:?}: {1}")]
515    PipelineConstants(wgt::ShaderStages, String),
516}
517
518#[derive(Clone, Debug, Eq, PartialEq, Error)]
519pub enum PipelineCacheError {
520    #[error(transparent)]
521    Device(#[from] DeviceError),
522}
523
524#[derive(Clone, Debug, Eq, PartialEq, Error)]
525pub enum SurfaceError {
526    #[error("Surface is lost")]
527    Lost,
528    #[error("Surface is outdated, needs to be re-created")]
529    Outdated,
530    #[error(transparent)]
531    Device(#[from] DeviceError),
532    #[error("Other reason: {0}")]
533    Other(&'static str),
534}
535
536/// Error occurring while trying to create an instance, or create a surface from an instance;
537/// typically relating to the state of the underlying graphics API or hardware.
538#[derive(Clone, Debug, Error)]
539#[error("{message}")]
540pub struct InstanceError {
541    /// These errors are very platform specific, so do not attempt to encode them as an enum.
542    ///
543    /// This message should describe the problem in sufficient detail to be useful for a
544    /// user-to-developer “why won't this work on my machine” bug report, and otherwise follow
545    /// <https://rust-lang.github.io/api-guidelines/interoperability.html#error-types-are-meaningful-and-well-behaved-c-good-err>.
546    message: String,
547
548    /// Underlying error value, if any is available.
549    #[source]
550    source: Option<Arc<dyn Error + Send + Sync + 'static>>,
551}
552
553impl InstanceError {
554    #[allow(dead_code)] // may be unused on some platforms
555    pub(crate) fn new(message: String) -> Self {
556        Self {
557            message,
558            source: None,
559        }
560    }
561    #[allow(dead_code)] // may be unused on some platforms
562    pub(crate) fn with_source(message: String, source: impl Error + Send + Sync + 'static) -> Self {
563        cfg_if::cfg_if! {
564            if #[cfg(supports_ptr_atomics)] {
565                let source = Arc::new(source);
566            } else {
567                // TODO(https://github.com/rust-lang/rust/issues/18598): avoid indirection via Box once arbitrary types support unsized coercion
568                let source: Box<dyn Error + Send + Sync + 'static> = Box::new(source);
569                let source = Arc::from(source);
570            }
571        }
572        Self {
573            message,
574            source: Some(source),
575        }
576    }
577}
578
579/// All the types and methods that make up a implementation on top of a backend.
580///
581/// Only the types that have non-dyn trait bounds have methods on them. Most methods
582/// are either on [`CommandEncoder`] or [`Device`].
583///
584/// The api can either be used through generics (through use of this trait and associated
585/// types) or dynamically through using the `Dyn*` traits.
586pub trait Api: Clone + fmt::Debug + Sized + WasmNotSendSync + 'static {
587    const VARIANT: wgt::Backend;
588
589    type Instance: DynInstance + Instance<A = Self>;
590    type Surface: DynSurface + Surface<A = Self>;
591    type Adapter: DynAdapter + Adapter<A = Self>;
592    type Device: DynDevice + Device<A = Self>;
593
594    type Queue: DynQueue + Queue<A = Self>;
595    type CommandEncoder: DynCommandEncoder + CommandEncoder<A = Self>;
596
597    /// This API's command buffer type.
598    ///
599    /// The only thing you can do with `CommandBuffer`s is build them
600    /// with a [`CommandEncoder`] and then pass them to
601    /// [`Queue::submit`] for execution, or destroy them by passing
602    /// them to [`CommandEncoder::reset_all`].
603    ///
604    /// [`CommandEncoder`]: Api::CommandEncoder
605    type CommandBuffer: DynCommandBuffer;
606
607    type Buffer: DynBuffer;
608    type Texture: DynTexture;
609    type SurfaceTexture: DynSurfaceTexture + Borrow<Self::Texture>;
610    type TextureView: DynTextureView;
611    type Sampler: DynSampler;
612    type QuerySet: DynQuerySet;
613
614    /// A value you can block on to wait for something to finish.
615    ///
616    /// A `Fence` holds a monotonically increasing [`FenceValue`]. You can call
617    /// [`Device::wait`] to block until a fence reaches or passes a value you
618    /// choose. [`Queue::submit`] can take a `Fence` and a [`FenceValue`] to
619    /// store in it when the submitted work is complete.
620    ///
621    /// Attempting to set a fence to a value less than its current value has no
622    /// effect.
623    ///
624    /// Waiting on a fence returns as soon as the fence reaches *or passes* the
625    /// requested value. This implies that, in order to reliably determine when
626    /// an operation has completed, operations must finish in order of
627    /// increasing fence values: if a higher-valued operation were to finish
628    /// before a lower-valued operation, then waiting for the fence to reach the
629    /// lower value could return before the lower-valued operation has actually
630    /// finished.
631    type Fence: DynFence;
632
633    type BindGroupLayout: DynBindGroupLayout;
634    type BindGroup: DynBindGroup;
635    type PipelineLayout: DynPipelineLayout;
636    type ShaderModule: DynShaderModule;
637    type RenderPipeline: DynRenderPipeline;
638    type ComputePipeline: DynComputePipeline;
639    type PipelineCache: DynPipelineCache;
640
641    type AccelerationStructure: DynAccelerationStructure + 'static;
642}
643
644pub trait Instance: Sized + WasmNotSendSync {
645    type A: Api;
646
647    unsafe fn init(desc: &InstanceDescriptor<'_>) -> Result<Self, InstanceError>;
648    unsafe fn create_surface(
649        &self,
650        display_handle: raw_window_handle::RawDisplayHandle,
651        window_handle: raw_window_handle::RawWindowHandle,
652    ) -> Result<<Self::A as Api>::Surface, InstanceError>;
653    /// `surface_hint` is only used by the GLES backend targeting WebGL2
654    unsafe fn enumerate_adapters(
655        &self,
656        surface_hint: Option<&<Self::A as Api>::Surface>,
657    ) -> Vec<ExposedAdapter<Self::A>>;
658}
659
660pub trait Surface: WasmNotSendSync {
661    type A: Api;
662
663    /// Configure `self` to use `device`.
664    ///
665    /// # Safety
666    ///
667    /// - All GPU work using `self` must have been completed.
668    /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
669    /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
670    /// - The surface `self` must not currently be configured to use any other [`Device`].
671    unsafe fn configure(
672        &self,
673        device: &<Self::A as Api>::Device,
674        config: &SurfaceConfiguration,
675    ) -> Result<(), SurfaceError>;
676
677    /// Unconfigure `self` on `device`.
678    ///
679    /// # Safety
680    ///
681    /// - All GPU work that uses `surface` must have been completed.
682    /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
683    /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
684    /// - The surface `self` must have been configured on `device`.
685    unsafe fn unconfigure(&self, device: &<Self::A as Api>::Device);
686
687    /// Return the next texture to be presented by `self`, for the caller to draw on.
688    ///
689    /// On success, return an [`AcquiredSurfaceTexture`] representing the
690    /// texture into which the caller should draw the image to be displayed on
691    /// `self`.
692    ///
693    /// If `timeout` elapses before `self` has a texture ready to be acquired,
694    /// return `Ok(None)`. If `timeout` is `None`, wait indefinitely, with no
695    /// timeout.
696    ///
697    /// # Using an [`AcquiredSurfaceTexture`]
698    ///
699    /// On success, this function returns an [`AcquiredSurfaceTexture`] whose
700    /// [`texture`] field is a [`SurfaceTexture`] from which the caller can
701    /// [`borrow`] a [`Texture`] to draw on. The [`AcquiredSurfaceTexture`] also
702    /// carries some metadata about that [`SurfaceTexture`].
703    ///
704    /// All calls to [`Queue::submit`] that draw on that [`Texture`] must also
705    /// include the [`SurfaceTexture`] in the `surface_textures` argument.
706    ///
707    /// When you are done drawing on the texture, you can display it on `self`
708    /// by passing the [`SurfaceTexture`] and `self` to [`Queue::present`].
709    ///
710    /// If you do not wish to display the texture, you must pass the
711    /// [`SurfaceTexture`] to [`self.discard_texture`], so that it can be reused
712    /// by future acquisitions.
713    ///
714    /// # Portability
715    ///
716    /// Some backends can't support a timeout when acquiring a texture. On these
717    /// backends, `timeout` is ignored.
718    ///
719    /// # Safety
720    ///
721    /// - The surface `self` must currently be configured on some [`Device`].
722    ///
723    /// - The `fence` argument must be the same [`Fence`] passed to all calls to
724    ///   [`Queue::submit`] that used [`Texture`]s acquired from this surface.
725    ///
726    /// - You may only have one texture acquired from `self` at a time. When
727    ///   `acquire_texture` returns `Ok(Some(ast))`, you must pass the returned
728    ///   [`SurfaceTexture`] `ast.texture` to either [`Queue::present`] or
729    ///   [`Surface::discard_texture`] before calling `acquire_texture` again.
730    ///
731    /// [`texture`]: AcquiredSurfaceTexture::texture
732    /// [`SurfaceTexture`]: Api::SurfaceTexture
733    /// [`borrow`]: alloc::borrow::Borrow::borrow
734    /// [`Texture`]: Api::Texture
735    /// [`Fence`]: Api::Fence
736    /// [`self.discard_texture`]: Surface::discard_texture
737    unsafe fn acquire_texture(
738        &self,
739        timeout: Option<core::time::Duration>,
740        fence: &<Self::A as Api>::Fence,
741    ) -> Result<Option<AcquiredSurfaceTexture<Self::A>>, SurfaceError>;
742
743    /// Relinquish an acquired texture without presenting it.
744    ///
745    /// After this call, the texture underlying [`SurfaceTexture`] may be
746    /// returned by subsequent calls to [`self.acquire_texture`].
747    ///
748    /// # Safety
749    ///
750    /// - The surface `self` must currently be configured on some [`Device`].
751    ///
752    /// - `texture` must be a [`SurfaceTexture`] returned by a call to
753    ///   [`self.acquire_texture`] that has not yet been passed to
754    ///   [`Queue::present`].
755    ///
756    /// [`SurfaceTexture`]: Api::SurfaceTexture
757    /// [`self.acquire_texture`]: Surface::acquire_texture
758    unsafe fn discard_texture(&self, texture: <Self::A as Api>::SurfaceTexture);
759}
760
761pub trait Adapter: WasmNotSendSync {
762    type A: Api;
763
764    unsafe fn open(
765        &self,
766        features: wgt::Features,
767        limits: &wgt::Limits,
768        memory_hints: &wgt::MemoryHints,
769    ) -> Result<OpenDevice<Self::A>, DeviceError>;
770
771    /// Return the set of supported capabilities for a texture format.
772    unsafe fn texture_format_capabilities(
773        &self,
774        format: wgt::TextureFormat,
775    ) -> TextureFormatCapabilities;
776
777    /// Returns the capabilities of working with a specified surface.
778    ///
779    /// `None` means presentation is not supported for it.
780    unsafe fn surface_capabilities(
781        &self,
782        surface: &<Self::A as Api>::Surface,
783    ) -> Option<SurfaceCapabilities>;
784
785    /// Creates a [`PresentationTimestamp`] using the adapter's WSI.
786    ///
787    /// [`PresentationTimestamp`]: wgt::PresentationTimestamp
788    unsafe fn get_presentation_timestamp(&self) -> wgt::PresentationTimestamp;
789}
790
791/// A connection to a GPU and a pool of resources to use with it.
792///
793/// A `wgpu-hal` `Device` represents an open connection to a specific graphics
794/// processor, controlled via the backend [`Device::A`]. A `Device` is mostly
795/// used for creating resources. Each `Device` has an associated [`Queue`] used
796/// for command submission.
797///
798/// On Vulkan a `Device` corresponds to a logical device ([`VkDevice`]). Other
799/// backends don't have an exact analog: for example, [`ID3D12Device`]s and
800/// [`MTLDevice`]s are owned by the backends' [`wgpu_hal::Adapter`]
801/// implementations, and shared by all [`wgpu_hal::Device`]s created from that
802/// `Adapter`.
803///
804/// A `Device`'s life cycle is generally:
805///
806/// 1)  Obtain a `Device` and its associated [`Queue`] by calling
807///     [`Adapter::open`].
808///
809///     Alternatively, the backend-specific types that implement [`Adapter`] often
810///     have methods for creating a `wgpu-hal` `Device` from a platform-specific
811///     handle. For example, [`vulkan::Adapter::device_from_raw`] can create a
812///     [`vulkan::Device`] from an [`ash::Device`].
813///
814/// 1)  Create resources to use on the device by calling methods like
815///     [`Device::create_texture`] or [`Device::create_shader_module`].
816///
817/// 1)  Call [`Device::create_command_encoder`] to obtain a [`CommandEncoder`],
818///     which you can use to build [`CommandBuffer`]s holding commands to be
819///     executed on the GPU.
820///
821/// 1)  Call [`Queue::submit`] on the `Device`'s associated [`Queue`] to submit
822///     [`CommandBuffer`]s for execution on the GPU. If needed, call
823///     [`Device::wait`] to wait for them to finish execution.
824///
825/// 1)  Free resources with methods like [`Device::destroy_texture`] or
826///     [`Device::destroy_shader_module`].
827///
828/// 1)  Drop the device.
829///
830/// [`vkDevice`]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VkDevice
831/// [`ID3D12Device`]: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/nn-d3d12-id3d12device
832/// [`MTLDevice`]: https://developer.apple.com/documentation/metal/mtldevice
833/// [`wgpu_hal::Adapter`]: Adapter
834/// [`wgpu_hal::Device`]: Device
835/// [`vulkan::Adapter::device_from_raw`]: vulkan/struct.Adapter.html#method.device_from_raw
836/// [`vulkan::Device`]: vulkan/struct.Device.html
837/// [`ash::Device`]: https://docs.rs/ash/latest/ash/struct.Device.html
838/// [`CommandBuffer`]: Api::CommandBuffer
839///
840/// # Safety
841///
842/// As with other `wgpu-hal` APIs, [validation] is the caller's
843/// responsibility. Here are the general requirements for all `Device`
844/// methods:
845///
846/// - Any resource passed to a `Device` method must have been created by that
847///   `Device`. For example, a [`Texture`] passed to [`Device::destroy_texture`] must
848///   have been created with the `Device` passed as `self`.
849///
850/// - Resources may not be destroyed if they are used by any submitted command
851///   buffers that have not yet finished execution.
852///
853/// [validation]: index.html#validation-is-the-calling-codes-responsibility-not-wgpu-hals
854/// [`Texture`]: Api::Texture
855pub trait Device: WasmNotSendSync {
856    type A: Api;
857
858    /// Creates a new buffer.
859    ///
860    /// The initial usage is `wgt::BufferUses::empty()`.
861    unsafe fn create_buffer(
862        &self,
863        desc: &BufferDescriptor,
864    ) -> Result<<Self::A as Api>::Buffer, DeviceError>;
865
866    /// Free `buffer` and any GPU resources it owns.
867    ///
868    /// Note that backends are allowed to allocate GPU memory for buffers from
869    /// allocation pools, and this call is permitted to simply return `buffer`'s
870    /// storage to that pool, without making it available to other applications.
871    ///
872    /// # Safety
873    ///
874    /// - The given `buffer` must not currently be mapped.
875    unsafe fn destroy_buffer(&self, buffer: <Self::A as Api>::Buffer);
876
877    /// A hook for when a wgpu-core buffer is created from a raw wgpu-hal buffer.
878    unsafe fn add_raw_buffer(&self, buffer: &<Self::A as Api>::Buffer);
879
880    /// Return a pointer to CPU memory mapping the contents of `buffer`.
881    ///
882    /// Buffer mappings are persistent: the buffer may remain mapped on the CPU
883    /// while the GPU reads or writes to it. (Note that `wgpu_core` does not use
884    /// this feature: when a `wgpu_core::Buffer` is unmapped, the underlying
885    /// `wgpu_hal` buffer is also unmapped.)
886    ///
887    /// If this function returns `Ok(mapping)`, then:
888    ///
889    /// - `mapping.ptr` is the CPU address of the start of the mapped memory.
890    ///
891    /// - If `mapping.is_coherent` is `true`, then CPU writes to the mapped
892    ///   memory are immediately visible on the GPU, and vice versa.
893    ///
894    /// # Safety
895    ///
896    /// - The given `buffer` must have been created with the [`MAP_READ`] or
897    ///   [`MAP_WRITE`] flags set in [`BufferDescriptor::usage`].
898    ///
899    /// - The given `range` must fall within the size of `buffer`.
900    ///
901    /// - The caller must avoid data races between the CPU and the GPU. A data
902    ///   race is any pair of accesses to a particular byte, one of which is a
903    ///   write, that are not ordered with respect to each other by some sort of
904    ///   synchronization operation.
905    ///
906    /// - If this function returns `Ok(mapping)` and `mapping.is_coherent` is
907    ///   `false`, then:
908    ///
909    ///   - Every CPU write to a mapped byte followed by a GPU read of that byte
910    ///     must have at least one call to [`Device::flush_mapped_ranges`]
911    ///     covering that byte that occurs between those two accesses.
912    ///
913    ///   - Every GPU write to a mapped byte followed by a CPU read of that byte
914    ///     must have at least one call to [`Device::invalidate_mapped_ranges`]
915    ///     covering that byte that occurs between those two accesses.
916    ///
917    ///   Note that the data race rule above requires that all such access pairs
918    ///   be ordered, so it is meaningful to talk about what must occur
919    ///   "between" them.
920    ///
921    /// - Zero-sized mappings are not allowed.
922    ///
923    /// - The returned [`BufferMapping::ptr`] must not be used after a call to
924    ///   [`Device::unmap_buffer`].
925    ///
926    /// [`MAP_READ`]: wgt::BufferUses::MAP_READ
927    /// [`MAP_WRITE`]: wgt::BufferUses::MAP_WRITE
928    unsafe fn map_buffer(
929        &self,
930        buffer: &<Self::A as Api>::Buffer,
931        range: MemoryRange,
932    ) -> Result<BufferMapping, DeviceError>;
933
934    /// Remove the mapping established by the last call to [`Device::map_buffer`].
935    ///
936    /// # Safety
937    ///
938    /// - The given `buffer` must be currently mapped.
939    unsafe fn unmap_buffer(&self, buffer: &<Self::A as Api>::Buffer);
940
941    /// Indicate that CPU writes to mapped buffer memory should be made visible to the GPU.
942    ///
943    /// # Safety
944    ///
945    /// - The given `buffer` must be currently mapped.
946    ///
947    /// - All ranges produced by `ranges` must fall within `buffer`'s size.
948    unsafe fn flush_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
949    where
950        I: Iterator<Item = MemoryRange>;
951
952    /// Indicate that GPU writes to mapped buffer memory should be made visible to the CPU.
953    ///
954    /// # Safety
955    ///
956    /// - The given `buffer` must be currently mapped.
957    ///
958    /// - All ranges produced by `ranges` must fall within `buffer`'s size.
959    unsafe fn invalidate_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
960    where
961        I: Iterator<Item = MemoryRange>;
962
963    /// Creates a new texture.
964    ///
965    /// The initial usage for all subresources is `wgt::TextureUses::UNINITIALIZED`.
966    unsafe fn create_texture(
967        &self,
968        desc: &TextureDescriptor,
969    ) -> Result<<Self::A as Api>::Texture, DeviceError>;
970    unsafe fn destroy_texture(&self, texture: <Self::A as Api>::Texture);
971
972    /// A hook for when a wgpu-core texture is created from a raw wgpu-hal texture.
973    unsafe fn add_raw_texture(&self, texture: &<Self::A as Api>::Texture);
974
975    unsafe fn create_texture_view(
976        &self,
977        texture: &<Self::A as Api>::Texture,
978        desc: &TextureViewDescriptor,
979    ) -> Result<<Self::A as Api>::TextureView, DeviceError>;
980    unsafe fn destroy_texture_view(&self, view: <Self::A as Api>::TextureView);
981    unsafe fn create_sampler(
982        &self,
983        desc: &SamplerDescriptor,
984    ) -> Result<<Self::A as Api>::Sampler, DeviceError>;
985    unsafe fn destroy_sampler(&self, sampler: <Self::A as Api>::Sampler);
986
987    /// Create a fresh [`CommandEncoder`].
988    ///
989    /// The new `CommandEncoder` is in the "closed" state.
990    unsafe fn create_command_encoder(
991        &self,
992        desc: &CommandEncoderDescriptor<<Self::A as Api>::Queue>,
993    ) -> Result<<Self::A as Api>::CommandEncoder, DeviceError>;
994
995    /// Creates a bind group layout.
996    unsafe fn create_bind_group_layout(
997        &self,
998        desc: &BindGroupLayoutDescriptor,
999    ) -> Result<<Self::A as Api>::BindGroupLayout, DeviceError>;
1000    unsafe fn destroy_bind_group_layout(&self, bg_layout: <Self::A as Api>::BindGroupLayout);
1001    unsafe fn create_pipeline_layout(
1002        &self,
1003        desc: &PipelineLayoutDescriptor<<Self::A as Api>::BindGroupLayout>,
1004    ) -> Result<<Self::A as Api>::PipelineLayout, DeviceError>;
1005    unsafe fn destroy_pipeline_layout(&self, pipeline_layout: <Self::A as Api>::PipelineLayout);
1006
1007    #[allow(clippy::type_complexity)]
1008    unsafe fn create_bind_group(
1009        &self,
1010        desc: &BindGroupDescriptor<
1011            <Self::A as Api>::BindGroupLayout,
1012            <Self::A as Api>::Buffer,
1013            <Self::A as Api>::Sampler,
1014            <Self::A as Api>::TextureView,
1015            <Self::A as Api>::AccelerationStructure,
1016        >,
1017    ) -> Result<<Self::A as Api>::BindGroup, DeviceError>;
1018    unsafe fn destroy_bind_group(&self, group: <Self::A as Api>::BindGroup);
1019
1020    unsafe fn create_shader_module(
1021        &self,
1022        desc: &ShaderModuleDescriptor,
1023        shader: ShaderInput,
1024    ) -> Result<<Self::A as Api>::ShaderModule, ShaderError>;
1025    unsafe fn destroy_shader_module(&self, module: <Self::A as Api>::ShaderModule);
1026
1027    #[allow(clippy::type_complexity)]
1028    unsafe fn create_render_pipeline(
1029        &self,
1030        desc: &RenderPipelineDescriptor<
1031            <Self::A as Api>::PipelineLayout,
1032            <Self::A as Api>::ShaderModule,
1033            <Self::A as Api>::PipelineCache,
1034        >,
1035    ) -> Result<<Self::A as Api>::RenderPipeline, PipelineError>;
1036    unsafe fn destroy_render_pipeline(&self, pipeline: <Self::A as Api>::RenderPipeline);
1037
1038    #[allow(clippy::type_complexity)]
1039    unsafe fn create_compute_pipeline(
1040        &self,
1041        desc: &ComputePipelineDescriptor<
1042            <Self::A as Api>::PipelineLayout,
1043            <Self::A as Api>::ShaderModule,
1044            <Self::A as Api>::PipelineCache,
1045        >,
1046    ) -> Result<<Self::A as Api>::ComputePipeline, PipelineError>;
1047    unsafe fn destroy_compute_pipeline(&self, pipeline: <Self::A as Api>::ComputePipeline);
1048
1049    unsafe fn create_pipeline_cache(
1050        &self,
1051        desc: &PipelineCacheDescriptor<'_>,
1052    ) -> Result<<Self::A as Api>::PipelineCache, PipelineCacheError>;
1053    fn pipeline_cache_validation_key(&self) -> Option<[u8; 16]> {
1054        None
1055    }
1056    unsafe fn destroy_pipeline_cache(&self, cache: <Self::A as Api>::PipelineCache);
1057
1058    unsafe fn create_query_set(
1059        &self,
1060        desc: &wgt::QuerySetDescriptor<Label>,
1061    ) -> Result<<Self::A as Api>::QuerySet, DeviceError>;
1062    unsafe fn destroy_query_set(&self, set: <Self::A as Api>::QuerySet);
1063    unsafe fn create_fence(&self) -> Result<<Self::A as Api>::Fence, DeviceError>;
1064    unsafe fn destroy_fence(&self, fence: <Self::A as Api>::Fence);
1065    unsafe fn get_fence_value(
1066        &self,
1067        fence: &<Self::A as Api>::Fence,
1068    ) -> Result<FenceValue, DeviceError>;
1069
1070    /// Wait for `fence` to reach `value`.
1071    ///
1072    /// Operations like [`Queue::submit`] can accept a [`Fence`] and a
1073    /// [`FenceValue`] to store in it, so you can use this `wait` function
1074    /// to wait for a given queue submission to finish execution.
1075    ///
1076    /// The `value` argument must be a value that some actual operation you have
1077    /// already presented to the device is going to store in `fence`. You cannot
1078    /// wait for values yet to be submitted. (This restriction accommodates
1079    /// implementations like the `vulkan` backend's [`FencePool`] that must
1080    /// allocate a distinct synchronization object for each fence value one is
1081    /// able to wait for.)
1082    ///
1083    /// Calling `wait` with a lower [`FenceValue`] than `fence`'s current value
1084    /// returns immediately.
1085    ///
1086    /// If `timeout` is provided, the function will block indefinitely or until
1087    /// an error is encountered.
1088    ///
1089    /// Returns `Ok(true)` on success and `Ok(false)` on timeout.
1090    ///
1091    /// [`Fence`]: Api::Fence
1092    /// [`FencePool`]: vulkan/enum.Fence.html#variant.FencePool
1093    unsafe fn wait(
1094        &self,
1095        fence: &<Self::A as Api>::Fence,
1096        value: FenceValue,
1097        timeout: Option<core::time::Duration>,
1098    ) -> Result<bool, DeviceError>;
1099
1100    /// Start a graphics debugger capture.
1101    ///
1102    /// # Safety
1103    ///
1104    /// See [`wgpu::Device::start_graphics_debugger_capture`][api] for more details.
1105    ///
1106    /// [api]: ../wgpu/struct.Device.html#method.start_graphics_debugger_capture
1107    unsafe fn start_graphics_debugger_capture(&self) -> bool;
1108
1109    /// Stop a graphics debugger capture.
1110    ///
1111    /// # Safety
1112    ///
1113    /// See [`wgpu::Device::stop_graphics_debugger_capture`][api] for more details.
1114    ///
1115    /// [api]: ../wgpu/struct.Device.html#method.stop_graphics_debugger_capture
1116    unsafe fn stop_graphics_debugger_capture(&self);
1117
1118    #[allow(unused_variables)]
1119    unsafe fn pipeline_cache_get_data(
1120        &self,
1121        cache: &<Self::A as Api>::PipelineCache,
1122    ) -> Option<Vec<u8>> {
1123        None
1124    }
1125
1126    unsafe fn create_acceleration_structure(
1127        &self,
1128        desc: &AccelerationStructureDescriptor,
1129    ) -> Result<<Self::A as Api>::AccelerationStructure, DeviceError>;
1130    unsafe fn get_acceleration_structure_build_sizes(
1131        &self,
1132        desc: &GetAccelerationStructureBuildSizesDescriptor<<Self::A as Api>::Buffer>,
1133    ) -> AccelerationStructureBuildSizes;
1134    unsafe fn get_acceleration_structure_device_address(
1135        &self,
1136        acceleration_structure: &<Self::A as Api>::AccelerationStructure,
1137    ) -> wgt::BufferAddress;
1138    unsafe fn destroy_acceleration_structure(
1139        &self,
1140        acceleration_structure: <Self::A as Api>::AccelerationStructure,
1141    );
1142    fn tlas_instance_to_bytes(&self, instance: TlasInstance) -> Vec<u8>;
1143
1144    fn get_internal_counters(&self) -> wgt::HalCounters;
1145
1146    fn generate_allocator_report(&self) -> Option<wgt::AllocatorReport> {
1147        None
1148    }
1149
1150    fn check_if_oom(&self) -> Result<(), DeviceError>;
1151}
1152
1153pub trait Queue: WasmNotSendSync {
1154    type A: Api;
1155
1156    /// Submit `command_buffers` for execution on GPU.
1157    ///
1158    /// Update `fence` to `value` when the operation is complete. See
1159    /// [`Fence`] for details.
1160    ///
1161    /// All command buffers submitted to a `wgpu_hal` queue are executed in the
1162    /// order they're submitted, with each buffer able to observe the effects of
1163    /// previous buffers' execution. Specifically:
1164    ///
1165    /// - If two calls to `submit` on a single `Queue` occur in a particular
1166    ///   order (that is, they happen on the same thread, or on two threads that
1167    ///   have synchronized to establish an ordering), then the first
1168    ///   submission's commands all complete execution before any of the second
1169    ///   submission's commands begin. All results produced by one submission
1170    ///   are visible to the next.
1171    ///
1172    /// - Within a submission, command buffers execute in the order in which they
1173    ///   appear in `command_buffers`. All results produced by one buffer are
1174    ///   visible to the next.
1175    ///
1176    /// If two calls to `submit` on a single `Queue` from different threads are
1177    /// not synchronized to occur in a particular order, they must pass distinct
1178    /// [`Fence`]s. As explained in the [`Fence`] documentation, waiting for
1179    /// operations to complete is only trustworthy when operations finish in
1180    /// order of increasing fence value, but submissions from different threads
1181    /// cannot determine how to order the fence values if the submissions
1182    /// themselves are unordered. If each thread uses a separate [`Fence`], this
1183    /// problem does not arise.
1184    ///
1185    /// # Safety
1186    ///
1187    /// - Each [`CommandBuffer`][cb] in `command_buffers` must have been created
1188    ///   from a [`CommandEncoder`][ce] that was constructed from the
1189    ///   [`Device`][d] associated with this [`Queue`].
1190    ///
1191    /// - Each [`CommandBuffer`][cb] must remain alive until the submitted
1192    ///   commands have finished execution. Since command buffers must not
1193    ///   outlive their encoders, this implies that the encoders must remain
1194    ///   alive as well.
1195    ///
1196    /// - All resources used by a submitted [`CommandBuffer`][cb]
1197    ///   ([`Texture`][t]s, [`BindGroup`][bg]s, [`RenderPipeline`][rp]s, and so
1198    ///   on) must remain alive until the command buffer finishes execution.
1199    ///
1200    /// - Every [`SurfaceTexture`][st] that any command in `command_buffers`
1201    ///   writes to must appear in the `surface_textures` argument.
1202    ///
1203    /// - No [`SurfaceTexture`][st] may appear in the `surface_textures`
1204    ///   argument more than once.
1205    ///
1206    /// - Each [`SurfaceTexture`][st] in `surface_textures` must be configured
1207    ///   for use with the [`Device`][d] associated with this [`Queue`],
1208    ///   typically by calling [`Surface::configure`].
1209    ///
1210    /// - All calls to this function that include a given [`SurfaceTexture`][st]
1211    ///   in `surface_textures` must use the same [`Fence`].
1212    ///
1213    /// - The [`Fence`] passed as `signal_fence.0` must remain alive until
1214    ///   all submissions that will signal it have completed.
1215    ///
1216    /// [`Fence`]: Api::Fence
1217    /// [cb]: Api::CommandBuffer
1218    /// [ce]: Api::CommandEncoder
1219    /// [d]: Api::Device
1220    /// [t]: Api::Texture
1221    /// [bg]: Api::BindGroup
1222    /// [rp]: Api::RenderPipeline
1223    /// [st]: Api::SurfaceTexture
1224    unsafe fn submit(
1225        &self,
1226        command_buffers: &[&<Self::A as Api>::CommandBuffer],
1227        surface_textures: &[&<Self::A as Api>::SurfaceTexture],
1228        signal_fence: (&mut <Self::A as Api>::Fence, FenceValue),
1229    ) -> Result<(), DeviceError>;
1230    unsafe fn present(
1231        &self,
1232        surface: &<Self::A as Api>::Surface,
1233        texture: <Self::A as Api>::SurfaceTexture,
1234    ) -> Result<(), SurfaceError>;
1235    unsafe fn get_timestamp_period(&self) -> f32;
1236}
1237
1238/// Encoder and allocation pool for `CommandBuffer`s.
1239///
1240/// A `CommandEncoder` not only constructs `CommandBuffer`s but also
1241/// acts as the allocation pool that owns the buffers' underlying
1242/// storage. Thus, `CommandBuffer`s must not outlive the
1243/// `CommandEncoder` that created them.
1244///
1245/// The life cycle of a `CommandBuffer` is as follows:
1246///
1247/// - Call [`Device::create_command_encoder`] to create a new
1248///   `CommandEncoder`, in the "closed" state.
1249///
1250/// - Call `begin_encoding` on a closed `CommandEncoder` to begin
1251///   recording commands. This puts the `CommandEncoder` in the
1252///   "recording" state.
1253///
1254/// - Call methods like `copy_buffer_to_buffer`, `begin_render_pass`,
1255///   etc. on a "recording" `CommandEncoder` to add commands to the
1256///   list. (If an error occurs, you must call `discard_encoding`; see
1257///   below.)
1258///
1259/// - Call `end_encoding` on a recording `CommandEncoder` to close the
1260///   encoder and construct a fresh `CommandBuffer` consisting of the
1261///   list of commands recorded up to that point.
1262///
1263/// - Call `discard_encoding` on a recording `CommandEncoder` to drop
1264///   the commands recorded thus far and close the encoder. This is
1265///   the only safe thing to do on a `CommandEncoder` if an error has
1266///   occurred while recording commands.
1267///
1268/// - Call `reset_all` on a closed `CommandEncoder`, passing all the
1269///   live `CommandBuffers` built from it. All the `CommandBuffer`s
1270///   are destroyed, and their resources are freed.
1271///
1272/// # Safety
1273///
1274/// - The `CommandEncoder` must be in the states described above to
1275///   make the given calls.
1276///
1277/// - A `CommandBuffer` that has been submitted for execution on the
1278///   GPU must live until its execution is complete.
1279///
1280/// - A `CommandBuffer` must not outlive the `CommandEncoder` that
1281///   built it.
1282///
1283/// It is the user's responsibility to meet this requirements. This
1284/// allows `CommandEncoder` implementations to keep their state
1285/// tracking to a minimum.
1286pub trait CommandEncoder: WasmNotSendSync + fmt::Debug {
1287    type A: Api;
1288
1289    /// Begin encoding a new command buffer.
1290    ///
1291    /// This puts this `CommandEncoder` in the "recording" state.
1292    ///
1293    /// # Safety
1294    ///
1295    /// This `CommandEncoder` must be in the "closed" state.
1296    unsafe fn begin_encoding(&mut self, label: Label) -> Result<(), DeviceError>;
1297
1298    /// Discard the command list under construction.
1299    ///
1300    /// If an error has occurred while recording commands, this
1301    /// is the only safe thing to do with the encoder.
1302    ///
1303    /// This puts this `CommandEncoder` in the "closed" state.
1304    ///
1305    /// # Safety
1306    ///
1307    /// This `CommandEncoder` must be in the "recording" state.
1308    ///
1309    /// Callers must not assume that implementations of this
1310    /// function are idempotent, and thus should not call it
1311    /// multiple times in a row.
1312    unsafe fn discard_encoding(&mut self);
1313
1314    /// Return a fresh [`CommandBuffer`] holding the recorded commands.
1315    ///
1316    /// The returned [`CommandBuffer`] holds all the commands recorded
1317    /// on this `CommandEncoder` since the last call to
1318    /// [`begin_encoding`].
1319    ///
1320    /// This puts this `CommandEncoder` in the "closed" state.
1321    ///
1322    /// # Safety
1323    ///
1324    /// This `CommandEncoder` must be in the "recording" state.
1325    ///
1326    /// The returned [`CommandBuffer`] must not outlive this
1327    /// `CommandEncoder`. Implementations are allowed to build
1328    /// `CommandBuffer`s that depend on storage owned by this
1329    /// `CommandEncoder`.
1330    ///
1331    /// [`CommandBuffer`]: Api::CommandBuffer
1332    /// [`begin_encoding`]: CommandEncoder::begin_encoding
1333    unsafe fn end_encoding(&mut self) -> Result<<Self::A as Api>::CommandBuffer, DeviceError>;
1334
1335    /// Reclaim all resources belonging to this `CommandEncoder`.
1336    ///
1337    /// # Safety
1338    ///
1339    /// This `CommandEncoder` must be in the "closed" state.
1340    ///
1341    /// The `command_buffers` iterator must produce all the live
1342    /// [`CommandBuffer`]s built using this `CommandEncoder` --- that
1343    /// is, every extant `CommandBuffer` returned from `end_encoding`.
1344    ///
1345    /// [`CommandBuffer`]: Api::CommandBuffer
1346    unsafe fn reset_all<I>(&mut self, command_buffers: I)
1347    where
1348        I: Iterator<Item = <Self::A as Api>::CommandBuffer>;
1349
1350    unsafe fn transition_buffers<'a, T>(&mut self, barriers: T)
1351    where
1352        T: Iterator<Item = BufferBarrier<'a, <Self::A as Api>::Buffer>>;
1353
1354    unsafe fn transition_textures<'a, T>(&mut self, barriers: T)
1355    where
1356        T: Iterator<Item = TextureBarrier<'a, <Self::A as Api>::Texture>>;
1357
1358    // copy operations
1359
1360    unsafe fn clear_buffer(&mut self, buffer: &<Self::A as Api>::Buffer, range: MemoryRange);
1361
1362    unsafe fn copy_buffer_to_buffer<T>(
1363        &mut self,
1364        src: &<Self::A as Api>::Buffer,
1365        dst: &<Self::A as Api>::Buffer,
1366        regions: T,
1367    ) where
1368        T: Iterator<Item = BufferCopy>;
1369
1370    /// Copy from an external image to an internal texture.
1371    /// Works with a single array layer.
1372    /// Note: `dst` current usage has to be `wgt::TextureUses::COPY_DST`.
1373    /// Note: the copy extent is in physical size (rounded to the block size)
1374    #[cfg(webgl)]
1375    unsafe fn copy_external_image_to_texture<T>(
1376        &mut self,
1377        src: &wgt::CopyExternalImageSourceInfo,
1378        dst: &<Self::A as Api>::Texture,
1379        dst_premultiplication: bool,
1380        regions: T,
1381    ) where
1382        T: Iterator<Item = TextureCopy>;
1383
1384    /// Copy from one texture to another.
1385    /// Works with a single array layer.
1386    /// Note: `dst` current usage has to be `wgt::TextureUses::COPY_DST`.
1387    /// Note: the copy extent is in physical size (rounded to the block size)
1388    unsafe fn copy_texture_to_texture<T>(
1389        &mut self,
1390        src: &<Self::A as Api>::Texture,
1391        src_usage: wgt::TextureUses,
1392        dst: &<Self::A as Api>::Texture,
1393        regions: T,
1394    ) where
1395        T: Iterator<Item = TextureCopy>;
1396
1397    /// Copy from buffer to texture.
1398    /// Works with a single array layer.
1399    /// Note: `dst` current usage has to be `wgt::TextureUses::COPY_DST`.
1400    /// Note: the copy extent is in physical size (rounded to the block size)
1401    unsafe fn copy_buffer_to_texture<T>(
1402        &mut self,
1403        src: &<Self::A as Api>::Buffer,
1404        dst: &<Self::A as Api>::Texture,
1405        regions: T,
1406    ) where
1407        T: Iterator<Item = BufferTextureCopy>;
1408
1409    /// Copy from texture to buffer.
1410    /// Works with a single array layer.
1411    /// Note: the copy extent is in physical size (rounded to the block size)
1412    unsafe fn copy_texture_to_buffer<T>(
1413        &mut self,
1414        src: &<Self::A as Api>::Texture,
1415        src_usage: wgt::TextureUses,
1416        dst: &<Self::A as Api>::Buffer,
1417        regions: T,
1418    ) where
1419        T: Iterator<Item = BufferTextureCopy>;
1420
1421    unsafe fn copy_acceleration_structure_to_acceleration_structure(
1422        &mut self,
1423        src: &<Self::A as Api>::AccelerationStructure,
1424        dst: &<Self::A as Api>::AccelerationStructure,
1425        copy: wgt::AccelerationStructureCopy,
1426    );
1427    // pass common
1428
1429    /// Sets the bind group at `index` to `group`.
1430    ///
1431    /// If this is not the first call to `set_bind_group` within the current
1432    /// render or compute pass:
1433    ///
1434    /// - If `layout` contains `n` bind group layouts, then any previously set
1435    ///   bind groups at indices `n` or higher are cleared.
1436    ///
1437    /// - If the first `m` bind group layouts of `layout` are equal to those of
1438    ///   the previously passed layout, but no more, then any previously set
1439    ///   bind groups at indices `m` or higher are cleared.
1440    ///
1441    /// It follows from the above that passing the same layout as before doesn't
1442    /// clear any bind groups.
1443    ///
1444    /// # Safety
1445    ///
1446    /// - This [`CommandEncoder`] must be within a render or compute pass.
1447    ///
1448    /// - `index` must be the valid index of some bind group layout in `layout`.
1449    ///   Call this the "relevant bind group layout".
1450    ///
1451    /// - The layout of `group` must be equal to the relevant bind group layout.
1452    ///
1453    /// - The length of `dynamic_offsets` must match the number of buffer
1454    ///   bindings [with dynamic offsets][hdo] in the relevant bind group
1455    ///   layout.
1456    ///
1457    /// - If those buffer bindings are ordered by increasing [`binding` number]
1458    ///   and paired with elements from `dynamic_offsets`, then each offset must
1459    ///   be a valid offset for the binding's corresponding buffer in `group`.
1460    ///
1461    /// [hdo]: wgt::BindingType::Buffer::has_dynamic_offset
1462    /// [`binding` number]: wgt::BindGroupLayoutEntry::binding
1463    unsafe fn set_bind_group(
1464        &mut self,
1465        layout: &<Self::A as Api>::PipelineLayout,
1466        index: u32,
1467        group: &<Self::A as Api>::BindGroup,
1468        dynamic_offsets: &[wgt::DynamicOffset],
1469    );
1470
1471    /// Sets a range in immediate data.
1472    ///
1473    /// IMPORTANT: while the data is passed as words, the offset is in bytes!
1474    ///
1475    /// # Safety
1476    ///
1477    /// - `offset_bytes` must be a multiple of 4.
1478    /// - The range of immediates written must be valid for the pipeline layout at draw time.
1479    unsafe fn set_immediates(
1480        &mut self,
1481        layout: &<Self::A as Api>::PipelineLayout,
1482        offset_bytes: u32,
1483        data: &[u32],
1484    );
1485
1486    unsafe fn insert_debug_marker(&mut self, label: &str);
1487    unsafe fn begin_debug_marker(&mut self, group_label: &str);
1488    unsafe fn end_debug_marker(&mut self);
1489
1490    // queries
1491
1492    /// # Safety:
1493    ///
1494    /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1495    unsafe fn begin_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1496    /// # Safety:
1497    ///
1498    /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1499    unsafe fn end_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1500    unsafe fn write_timestamp(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1501    unsafe fn reset_queries(&mut self, set: &<Self::A as Api>::QuerySet, range: Range<u32>);
1502    unsafe fn copy_query_results(
1503        &mut self,
1504        set: &<Self::A as Api>::QuerySet,
1505        range: Range<u32>,
1506        buffer: &<Self::A as Api>::Buffer,
1507        offset: wgt::BufferAddress,
1508        stride: wgt::BufferSize,
1509    );
1510
1511    // render passes
1512
1513    /// Begin a new render pass, clearing all active bindings.
1514    ///
1515    /// This clears any bindings established by the following calls:
1516    ///
1517    /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1518    /// - [`set_immediates`](CommandEncoder::set_immediates)
1519    /// - [`begin_query`](CommandEncoder::begin_query)
1520    /// - [`set_render_pipeline`](CommandEncoder::set_render_pipeline)
1521    /// - [`set_index_buffer`](CommandEncoder::set_index_buffer)
1522    /// - [`set_vertex_buffer`](CommandEncoder::set_vertex_buffer)
1523    ///
1524    /// # Safety
1525    ///
1526    /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1527    ///   by a call to [`end_render_pass`].
1528    ///
1529    /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1530    ///   by a call to [`end_compute_pass`].
1531    ///
1532    /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1533    /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1534    /// [`end_render_pass`]: CommandEncoder::end_render_pass
1535    /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1536    unsafe fn begin_render_pass(
1537        &mut self,
1538        desc: &RenderPassDescriptor<<Self::A as Api>::QuerySet, <Self::A as Api>::TextureView>,
1539    ) -> Result<(), DeviceError>;
1540
1541    /// End the current render pass.
1542    ///
1543    /// # Safety
1544    ///
1545    /// - There must have been a prior call to [`begin_render_pass`] on this [`CommandEncoder`]
1546    ///   that has not been followed by a call to [`end_render_pass`].
1547    ///
1548    /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1549    /// [`end_render_pass`]: CommandEncoder::end_render_pass
1550    unsafe fn end_render_pass(&mut self);
1551
1552    unsafe fn set_render_pipeline(&mut self, pipeline: &<Self::A as Api>::RenderPipeline);
1553
1554    unsafe fn set_index_buffer<'a>(
1555        &mut self,
1556        binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1557        format: wgt::IndexFormat,
1558    );
1559    unsafe fn set_vertex_buffer<'a>(
1560        &mut self,
1561        index: u32,
1562        binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1563    );
1564    unsafe fn set_viewport(&mut self, rect: &Rect<f32>, depth_range: Range<f32>);
1565    unsafe fn set_scissor_rect(&mut self, rect: &Rect<u32>);
1566    unsafe fn set_stencil_reference(&mut self, value: u32);
1567    unsafe fn set_blend_constants(&mut self, color: &[f32; 4]);
1568
1569    unsafe fn draw(
1570        &mut self,
1571        first_vertex: u32,
1572        vertex_count: u32,
1573        first_instance: u32,
1574        instance_count: u32,
1575    );
1576    unsafe fn draw_indexed(
1577        &mut self,
1578        first_index: u32,
1579        index_count: u32,
1580        base_vertex: i32,
1581        first_instance: u32,
1582        instance_count: u32,
1583    );
1584    unsafe fn draw_indirect(
1585        &mut self,
1586        buffer: &<Self::A as Api>::Buffer,
1587        offset: wgt::BufferAddress,
1588        draw_count: u32,
1589    );
1590    unsafe fn draw_indexed_indirect(
1591        &mut self,
1592        buffer: &<Self::A as Api>::Buffer,
1593        offset: wgt::BufferAddress,
1594        draw_count: u32,
1595    );
1596    unsafe fn draw_indirect_count(
1597        &mut self,
1598        buffer: &<Self::A as Api>::Buffer,
1599        offset: wgt::BufferAddress,
1600        count_buffer: &<Self::A as Api>::Buffer,
1601        count_offset: wgt::BufferAddress,
1602        max_count: u32,
1603    );
1604    unsafe fn draw_indexed_indirect_count(
1605        &mut self,
1606        buffer: &<Self::A as Api>::Buffer,
1607        offset: wgt::BufferAddress,
1608        count_buffer: &<Self::A as Api>::Buffer,
1609        count_offset: wgt::BufferAddress,
1610        max_count: u32,
1611    );
1612    unsafe fn draw_mesh_tasks(
1613        &mut self,
1614        group_count_x: u32,
1615        group_count_y: u32,
1616        group_count_z: u32,
1617    );
1618    unsafe fn draw_mesh_tasks_indirect(
1619        &mut self,
1620        buffer: &<Self::A as Api>::Buffer,
1621        offset: wgt::BufferAddress,
1622        draw_count: u32,
1623    );
1624    unsafe fn draw_mesh_tasks_indirect_count(
1625        &mut self,
1626        buffer: &<Self::A as Api>::Buffer,
1627        offset: wgt::BufferAddress,
1628        count_buffer: &<Self::A as Api>::Buffer,
1629        count_offset: wgt::BufferAddress,
1630        max_count: u32,
1631    );
1632
1633    // compute passes
1634
1635    /// Begin a new compute pass, clearing all active bindings.
1636    ///
1637    /// This clears any bindings established by the following calls:
1638    ///
1639    /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1640    /// - [`set_immediates`](CommandEncoder::set_immediates)
1641    /// - [`begin_query`](CommandEncoder::begin_query)
1642    /// - [`set_compute_pipeline`](CommandEncoder::set_compute_pipeline)
1643    ///
1644    /// # Safety
1645    ///
1646    /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1647    ///   by a call to [`end_render_pass`].
1648    ///
1649    /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1650    ///   by a call to [`end_compute_pass`].
1651    ///
1652    /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1653    /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1654    /// [`end_render_pass`]: CommandEncoder::end_render_pass
1655    /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1656    unsafe fn begin_compute_pass(
1657        &mut self,
1658        desc: &ComputePassDescriptor<<Self::A as Api>::QuerySet>,
1659    );
1660
1661    /// End the current compute pass.
1662    ///
1663    /// # Safety
1664    ///
1665    /// - There must have been a prior call to [`begin_compute_pass`] on this [`CommandEncoder`]
1666    ///   that has not been followed by a call to [`end_compute_pass`].
1667    ///
1668    /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1669    /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1670    unsafe fn end_compute_pass(&mut self);
1671
1672    unsafe fn set_compute_pipeline(&mut self, pipeline: &<Self::A as Api>::ComputePipeline);
1673
1674    unsafe fn dispatch(&mut self, count: [u32; 3]);
1675    unsafe fn dispatch_indirect(
1676        &mut self,
1677        buffer: &<Self::A as Api>::Buffer,
1678        offset: wgt::BufferAddress,
1679    );
1680
1681    /// To get the required sizes for the buffer allocations use `get_acceleration_structure_build_sizes` per descriptor
1682    /// All buffers must be synchronized externally
1683    /// All buffer regions, which are written to may only be passed once per function call,
1684    /// with the exception of updates in the same descriptor.
1685    /// Consequences of this limitation:
1686    /// - scratch buffers need to be unique
1687    /// - a tlas can't be build in the same call with a blas it contains
1688    unsafe fn build_acceleration_structures<'a, T>(
1689        &mut self,
1690        descriptor_count: u32,
1691        descriptors: T,
1692    ) where
1693        Self::A: 'a,
1694        T: IntoIterator<
1695            Item = BuildAccelerationStructureDescriptor<
1696                'a,
1697                <Self::A as Api>::Buffer,
1698                <Self::A as Api>::AccelerationStructure,
1699            >,
1700        >;
1701    unsafe fn place_acceleration_structure_barrier(
1702        &mut self,
1703        barrier: AccelerationStructureBarrier,
1704    );
1705    // modeled off dx12, because this is able to be polyfilled in vulkan as opposed to the other way round
1706    unsafe fn read_acceleration_structure_compact_size(
1707        &mut self,
1708        acceleration_structure: &<Self::A as Api>::AccelerationStructure,
1709        buf: &<Self::A as Api>::Buffer,
1710    );
1711    unsafe fn set_acceleration_structure_dependencies(
1712        command_buffers: &[&<Self::A as Api>::CommandBuffer],
1713        dependencies: &[&<Self::A as Api>::AccelerationStructure],
1714    );
1715}
1716
1717bitflags!(
1718    /// Pipeline layout creation flags.
1719    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1720    pub struct PipelineLayoutFlags: u32 {
1721        /// D3D12: Add support for `first_vertex` and `first_instance` builtins
1722        /// via immediates for direct execution.
1723        const FIRST_VERTEX_INSTANCE = 1 << 0;
1724        /// D3D12: Add support for `num_workgroups` builtins via immediates
1725        /// for direct execution.
1726        const NUM_WORK_GROUPS = 1 << 1;
1727        /// D3D12: Add support for the builtins that the other flags enable for
1728        /// indirect execution.
1729        const INDIRECT_BUILTIN_UPDATE = 1 << 2;
1730    }
1731);
1732
1733bitflags!(
1734    /// Pipeline layout creation flags.
1735    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1736    pub struct BindGroupLayoutFlags: u32 {
1737        /// Allows for bind group binding arrays to be shorter than the array in the BGL.
1738        const PARTIALLY_BOUND = 1 << 0;
1739    }
1740);
1741
1742bitflags!(
1743    /// Texture format capability flags.
1744    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1745    pub struct TextureFormatCapabilities: u32 {
1746        /// Format can be sampled.
1747        const SAMPLED = 1 << 0;
1748        /// Format can be sampled with a linear sampler.
1749        const SAMPLED_LINEAR = 1 << 1;
1750        /// Format can be sampled with a min/max reduction sampler.
1751        const SAMPLED_MINMAX = 1 << 2;
1752
1753        /// Format can be used as storage with read-only access.
1754        const STORAGE_READ_ONLY = 1 << 3;
1755        /// Format can be used as storage with write-only access.
1756        const STORAGE_WRITE_ONLY = 1 << 4;
1757        /// Format can be used as storage with both read and write access.
1758        const STORAGE_READ_WRITE = 1 << 5;
1759        /// Format can be used as storage with atomics.
1760        const STORAGE_ATOMIC = 1 << 6;
1761
1762        /// Format can be used as color and input attachment.
1763        const COLOR_ATTACHMENT = 1 << 7;
1764        /// Format can be used as color (with blending) and input attachment.
1765        const COLOR_ATTACHMENT_BLEND = 1 << 8;
1766        /// Format can be used as depth-stencil and input attachment.
1767        const DEPTH_STENCIL_ATTACHMENT = 1 << 9;
1768
1769        /// Format can be multisampled by x2.
1770        const MULTISAMPLE_X2   = 1 << 10;
1771        /// Format can be multisampled by x4.
1772        const MULTISAMPLE_X4   = 1 << 11;
1773        /// Format can be multisampled by x8.
1774        const MULTISAMPLE_X8   = 1 << 12;
1775        /// Format can be multisampled by x16.
1776        const MULTISAMPLE_X16  = 1 << 13;
1777
1778        /// Format can be used for render pass resolve targets.
1779        const MULTISAMPLE_RESOLVE = 1 << 14;
1780
1781        /// Format can be copied from.
1782        const COPY_SRC = 1 << 15;
1783        /// Format can be copied to.
1784        const COPY_DST = 1 << 16;
1785    }
1786);
1787
1788bitflags!(
1789    /// Texture format capability flags.
1790    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1791    pub struct FormatAspects: u8 {
1792        const COLOR = 1 << 0;
1793        const DEPTH = 1 << 1;
1794        const STENCIL = 1 << 2;
1795        const PLANE_0 = 1 << 3;
1796        const PLANE_1 = 1 << 4;
1797        const PLANE_2 = 1 << 5;
1798
1799        const DEPTH_STENCIL = Self::DEPTH.bits() | Self::STENCIL.bits();
1800    }
1801);
1802
1803impl FormatAspects {
1804    pub fn new(format: wgt::TextureFormat, aspect: wgt::TextureAspect) -> Self {
1805        let aspect_mask = match aspect {
1806            wgt::TextureAspect::All => Self::all(),
1807            wgt::TextureAspect::DepthOnly => Self::DEPTH,
1808            wgt::TextureAspect::StencilOnly => Self::STENCIL,
1809            wgt::TextureAspect::Plane0 => Self::PLANE_0,
1810            wgt::TextureAspect::Plane1 => Self::PLANE_1,
1811            wgt::TextureAspect::Plane2 => Self::PLANE_2,
1812        };
1813        Self::from(format) & aspect_mask
1814    }
1815
1816    /// Returns `true` if only one flag is set
1817    pub fn is_one(&self) -> bool {
1818        self.bits().is_power_of_two()
1819    }
1820
1821    pub fn map(&self) -> wgt::TextureAspect {
1822        match *self {
1823            Self::COLOR => wgt::TextureAspect::All,
1824            Self::DEPTH => wgt::TextureAspect::DepthOnly,
1825            Self::STENCIL => wgt::TextureAspect::StencilOnly,
1826            Self::PLANE_0 => wgt::TextureAspect::Plane0,
1827            Self::PLANE_1 => wgt::TextureAspect::Plane1,
1828            Self::PLANE_2 => wgt::TextureAspect::Plane2,
1829            _ => unreachable!(),
1830        }
1831    }
1832}
1833
1834impl From<wgt::TextureFormat> for FormatAspects {
1835    fn from(format: wgt::TextureFormat) -> Self {
1836        match format {
1837            wgt::TextureFormat::Stencil8 => Self::STENCIL,
1838            wgt::TextureFormat::Depth16Unorm
1839            | wgt::TextureFormat::Depth32Float
1840            | wgt::TextureFormat::Depth24Plus => Self::DEPTH,
1841            wgt::TextureFormat::Depth32FloatStencil8 | wgt::TextureFormat::Depth24PlusStencil8 => {
1842                Self::DEPTH_STENCIL
1843            }
1844            wgt::TextureFormat::NV12 | wgt::TextureFormat::P010 => Self::PLANE_0 | Self::PLANE_1,
1845            _ => Self::COLOR,
1846        }
1847    }
1848}
1849
1850bitflags!(
1851    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1852    pub struct MemoryFlags: u32 {
1853        const TRANSIENT = 1 << 0;
1854        const PREFER_COHERENT = 1 << 1;
1855    }
1856);
1857
1858bitflags!(
1859    /// Attachment load and store operations.
1860    ///
1861    /// There must be at least one flag from the LOAD group and one from the STORE group set.
1862    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1863    pub struct AttachmentOps: u8 {
1864        /// Load the existing contents of the attachment.
1865        const LOAD = 1 << 0;
1866        /// Clear the attachment to a specified value.
1867        const LOAD_CLEAR = 1 << 1;
1868        /// The contents of the attachment are undefined.
1869        const LOAD_DONT_CARE = 1 << 2;
1870        /// Store the contents of the attachment.
1871        const STORE = 1 << 3;
1872        /// The contents of the attachment are undefined after the pass.
1873        const STORE_DISCARD = 1 << 4;
1874    }
1875);
1876
1877#[derive(Debug)]
1878pub struct InstanceDescriptor<'a> {
1879    pub name: &'a str,
1880    pub flags: wgt::InstanceFlags,
1881    pub memory_budget_thresholds: wgt::MemoryBudgetThresholds,
1882    pub backend_options: wgt::BackendOptions,
1883    pub telemetry: Option<Telemetry>,
1884    /// This is a borrow because the surrounding `core::Instance` keeps the the owned display handle
1885    /// alive already.
1886    pub display: Option<DisplayHandle<'a>>,
1887}
1888
1889#[derive(Clone, Debug)]
1890pub struct Alignments {
1891    /// The alignment of the start of the buffer used as a GPU copy source.
1892    pub buffer_copy_offset: wgt::BufferSize,
1893
1894    /// The alignment of the row pitch of the texture data stored in a buffer that is
1895    /// used in a GPU copy operation.
1896    pub buffer_copy_pitch: wgt::BufferSize,
1897
1898    /// The finest alignment of bound range checking for uniform buffers.
1899    ///
1900    /// When `wgpu_hal` restricts shader references to the [accessible
1901    /// region][ar] of a [`Uniform`] buffer, the size of the accessible region
1902    /// is the bind group binding's stated [size], rounded up to the next
1903    /// multiple of this value.
1904    ///
1905    /// We don't need an analogous field for storage buffer bindings, because
1906    /// all our backends promise to enforce the size at least to a four-byte
1907    /// alignment, and `wgpu_hal` requires bound range lengths to be a multiple
1908    /// of four anyway.
1909    ///
1910    /// [ar]: struct.BufferBinding.html#accessible-region
1911    /// [`Uniform`]: wgt::BufferBindingType::Uniform
1912    /// [size]: BufferBinding::size
1913    pub uniform_bounds_check_alignment: wgt::BufferSize,
1914
1915    /// The size of the raw TLAS instance
1916    pub raw_tlas_instance_size: usize,
1917
1918    /// What the scratch buffer for building an acceleration structure must be aligned to
1919    pub ray_tracing_scratch_buffer_alignment: u32,
1920}
1921
1922#[derive(Clone, Debug)]
1923pub struct Capabilities {
1924    pub limits: wgt::Limits,
1925    pub alignments: Alignments,
1926    pub downlevel: wgt::DownlevelCapabilities,
1927    /// Supported cooperative matrix configurations.
1928    ///
1929    /// Empty if cooperative matrices are not supported.
1930    pub cooperative_matrix_properties: Vec<wgt::CooperativeMatrixProperties>,
1931}
1932
1933/// An adapter with all the information needed to reason about its capabilities.
1934///
1935/// These are either made by [`Instance::enumerate_adapters`] or by backend specific
1936/// methods on the backend [`Instance`] or [`Adapter`].
1937#[derive(Debug)]
1938pub struct ExposedAdapter<A: Api> {
1939    pub adapter: A::Adapter,
1940    pub info: wgt::AdapterInfo,
1941    pub features: wgt::Features,
1942    pub capabilities: Capabilities,
1943}
1944
1945/// Describes information about what a `Surface`'s presentation capabilities are.
1946/// Fetch this with [Adapter::surface_capabilities].
1947#[derive(Debug, Clone)]
1948pub struct SurfaceCapabilities {
1949    /// List of supported texture formats.
1950    ///
1951    /// Must be at least one.
1952    pub formats: Vec<wgt::TextureFormat>,
1953
1954    /// Range for the number of queued frames.
1955    ///
1956    /// This adjusts either the swapchain frame count to value + 1 - or sets SetMaximumFrameLatency to the value given,
1957    /// or uses a wait-for-present in the acquire method to limit rendering such that it acts like it's a value + 1 swapchain frame set.
1958    ///
1959    /// - `maximum_frame_latency.start` must be at least 1.
1960    /// - `maximum_frame_latency.end` must be larger or equal to `maximum_frame_latency.start`.
1961    pub maximum_frame_latency: RangeInclusive<u32>,
1962
1963    /// Current extent of the surface, if known.
1964    pub current_extent: Option<wgt::Extent3d>,
1965
1966    /// Supported texture usage flags.
1967    ///
1968    /// Must have at least `wgt::TextureUses::COLOR_TARGET`
1969    pub usage: wgt::TextureUses,
1970
1971    /// List of supported V-sync modes.
1972    ///
1973    /// Must be at least one.
1974    pub present_modes: Vec<wgt::PresentMode>,
1975
1976    /// List of supported alpha composition modes.
1977    ///
1978    /// Must be at least one.
1979    pub composite_alpha_modes: Vec<wgt::CompositeAlphaMode>,
1980}
1981
1982#[derive(Debug)]
1983pub struct AcquiredSurfaceTexture<A: Api> {
1984    pub texture: A::SurfaceTexture,
1985    /// The presentation configuration no longer matches
1986    /// the surface properties exactly, but can still be used to present
1987    /// to the surface successfully.
1988    pub suboptimal: bool,
1989}
1990
1991/// An open connection to a device and a queue.
1992///
1993/// This can be created from [`Adapter::open`] or backend
1994/// specific methods on the backend's [`Instance`] or [`Adapter`].
1995#[derive(Debug)]
1996pub struct OpenDevice<A: Api> {
1997    pub device: A::Device,
1998    pub queue: A::Queue,
1999}
2000
2001#[derive(Clone, Debug)]
2002pub struct BufferMapping {
2003    pub ptr: NonNull<u8>,
2004    pub is_coherent: bool,
2005}
2006
2007#[derive(Clone, Debug)]
2008pub struct BufferDescriptor<'a> {
2009    pub label: Label<'a>,
2010    pub size: wgt::BufferAddress,
2011    pub usage: wgt::BufferUses,
2012    pub memory_flags: MemoryFlags,
2013}
2014
2015#[derive(Clone, Debug)]
2016pub struct TextureDescriptor<'a> {
2017    pub label: Label<'a>,
2018    pub size: wgt::Extent3d,
2019    pub mip_level_count: u32,
2020    pub sample_count: u32,
2021    pub dimension: wgt::TextureDimension,
2022    pub format: wgt::TextureFormat,
2023    pub usage: wgt::TextureUses,
2024    pub memory_flags: MemoryFlags,
2025    /// Allows views of this texture to have a different format
2026    /// than the texture does.
2027    pub view_formats: Vec<wgt::TextureFormat>,
2028}
2029
2030impl TextureDescriptor<'_> {
2031    pub fn copy_extent(&self) -> CopyExtent {
2032        CopyExtent::map_extent_to_copy_size(&self.size, self.dimension)
2033    }
2034
2035    pub fn is_cube_compatible(&self) -> bool {
2036        self.dimension == wgt::TextureDimension::D2
2037            && self.size.depth_or_array_layers.is_multiple_of(6)
2038            && self.sample_count == 1
2039            && self.size.width == self.size.height
2040    }
2041
2042    pub fn array_layer_count(&self) -> u32 {
2043        match self.dimension {
2044            wgt::TextureDimension::D1 | wgt::TextureDimension::D3 => 1,
2045            wgt::TextureDimension::D2 => self.size.depth_or_array_layers,
2046        }
2047    }
2048}
2049
2050/// TextureView descriptor.
2051///
2052/// Valid usage:
2053///. - `format` has to be the same as `TextureDescriptor::format`
2054///. - `dimension` has to be compatible with `TextureDescriptor::dimension`
2055///. - `usage` has to be a subset of `TextureDescriptor::usage`
2056///. - `range` has to be a subset of parent texture
2057#[derive(Clone, Debug)]
2058pub struct TextureViewDescriptor<'a> {
2059    pub label: Label<'a>,
2060    pub format: wgt::TextureFormat,
2061    pub dimension: wgt::TextureViewDimension,
2062    pub usage: wgt::TextureUses,
2063    pub range: wgt::ImageSubresourceRange,
2064}
2065
2066#[derive(Clone, Debug)]
2067pub struct SamplerDescriptor<'a> {
2068    pub label: Label<'a>,
2069    pub address_modes: [wgt::AddressMode; 3],
2070    pub mag_filter: wgt::FilterMode,
2071    pub min_filter: wgt::FilterMode,
2072    pub mipmap_filter: wgt::MipmapFilterMode,
2073    pub lod_clamp: Range<f32>,
2074    pub compare: Option<wgt::CompareFunction>,
2075    // Must in the range [1, 16].
2076    //
2077    // Anisotropic filtering must be supported if this is not 1.
2078    pub anisotropy_clamp: u16,
2079    pub border_color: Option<wgt::SamplerBorderColor>,
2080}
2081
2082/// BindGroupLayout descriptor.
2083///
2084/// Valid usage:
2085/// - `entries` are sorted by ascending `wgt::BindGroupLayoutEntry::binding`
2086#[derive(Clone, Debug)]
2087pub struct BindGroupLayoutDescriptor<'a> {
2088    pub label: Label<'a>,
2089    pub flags: BindGroupLayoutFlags,
2090    pub entries: &'a [wgt::BindGroupLayoutEntry],
2091}
2092
2093#[derive(Clone, Debug)]
2094pub struct PipelineLayoutDescriptor<'a, B: DynBindGroupLayout + ?Sized> {
2095    pub label: Label<'a>,
2096    pub flags: PipelineLayoutFlags,
2097    pub bind_group_layouts: &'a [Option<&'a B>],
2098    pub immediate_size: u32,
2099}
2100
2101/// A region of a buffer made visible to shaders via a [`BindGroup`].
2102///
2103/// [`BindGroup`]: Api::BindGroup
2104///
2105/// ## Construction
2106///
2107/// The recommended way to construct a `BufferBinding` is using the `binding`
2108/// method on a wgpu-core `Buffer`, which will validate the binding size
2109/// against the buffer size. A `new_unchecked` constructor is also provided for
2110/// cases where direct construction is necessary.
2111///
2112/// ## Accessible region
2113///
2114/// `wgpu_hal` guarantees that shaders compiled with
2115/// [`ShaderModuleDescriptor::runtime_checks`] set to `true` cannot read or
2116/// write data via this binding outside the *accessible region* of a buffer:
2117///
2118/// - The accessible region starts at [`offset`].
2119///
2120/// - For [`Storage`] bindings, the size of the accessible region is [`size`],
2121///   which must be a multiple of 4.
2122///
2123/// - For [`Uniform`] bindings, the size of the accessible region is [`size`]
2124///   rounded up to the next multiple of
2125///   [`Alignments::uniform_bounds_check_alignment`].
2126///
2127/// Note that this guarantee is stricter than WGSL's requirements for
2128/// [out-of-bounds accesses][woob], as WGSL allows them to return values from
2129/// elsewhere in the buffer. But this guarantee is necessary anyway, to permit
2130/// `wgpu-core` to avoid clearing uninitialized regions of buffers that will
2131/// never be read by the application before they are overwritten. This
2132/// optimization consults bind group buffer binding regions to determine which
2133/// parts of which buffers shaders might observe. This optimization is only
2134/// sound if shader access is bounds-checked.
2135///
2136/// ## Zero-length bindings
2137///
2138/// Some back ends cannot tolerate zero-length regions; for example, see
2139/// [VUID-VkDescriptorBufferInfo-offset-00340][340] and
2140/// [VUID-VkDescriptorBufferInfo-range-00341][341], or the
2141/// documentation for GLES's [glBindBufferRange][bbr]. This documentation
2142/// previously stated that a `BufferBinding` must have `offset` strictly less
2143/// than the size of the buffer, but this restriction was not honored elsewhere
2144/// in the code, so has been removed. However, it remains the case that
2145/// some backends do not support zero-length bindings, so additional
2146/// logic is needed somewhere to handle this properly. See
2147/// [#3170](https://github.com/gfx-rs/wgpu/issues/3170).
2148///
2149/// [`offset`]: BufferBinding::offset
2150/// [`size`]: BufferBinding::size
2151/// [`Storage`]: wgt::BufferBindingType::Storage
2152/// [`Uniform`]: wgt::BufferBindingType::Uniform
2153/// [340]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-offset-00340
2154/// [341]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-range-00341
2155/// [bbr]: https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glBindBufferRange.xhtml
2156/// [woob]: https://gpuweb.github.io/gpuweb/wgsl/#out-of-bounds-access-sec
2157#[derive(Debug)]
2158pub struct BufferBinding<'a, B: DynBuffer + ?Sized> {
2159    /// The buffer being bound.
2160    ///
2161    /// This is not fully `pub` to prevent direct construction of
2162    /// `BufferBinding`s, while still allowing public read access to the `offset`
2163    /// and `size` properties.
2164    pub(crate) buffer: &'a B,
2165
2166    /// The offset at which the bound region starts.
2167    ///
2168    /// This must be less or equal to the size of the buffer.
2169    pub offset: wgt::BufferAddress,
2170
2171    /// The size of the region bound, in bytes.
2172    ///
2173    /// If `None`, the region extends from `offset` to the end of the
2174    /// buffer. Given the restrictions on `offset`, this means that
2175    /// the size is always greater than zero.
2176    pub size: Option<wgt::BufferSize>,
2177}
2178
2179// We must implement this manually because `B` is not necessarily `Clone`.
2180impl<B: DynBuffer + ?Sized> Clone for BufferBinding<'_, B> {
2181    fn clone(&self) -> Self {
2182        BufferBinding {
2183            buffer: self.buffer,
2184            offset: self.offset,
2185            size: self.size,
2186        }
2187    }
2188}
2189
2190/// Temporary convenience trait to let us call `.get()` on `u64`s in code that
2191/// really wants to be using `NonZeroU64`.
2192/// TODO(<https://github.com/gfx-rs/wgpu/issues/3170>): remove this
2193pub trait ShouldBeNonZeroExt {
2194    fn get(&self) -> u64;
2195}
2196
2197impl ShouldBeNonZeroExt for NonZeroU64 {
2198    fn get(&self) -> u64 {
2199        NonZeroU64::get(*self)
2200    }
2201}
2202
2203impl ShouldBeNonZeroExt for u64 {
2204    fn get(&self) -> u64 {
2205        *self
2206    }
2207}
2208
2209impl ShouldBeNonZeroExt for Option<NonZeroU64> {
2210    fn get(&self) -> u64 {
2211        match *self {
2212            Some(non_zero) => non_zero.get(),
2213            None => 0,
2214        }
2215    }
2216}
2217
2218impl<'a, B: DynBuffer + ?Sized> BufferBinding<'a, B> {
2219    /// Construct a `BufferBinding` with the given contents.
2220    ///
2221    /// When possible, use the `binding` method on a wgpu-core `Buffer` instead
2222    /// of this method. `Buffer::binding` validates the size of the binding
2223    /// against the size of the buffer.
2224    ///
2225    /// It is more difficult to provide a validating constructor here, due to
2226    /// not having direct access to the size of a `DynBuffer`.
2227    ///
2228    /// SAFETY: The caller is responsible for ensuring that a binding of `size`
2229    /// bytes starting at `offset` is contained within the buffer.
2230    ///
2231    /// The `S` type parameter is a temporary convenience to allow callers to
2232    /// pass a zero size. When the zero-size binding issue is resolved, the
2233    /// argument should just match the type of the member.
2234    /// TODO(<https://github.com/gfx-rs/wgpu/issues/3170>): remove the parameter
2235    pub fn new_unchecked<S: Into<Option<NonZeroU64>>>(
2236        buffer: &'a B,
2237        offset: wgt::BufferAddress,
2238        size: S,
2239    ) -> Self {
2240        Self {
2241            buffer,
2242            offset,
2243            size: size.into(),
2244        }
2245    }
2246}
2247
2248#[derive(Debug)]
2249pub struct TextureBinding<'a, T: DynTextureView + ?Sized> {
2250    pub view: &'a T,
2251    pub usage: wgt::TextureUses,
2252}
2253
2254impl<'a, T: DynTextureView + ?Sized> Clone for TextureBinding<'a, T> {
2255    fn clone(&self) -> Self {
2256        TextureBinding {
2257            view: self.view,
2258            usage: self.usage,
2259        }
2260    }
2261}
2262
2263#[derive(Debug)]
2264pub struct ExternalTextureBinding<'a, B: DynBuffer + ?Sized, T: DynTextureView + ?Sized> {
2265    pub planes: [TextureBinding<'a, T>; 3],
2266    pub params: BufferBinding<'a, B>,
2267}
2268
2269impl<'a, B: DynBuffer + ?Sized, T: DynTextureView + ?Sized> Clone
2270    for ExternalTextureBinding<'a, B, T>
2271{
2272    fn clone(&self) -> Self {
2273        ExternalTextureBinding {
2274            planes: self.planes.clone(),
2275            params: self.params.clone(),
2276        }
2277    }
2278}
2279
2280/// cbindgen:ignore
2281#[derive(Clone, Debug)]
2282pub struct BindGroupEntry {
2283    pub binding: u32,
2284    pub resource_index: u32,
2285    pub count: u32,
2286}
2287
2288/// BindGroup descriptor.
2289///
2290/// Valid usage:
2291///. - `entries` has to be sorted by ascending `BindGroupEntry::binding`
2292///. - `entries` has to have the same set of `BindGroupEntry::binding` as `layout`
2293///. - each entry has to be compatible with the `layout`
2294///. - each entry's `BindGroupEntry::resource_index` is within range
2295///    of the corresponding resource array, selected by the relevant
2296///    `BindGroupLayoutEntry`.
2297#[derive(Clone, Debug)]
2298pub struct BindGroupDescriptor<
2299    'a,
2300    Bgl: DynBindGroupLayout + ?Sized,
2301    B: DynBuffer + ?Sized,
2302    S: DynSampler + ?Sized,
2303    T: DynTextureView + ?Sized,
2304    A: DynAccelerationStructure + ?Sized,
2305> {
2306    pub label: Label<'a>,
2307    pub layout: &'a Bgl,
2308    pub buffers: &'a [BufferBinding<'a, B>],
2309    pub samplers: &'a [&'a S],
2310    pub textures: &'a [TextureBinding<'a, T>],
2311    pub entries: &'a [BindGroupEntry],
2312    pub acceleration_structures: &'a [&'a A],
2313    pub external_textures: &'a [ExternalTextureBinding<'a, B, T>],
2314}
2315
2316#[derive(Clone, Debug)]
2317pub struct CommandEncoderDescriptor<'a, Q: DynQueue + ?Sized> {
2318    pub label: Label<'a>,
2319    pub queue: &'a Q,
2320}
2321
2322/// Naga shader module.
2323#[derive(Default)]
2324pub struct NagaShader {
2325    /// Shader module IR.
2326    pub module: Cow<'static, naga::Module>,
2327    /// Analysis information of the module.
2328    pub info: naga::valid::ModuleInfo,
2329    /// Source codes for debug
2330    pub debug_source: Option<DebugSource>,
2331}
2332
2333// Custom implementation avoids the need to generate Debug impl code
2334// for the whole Naga module and info.
2335impl fmt::Debug for NagaShader {
2336    fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
2337        write!(formatter, "Naga shader")
2338    }
2339}
2340
2341/// Shader input.
2342pub enum ShaderInput<'a> {
2343    Naga(NagaShader),
2344    MetalLib {
2345        file: &'a [u8],
2346        num_workgroups: (u32, u32, u32),
2347    },
2348    Msl {
2349        shader: &'a str,
2350        num_workgroups: (u32, u32, u32),
2351    },
2352    SpirV(&'a [u32]),
2353    Dxil {
2354        shader: &'a [u8],
2355        num_workgroups: (u32, u32, u32),
2356    },
2357    Hlsl {
2358        shader: &'a str,
2359        num_workgroups: (u32, u32, u32),
2360    },
2361    Glsl {
2362        shader: &'a str,
2363        num_workgroups: (u32, u32, u32),
2364    },
2365}
2366
2367pub struct ShaderModuleDescriptor<'a> {
2368    pub label: Label<'a>,
2369
2370    /// # Safety
2371    ///
2372    /// See the documentation for each flag in [`ShaderRuntimeChecks`][src].
2373    ///
2374    /// [src]: wgt::ShaderRuntimeChecks
2375    pub runtime_checks: wgt::ShaderRuntimeChecks,
2376}
2377
2378#[derive(Debug, Clone)]
2379pub struct DebugSource {
2380    pub file_name: Cow<'static, str>,
2381    pub source_code: Cow<'static, str>,
2382}
2383
2384/// Describes a programmable pipeline stage.
2385#[derive(Debug)]
2386pub struct ProgrammableStage<'a, M: DynShaderModule + ?Sized> {
2387    /// The compiled shader module for this stage.
2388    pub module: &'a M,
2389    /// The name of the entry point in the compiled shader. There must be a function with this name
2390    ///  in the shader.
2391    pub entry_point: &'a str,
2392    /// Pipeline constants
2393    pub constants: &'a naga::back::PipelineConstants,
2394    /// Whether workgroup scoped memory will be initialized with zero values for this stage.
2395    ///
2396    /// This is required by the WebGPU spec, but may have overhead which can be avoided
2397    /// for cross-platform applications
2398    pub zero_initialize_workgroup_memory: bool,
2399}
2400
2401impl<M: DynShaderModule + ?Sized> Clone for ProgrammableStage<'_, M> {
2402    fn clone(&self) -> Self {
2403        Self {
2404            module: self.module,
2405            entry_point: self.entry_point,
2406            constants: self.constants,
2407            zero_initialize_workgroup_memory: self.zero_initialize_workgroup_memory,
2408        }
2409    }
2410}
2411
2412/// Describes a compute pipeline.
2413#[derive(Clone, Debug)]
2414pub struct ComputePipelineDescriptor<
2415    'a,
2416    Pl: DynPipelineLayout + ?Sized,
2417    M: DynShaderModule + ?Sized,
2418    Pc: DynPipelineCache + ?Sized,
2419> {
2420    pub label: Label<'a>,
2421    /// The layout of bind groups for this pipeline.
2422    pub layout: &'a Pl,
2423    /// The compiled compute stage and its entry point.
2424    pub stage: ProgrammableStage<'a, M>,
2425    /// The cache which will be used and filled when compiling this pipeline
2426    pub cache: Option<&'a Pc>,
2427}
2428
2429pub struct PipelineCacheDescriptor<'a> {
2430    pub label: Label<'a>,
2431    pub data: Option<&'a [u8]>,
2432}
2433
2434/// Describes how the vertex buffer is interpreted.
2435#[derive(Clone, Debug)]
2436pub struct VertexBufferLayout<'a> {
2437    /// The stride, in bytes, between elements of this buffer.
2438    pub array_stride: wgt::BufferAddress,
2439    /// How often this vertex buffer is "stepped" forward.
2440    pub step_mode: wgt::VertexStepMode,
2441    /// The list of attributes which comprise a single vertex.
2442    pub attributes: &'a [wgt::VertexAttribute],
2443}
2444
2445#[derive(Clone, Debug)]
2446pub enum VertexProcessor<'a, M: DynShaderModule + ?Sized> {
2447    Standard {
2448        /// The format of any vertex buffers used with this pipeline.
2449        vertex_buffers: &'a [VertexBufferLayout<'a>],
2450        /// The vertex stage for this pipeline.
2451        vertex_stage: ProgrammableStage<'a, M>,
2452    },
2453    Mesh {
2454        task_stage: Option<ProgrammableStage<'a, M>>,
2455        mesh_stage: ProgrammableStage<'a, M>,
2456    },
2457}
2458
2459/// Describes a render (graphics) pipeline.
2460#[derive(Clone, Debug)]
2461pub struct RenderPipelineDescriptor<
2462    'a,
2463    Pl: DynPipelineLayout + ?Sized,
2464    M: DynShaderModule + ?Sized,
2465    Pc: DynPipelineCache + ?Sized,
2466> {
2467    pub label: Label<'a>,
2468    /// The layout of bind groups for this pipeline.
2469    pub layout: &'a Pl,
2470    /// The vertex processing state(vertex shader + buffers or task + mesh shaders)
2471    pub vertex_processor: VertexProcessor<'a, M>,
2472    /// The properties of the pipeline at the primitive assembly and rasterization level.
2473    pub primitive: wgt::PrimitiveState,
2474    /// The effect of draw calls on the depth and stencil aspects of the output target, if any.
2475    pub depth_stencil: Option<wgt::DepthStencilState>,
2476    /// The multi-sampling properties of the pipeline.
2477    pub multisample: wgt::MultisampleState,
2478    /// The fragment stage for this pipeline.
2479    pub fragment_stage: Option<ProgrammableStage<'a, M>>,
2480    /// The effect of draw calls on the color aspect of the output target.
2481    pub color_targets: &'a [Option<wgt::ColorTargetState>],
2482    /// If the pipeline will be used with a multiview render pass, this indicates how many array
2483    /// layers the attachments will have.
2484    pub multiview_mask: Option<NonZeroU32>,
2485    /// The cache which will be used and filled when compiling this pipeline
2486    pub cache: Option<&'a Pc>,
2487}
2488
2489#[derive(Debug, Clone)]
2490pub struct SurfaceConfiguration {
2491    /// Maximum number of queued frames. Must be in
2492    /// `SurfaceCapabilities::maximum_frame_latency` range.
2493    pub maximum_frame_latency: u32,
2494    /// Vertical synchronization mode.
2495    pub present_mode: wgt::PresentMode,
2496    /// Alpha composition mode.
2497    pub composite_alpha_mode: wgt::CompositeAlphaMode,
2498    /// Format of the surface textures.
2499    pub format: wgt::TextureFormat,
2500    /// Requested texture extent. Must be in
2501    /// `SurfaceCapabilities::extents` range.
2502    pub extent: wgt::Extent3d,
2503    /// Allowed usage of surface textures,
2504    pub usage: wgt::TextureUses,
2505    /// Allows views of swapchain texture to have a different format
2506    /// than the texture does.
2507    pub view_formats: Vec<wgt::TextureFormat>,
2508}
2509
2510#[derive(Debug, Clone)]
2511pub struct Rect<T> {
2512    pub x: T,
2513    pub y: T,
2514    pub w: T,
2515    pub h: T,
2516}
2517
2518#[derive(Debug, Clone, PartialEq)]
2519pub struct StateTransition<T> {
2520    pub from: T,
2521    pub to: T,
2522}
2523
2524#[derive(Debug, Clone)]
2525pub struct BufferBarrier<'a, B: DynBuffer + ?Sized> {
2526    pub buffer: &'a B,
2527    pub usage: StateTransition<wgt::BufferUses>,
2528}
2529
2530#[derive(Debug, Clone)]
2531pub struct TextureBarrier<'a, T: DynTexture + ?Sized> {
2532    pub texture: &'a T,
2533    pub range: wgt::ImageSubresourceRange,
2534    pub usage: StateTransition<wgt::TextureUses>,
2535}
2536
2537#[derive(Clone, Copy, Debug)]
2538pub struct BufferCopy {
2539    pub src_offset: wgt::BufferAddress,
2540    pub dst_offset: wgt::BufferAddress,
2541    pub size: wgt::BufferSize,
2542}
2543
2544#[derive(Clone, Debug)]
2545pub struct TextureCopyBase {
2546    pub mip_level: u32,
2547    pub array_layer: u32,
2548    /// Origin within a texture.
2549    /// Note: for 1D and 2D textures, Z must be 0.
2550    pub origin: wgt::Origin3d,
2551    pub aspect: FormatAspects,
2552}
2553
2554#[derive(Clone, Copy, Debug)]
2555pub struct CopyExtent {
2556    pub width: u32,
2557    pub height: u32,
2558    pub depth: u32,
2559}
2560
2561impl From<wgt::Extent3d> for CopyExtent {
2562    fn from(value: wgt::Extent3d) -> Self {
2563        let wgt::Extent3d {
2564            width,
2565            height,
2566            depth_or_array_layers,
2567        } = value;
2568        Self {
2569            width,
2570            height,
2571            depth: depth_or_array_layers,
2572        }
2573    }
2574}
2575
2576impl From<CopyExtent> for wgt::Extent3d {
2577    fn from(value: CopyExtent) -> Self {
2578        let CopyExtent {
2579            width,
2580            height,
2581            depth,
2582        } = value;
2583        Self {
2584            width,
2585            height,
2586            depth_or_array_layers: depth,
2587        }
2588    }
2589}
2590
2591#[derive(Clone, Debug)]
2592pub struct TextureCopy {
2593    pub src_base: TextureCopyBase,
2594    pub dst_base: TextureCopyBase,
2595    pub size: CopyExtent,
2596}
2597
2598#[derive(Clone, Debug)]
2599pub struct BufferTextureCopy {
2600    pub buffer_layout: wgt::TexelCopyBufferLayout,
2601    pub texture_base: TextureCopyBase,
2602    pub size: CopyExtent,
2603}
2604
2605#[derive(Clone, Debug)]
2606pub struct Attachment<'a, T: DynTextureView + ?Sized> {
2607    pub view: &'a T,
2608    /// Contains either a single mutating usage as a target,
2609    /// or a valid combination of read-only usages.
2610    pub usage: wgt::TextureUses,
2611}
2612
2613#[derive(Clone, Debug)]
2614pub struct ColorAttachment<'a, T: DynTextureView + ?Sized> {
2615    pub target: Attachment<'a, T>,
2616    pub depth_slice: Option<u32>,
2617    pub resolve_target: Option<Attachment<'a, T>>,
2618    pub ops: AttachmentOps,
2619    pub clear_value: wgt::Color,
2620}
2621
2622#[derive(Clone, Debug)]
2623pub struct DepthStencilAttachment<'a, T: DynTextureView + ?Sized> {
2624    pub target: Attachment<'a, T>,
2625    pub depth_ops: AttachmentOps,
2626    pub stencil_ops: AttachmentOps,
2627    pub clear_value: (f32, u32),
2628}
2629
2630#[derive(Clone, Debug)]
2631pub struct PassTimestampWrites<'a, Q: DynQuerySet + ?Sized> {
2632    pub query_set: &'a Q,
2633    pub beginning_of_pass_write_index: Option<u32>,
2634    pub end_of_pass_write_index: Option<u32>,
2635}
2636
2637#[derive(Clone, Debug)]
2638pub struct RenderPassDescriptor<'a, Q: DynQuerySet + ?Sized, T: DynTextureView + ?Sized> {
2639    pub label: Label<'a>,
2640    pub extent: wgt::Extent3d,
2641    pub sample_count: u32,
2642    pub color_attachments: &'a [Option<ColorAttachment<'a, T>>],
2643    pub depth_stencil_attachment: Option<DepthStencilAttachment<'a, T>>,
2644    pub multiview_mask: Option<NonZeroU32>,
2645    pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2646    pub occlusion_query_set: Option<&'a Q>,
2647}
2648
2649#[derive(Clone, Debug)]
2650pub struct ComputePassDescriptor<'a, Q: DynQuerySet + ?Sized> {
2651    pub label: Label<'a>,
2652    pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2653}
2654
2655#[test]
2656fn test_default_limits() {
2657    let limits = wgt::Limits::default();
2658    assert!(limits.max_bind_groups <= MAX_BIND_GROUPS as u32);
2659}
2660
2661#[derive(Clone, Debug)]
2662pub struct AccelerationStructureDescriptor<'a> {
2663    pub label: Label<'a>,
2664    pub size: wgt::BufferAddress,
2665    pub format: AccelerationStructureFormat,
2666    pub allow_compaction: bool,
2667}
2668
2669#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2670pub enum AccelerationStructureFormat {
2671    TopLevel,
2672    BottomLevel,
2673}
2674
2675#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2676pub enum AccelerationStructureBuildMode {
2677    Build,
2678    Update,
2679}
2680
2681/// Information of the required size for a corresponding entries struct (+ flags)
2682#[derive(Copy, Clone, Debug, Default, Eq, PartialEq)]
2683pub struct AccelerationStructureBuildSizes {
2684    pub acceleration_structure_size: wgt::BufferAddress,
2685    pub update_scratch_size: wgt::BufferAddress,
2686    pub build_scratch_size: wgt::BufferAddress,
2687}
2688
2689/// Updates use source_acceleration_structure if present, else the update will be performed in place.
2690/// For updates, only the data is allowed to change (not the meta data or sizes).
2691#[derive(Clone, Debug)]
2692pub struct BuildAccelerationStructureDescriptor<
2693    'a,
2694    B: DynBuffer + ?Sized,
2695    A: DynAccelerationStructure + ?Sized,
2696> {
2697    pub entries: &'a AccelerationStructureEntries<'a, B>,
2698    pub mode: AccelerationStructureBuildMode,
2699    pub flags: AccelerationStructureBuildFlags,
2700    pub source_acceleration_structure: Option<&'a A>,
2701    pub destination_acceleration_structure: &'a A,
2702    pub scratch_buffer: &'a B,
2703    pub scratch_buffer_offset: wgt::BufferAddress,
2704}
2705
2706/// - All buffers, buffer addresses and offsets will be ignored.
2707/// - The build mode will be ignored.
2708/// - Reducing the amount of Instances, Triangle groups or AABB groups (or the number of Triangles/AABBs in corresponding groups),
2709///   may result in reduced size requirements.
2710/// - Any other change may result in a bigger or smaller size requirement.
2711#[derive(Clone, Debug)]
2712pub struct GetAccelerationStructureBuildSizesDescriptor<'a, B: DynBuffer + ?Sized> {
2713    pub entries: &'a AccelerationStructureEntries<'a, B>,
2714    pub flags: AccelerationStructureBuildFlags,
2715}
2716
2717/// Entries for a single descriptor
2718/// * `Instances` - Multiple instances for a top level acceleration structure
2719/// * `Triangles` - Multiple triangle meshes for a bottom level acceleration structure
2720/// * `AABBs` - List of list of axis aligned bounding boxes for a bottom level acceleration structure
2721#[derive(Debug)]
2722pub enum AccelerationStructureEntries<'a, B: DynBuffer + ?Sized> {
2723    Instances(AccelerationStructureInstances<'a, B>),
2724    Triangles(Vec<AccelerationStructureTriangles<'a, B>>),
2725    AABBs(Vec<AccelerationStructureAABBs<'a, B>>),
2726}
2727
2728/// * `first_vertex` - offset in the vertex buffer (as number of vertices)
2729/// * `indices` - optional index buffer with attributes
2730/// * `transform` - optional transform
2731#[derive(Clone, Debug)]
2732pub struct AccelerationStructureTriangles<'a, B: DynBuffer + ?Sized> {
2733    pub vertex_buffer: Option<&'a B>,
2734    pub vertex_format: wgt::VertexFormat,
2735    pub first_vertex: u32,
2736    pub vertex_count: u32,
2737    pub vertex_stride: wgt::BufferAddress,
2738    pub indices: Option<AccelerationStructureTriangleIndices<'a, B>>,
2739    pub transform: Option<AccelerationStructureTriangleTransform<'a, B>>,
2740    pub flags: AccelerationStructureGeometryFlags,
2741}
2742
2743/// * `offset` - offset in bytes
2744#[derive(Clone, Debug)]
2745pub struct AccelerationStructureAABBs<'a, B: DynBuffer + ?Sized> {
2746    pub buffer: Option<&'a B>,
2747    pub offset: u32,
2748    pub count: u32,
2749    pub stride: wgt::BufferAddress,
2750    pub flags: AccelerationStructureGeometryFlags,
2751}
2752
2753pub struct AccelerationStructureCopy {
2754    pub copy_flags: wgt::AccelerationStructureCopy,
2755    pub type_flags: wgt::AccelerationStructureType,
2756}
2757
2758/// * `offset` - offset in bytes
2759#[derive(Clone, Debug)]
2760pub struct AccelerationStructureInstances<'a, B: DynBuffer + ?Sized> {
2761    pub buffer: Option<&'a B>,
2762    pub offset: u32,
2763    pub count: u32,
2764}
2765
2766/// * `offset` - offset in bytes
2767#[derive(Clone, Debug)]
2768pub struct AccelerationStructureTriangleIndices<'a, B: DynBuffer + ?Sized> {
2769    pub format: wgt::IndexFormat,
2770    pub buffer: Option<&'a B>,
2771    pub offset: u32,
2772    pub count: u32,
2773}
2774
2775/// * `offset` - offset in bytes
2776#[derive(Clone, Debug)]
2777pub struct AccelerationStructureTriangleTransform<'a, B: DynBuffer + ?Sized> {
2778    pub buffer: &'a B,
2779    pub offset: u32,
2780}
2781
2782pub use wgt::AccelerationStructureFlags as AccelerationStructureBuildFlags;
2783pub use wgt::AccelerationStructureGeometryFlags;
2784
2785bitflags::bitflags! {
2786    #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
2787    pub struct AccelerationStructureUses: u8 {
2788        // For blas used as input for tlas
2789        const BUILD_INPUT = 1 << 0;
2790        // Target for acceleration structure build
2791        const BUILD_OUTPUT = 1 << 1;
2792        // Tlas used in a shader
2793        const SHADER_INPUT = 1 << 2;
2794        // Blas used to query compacted size
2795        const QUERY_INPUT = 1 << 3;
2796        // BLAS used as a src for a copy operation
2797        const COPY_SRC = 1 << 4;
2798        // BLAS used as a dst for a copy operation
2799        const COPY_DST = 1 << 5;
2800    }
2801}
2802
2803#[derive(Debug, Clone)]
2804pub struct AccelerationStructureBarrier {
2805    pub usage: StateTransition<AccelerationStructureUses>,
2806}
2807
2808#[derive(Debug, Copy, Clone)]
2809pub struct TlasInstance {
2810    pub transform: [f32; 12],
2811    pub custom_data: u32,
2812    pub mask: u8,
2813    pub blas_address: u64,
2814}
2815
2816#[cfg(dx12)]
2817pub enum D3D12ExposeAdapterResult {
2818    CreateDeviceError(dx12::CreateDeviceError),
2819    UnknownFeatureLevel(i32),
2820    ResourceBindingTier2Requirement,
2821    ShaderModel6Requirement,
2822    Success(dx12::FeatureLevel, dx12::ShaderModel),
2823}
2824
2825/// Pluggable telemetry, mainly to be used by Firefox.
2826#[derive(Debug, Clone, Copy)]
2827pub struct Telemetry {
2828    #[cfg(dx12)]
2829    pub d3d12_expose_adapter: fn(
2830        desc: &windows::Win32::Graphics::Dxgi::DXGI_ADAPTER_DESC2,
2831        driver_version: Result<[u16; 4], windows_core::HRESULT>,
2832        result: D3D12ExposeAdapterResult,
2833    ),
2834}