wgpu_hal/
lib.rs

1//! A cross-platform unsafe graphics abstraction.
2//!
3//! This crate defines a set of traits abstracting over modern graphics APIs,
4//! with implementations ("backends") for Vulkan, Metal, Direct3D, and GL.
5//!
6//! `wgpu-hal` is a spiritual successor to
7//! [gfx-hal](https://github.com/gfx-rs/gfx), but with reduced scope, and
8//! oriented towards WebGPU implementation goals. It has no overhead for
9//! validation or tracking, and the API translation overhead is kept to the bare
10//! minimum by the design of WebGPU. This API can be used for resource-demanding
11//! applications and engines.
12//!
13//! The `wgpu-hal` crate's main design choices:
14//!
15//! - Our traits are meant to be *portable*: proper use
16//!   should get equivalent results regardless of the backend.
17//!
18//! - Our traits' contracts are *unsafe*: implementations perform minimal
19//!   validation, if any, and incorrect use will often cause undefined behavior.
20//!   This allows us to minimize the overhead we impose over the underlying
21//!   graphics system. If you need safety, the [`wgpu-core`] crate provides a
22//!   safe API for driving `wgpu-hal`, implementing all necessary validation,
23//!   resource state tracking, and so on. (Note that `wgpu-core` is designed for
24//!   use via FFI; the [`wgpu`] crate provides more idiomatic Rust bindings for
25//!   `wgpu-core`.) Or, you can do your own validation.
26//!
27//! - In the same vein, returned errors *only cover cases the user can't
28//!   anticipate*, like running out of memory or losing the device. Any errors
29//!   that the user could reasonably anticipate are their responsibility to
30//!   avoid. For example, `wgpu-hal` returns no error for mapping a buffer that's
31//!   not mappable: as the buffer creator, the user should already know if they
32//!   can map it.
33//!
34//! - We use *static dispatch*. The traits are not
35//!   generally object-safe. You must select a specific backend type
36//!   like [`vulkan::Api`] or [`metal::Api`], and then use that
37//!   according to the main traits, or call backend-specific methods.
38//!
39//! - We use *idiomatic Rust parameter passing*,
40//!   taking objects by reference, returning them by value, and so on,
41//!   unlike `wgpu-core`, which refers to objects by ID.
42//!
43//! - We map buffer contents *persistently*. This means that the buffer can
44//!   remain mapped on the CPU while the GPU reads or writes to it. You must
45//!   explicitly indicate when data might need to be transferred between CPU and
46//!   GPU, if [`Device::map_buffer`] indicates that this is necessary.
47//!
48//! - You must record *explicit barriers* between different usages of a
49//!   resource. For example, if a buffer is written to by a compute
50//!   shader, and then used as and index buffer to a draw call, you
51//!   must use [`CommandEncoder::transition_buffers`] between those two
52//!   operations.
53//!
54//! - Pipeline layouts are *explicitly specified* when setting bind groups.
55//!   Incompatible layouts disturb groups bound at higher indices.
56//!
57//! - The API *accepts collections as iterators*, to avoid forcing the user to
58//!   store data in particular containers. The implementation doesn't guarantee
59//!   that any of the iterators are drained, unless stated otherwise by the
60//!   function documentation. For this reason, we recommend that iterators don't
61//!   do any mutating work.
62//!
63//! Unfortunately, `wgpu-hal`'s safety requirements are not fully documented.
64//! Ideally, all trait methods would have doc comments setting out the
65//! requirements users must meet to ensure correct and portable behavior. If you
66//! are aware of a specific requirement that a backend imposes that is not
67//! ensured by the traits' documented rules, please file an issue. Or, if you are
68//! a capable technical writer, please file a pull request!
69//!
70//! [`wgpu-core`]: https://crates.io/crates/wgpu-core
71//! [`wgpu`]: https://crates.io/crates/wgpu
72//! [`vulkan::Api`]: vulkan/struct.Api.html
73//! [`metal::Api`]: metal/struct.Api.html
74//!
75//! ## Primary backends
76//!
77//! The `wgpu-hal` crate has full-featured backends implemented on the following
78//! platform graphics APIs:
79//!
80//! - Vulkan, available on Linux, Android, and Windows, using the [`ash`] crate's
81//!   Vulkan bindings. It's also available on macOS, if you install [MoltenVK].
82//!
83//! - Metal on macOS, using the [`metal`] crate's bindings.
84//!
85//! - Direct3D 12 on Windows, using the [`windows`] crate's bindings.
86//!
87//! [`ash`]: https://crates.io/crates/ash
88//! [MoltenVK]: https://github.com/KhronosGroup/MoltenVK
89//! [`metal`]: https://crates.io/crates/metal
90//! [`windows`]: https://crates.io/crates/windows
91//!
92//! ## Secondary backends
93//!
94//! The `wgpu-hal` crate has a partial implementation based on the following
95//! platform graphics API:
96//!
97//! - The GL backend is available anywhere OpenGL, OpenGL ES, or WebGL are
98//!   available. See the [`gles`] module documentation for details.
99//!
100//! [`gles`]: gles/index.html
101//!
102//! You can see what capabilities an adapter is missing by checking the
103//! [`DownlevelCapabilities`][tdc] in [`ExposedAdapter::capabilities`], available
104//! from [`Instance::enumerate_adapters`].
105//!
106//! The API is generally designed to fit the primary backends better than the
107//! secondary backends, so the latter may impose more overhead.
108//!
109//! [tdc]: wgt::DownlevelCapabilities
110//!
111//! ## Traits
112//!
113//! The `wgpu-hal` crate defines a handful of traits that together
114//! represent a cross-platform abstraction for modern GPU APIs.
115//!
116//! - The [`Api`] trait represents a `wgpu-hal` backend. It has no methods of its
117//!   own, only a collection of associated types.
118//!
119//! - [`Api::Instance`] implements the [`Instance`] trait. [`Instance::init`]
120//!   creates an instance value, which you can use to enumerate the adapters
121//!   available on the system. For example, [`vulkan::Api::Instance::init`][Ii]
122//!   returns an instance that can enumerate the Vulkan physical devices on your
123//!   system.
124//!
125//! - [`Api::Adapter`] implements the [`Adapter`] trait, representing a
126//!   particular device from a particular backend. For example, a Vulkan instance
127//!   might have a Lavapipe software adapter and a GPU-based adapter.
128//!
129//! - [`Api::Device`] implements the [`Device`] trait, representing an active
130//!   link to a device. You get a device value by calling [`Adapter::open`], and
131//!   then use it to create buffers, textures, shader modules, and so on.
132//!
133//! - [`Api::Queue`] implements the [`Queue`] trait, which you use to submit
134//!   command buffers to a given device.
135//!
136//! - [`Api::CommandEncoder`] implements the [`CommandEncoder`] trait, which you
137//!   use to build buffers of commands to submit to a queue. This has all the
138//!   methods for drawing and running compute shaders, which is presumably what
139//!   you're here for.
140//!
141//! - [`Api::Surface`] implements the [`Surface`] trait, which represents a
142//!   swapchain for presenting images on the screen, via interaction with the
143//!   system's window manager.
144//!
145//! The [`Api`] trait has various other associated types like [`Api::Buffer`] and
146//! [`Api::Texture`] that represent resources the rest of the interface can
147//! operate on, but these generally do not have their own traits.
148//!
149//! [Ii]: Instance::init
150//!
151//! ## Validation is the calling code's responsibility, not `wgpu-hal`'s
152//!
153//! As much as possible, `wgpu-hal` traits place the burden of validation,
154//! resource tracking, and state tracking on the caller, not on the trait
155//! implementations themselves. Anything which can reasonably be handled in
156//! backend-independent code should be. A `wgpu_hal` backend's sole obligation is
157//! to provide portable behavior, and report conditions that the calling code
158//! can't reasonably anticipate, like device loss or running out of memory.
159//!
160//! The `wgpu` crate collection is intended for use in security-sensitive
161//! applications, like web browsers, where the API is available to untrusted
162//! code. This means that `wgpu-core`'s validation is not simply a service to
163//! developers, to be provided opportunistically when the performance costs are
164//! acceptable and the necessary data is ready at hand. Rather, `wgpu-core`'s
165//! validation must be exhaustive, to ensure that even malicious content cannot
166//! provoke and exploit undefined behavior in the platform's graphics API.
167//!
168//! Because graphics APIs' requirements are complex, the only practical way for
169//! `wgpu` to provide exhaustive validation is to comprehensively track the
170//! lifetime and state of all the resources in the system. Implementing this
171//! separately for each backend is infeasible; effort would be better spent
172//! making the cross-platform validation in `wgpu-core` legible and trustworthy.
173//! Fortunately, the requirements are largely similar across the various
174//! platforms, so cross-platform validation is practical.
175//!
176//! Some backends have specific requirements that aren't practical to foist off
177//! on the `wgpu-hal` user. For example, properly managing macOS Objective-C or
178//! Microsoft COM reference counts is best handled by using appropriate pointer
179//! types within the backend.
180//!
181//! A desire for "defense in depth" may suggest performing additional validation
182//! in `wgpu-hal` when the opportunity arises, but this must be done with
183//! caution. Even experienced contributors infer the expectations their changes
184//! must meet by considering not just requirements made explicit in types, tests,
185//! assertions, and comments, but also those implicit in the surrounding code.
186//! When one sees validation or state-tracking code in `wgpu-hal`, it is tempting
187//! to conclude, "Oh, `wgpu-hal` checks for this, so `wgpu-core` needn't worry
188//! about it - that would be redundant!" The responsibility for exhaustive
189//! validation always rests with `wgpu-core`, regardless of what may or may not
190//! be checked in `wgpu-hal`.
191//!
192//! To this end, any "defense in depth" validation that does appear in `wgpu-hal`
193//! for requirements that `wgpu-core` should have enforced should report failure
194//! via the `unreachable!` macro, because problems detected at this stage always
195//! indicate a bug in `wgpu-core`.
196//!
197//! ## Debugging
198//!
199//! Most of the information on the wiki [Debugging wgpu Applications][wiki-debug]
200//! page still applies to this API, with the exception of API tracing/replay
201//! functionality, which is only available in `wgpu-core`.
202//!
203//! [wiki-debug]: https://github.com/gfx-rs/wgpu/wiki/Debugging-wgpu-Applications
204
205#![no_std]
206#![cfg_attr(docsrs, feature(doc_cfg))]
207#![allow(
208    // this happens on the GL backend, where it is both thread safe and non-thread safe in the same code.
209    clippy::arc_with_non_send_sync,
210    // We don't use syntax sugar where it's not necessary.
211    clippy::match_like_matches_macro,
212    // Redundant matching is more explicit.
213    clippy::redundant_pattern_matching,
214    // Explicit lifetimes are often easier to reason about.
215    clippy::needless_lifetimes,
216    // No need for defaults in the internal types.
217    clippy::new_without_default,
218    // Matches are good and extendable, no need to make an exception here.
219    clippy::single_match,
220    // Push commands are more regular than macros.
221    clippy::vec_init_then_push,
222    // We unsafe impl `Send` for a reason.
223    clippy::non_send_fields_in_send_ty,
224    // TODO!
225    clippy::missing_safety_doc,
226    // It gets in the way a lot and does not prevent bugs in practice.
227    clippy::pattern_type_mismatch,
228    // We should investigate these.
229    clippy::large_enum_variant
230)]
231#![warn(
232    clippy::alloc_instead_of_core,
233    clippy::ptr_as_ptr,
234    clippy::std_instead_of_alloc,
235    clippy::std_instead_of_core,
236    trivial_casts,
237    trivial_numeric_casts,
238    unsafe_op_in_unsafe_fn,
239    unused_extern_crates,
240    unused_qualifications
241)]
242
243extern crate alloc;
244extern crate wgpu_types as wgt;
245// Each of these backends needs `std` in some fashion; usually `std::thread` functions.
246#[cfg(any(dx12, gles_with_std, metal, vulkan))]
247#[macro_use]
248extern crate std;
249
250/// DirectX12 API internals.
251#[cfg(dx12)]
252pub mod dx12;
253/// GLES API internals.
254#[cfg(gles)]
255pub mod gles;
256/// Metal API internals.
257#[cfg(metal)]
258pub mod metal;
259/// A dummy API implementation.
260// TODO(https://github.com/gfx-rs/wgpu/issues/7120): this should have a cfg
261pub mod noop;
262/// Vulkan API internals.
263#[cfg(vulkan)]
264pub mod vulkan;
265
266pub mod auxil;
267pub mod api {
268    #[cfg(dx12)]
269    pub use super::dx12::Api as Dx12;
270    #[cfg(gles)]
271    pub use super::gles::Api as Gles;
272    #[cfg(metal)]
273    pub use super::metal::Api as Metal;
274    pub use super::noop::Api as Noop;
275    #[cfg(vulkan)]
276    pub use super::vulkan::Api as Vulkan;
277}
278
279mod dynamic;
280#[cfg(feature = "validation_canary")]
281mod validation_canary;
282
283#[cfg(feature = "validation_canary")]
284pub use validation_canary::{ValidationCanary, VALIDATION_CANARY};
285
286pub(crate) use dynamic::impl_dyn_resource;
287pub use dynamic::{
288    DynAccelerationStructure, DynAcquiredSurfaceTexture, DynAdapter, DynBindGroup,
289    DynBindGroupLayout, DynBuffer, DynCommandBuffer, DynCommandEncoder, DynComputePipeline,
290    DynDevice, DynExposedAdapter, DynFence, DynInstance, DynOpenDevice, DynPipelineCache,
291    DynPipelineLayout, DynQuerySet, DynQueue, DynRenderPipeline, DynResource, DynSampler,
292    DynShaderModule, DynSurface, DynSurfaceTexture, DynTexture, DynTextureView,
293};
294
295#[allow(unused)]
296use alloc::boxed::Box;
297use alloc::{borrow::Cow, string::String, vec::Vec};
298use core::{
299    borrow::Borrow,
300    error::Error,
301    fmt,
302    num::{NonZeroU32, NonZeroU64},
303    ops::{Range, RangeInclusive},
304    ptr::NonNull,
305};
306
307use bitflags::bitflags;
308use thiserror::Error;
309use wgt::WasmNotSendSync;
310
311cfg_if::cfg_if! {
312    if #[cfg(supports_ptr_atomics)] {
313        use alloc::sync::Arc;
314    } else if #[cfg(feature = "portable-atomic")] {
315        use portable_atomic_util::Arc;
316    }
317}
318
319// - Vertex + Fragment
320// - Compute
321// Task + Mesh + Fragment
322pub const MAX_CONCURRENT_SHADER_STAGES: usize = 3;
323pub const MAX_ANISOTROPY: u8 = 16;
324pub const MAX_BIND_GROUPS: usize = 8;
325pub const MAX_VERTEX_BUFFERS: usize = 16;
326pub const MAX_COLOR_ATTACHMENTS: usize = 8;
327pub const MAX_MIP_LEVELS: u32 = 16;
328/// Size of a single occlusion/timestamp query, when copied into a buffer, in bytes.
329/// cbindgen:ignore
330pub const QUERY_SIZE: wgt::BufferAddress = 8;
331
332pub type Label<'a> = Option<&'a str>;
333pub type MemoryRange = Range<wgt::BufferAddress>;
334pub type FenceValue = u64;
335#[cfg(supports_64bit_atomics)]
336pub type AtomicFenceValue = core::sync::atomic::AtomicU64;
337#[cfg(not(supports_64bit_atomics))]
338pub type AtomicFenceValue = portable_atomic::AtomicU64;
339
340/// A callback to signal that wgpu is no longer using a resource.
341#[cfg(any(gles, vulkan))]
342pub type DropCallback = Box<dyn FnOnce() + Send + Sync + 'static>;
343
344#[cfg(any(gles, vulkan))]
345pub struct DropGuard {
346    callback: Option<DropCallback>,
347}
348
349#[cfg(all(any(gles, vulkan), any(native, Emscripten)))]
350impl DropGuard {
351    fn from_option(callback: Option<DropCallback>) -> Option<Self> {
352        callback.map(|callback| Self {
353            callback: Some(callback),
354        })
355    }
356}
357
358#[cfg(any(gles, vulkan))]
359impl Drop for DropGuard {
360    fn drop(&mut self) {
361        if let Some(cb) = self.callback.take() {
362            (cb)();
363        }
364    }
365}
366
367#[cfg(any(gles, vulkan))]
368impl fmt::Debug for DropGuard {
369    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
370        f.debug_struct("DropGuard").finish()
371    }
372}
373
374#[derive(Clone, Debug, PartialEq, Eq, Error)]
375pub enum DeviceError {
376    #[error("Out of memory")]
377    OutOfMemory,
378    #[error("Device is lost")]
379    Lost,
380    #[error("Unexpected error variant (driver implementation is at fault)")]
381    Unexpected,
382}
383
384#[cfg(any(dx12, vulkan))]
385impl From<gpu_allocator::AllocationError> for DeviceError {
386    fn from(result: gpu_allocator::AllocationError) -> Self {
387        match result {
388            gpu_allocator::AllocationError::OutOfMemory => Self::OutOfMemory,
389            gpu_allocator::AllocationError::FailedToMap(e) => {
390                log::error!("gpu-allocator: Failed to map: {e}");
391                Self::Lost
392            }
393            gpu_allocator::AllocationError::NoCompatibleMemoryTypeFound => {
394                log::error!("gpu-allocator: No Compatible Memory Type Found");
395                Self::Lost
396            }
397            gpu_allocator::AllocationError::InvalidAllocationCreateDesc => {
398                log::error!("gpu-allocator: Invalid Allocation Creation Description");
399                Self::Lost
400            }
401            gpu_allocator::AllocationError::InvalidAllocatorCreateDesc(e) => {
402                log::error!("gpu-allocator: Invalid Allocator Creation Description: {e}");
403                Self::Lost
404            }
405
406            gpu_allocator::AllocationError::Internal(e) => {
407                log::error!("gpu-allocator: Internal Error: {e}");
408                Self::Lost
409            }
410            gpu_allocator::AllocationError::BarrierLayoutNeedsDevice10
411            | gpu_allocator::AllocationError::CastableFormatsRequiresEnhancedBarriers
412            | gpu_allocator::AllocationError::CastableFormatsRequiresAtLeastDevice12 => {
413                unreachable!()
414            }
415        }
416    }
417}
418
419// A copy of gpu_allocator::AllocationSizes, allowing to read the configured value for
420// the dx12 backend, we should instead add getters to gpu_allocator::AllocationSizes
421// and remove this type.
422// https://github.com/Traverse-Research/gpu-allocator/issues/295
423#[cfg_attr(not(any(dx12, vulkan)), expect(dead_code))]
424pub(crate) struct AllocationSizes {
425    pub(crate) min_device_memblock_size: u64,
426    pub(crate) max_device_memblock_size: u64,
427    pub(crate) min_host_memblock_size: u64,
428    pub(crate) max_host_memblock_size: u64,
429}
430
431impl AllocationSizes {
432    #[allow(dead_code)] // may be unused on some platforms
433    pub(crate) fn from_memory_hints(memory_hints: &wgt::MemoryHints) -> Self {
434        // TODO: the allocator's configuration should take hardware capability into
435        // account.
436        const MB: u64 = 1024 * 1024;
437
438        match memory_hints {
439            wgt::MemoryHints::Performance => Self {
440                min_device_memblock_size: 128 * MB,
441                max_device_memblock_size: 256 * MB,
442                min_host_memblock_size: 64 * MB,
443                max_host_memblock_size: 128 * MB,
444            },
445            wgt::MemoryHints::MemoryUsage => Self {
446                min_device_memblock_size: 8 * MB,
447                max_device_memblock_size: 64 * MB,
448                min_host_memblock_size: 4 * MB,
449                max_host_memblock_size: 32 * MB,
450            },
451            wgt::MemoryHints::Manual {
452                suballocated_device_memory_block_size,
453            } => {
454                // TODO: https://github.com/gfx-rs/wgpu/issues/8625
455                // Would it be useful to expose the host size in memory hints
456                // instead of always using half of the device size?
457                let device_size = suballocated_device_memory_block_size;
458                let host_size = device_size.start / 2..device_size.end / 2;
459
460                // gpu_allocator clamps the sizes between 4MiB and 256MiB, but we clamp them ourselves since we use
461                // the sizes when detecting high memory pressure and there is no way to query the values otherwise.
462                Self {
463                    min_device_memblock_size: device_size.start.clamp(4 * MB, 256 * MB),
464                    max_device_memblock_size: device_size.end.clamp(4 * MB, 256 * MB),
465                    min_host_memblock_size: host_size.start.clamp(4 * MB, 256 * MB),
466                    max_host_memblock_size: host_size.end.clamp(4 * MB, 256 * MB),
467                }
468            }
469        }
470    }
471}
472
473#[cfg(any(dx12, vulkan))]
474impl From<AllocationSizes> for gpu_allocator::AllocationSizes {
475    fn from(value: AllocationSizes) -> gpu_allocator::AllocationSizes {
476        gpu_allocator::AllocationSizes::new(
477            value.min_device_memblock_size,
478            value.min_host_memblock_size,
479        )
480        .with_max_device_memblock_size(value.max_device_memblock_size)
481        .with_max_host_memblock_size(value.max_host_memblock_size)
482    }
483}
484
485#[allow(dead_code)] // may be unused on some platforms
486#[cold]
487fn hal_usage_error<T: fmt::Display>(txt: T) -> ! {
488    panic!("wgpu-hal invariant was violated (usage error): {txt}")
489}
490
491#[allow(dead_code)] // may be unused on some platforms
492#[cold]
493fn hal_internal_error<T: fmt::Display>(txt: T) -> ! {
494    panic!("wgpu-hal ran into a preventable internal error: {txt}")
495}
496
497#[derive(Clone, Debug, Eq, PartialEq, Error)]
498pub enum ShaderError {
499    #[error("Compilation failed: {0:?}")]
500    Compilation(String),
501    #[error(transparent)]
502    Device(#[from] DeviceError),
503}
504
505#[derive(Clone, Debug, Eq, PartialEq, Error)]
506pub enum PipelineError {
507    #[error("Linkage failed for stage {0:?}: {1}")]
508    Linkage(wgt::ShaderStages, String),
509    #[error("Entry point for stage {0:?} is invalid")]
510    EntryPoint(naga::ShaderStage),
511    #[error(transparent)]
512    Device(#[from] DeviceError),
513    #[error("Pipeline constant error for stage {0:?}: {1}")]
514    PipelineConstants(wgt::ShaderStages, String),
515}
516
517#[derive(Clone, Debug, Eq, PartialEq, Error)]
518pub enum PipelineCacheError {
519    #[error(transparent)]
520    Device(#[from] DeviceError),
521}
522
523#[derive(Clone, Debug, Eq, PartialEq, Error)]
524pub enum SurfaceError {
525    #[error("Surface is lost")]
526    Lost,
527    #[error("Surface is outdated, needs to be re-created")]
528    Outdated,
529    #[error(transparent)]
530    Device(#[from] DeviceError),
531    #[error("Other reason: {0}")]
532    Other(&'static str),
533}
534
535/// Error occurring while trying to create an instance, or create a surface from an instance;
536/// typically relating to the state of the underlying graphics API or hardware.
537#[derive(Clone, Debug, Error)]
538#[error("{message}")]
539pub struct InstanceError {
540    /// These errors are very platform specific, so do not attempt to encode them as an enum.
541    ///
542    /// This message should describe the problem in sufficient detail to be useful for a
543    /// user-to-developer “why won't this work on my machine” bug report, and otherwise follow
544    /// <https://rust-lang.github.io/api-guidelines/interoperability.html#error-types-are-meaningful-and-well-behaved-c-good-err>.
545    message: String,
546
547    /// Underlying error value, if any is available.
548    #[source]
549    source: Option<Arc<dyn Error + Send + Sync + 'static>>,
550}
551
552impl InstanceError {
553    #[allow(dead_code)] // may be unused on some platforms
554    pub(crate) fn new(message: String) -> Self {
555        Self {
556            message,
557            source: None,
558        }
559    }
560    #[allow(dead_code)] // may be unused on some platforms
561    pub(crate) fn with_source(message: String, source: impl Error + Send + Sync + 'static) -> Self {
562        cfg_if::cfg_if! {
563            if #[cfg(supports_ptr_atomics)] {
564                let source = Arc::new(source);
565            } else {
566                // TODO(https://github.com/rust-lang/rust/issues/18598): avoid indirection via Box once arbitrary types support unsized coercion
567                let source: Box<dyn Error + Send + Sync + 'static> = Box::new(source);
568                let source = Arc::from(source);
569            }
570        }
571        Self {
572            message,
573            source: Some(source),
574        }
575    }
576}
577
578/// All the types and methods that make up a implementation on top of a backend.
579///
580/// Only the types that have non-dyn trait bounds have methods on them. Most methods
581/// are either on [`CommandEncoder`] or [`Device`].
582///
583/// The api can either be used through generics (through use of this trait and associated
584/// types) or dynamically through using the `Dyn*` traits.
585pub trait Api: Clone + fmt::Debug + Sized + WasmNotSendSync + 'static {
586    const VARIANT: wgt::Backend;
587
588    type Instance: DynInstance + Instance<A = Self>;
589    type Surface: DynSurface + Surface<A = Self>;
590    type Adapter: DynAdapter + Adapter<A = Self>;
591    type Device: DynDevice + Device<A = Self>;
592
593    type Queue: DynQueue + Queue<A = Self>;
594    type CommandEncoder: DynCommandEncoder + CommandEncoder<A = Self>;
595
596    /// This API's command buffer type.
597    ///
598    /// The only thing you can do with `CommandBuffer`s is build them
599    /// with a [`CommandEncoder`] and then pass them to
600    /// [`Queue::submit`] for execution, or destroy them by passing
601    /// them to [`CommandEncoder::reset_all`].
602    ///
603    /// [`CommandEncoder`]: Api::CommandEncoder
604    type CommandBuffer: DynCommandBuffer;
605
606    type Buffer: DynBuffer;
607    type Texture: DynTexture;
608    type SurfaceTexture: DynSurfaceTexture + Borrow<Self::Texture>;
609    type TextureView: DynTextureView;
610    type Sampler: DynSampler;
611    type QuerySet: DynQuerySet;
612
613    /// A value you can block on to wait for something to finish.
614    ///
615    /// A `Fence` holds a monotonically increasing [`FenceValue`]. You can call
616    /// [`Device::wait`] to block until a fence reaches or passes a value you
617    /// choose. [`Queue::submit`] can take a `Fence` and a [`FenceValue`] to
618    /// store in it when the submitted work is complete.
619    ///
620    /// Attempting to set a fence to a value less than its current value has no
621    /// effect.
622    ///
623    /// Waiting on a fence returns as soon as the fence reaches *or passes* the
624    /// requested value. This implies that, in order to reliably determine when
625    /// an operation has completed, operations must finish in order of
626    /// increasing fence values: if a higher-valued operation were to finish
627    /// before a lower-valued operation, then waiting for the fence to reach the
628    /// lower value could return before the lower-valued operation has actually
629    /// finished.
630    type Fence: DynFence;
631
632    type BindGroupLayout: DynBindGroupLayout;
633    type BindGroup: DynBindGroup;
634    type PipelineLayout: DynPipelineLayout;
635    type ShaderModule: DynShaderModule;
636    type RenderPipeline: DynRenderPipeline;
637    type ComputePipeline: DynComputePipeline;
638    type PipelineCache: DynPipelineCache;
639
640    type AccelerationStructure: DynAccelerationStructure + 'static;
641}
642
643pub trait Instance: Sized + WasmNotSendSync {
644    type A: Api;
645
646    unsafe fn init(desc: &InstanceDescriptor) -> Result<Self, InstanceError>;
647    unsafe fn create_surface(
648        &self,
649        display_handle: raw_window_handle::RawDisplayHandle,
650        window_handle: raw_window_handle::RawWindowHandle,
651    ) -> Result<<Self::A as Api>::Surface, InstanceError>;
652    /// `surface_hint` is only used by the GLES backend targeting WebGL2
653    unsafe fn enumerate_adapters(
654        &self,
655        surface_hint: Option<&<Self::A as Api>::Surface>,
656    ) -> Vec<ExposedAdapter<Self::A>>;
657}
658
659pub trait Surface: WasmNotSendSync {
660    type A: Api;
661
662    /// Configure `self` to use `device`.
663    ///
664    /// # Safety
665    ///
666    /// - All GPU work using `self` must have been completed.
667    /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
668    /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
669    /// - The surface `self` must not currently be configured to use any other [`Device`].
670    unsafe fn configure(
671        &self,
672        device: &<Self::A as Api>::Device,
673        config: &SurfaceConfiguration,
674    ) -> Result<(), SurfaceError>;
675
676    /// Unconfigure `self` on `device`.
677    ///
678    /// # Safety
679    ///
680    /// - All GPU work that uses `surface` must have been completed.
681    /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
682    /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
683    /// - The surface `self` must have been configured on `device`.
684    unsafe fn unconfigure(&self, device: &<Self::A as Api>::Device);
685
686    /// Return the next texture to be presented by `self`, for the caller to draw on.
687    ///
688    /// On success, return an [`AcquiredSurfaceTexture`] representing the
689    /// texture into which the caller should draw the image to be displayed on
690    /// `self`.
691    ///
692    /// If `timeout` elapses before `self` has a texture ready to be acquired,
693    /// return `Ok(None)`. If `timeout` is `None`, wait indefinitely, with no
694    /// timeout.
695    ///
696    /// # Using an [`AcquiredSurfaceTexture`]
697    ///
698    /// On success, this function returns an [`AcquiredSurfaceTexture`] whose
699    /// [`texture`] field is a [`SurfaceTexture`] from which the caller can
700    /// [`borrow`] a [`Texture`] to draw on. The [`AcquiredSurfaceTexture`] also
701    /// carries some metadata about that [`SurfaceTexture`].
702    ///
703    /// All calls to [`Queue::submit`] that draw on that [`Texture`] must also
704    /// include the [`SurfaceTexture`] in the `surface_textures` argument.
705    ///
706    /// When you are done drawing on the texture, you can display it on `self`
707    /// by passing the [`SurfaceTexture`] and `self` to [`Queue::present`].
708    ///
709    /// If you do not wish to display the texture, you must pass the
710    /// [`SurfaceTexture`] to [`self.discard_texture`], so that it can be reused
711    /// by future acquisitions.
712    ///
713    /// # Portability
714    ///
715    /// Some backends can't support a timeout when acquiring a texture. On these
716    /// backends, `timeout` is ignored.
717    ///
718    /// # Safety
719    ///
720    /// - The surface `self` must currently be configured on some [`Device`].
721    ///
722    /// - The `fence` argument must be the same [`Fence`] passed to all calls to
723    ///   [`Queue::submit`] that used [`Texture`]s acquired from this surface.
724    ///
725    /// - You may only have one texture acquired from `self` at a time. When
726    ///   `acquire_texture` returns `Ok(Some(ast))`, you must pass the returned
727    ///   [`SurfaceTexture`] `ast.texture` to either [`Queue::present`] or
728    ///   [`Surface::discard_texture`] before calling `acquire_texture` again.
729    ///
730    /// [`texture`]: AcquiredSurfaceTexture::texture
731    /// [`SurfaceTexture`]: Api::SurfaceTexture
732    /// [`borrow`]: alloc::borrow::Borrow::borrow
733    /// [`Texture`]: Api::Texture
734    /// [`Fence`]: Api::Fence
735    /// [`self.discard_texture`]: Surface::discard_texture
736    unsafe fn acquire_texture(
737        &self,
738        timeout: Option<core::time::Duration>,
739        fence: &<Self::A as Api>::Fence,
740    ) -> Result<Option<AcquiredSurfaceTexture<Self::A>>, SurfaceError>;
741
742    /// Relinquish an acquired texture without presenting it.
743    ///
744    /// After this call, the texture underlying [`SurfaceTexture`] may be
745    /// returned by subsequent calls to [`self.acquire_texture`].
746    ///
747    /// # Safety
748    ///
749    /// - The surface `self` must currently be configured on some [`Device`].
750    ///
751    /// - `texture` must be a [`SurfaceTexture`] returned by a call to
752    ///   [`self.acquire_texture`] that has not yet been passed to
753    ///   [`Queue::present`].
754    ///
755    /// [`SurfaceTexture`]: Api::SurfaceTexture
756    /// [`self.acquire_texture`]: Surface::acquire_texture
757    unsafe fn discard_texture(&self, texture: <Self::A as Api>::SurfaceTexture);
758}
759
760pub trait Adapter: WasmNotSendSync {
761    type A: Api;
762
763    unsafe fn open(
764        &self,
765        features: wgt::Features,
766        limits: &wgt::Limits,
767        memory_hints: &wgt::MemoryHints,
768    ) -> Result<OpenDevice<Self::A>, DeviceError>;
769
770    /// Return the set of supported capabilities for a texture format.
771    unsafe fn texture_format_capabilities(
772        &self,
773        format: wgt::TextureFormat,
774    ) -> TextureFormatCapabilities;
775
776    /// Returns the capabilities of working with a specified surface.
777    ///
778    /// `None` means presentation is not supported for it.
779    unsafe fn surface_capabilities(
780        &self,
781        surface: &<Self::A as Api>::Surface,
782    ) -> Option<SurfaceCapabilities>;
783
784    /// Creates a [`PresentationTimestamp`] using the adapter's WSI.
785    ///
786    /// [`PresentationTimestamp`]: wgt::PresentationTimestamp
787    unsafe fn get_presentation_timestamp(&self) -> wgt::PresentationTimestamp;
788}
789
790/// A connection to a GPU and a pool of resources to use with it.
791///
792/// A `wgpu-hal` `Device` represents an open connection to a specific graphics
793/// processor, controlled via the backend [`Device::A`]. A `Device` is mostly
794/// used for creating resources. Each `Device` has an associated [`Queue`] used
795/// for command submission.
796///
797/// On Vulkan a `Device` corresponds to a logical device ([`VkDevice`]). Other
798/// backends don't have an exact analog: for example, [`ID3D12Device`]s and
799/// [`MTLDevice`]s are owned by the backends' [`wgpu_hal::Adapter`]
800/// implementations, and shared by all [`wgpu_hal::Device`]s created from that
801/// `Adapter`.
802///
803/// A `Device`'s life cycle is generally:
804///
805/// 1)  Obtain a `Device` and its associated [`Queue`] by calling
806///     [`Adapter::open`].
807///
808///     Alternatively, the backend-specific types that implement [`Adapter`] often
809///     have methods for creating a `wgpu-hal` `Device` from a platform-specific
810///     handle. For example, [`vulkan::Adapter::device_from_raw`] can create a
811///     [`vulkan::Device`] from an [`ash::Device`].
812///
813/// 1)  Create resources to use on the device by calling methods like
814///     [`Device::create_texture`] or [`Device::create_shader_module`].
815///
816/// 1)  Call [`Device::create_command_encoder`] to obtain a [`CommandEncoder`],
817///     which you can use to build [`CommandBuffer`]s holding commands to be
818///     executed on the GPU.
819///
820/// 1)  Call [`Queue::submit`] on the `Device`'s associated [`Queue`] to submit
821///     [`CommandBuffer`]s for execution on the GPU. If needed, call
822///     [`Device::wait`] to wait for them to finish execution.
823///
824/// 1)  Free resources with methods like [`Device::destroy_texture`] or
825///     [`Device::destroy_shader_module`].
826///
827/// 1)  Drop the device.
828///
829/// [`vkDevice`]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VkDevice
830/// [`ID3D12Device`]: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/nn-d3d12-id3d12device
831/// [`MTLDevice`]: https://developer.apple.com/documentation/metal/mtldevice
832/// [`wgpu_hal::Adapter`]: Adapter
833/// [`wgpu_hal::Device`]: Device
834/// [`vulkan::Adapter::device_from_raw`]: vulkan/struct.Adapter.html#method.device_from_raw
835/// [`vulkan::Device`]: vulkan/struct.Device.html
836/// [`ash::Device`]: https://docs.rs/ash/latest/ash/struct.Device.html
837/// [`CommandBuffer`]: Api::CommandBuffer
838///
839/// # Safety
840///
841/// As with other `wgpu-hal` APIs, [validation] is the caller's
842/// responsibility. Here are the general requirements for all `Device`
843/// methods:
844///
845/// - Any resource passed to a `Device` method must have been created by that
846///   `Device`. For example, a [`Texture`] passed to [`Device::destroy_texture`] must
847///   have been created with the `Device` passed as `self`.
848///
849/// - Resources may not be destroyed if they are used by any submitted command
850///   buffers that have not yet finished execution.
851///
852/// [validation]: index.html#validation-is-the-calling-codes-responsibility-not-wgpu-hals
853/// [`Texture`]: Api::Texture
854pub trait Device: WasmNotSendSync {
855    type A: Api;
856
857    /// Creates a new buffer.
858    ///
859    /// The initial usage is `wgt::BufferUses::empty()`.
860    unsafe fn create_buffer(
861        &self,
862        desc: &BufferDescriptor,
863    ) -> Result<<Self::A as Api>::Buffer, DeviceError>;
864
865    /// Free `buffer` and any GPU resources it owns.
866    ///
867    /// Note that backends are allowed to allocate GPU memory for buffers from
868    /// allocation pools, and this call is permitted to simply return `buffer`'s
869    /// storage to that pool, without making it available to other applications.
870    ///
871    /// # Safety
872    ///
873    /// - The given `buffer` must not currently be mapped.
874    unsafe fn destroy_buffer(&self, buffer: <Self::A as Api>::Buffer);
875
876    /// A hook for when a wgpu-core buffer is created from a raw wgpu-hal buffer.
877    unsafe fn add_raw_buffer(&self, buffer: &<Self::A as Api>::Buffer);
878
879    /// Return a pointer to CPU memory mapping the contents of `buffer`.
880    ///
881    /// Buffer mappings are persistent: the buffer may remain mapped on the CPU
882    /// while the GPU reads or writes to it. (Note that `wgpu_core` does not use
883    /// this feature: when a `wgpu_core::Buffer` is unmapped, the underlying
884    /// `wgpu_hal` buffer is also unmapped.)
885    ///
886    /// If this function returns `Ok(mapping)`, then:
887    ///
888    /// - `mapping.ptr` is the CPU address of the start of the mapped memory.
889    ///
890    /// - If `mapping.is_coherent` is `true`, then CPU writes to the mapped
891    ///   memory are immediately visible on the GPU, and vice versa.
892    ///
893    /// # Safety
894    ///
895    /// - The given `buffer` must have been created with the [`MAP_READ`] or
896    ///   [`MAP_WRITE`] flags set in [`BufferDescriptor::usage`].
897    ///
898    /// - The given `range` must fall within the size of `buffer`.
899    ///
900    /// - The caller must avoid data races between the CPU and the GPU. A data
901    ///   race is any pair of accesses to a particular byte, one of which is a
902    ///   write, that are not ordered with respect to each other by some sort of
903    ///   synchronization operation.
904    ///
905    /// - If this function returns `Ok(mapping)` and `mapping.is_coherent` is
906    ///   `false`, then:
907    ///
908    ///   - Every CPU write to a mapped byte followed by a GPU read of that byte
909    ///     must have at least one call to [`Device::flush_mapped_ranges`]
910    ///     covering that byte that occurs between those two accesses.
911    ///
912    ///   - Every GPU write to a mapped byte followed by a CPU read of that byte
913    ///     must have at least one call to [`Device::invalidate_mapped_ranges`]
914    ///     covering that byte that occurs between those two accesses.
915    ///
916    ///   Note that the data race rule above requires that all such access pairs
917    ///   be ordered, so it is meaningful to talk about what must occur
918    ///   "between" them.
919    ///
920    /// - Zero-sized mappings are not allowed.
921    ///
922    /// - The returned [`BufferMapping::ptr`] must not be used after a call to
923    ///   [`Device::unmap_buffer`].
924    ///
925    /// [`MAP_READ`]: wgt::BufferUses::MAP_READ
926    /// [`MAP_WRITE`]: wgt::BufferUses::MAP_WRITE
927    unsafe fn map_buffer(
928        &self,
929        buffer: &<Self::A as Api>::Buffer,
930        range: MemoryRange,
931    ) -> Result<BufferMapping, DeviceError>;
932
933    /// Remove the mapping established by the last call to [`Device::map_buffer`].
934    ///
935    /// # Safety
936    ///
937    /// - The given `buffer` must be currently mapped.
938    unsafe fn unmap_buffer(&self, buffer: &<Self::A as Api>::Buffer);
939
940    /// Indicate that CPU writes to mapped buffer memory should be made visible to the GPU.
941    ///
942    /// # Safety
943    ///
944    /// - The given `buffer` must be currently mapped.
945    ///
946    /// - All ranges produced by `ranges` must fall within `buffer`'s size.
947    unsafe fn flush_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
948    where
949        I: Iterator<Item = MemoryRange>;
950
951    /// Indicate that GPU writes to mapped buffer memory should be made visible to the CPU.
952    ///
953    /// # Safety
954    ///
955    /// - The given `buffer` must be currently mapped.
956    ///
957    /// - All ranges produced by `ranges` must fall within `buffer`'s size.
958    unsafe fn invalidate_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
959    where
960        I: Iterator<Item = MemoryRange>;
961
962    /// Creates a new texture.
963    ///
964    /// The initial usage for all subresources is `wgt::TextureUses::UNINITIALIZED`.
965    unsafe fn create_texture(
966        &self,
967        desc: &TextureDescriptor,
968    ) -> Result<<Self::A as Api>::Texture, DeviceError>;
969    unsafe fn destroy_texture(&self, texture: <Self::A as Api>::Texture);
970
971    /// A hook for when a wgpu-core texture is created from a raw wgpu-hal texture.
972    unsafe fn add_raw_texture(&self, texture: &<Self::A as Api>::Texture);
973
974    unsafe fn create_texture_view(
975        &self,
976        texture: &<Self::A as Api>::Texture,
977        desc: &TextureViewDescriptor,
978    ) -> Result<<Self::A as Api>::TextureView, DeviceError>;
979    unsafe fn destroy_texture_view(&self, view: <Self::A as Api>::TextureView);
980    unsafe fn create_sampler(
981        &self,
982        desc: &SamplerDescriptor,
983    ) -> Result<<Self::A as Api>::Sampler, DeviceError>;
984    unsafe fn destroy_sampler(&self, sampler: <Self::A as Api>::Sampler);
985
986    /// Create a fresh [`CommandEncoder`].
987    ///
988    /// The new `CommandEncoder` is in the "closed" state.
989    unsafe fn create_command_encoder(
990        &self,
991        desc: &CommandEncoderDescriptor<<Self::A as Api>::Queue>,
992    ) -> Result<<Self::A as Api>::CommandEncoder, DeviceError>;
993
994    /// Creates a bind group layout.
995    unsafe fn create_bind_group_layout(
996        &self,
997        desc: &BindGroupLayoutDescriptor,
998    ) -> Result<<Self::A as Api>::BindGroupLayout, DeviceError>;
999    unsafe fn destroy_bind_group_layout(&self, bg_layout: <Self::A as Api>::BindGroupLayout);
1000    unsafe fn create_pipeline_layout(
1001        &self,
1002        desc: &PipelineLayoutDescriptor<<Self::A as Api>::BindGroupLayout>,
1003    ) -> Result<<Self::A as Api>::PipelineLayout, DeviceError>;
1004    unsafe fn destroy_pipeline_layout(&self, pipeline_layout: <Self::A as Api>::PipelineLayout);
1005
1006    #[allow(clippy::type_complexity)]
1007    unsafe fn create_bind_group(
1008        &self,
1009        desc: &BindGroupDescriptor<
1010            <Self::A as Api>::BindGroupLayout,
1011            <Self::A as Api>::Buffer,
1012            <Self::A as Api>::Sampler,
1013            <Self::A as Api>::TextureView,
1014            <Self::A as Api>::AccelerationStructure,
1015        >,
1016    ) -> Result<<Self::A as Api>::BindGroup, DeviceError>;
1017    unsafe fn destroy_bind_group(&self, group: <Self::A as Api>::BindGroup);
1018
1019    unsafe fn create_shader_module(
1020        &self,
1021        desc: &ShaderModuleDescriptor,
1022        shader: ShaderInput,
1023    ) -> Result<<Self::A as Api>::ShaderModule, ShaderError>;
1024    unsafe fn destroy_shader_module(&self, module: <Self::A as Api>::ShaderModule);
1025
1026    #[allow(clippy::type_complexity)]
1027    unsafe fn create_render_pipeline(
1028        &self,
1029        desc: &RenderPipelineDescriptor<
1030            <Self::A as Api>::PipelineLayout,
1031            <Self::A as Api>::ShaderModule,
1032            <Self::A as Api>::PipelineCache,
1033        >,
1034    ) -> Result<<Self::A as Api>::RenderPipeline, PipelineError>;
1035    unsafe fn destroy_render_pipeline(&self, pipeline: <Self::A as Api>::RenderPipeline);
1036
1037    #[allow(clippy::type_complexity)]
1038    unsafe fn create_compute_pipeline(
1039        &self,
1040        desc: &ComputePipelineDescriptor<
1041            <Self::A as Api>::PipelineLayout,
1042            <Self::A as Api>::ShaderModule,
1043            <Self::A as Api>::PipelineCache,
1044        >,
1045    ) -> Result<<Self::A as Api>::ComputePipeline, PipelineError>;
1046    unsafe fn destroy_compute_pipeline(&self, pipeline: <Self::A as Api>::ComputePipeline);
1047
1048    unsafe fn create_pipeline_cache(
1049        &self,
1050        desc: &PipelineCacheDescriptor<'_>,
1051    ) -> Result<<Self::A as Api>::PipelineCache, PipelineCacheError>;
1052    fn pipeline_cache_validation_key(&self) -> Option<[u8; 16]> {
1053        None
1054    }
1055    unsafe fn destroy_pipeline_cache(&self, cache: <Self::A as Api>::PipelineCache);
1056
1057    unsafe fn create_query_set(
1058        &self,
1059        desc: &wgt::QuerySetDescriptor<Label>,
1060    ) -> Result<<Self::A as Api>::QuerySet, DeviceError>;
1061    unsafe fn destroy_query_set(&self, set: <Self::A as Api>::QuerySet);
1062    unsafe fn create_fence(&self) -> Result<<Self::A as Api>::Fence, DeviceError>;
1063    unsafe fn destroy_fence(&self, fence: <Self::A as Api>::Fence);
1064    unsafe fn get_fence_value(
1065        &self,
1066        fence: &<Self::A as Api>::Fence,
1067    ) -> Result<FenceValue, DeviceError>;
1068
1069    /// Wait for `fence` to reach `value`.
1070    ///
1071    /// Operations like [`Queue::submit`] can accept a [`Fence`] and a
1072    /// [`FenceValue`] to store in it, so you can use this `wait` function
1073    /// to wait for a given queue submission to finish execution.
1074    ///
1075    /// The `value` argument must be a value that some actual operation you have
1076    /// already presented to the device is going to store in `fence`. You cannot
1077    /// wait for values yet to be submitted. (This restriction accommodates
1078    /// implementations like the `vulkan` backend's [`FencePool`] that must
1079    /// allocate a distinct synchronization object for each fence value one is
1080    /// able to wait for.)
1081    ///
1082    /// Calling `wait` with a lower [`FenceValue`] than `fence`'s current value
1083    /// returns immediately.
1084    ///
1085    /// If `timeout` is provided, the function will block indefinitely or until
1086    /// an error is encountered.
1087    ///
1088    /// Returns `Ok(true)` on success and `Ok(false)` on timeout.
1089    ///
1090    /// [`Fence`]: Api::Fence
1091    /// [`FencePool`]: vulkan/enum.Fence.html#variant.FencePool
1092    unsafe fn wait(
1093        &self,
1094        fence: &<Self::A as Api>::Fence,
1095        value: FenceValue,
1096        timeout: Option<core::time::Duration>,
1097    ) -> Result<bool, DeviceError>;
1098
1099    /// Start a graphics debugger capture.
1100    ///
1101    /// # Safety
1102    ///
1103    /// See [`wgpu::Device::start_graphics_debugger_capture`][api] for more details.
1104    ///
1105    /// [api]: ../wgpu/struct.Device.html#method.start_graphics_debugger_capture
1106    unsafe fn start_graphics_debugger_capture(&self) -> bool;
1107
1108    /// Stop a graphics debugger capture.
1109    ///
1110    /// # Safety
1111    ///
1112    /// See [`wgpu::Device::stop_graphics_debugger_capture`][api] for more details.
1113    ///
1114    /// [api]: ../wgpu/struct.Device.html#method.stop_graphics_debugger_capture
1115    unsafe fn stop_graphics_debugger_capture(&self);
1116
1117    #[allow(unused_variables)]
1118    unsafe fn pipeline_cache_get_data(
1119        &self,
1120        cache: &<Self::A as Api>::PipelineCache,
1121    ) -> Option<Vec<u8>> {
1122        None
1123    }
1124
1125    unsafe fn create_acceleration_structure(
1126        &self,
1127        desc: &AccelerationStructureDescriptor,
1128    ) -> Result<<Self::A as Api>::AccelerationStructure, DeviceError>;
1129    unsafe fn get_acceleration_structure_build_sizes(
1130        &self,
1131        desc: &GetAccelerationStructureBuildSizesDescriptor<<Self::A as Api>::Buffer>,
1132    ) -> AccelerationStructureBuildSizes;
1133    unsafe fn get_acceleration_structure_device_address(
1134        &self,
1135        acceleration_structure: &<Self::A as Api>::AccelerationStructure,
1136    ) -> wgt::BufferAddress;
1137    unsafe fn destroy_acceleration_structure(
1138        &self,
1139        acceleration_structure: <Self::A as Api>::AccelerationStructure,
1140    );
1141    fn tlas_instance_to_bytes(&self, instance: TlasInstance) -> Vec<u8>;
1142
1143    fn get_internal_counters(&self) -> wgt::HalCounters;
1144
1145    fn generate_allocator_report(&self) -> Option<wgt::AllocatorReport> {
1146        None
1147    }
1148
1149    fn check_if_oom(&self) -> Result<(), DeviceError>;
1150}
1151
1152pub trait Queue: WasmNotSendSync {
1153    type A: Api;
1154
1155    /// Submit `command_buffers` for execution on GPU.
1156    ///
1157    /// Update `fence` to `value` when the operation is complete. See
1158    /// [`Fence`] for details.
1159    ///
1160    /// All command buffers submitted to a `wgpu_hal` queue are executed in the
1161    /// order they're submitted, with each buffer able to observe the effects of
1162    /// previous buffers' execution. Specifically:
1163    ///
1164    /// - If two calls to `submit` on a single `Queue` occur in a particular
1165    ///   order (that is, they happen on the same thread, or on two threads that
1166    ///   have synchronized to establish an ordering), then the first
1167    ///   submission's commands all complete execution before any of the second
1168    ///   submission's commands begin. All results produced by one submission
1169    ///   are visible to the next.
1170    ///
1171    /// - Within a submission, command buffers execute in the order in which they
1172    ///   appear in `command_buffers`. All results produced by one buffer are
1173    ///   visible to the next.
1174    ///
1175    /// If two calls to `submit` on a single `Queue` from different threads are
1176    /// not synchronized to occur in a particular order, they must pass distinct
1177    /// [`Fence`]s. As explained in the [`Fence`] documentation, waiting for
1178    /// operations to complete is only trustworthy when operations finish in
1179    /// order of increasing fence value, but submissions from different threads
1180    /// cannot determine how to order the fence values if the submissions
1181    /// themselves are unordered. If each thread uses a separate [`Fence`], this
1182    /// problem does not arise.
1183    ///
1184    /// # Safety
1185    ///
1186    /// - Each [`CommandBuffer`][cb] in `command_buffers` must have been created
1187    ///   from a [`CommandEncoder`][ce] that was constructed from the
1188    ///   [`Device`][d] associated with this [`Queue`].
1189    ///
1190    /// - Each [`CommandBuffer`][cb] must remain alive until the submitted
1191    ///   commands have finished execution. Since command buffers must not
1192    ///   outlive their encoders, this implies that the encoders must remain
1193    ///   alive as well.
1194    ///
1195    /// - All resources used by a submitted [`CommandBuffer`][cb]
1196    ///   ([`Texture`][t]s, [`BindGroup`][bg]s, [`RenderPipeline`][rp]s, and so
1197    ///   on) must remain alive until the command buffer finishes execution.
1198    ///
1199    /// - Every [`SurfaceTexture`][st] that any command in `command_buffers`
1200    ///   writes to must appear in the `surface_textures` argument.
1201    ///
1202    /// - No [`SurfaceTexture`][st] may appear in the `surface_textures`
1203    ///   argument more than once.
1204    ///
1205    /// - Each [`SurfaceTexture`][st] in `surface_textures` must be configured
1206    ///   for use with the [`Device`][d] associated with this [`Queue`],
1207    ///   typically by calling [`Surface::configure`].
1208    ///
1209    /// - All calls to this function that include a given [`SurfaceTexture`][st]
1210    ///   in `surface_textures` must use the same [`Fence`].
1211    ///
1212    /// - The [`Fence`] passed as `signal_fence.0` must remain alive until
1213    ///   all submissions that will signal it have completed.
1214    ///
1215    /// [`Fence`]: Api::Fence
1216    /// [cb]: Api::CommandBuffer
1217    /// [ce]: Api::CommandEncoder
1218    /// [d]: Api::Device
1219    /// [t]: Api::Texture
1220    /// [bg]: Api::BindGroup
1221    /// [rp]: Api::RenderPipeline
1222    /// [st]: Api::SurfaceTexture
1223    unsafe fn submit(
1224        &self,
1225        command_buffers: &[&<Self::A as Api>::CommandBuffer],
1226        surface_textures: &[&<Self::A as Api>::SurfaceTexture],
1227        signal_fence: (&mut <Self::A as Api>::Fence, FenceValue),
1228    ) -> Result<(), DeviceError>;
1229    unsafe fn present(
1230        &self,
1231        surface: &<Self::A as Api>::Surface,
1232        texture: <Self::A as Api>::SurfaceTexture,
1233    ) -> Result<(), SurfaceError>;
1234    unsafe fn get_timestamp_period(&self) -> f32;
1235}
1236
1237/// Encoder and allocation pool for `CommandBuffer`s.
1238///
1239/// A `CommandEncoder` not only constructs `CommandBuffer`s but also
1240/// acts as the allocation pool that owns the buffers' underlying
1241/// storage. Thus, `CommandBuffer`s must not outlive the
1242/// `CommandEncoder` that created them.
1243///
1244/// The life cycle of a `CommandBuffer` is as follows:
1245///
1246/// - Call [`Device::create_command_encoder`] to create a new
1247///   `CommandEncoder`, in the "closed" state.
1248///
1249/// - Call `begin_encoding` on a closed `CommandEncoder` to begin
1250///   recording commands. This puts the `CommandEncoder` in the
1251///   "recording" state.
1252///
1253/// - Call methods like `copy_buffer_to_buffer`, `begin_render_pass`,
1254///   etc. on a "recording" `CommandEncoder` to add commands to the
1255///   list. (If an error occurs, you must call `discard_encoding`; see
1256///   below.)
1257///
1258/// - Call `end_encoding` on a recording `CommandEncoder` to close the
1259///   encoder and construct a fresh `CommandBuffer` consisting of the
1260///   list of commands recorded up to that point.
1261///
1262/// - Call `discard_encoding` on a recording `CommandEncoder` to drop
1263///   the commands recorded thus far and close the encoder. This is
1264///   the only safe thing to do on a `CommandEncoder` if an error has
1265///   occurred while recording commands.
1266///
1267/// - Call `reset_all` on a closed `CommandEncoder`, passing all the
1268///   live `CommandBuffers` built from it. All the `CommandBuffer`s
1269///   are destroyed, and their resources are freed.
1270///
1271/// # Safety
1272///
1273/// - The `CommandEncoder` must be in the states described above to
1274///   make the given calls.
1275///
1276/// - A `CommandBuffer` that has been submitted for execution on the
1277///   GPU must live until its execution is complete.
1278///
1279/// - A `CommandBuffer` must not outlive the `CommandEncoder` that
1280///   built it.
1281///
1282/// It is the user's responsibility to meet this requirements. This
1283/// allows `CommandEncoder` implementations to keep their state
1284/// tracking to a minimum.
1285pub trait CommandEncoder: WasmNotSendSync + fmt::Debug {
1286    type A: Api;
1287
1288    /// Begin encoding a new command buffer.
1289    ///
1290    /// This puts this `CommandEncoder` in the "recording" state.
1291    ///
1292    /// # Safety
1293    ///
1294    /// This `CommandEncoder` must be in the "closed" state.
1295    unsafe fn begin_encoding(&mut self, label: Label) -> Result<(), DeviceError>;
1296
1297    /// Discard the command list under construction.
1298    ///
1299    /// If an error has occurred while recording commands, this
1300    /// is the only safe thing to do with the encoder.
1301    ///
1302    /// This puts this `CommandEncoder` in the "closed" state.
1303    ///
1304    /// # Safety
1305    ///
1306    /// This `CommandEncoder` must be in the "recording" state.
1307    ///
1308    /// Callers must not assume that implementations of this
1309    /// function are idempotent, and thus should not call it
1310    /// multiple times in a row.
1311    unsafe fn discard_encoding(&mut self);
1312
1313    /// Return a fresh [`CommandBuffer`] holding the recorded commands.
1314    ///
1315    /// The returned [`CommandBuffer`] holds all the commands recorded
1316    /// on this `CommandEncoder` since the last call to
1317    /// [`begin_encoding`].
1318    ///
1319    /// This puts this `CommandEncoder` in the "closed" state.
1320    ///
1321    /// # Safety
1322    ///
1323    /// This `CommandEncoder` must be in the "recording" state.
1324    ///
1325    /// The returned [`CommandBuffer`] must not outlive this
1326    /// `CommandEncoder`. Implementations are allowed to build
1327    /// `CommandBuffer`s that depend on storage owned by this
1328    /// `CommandEncoder`.
1329    ///
1330    /// [`CommandBuffer`]: Api::CommandBuffer
1331    /// [`begin_encoding`]: CommandEncoder::begin_encoding
1332    unsafe fn end_encoding(&mut self) -> Result<<Self::A as Api>::CommandBuffer, DeviceError>;
1333
1334    /// Reclaim all resources belonging to this `CommandEncoder`.
1335    ///
1336    /// # Safety
1337    ///
1338    /// This `CommandEncoder` must be in the "closed" state.
1339    ///
1340    /// The `command_buffers` iterator must produce all the live
1341    /// [`CommandBuffer`]s built using this `CommandEncoder` --- that
1342    /// is, every extant `CommandBuffer` returned from `end_encoding`.
1343    ///
1344    /// [`CommandBuffer`]: Api::CommandBuffer
1345    unsafe fn reset_all<I>(&mut self, command_buffers: I)
1346    where
1347        I: Iterator<Item = <Self::A as Api>::CommandBuffer>;
1348
1349    unsafe fn transition_buffers<'a, T>(&mut self, barriers: T)
1350    where
1351        T: Iterator<Item = BufferBarrier<'a, <Self::A as Api>::Buffer>>;
1352
1353    unsafe fn transition_textures<'a, T>(&mut self, barriers: T)
1354    where
1355        T: Iterator<Item = TextureBarrier<'a, <Self::A as Api>::Texture>>;
1356
1357    // copy operations
1358
1359    unsafe fn clear_buffer(&mut self, buffer: &<Self::A as Api>::Buffer, range: MemoryRange);
1360
1361    unsafe fn copy_buffer_to_buffer<T>(
1362        &mut self,
1363        src: &<Self::A as Api>::Buffer,
1364        dst: &<Self::A as Api>::Buffer,
1365        regions: T,
1366    ) where
1367        T: Iterator<Item = BufferCopy>;
1368
1369    /// Copy from an external image to an internal texture.
1370    /// Works with a single array layer.
1371    /// Note: `dst` current usage has to be `wgt::TextureUses::COPY_DST`.
1372    /// Note: the copy extent is in physical size (rounded to the block size)
1373    #[cfg(webgl)]
1374    unsafe fn copy_external_image_to_texture<T>(
1375        &mut self,
1376        src: &wgt::CopyExternalImageSourceInfo,
1377        dst: &<Self::A as Api>::Texture,
1378        dst_premultiplication: bool,
1379        regions: T,
1380    ) where
1381        T: Iterator<Item = TextureCopy>;
1382
1383    /// Copy from one texture to another.
1384    /// Works with a single array layer.
1385    /// Note: `dst` current usage has to be `wgt::TextureUses::COPY_DST`.
1386    /// Note: the copy extent is in physical size (rounded to the block size)
1387    unsafe fn copy_texture_to_texture<T>(
1388        &mut self,
1389        src: &<Self::A as Api>::Texture,
1390        src_usage: wgt::TextureUses,
1391        dst: &<Self::A as Api>::Texture,
1392        regions: T,
1393    ) where
1394        T: Iterator<Item = TextureCopy>;
1395
1396    /// Copy from buffer to texture.
1397    /// Works with a single array layer.
1398    /// Note: `dst` current usage has to be `wgt::TextureUses::COPY_DST`.
1399    /// Note: the copy extent is in physical size (rounded to the block size)
1400    unsafe fn copy_buffer_to_texture<T>(
1401        &mut self,
1402        src: &<Self::A as Api>::Buffer,
1403        dst: &<Self::A as Api>::Texture,
1404        regions: T,
1405    ) where
1406        T: Iterator<Item = BufferTextureCopy>;
1407
1408    /// Copy from texture to buffer.
1409    /// Works with a single array layer.
1410    /// Note: the copy extent is in physical size (rounded to the block size)
1411    unsafe fn copy_texture_to_buffer<T>(
1412        &mut self,
1413        src: &<Self::A as Api>::Texture,
1414        src_usage: wgt::TextureUses,
1415        dst: &<Self::A as Api>::Buffer,
1416        regions: T,
1417    ) where
1418        T: Iterator<Item = BufferTextureCopy>;
1419
1420    unsafe fn copy_acceleration_structure_to_acceleration_structure(
1421        &mut self,
1422        src: &<Self::A as Api>::AccelerationStructure,
1423        dst: &<Self::A as Api>::AccelerationStructure,
1424        copy: wgt::AccelerationStructureCopy,
1425    );
1426    // pass common
1427
1428    /// Sets the bind group at `index` to `group`.
1429    ///
1430    /// If this is not the first call to `set_bind_group` within the current
1431    /// render or compute pass:
1432    ///
1433    /// - If `layout` contains `n` bind group layouts, then any previously set
1434    ///   bind groups at indices `n` or higher are cleared.
1435    ///
1436    /// - If the first `m` bind group layouts of `layout` are equal to those of
1437    ///   the previously passed layout, but no more, then any previously set
1438    ///   bind groups at indices `m` or higher are cleared.
1439    ///
1440    /// It follows from the above that passing the same layout as before doesn't
1441    /// clear any bind groups.
1442    ///
1443    /// # Safety
1444    ///
1445    /// - This [`CommandEncoder`] must be within a render or compute pass.
1446    ///
1447    /// - `index` must be the valid index of some bind group layout in `layout`.
1448    ///   Call this the "relevant bind group layout".
1449    ///
1450    /// - The layout of `group` must be equal to the relevant bind group layout.
1451    ///
1452    /// - The length of `dynamic_offsets` must match the number of buffer
1453    ///   bindings [with dynamic offsets][hdo] in the relevant bind group
1454    ///   layout.
1455    ///
1456    /// - If those buffer bindings are ordered by increasing [`binding` number]
1457    ///   and paired with elements from `dynamic_offsets`, then each offset must
1458    ///   be a valid offset for the binding's corresponding buffer in `group`.
1459    ///
1460    /// [hdo]: wgt::BindingType::Buffer::has_dynamic_offset
1461    /// [`binding` number]: wgt::BindGroupLayoutEntry::binding
1462    unsafe fn set_bind_group(
1463        &mut self,
1464        layout: &<Self::A as Api>::PipelineLayout,
1465        index: u32,
1466        group: &<Self::A as Api>::BindGroup,
1467        dynamic_offsets: &[wgt::DynamicOffset],
1468    );
1469
1470    /// Sets a range in immediate data data.
1471    ///
1472    /// IMPORTANT: while the data is passed as words, the offset is in bytes!
1473    ///
1474    /// # Safety
1475    ///
1476    /// - `offset_bytes` must be a multiple of 4.
1477    /// - The range of immediates written must be valid for the pipeline layout at draw time.
1478    unsafe fn set_immediates(
1479        &mut self,
1480        layout: &<Self::A as Api>::PipelineLayout,
1481        stages: wgt::ShaderStages,
1482        offset_bytes: u32,
1483        data: &[u32],
1484    );
1485
1486    unsafe fn insert_debug_marker(&mut self, label: &str);
1487    unsafe fn begin_debug_marker(&mut self, group_label: &str);
1488    unsafe fn end_debug_marker(&mut self);
1489
1490    // queries
1491
1492    /// # Safety:
1493    ///
1494    /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1495    unsafe fn begin_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1496    /// # Safety:
1497    ///
1498    /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1499    unsafe fn end_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1500    unsafe fn write_timestamp(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1501    unsafe fn reset_queries(&mut self, set: &<Self::A as Api>::QuerySet, range: Range<u32>);
1502    unsafe fn copy_query_results(
1503        &mut self,
1504        set: &<Self::A as Api>::QuerySet,
1505        range: Range<u32>,
1506        buffer: &<Self::A as Api>::Buffer,
1507        offset: wgt::BufferAddress,
1508        stride: wgt::BufferSize,
1509    );
1510
1511    // render passes
1512
1513    /// Begin a new render pass, clearing all active bindings.
1514    ///
1515    /// This clears any bindings established by the following calls:
1516    ///
1517    /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1518    /// - [`set_immediates`](CommandEncoder::set_immediates)
1519    /// - [`begin_query`](CommandEncoder::begin_query)
1520    /// - [`set_render_pipeline`](CommandEncoder::set_render_pipeline)
1521    /// - [`set_index_buffer`](CommandEncoder::set_index_buffer)
1522    /// - [`set_vertex_buffer`](CommandEncoder::set_vertex_buffer)
1523    ///
1524    /// # Safety
1525    ///
1526    /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1527    ///   by a call to [`end_render_pass`].
1528    ///
1529    /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1530    ///   by a call to [`end_compute_pass`].
1531    ///
1532    /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1533    /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1534    /// [`end_render_pass`]: CommandEncoder::end_render_pass
1535    /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1536    unsafe fn begin_render_pass(
1537        &mut self,
1538        desc: &RenderPassDescriptor<<Self::A as Api>::QuerySet, <Self::A as Api>::TextureView>,
1539    ) -> Result<(), DeviceError>;
1540
1541    /// End the current render pass.
1542    ///
1543    /// # Safety
1544    ///
1545    /// - There must have been a prior call to [`begin_render_pass`] on this [`CommandEncoder`]
1546    ///   that has not been followed by a call to [`end_render_pass`].
1547    ///
1548    /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1549    /// [`end_render_pass`]: CommandEncoder::end_render_pass
1550    unsafe fn end_render_pass(&mut self);
1551
1552    unsafe fn set_render_pipeline(&mut self, pipeline: &<Self::A as Api>::RenderPipeline);
1553
1554    unsafe fn set_index_buffer<'a>(
1555        &mut self,
1556        binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1557        format: wgt::IndexFormat,
1558    );
1559    unsafe fn set_vertex_buffer<'a>(
1560        &mut self,
1561        index: u32,
1562        binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1563    );
1564    unsafe fn set_viewport(&mut self, rect: &Rect<f32>, depth_range: Range<f32>);
1565    unsafe fn set_scissor_rect(&mut self, rect: &Rect<u32>);
1566    unsafe fn set_stencil_reference(&mut self, value: u32);
1567    unsafe fn set_blend_constants(&mut self, color: &[f32; 4]);
1568
1569    unsafe fn draw(
1570        &mut self,
1571        first_vertex: u32,
1572        vertex_count: u32,
1573        first_instance: u32,
1574        instance_count: u32,
1575    );
1576    unsafe fn draw_indexed(
1577        &mut self,
1578        first_index: u32,
1579        index_count: u32,
1580        base_vertex: i32,
1581        first_instance: u32,
1582        instance_count: u32,
1583    );
1584    unsafe fn draw_indirect(
1585        &mut self,
1586        buffer: &<Self::A as Api>::Buffer,
1587        offset: wgt::BufferAddress,
1588        draw_count: u32,
1589    );
1590    unsafe fn draw_indexed_indirect(
1591        &mut self,
1592        buffer: &<Self::A as Api>::Buffer,
1593        offset: wgt::BufferAddress,
1594        draw_count: u32,
1595    );
1596    unsafe fn draw_indirect_count(
1597        &mut self,
1598        buffer: &<Self::A as Api>::Buffer,
1599        offset: wgt::BufferAddress,
1600        count_buffer: &<Self::A as Api>::Buffer,
1601        count_offset: wgt::BufferAddress,
1602        max_count: u32,
1603    );
1604    unsafe fn draw_indexed_indirect_count(
1605        &mut self,
1606        buffer: &<Self::A as Api>::Buffer,
1607        offset: wgt::BufferAddress,
1608        count_buffer: &<Self::A as Api>::Buffer,
1609        count_offset: wgt::BufferAddress,
1610        max_count: u32,
1611    );
1612    unsafe fn draw_mesh_tasks(
1613        &mut self,
1614        group_count_x: u32,
1615        group_count_y: u32,
1616        group_count_z: u32,
1617    );
1618    unsafe fn draw_mesh_tasks_indirect(
1619        &mut self,
1620        buffer: &<Self::A as Api>::Buffer,
1621        offset: wgt::BufferAddress,
1622        draw_count: u32,
1623    );
1624    unsafe fn draw_mesh_tasks_indirect_count(
1625        &mut self,
1626        buffer: &<Self::A as Api>::Buffer,
1627        offset: wgt::BufferAddress,
1628        count_buffer: &<Self::A as Api>::Buffer,
1629        count_offset: wgt::BufferAddress,
1630        max_count: u32,
1631    );
1632
1633    // compute passes
1634
1635    /// Begin a new compute pass, clearing all active bindings.
1636    ///
1637    /// This clears any bindings established by the following calls:
1638    ///
1639    /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1640    /// - [`set_immediates`](CommandEncoder::set_immediates)
1641    /// - [`begin_query`](CommandEncoder::begin_query)
1642    /// - [`set_compute_pipeline`](CommandEncoder::set_compute_pipeline)
1643    ///
1644    /// # Safety
1645    ///
1646    /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1647    ///   by a call to [`end_render_pass`].
1648    ///
1649    /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1650    ///   by a call to [`end_compute_pass`].
1651    ///
1652    /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1653    /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1654    /// [`end_render_pass`]: CommandEncoder::end_render_pass
1655    /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1656    unsafe fn begin_compute_pass(
1657        &mut self,
1658        desc: &ComputePassDescriptor<<Self::A as Api>::QuerySet>,
1659    );
1660
1661    /// End the current compute pass.
1662    ///
1663    /// # Safety
1664    ///
1665    /// - There must have been a prior call to [`begin_compute_pass`] on this [`CommandEncoder`]
1666    ///   that has not been followed by a call to [`end_compute_pass`].
1667    ///
1668    /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1669    /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1670    unsafe fn end_compute_pass(&mut self);
1671
1672    unsafe fn set_compute_pipeline(&mut self, pipeline: &<Self::A as Api>::ComputePipeline);
1673
1674    unsafe fn dispatch(&mut self, count: [u32; 3]);
1675    unsafe fn dispatch_indirect(
1676        &mut self,
1677        buffer: &<Self::A as Api>::Buffer,
1678        offset: wgt::BufferAddress,
1679    );
1680
1681    /// To get the required sizes for the buffer allocations use `get_acceleration_structure_build_sizes` per descriptor
1682    /// All buffers must be synchronized externally
1683    /// All buffer regions, which are written to may only be passed once per function call,
1684    /// with the exception of updates in the same descriptor.
1685    /// Consequences of this limitation:
1686    /// - scratch buffers need to be unique
1687    /// - a tlas can't be build in the same call with a blas it contains
1688    unsafe fn build_acceleration_structures<'a, T>(
1689        &mut self,
1690        descriptor_count: u32,
1691        descriptors: T,
1692    ) where
1693        Self::A: 'a,
1694        T: IntoIterator<
1695            Item = BuildAccelerationStructureDescriptor<
1696                'a,
1697                <Self::A as Api>::Buffer,
1698                <Self::A as Api>::AccelerationStructure,
1699            >,
1700        >;
1701
1702    unsafe fn place_acceleration_structure_barrier(
1703        &mut self,
1704        barrier: AccelerationStructureBarrier,
1705    );
1706    // modeled off dx12, because this is able to be polyfilled in vulkan as opposed to the other way round
1707    unsafe fn read_acceleration_structure_compact_size(
1708        &mut self,
1709        acceleration_structure: &<Self::A as Api>::AccelerationStructure,
1710        buf: &<Self::A as Api>::Buffer,
1711    );
1712}
1713
1714bitflags!(
1715    /// Pipeline layout creation flags.
1716    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1717    pub struct PipelineLayoutFlags: u32 {
1718        /// D3D12: Add support for `first_vertex` and `first_instance` builtins
1719        /// via immediates for direct execution.
1720        const FIRST_VERTEX_INSTANCE = 1 << 0;
1721        /// D3D12: Add support for `num_workgroups` builtins via immediates
1722        /// for direct execution.
1723        const NUM_WORK_GROUPS = 1 << 1;
1724        /// D3D12: Add support for the builtins that the other flags enable for
1725        /// indirect execution.
1726        const INDIRECT_BUILTIN_UPDATE = 1 << 2;
1727    }
1728);
1729
1730bitflags!(
1731    /// Pipeline layout creation flags.
1732    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1733    pub struct BindGroupLayoutFlags: u32 {
1734        /// Allows for bind group binding arrays to be shorter than the array in the BGL.
1735        const PARTIALLY_BOUND = 1 << 0;
1736    }
1737);
1738
1739bitflags!(
1740    /// Texture format capability flags.
1741    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1742    pub struct TextureFormatCapabilities: u32 {
1743        /// Format can be sampled.
1744        const SAMPLED = 1 << 0;
1745        /// Format can be sampled with a linear sampler.
1746        const SAMPLED_LINEAR = 1 << 1;
1747        /// Format can be sampled with a min/max reduction sampler.
1748        const SAMPLED_MINMAX = 1 << 2;
1749
1750        /// Format can be used as storage with read-only access.
1751        const STORAGE_READ_ONLY = 1 << 3;
1752        /// Format can be used as storage with write-only access.
1753        const STORAGE_WRITE_ONLY = 1 << 4;
1754        /// Format can be used as storage with both read and write access.
1755        const STORAGE_READ_WRITE = 1 << 5;
1756        /// Format can be used as storage with atomics.
1757        const STORAGE_ATOMIC = 1 << 6;
1758
1759        /// Format can be used as color and input attachment.
1760        const COLOR_ATTACHMENT = 1 << 7;
1761        /// Format can be used as color (with blending) and input attachment.
1762        const COLOR_ATTACHMENT_BLEND = 1 << 8;
1763        /// Format can be used as depth-stencil and input attachment.
1764        const DEPTH_STENCIL_ATTACHMENT = 1 << 9;
1765
1766        /// Format can be multisampled by x2.
1767        const MULTISAMPLE_X2   = 1 << 10;
1768        /// Format can be multisampled by x4.
1769        const MULTISAMPLE_X4   = 1 << 11;
1770        /// Format can be multisampled by x8.
1771        const MULTISAMPLE_X8   = 1 << 12;
1772        /// Format can be multisampled by x16.
1773        const MULTISAMPLE_X16  = 1 << 13;
1774
1775        /// Format can be used for render pass resolve targets.
1776        const MULTISAMPLE_RESOLVE = 1 << 14;
1777
1778        /// Format can be copied from.
1779        const COPY_SRC = 1 << 15;
1780        /// Format can be copied to.
1781        const COPY_DST = 1 << 16;
1782    }
1783);
1784
1785bitflags!(
1786    /// Texture format capability flags.
1787    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1788    pub struct FormatAspects: u8 {
1789        const COLOR = 1 << 0;
1790        const DEPTH = 1 << 1;
1791        const STENCIL = 1 << 2;
1792        const PLANE_0 = 1 << 3;
1793        const PLANE_1 = 1 << 4;
1794        const PLANE_2 = 1 << 5;
1795
1796        const DEPTH_STENCIL = Self::DEPTH.bits() | Self::STENCIL.bits();
1797    }
1798);
1799
1800impl FormatAspects {
1801    pub fn new(format: wgt::TextureFormat, aspect: wgt::TextureAspect) -> Self {
1802        let aspect_mask = match aspect {
1803            wgt::TextureAspect::All => Self::all(),
1804            wgt::TextureAspect::DepthOnly => Self::DEPTH,
1805            wgt::TextureAspect::StencilOnly => Self::STENCIL,
1806            wgt::TextureAspect::Plane0 => Self::PLANE_0,
1807            wgt::TextureAspect::Plane1 => Self::PLANE_1,
1808            wgt::TextureAspect::Plane2 => Self::PLANE_2,
1809        };
1810        Self::from(format) & aspect_mask
1811    }
1812
1813    /// Returns `true` if only one flag is set
1814    pub fn is_one(&self) -> bool {
1815        self.bits().is_power_of_two()
1816    }
1817
1818    pub fn map(&self) -> wgt::TextureAspect {
1819        match *self {
1820            Self::COLOR => wgt::TextureAspect::All,
1821            Self::DEPTH => wgt::TextureAspect::DepthOnly,
1822            Self::STENCIL => wgt::TextureAspect::StencilOnly,
1823            Self::PLANE_0 => wgt::TextureAspect::Plane0,
1824            Self::PLANE_1 => wgt::TextureAspect::Plane1,
1825            Self::PLANE_2 => wgt::TextureAspect::Plane2,
1826            _ => unreachable!(),
1827        }
1828    }
1829}
1830
1831impl From<wgt::TextureFormat> for FormatAspects {
1832    fn from(format: wgt::TextureFormat) -> Self {
1833        match format {
1834            wgt::TextureFormat::Stencil8 => Self::STENCIL,
1835            wgt::TextureFormat::Depth16Unorm
1836            | wgt::TextureFormat::Depth32Float
1837            | wgt::TextureFormat::Depth24Plus => Self::DEPTH,
1838            wgt::TextureFormat::Depth32FloatStencil8 | wgt::TextureFormat::Depth24PlusStencil8 => {
1839                Self::DEPTH_STENCIL
1840            }
1841            wgt::TextureFormat::NV12 | wgt::TextureFormat::P010 => Self::PLANE_0 | Self::PLANE_1,
1842            _ => Self::COLOR,
1843        }
1844    }
1845}
1846
1847bitflags!(
1848    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1849    pub struct MemoryFlags: u32 {
1850        const TRANSIENT = 1 << 0;
1851        const PREFER_COHERENT = 1 << 1;
1852    }
1853);
1854
1855bitflags!(
1856    /// Attachment load and store operations.
1857    ///
1858    /// There must be at least one flag from the LOAD group and one from the STORE group set.
1859    #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1860    pub struct AttachmentOps: u8 {
1861        /// Load the existing contents of the attachment.
1862        const LOAD = 1 << 0;
1863        /// Clear the attachment to a specified value.
1864        const LOAD_CLEAR = 1 << 1;
1865        /// The contents of the attachment are undefined.
1866        const LOAD_DONT_CARE = 1 << 2;
1867        /// Store the contents of the attachment.
1868        const STORE = 1 << 3;
1869        /// The contents of the attachment are undefined after the pass.
1870        const STORE_DISCARD = 1 << 4;
1871    }
1872);
1873
1874#[derive(Clone, Debug)]
1875pub struct InstanceDescriptor<'a> {
1876    pub name: &'a str,
1877    pub flags: wgt::InstanceFlags,
1878    pub memory_budget_thresholds: wgt::MemoryBudgetThresholds,
1879    pub backend_options: wgt::BackendOptions,
1880}
1881
1882#[derive(Clone, Debug)]
1883pub struct Alignments {
1884    /// The alignment of the start of the buffer used as a GPU copy source.
1885    pub buffer_copy_offset: wgt::BufferSize,
1886
1887    /// The alignment of the row pitch of the texture data stored in a buffer that is
1888    /// used in a GPU copy operation.
1889    pub buffer_copy_pitch: wgt::BufferSize,
1890
1891    /// The finest alignment of bound range checking for uniform buffers.
1892    ///
1893    /// When `wgpu_hal` restricts shader references to the [accessible
1894    /// region][ar] of a [`Uniform`] buffer, the size of the accessible region
1895    /// is the bind group binding's stated [size], rounded up to the next
1896    /// multiple of this value.
1897    ///
1898    /// We don't need an analogous field for storage buffer bindings, because
1899    /// all our backends promise to enforce the size at least to a four-byte
1900    /// alignment, and `wgpu_hal` requires bound range lengths to be a multiple
1901    /// of four anyway.
1902    ///
1903    /// [ar]: struct.BufferBinding.html#accessible-region
1904    /// [`Uniform`]: wgt::BufferBindingType::Uniform
1905    /// [size]: BufferBinding::size
1906    pub uniform_bounds_check_alignment: wgt::BufferSize,
1907
1908    /// The size of the raw TLAS instance
1909    pub raw_tlas_instance_size: usize,
1910
1911    /// What the scratch buffer for building an acceleration structure must be aligned to
1912    pub ray_tracing_scratch_buffer_alignment: u32,
1913}
1914
1915#[derive(Clone, Debug)]
1916pub struct Capabilities {
1917    pub limits: wgt::Limits,
1918    pub alignments: Alignments,
1919    pub downlevel: wgt::DownlevelCapabilities,
1920}
1921
1922/// An adapter with all the information needed to reason about its capabilities.
1923///
1924/// These are either made by [`Instance::enumerate_adapters`] or by backend specific
1925/// methods on the backend [`Instance`] or [`Adapter`].
1926#[derive(Debug)]
1927pub struct ExposedAdapter<A: Api> {
1928    pub adapter: A::Adapter,
1929    pub info: wgt::AdapterInfo,
1930    pub features: wgt::Features,
1931    pub capabilities: Capabilities,
1932}
1933
1934/// Describes information about what a `Surface`'s presentation capabilities are.
1935/// Fetch this with [Adapter::surface_capabilities].
1936#[derive(Debug, Clone)]
1937pub struct SurfaceCapabilities {
1938    /// List of supported texture formats.
1939    ///
1940    /// Must be at least one.
1941    pub formats: Vec<wgt::TextureFormat>,
1942
1943    /// Range for the number of queued frames.
1944    ///
1945    /// This adjusts either the swapchain frame count to value + 1 - or sets SetMaximumFrameLatency to the value given,
1946    /// or uses a wait-for-present in the acquire method to limit rendering such that it acts like it's a value + 1 swapchain frame set.
1947    ///
1948    /// - `maximum_frame_latency.start` must be at least 1.
1949    /// - `maximum_frame_latency.end` must be larger or equal to `maximum_frame_latency.start`.
1950    pub maximum_frame_latency: RangeInclusive<u32>,
1951
1952    /// Current extent of the surface, if known.
1953    pub current_extent: Option<wgt::Extent3d>,
1954
1955    /// Supported texture usage flags.
1956    ///
1957    /// Must have at least `wgt::TextureUses::COLOR_TARGET`
1958    pub usage: wgt::TextureUses,
1959
1960    /// List of supported V-sync modes.
1961    ///
1962    /// Must be at least one.
1963    pub present_modes: Vec<wgt::PresentMode>,
1964
1965    /// List of supported alpha composition modes.
1966    ///
1967    /// Must be at least one.
1968    pub composite_alpha_modes: Vec<wgt::CompositeAlphaMode>,
1969}
1970
1971#[derive(Debug)]
1972pub struct AcquiredSurfaceTexture<A: Api> {
1973    pub texture: A::SurfaceTexture,
1974    /// The presentation configuration no longer matches
1975    /// the surface properties exactly, but can still be used to present
1976    /// to the surface successfully.
1977    pub suboptimal: bool,
1978}
1979
1980/// An open connection to a device and a queue.
1981///
1982/// This can be created from [`Adapter::open`] or backend
1983/// specific methods on the backend's [`Instance`] or [`Adapter`].
1984#[derive(Debug)]
1985pub struct OpenDevice<A: Api> {
1986    pub device: A::Device,
1987    pub queue: A::Queue,
1988}
1989
1990#[derive(Clone, Debug)]
1991pub struct BufferMapping {
1992    pub ptr: NonNull<u8>,
1993    pub is_coherent: bool,
1994}
1995
1996#[derive(Clone, Debug)]
1997pub struct BufferDescriptor<'a> {
1998    pub label: Label<'a>,
1999    pub size: wgt::BufferAddress,
2000    pub usage: wgt::BufferUses,
2001    pub memory_flags: MemoryFlags,
2002}
2003
2004#[derive(Clone, Debug)]
2005pub struct TextureDescriptor<'a> {
2006    pub label: Label<'a>,
2007    pub size: wgt::Extent3d,
2008    pub mip_level_count: u32,
2009    pub sample_count: u32,
2010    pub dimension: wgt::TextureDimension,
2011    pub format: wgt::TextureFormat,
2012    pub usage: wgt::TextureUses,
2013    pub memory_flags: MemoryFlags,
2014    /// Allows views of this texture to have a different format
2015    /// than the texture does.
2016    pub view_formats: Vec<wgt::TextureFormat>,
2017}
2018
2019impl TextureDescriptor<'_> {
2020    pub fn copy_extent(&self) -> CopyExtent {
2021        CopyExtent::map_extent_to_copy_size(&self.size, self.dimension)
2022    }
2023
2024    pub fn is_cube_compatible(&self) -> bool {
2025        self.dimension == wgt::TextureDimension::D2
2026            && self.size.depth_or_array_layers % 6 == 0
2027            && self.sample_count == 1
2028            && self.size.width == self.size.height
2029    }
2030
2031    pub fn array_layer_count(&self) -> u32 {
2032        match self.dimension {
2033            wgt::TextureDimension::D1 | wgt::TextureDimension::D3 => 1,
2034            wgt::TextureDimension::D2 => self.size.depth_or_array_layers,
2035        }
2036    }
2037}
2038
2039/// TextureView descriptor.
2040///
2041/// Valid usage:
2042///. - `format` has to be the same as `TextureDescriptor::format`
2043///. - `dimension` has to be compatible with `TextureDescriptor::dimension`
2044///. - `usage` has to be a subset of `TextureDescriptor::usage`
2045///. - `range` has to be a subset of parent texture
2046#[derive(Clone, Debug)]
2047pub struct TextureViewDescriptor<'a> {
2048    pub label: Label<'a>,
2049    pub format: wgt::TextureFormat,
2050    pub dimension: wgt::TextureViewDimension,
2051    pub usage: wgt::TextureUses,
2052    pub range: wgt::ImageSubresourceRange,
2053}
2054
2055#[derive(Clone, Debug)]
2056pub struct SamplerDescriptor<'a> {
2057    pub label: Label<'a>,
2058    pub address_modes: [wgt::AddressMode; 3],
2059    pub mag_filter: wgt::FilterMode,
2060    pub min_filter: wgt::FilterMode,
2061    pub mipmap_filter: wgt::MipmapFilterMode,
2062    pub lod_clamp: Range<f32>,
2063    pub compare: Option<wgt::CompareFunction>,
2064    // Must in the range [1, 16].
2065    //
2066    // Anisotropic filtering must be supported if this is not 1.
2067    pub anisotropy_clamp: u16,
2068    pub border_color: Option<wgt::SamplerBorderColor>,
2069}
2070
2071/// BindGroupLayout descriptor.
2072///
2073/// Valid usage:
2074/// - `entries` are sorted by ascending `wgt::BindGroupLayoutEntry::binding`
2075#[derive(Clone, Debug)]
2076pub struct BindGroupLayoutDescriptor<'a> {
2077    pub label: Label<'a>,
2078    pub flags: BindGroupLayoutFlags,
2079    pub entries: &'a [wgt::BindGroupLayoutEntry],
2080}
2081
2082#[derive(Clone, Debug)]
2083pub struct PipelineLayoutDescriptor<'a, B: DynBindGroupLayout + ?Sized> {
2084    pub label: Label<'a>,
2085    pub flags: PipelineLayoutFlags,
2086    pub bind_group_layouts: &'a [&'a B],
2087    pub immediates_ranges: &'a [wgt::ImmediateRange],
2088}
2089
2090/// A region of a buffer made visible to shaders via a [`BindGroup`].
2091///
2092/// [`BindGroup`]: Api::BindGroup
2093///
2094/// ## Construction
2095///
2096/// The recommended way to construct a `BufferBinding` is using the `binding`
2097/// method on a wgpu-core `Buffer`, which will validate the binding size
2098/// against the buffer size. A `new_unchecked` constructor is also provided for
2099/// cases where direct construction is necessary.
2100///
2101/// ## Accessible region
2102///
2103/// `wgpu_hal` guarantees that shaders compiled with
2104/// [`ShaderModuleDescriptor::runtime_checks`] set to `true` cannot read or
2105/// write data via this binding outside the *accessible region* of a buffer:
2106///
2107/// - The accessible region starts at [`offset`].
2108///
2109/// - For [`Storage`] bindings, the size of the accessible region is [`size`],
2110///   which must be a multiple of 4.
2111///
2112/// - For [`Uniform`] bindings, the size of the accessible region is [`size`]
2113///   rounded up to the next multiple of
2114///   [`Alignments::uniform_bounds_check_alignment`].
2115///
2116/// Note that this guarantee is stricter than WGSL's requirements for
2117/// [out-of-bounds accesses][woob], as WGSL allows them to return values from
2118/// elsewhere in the buffer. But this guarantee is necessary anyway, to permit
2119/// `wgpu-core` to avoid clearing uninitialized regions of buffers that will
2120/// never be read by the application before they are overwritten. This
2121/// optimization consults bind group buffer binding regions to determine which
2122/// parts of which buffers shaders might observe. This optimization is only
2123/// sound if shader access is bounds-checked.
2124///
2125/// ## Zero-length bindings
2126///
2127/// Some back ends cannot tolerate zero-length regions; for example, see
2128/// [VUID-VkDescriptorBufferInfo-offset-00340][340] and
2129/// [VUID-VkDescriptorBufferInfo-range-00341][341], or the
2130/// documentation for GLES's [glBindBufferRange][bbr]. This documentation
2131/// previously stated that a `BufferBinding` must have `offset` strictly less
2132/// than the size of the buffer, but this restriction was not honored elsewhere
2133/// in the code, so has been removed. However, it remains the case that
2134/// some backends do not support zero-length bindings, so additional
2135/// logic is needed somewhere to handle this properly. See
2136/// [#3170](https://github.com/gfx-rs/wgpu/issues/3170).
2137///
2138/// [`offset`]: BufferBinding::offset
2139/// [`size`]: BufferBinding::size
2140/// [`Storage`]: wgt::BufferBindingType::Storage
2141/// [`Uniform`]: wgt::BufferBindingType::Uniform
2142/// [340]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-offset-00340
2143/// [341]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-range-00341
2144/// [bbr]: https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glBindBufferRange.xhtml
2145/// [woob]: https://gpuweb.github.io/gpuweb/wgsl/#out-of-bounds-access-sec
2146#[derive(Debug)]
2147pub struct BufferBinding<'a, B: DynBuffer + ?Sized> {
2148    /// The buffer being bound.
2149    ///
2150    /// This is not fully `pub` to prevent direct construction of
2151    /// `BufferBinding`s, while still allowing public read access to the `offset`
2152    /// and `size` properties.
2153    pub(crate) buffer: &'a B,
2154
2155    /// The offset at which the bound region starts.
2156    ///
2157    /// This must be less or equal to the size of the buffer.
2158    pub offset: wgt::BufferAddress,
2159
2160    /// The size of the region bound, in bytes.
2161    ///
2162    /// If `None`, the region extends from `offset` to the end of the
2163    /// buffer. Given the restrictions on `offset`, this means that
2164    /// the size is always greater than zero.
2165    pub size: Option<wgt::BufferSize>,
2166}
2167
2168// We must implement this manually because `B` is not necessarily `Clone`.
2169impl<B: DynBuffer + ?Sized> Clone for BufferBinding<'_, B> {
2170    fn clone(&self) -> Self {
2171        BufferBinding {
2172            buffer: self.buffer,
2173            offset: self.offset,
2174            size: self.size,
2175        }
2176    }
2177}
2178
2179/// Temporary convenience trait to let us call `.get()` on `u64`s in code that
2180/// really wants to be using `NonZeroU64`.
2181/// TODO(<https://github.com/gfx-rs/wgpu/issues/3170>): remove this
2182pub trait ShouldBeNonZeroExt {
2183    fn get(&self) -> u64;
2184}
2185
2186impl ShouldBeNonZeroExt for NonZeroU64 {
2187    fn get(&self) -> u64 {
2188        NonZeroU64::get(*self)
2189    }
2190}
2191
2192impl ShouldBeNonZeroExt for u64 {
2193    fn get(&self) -> u64 {
2194        *self
2195    }
2196}
2197
2198impl ShouldBeNonZeroExt for Option<NonZeroU64> {
2199    fn get(&self) -> u64 {
2200        match *self {
2201            Some(non_zero) => non_zero.get(),
2202            None => 0,
2203        }
2204    }
2205}
2206
2207impl<'a, B: DynBuffer + ?Sized> BufferBinding<'a, B> {
2208    /// Construct a `BufferBinding` with the given contents.
2209    ///
2210    /// When possible, use the `binding` method on a wgpu-core `Buffer` instead
2211    /// of this method. `Buffer::binding` validates the size of the binding
2212    /// against the size of the buffer.
2213    ///
2214    /// It is more difficult to provide a validating constructor here, due to
2215    /// not having direct access to the size of a `DynBuffer`.
2216    ///
2217    /// SAFETY: The caller is responsible for ensuring that a binding of `size`
2218    /// bytes starting at `offset` is contained within the buffer.
2219    ///
2220    /// The `S` type parameter is a temporary convenience to allow callers to
2221    /// pass a zero size. When the zero-size binding issue is resolved, the
2222    /// argument should just match the type of the member.
2223    /// TODO(<https://github.com/gfx-rs/wgpu/issues/3170>): remove the parameter
2224    pub fn new_unchecked<S: Into<Option<NonZeroU64>>>(
2225        buffer: &'a B,
2226        offset: wgt::BufferAddress,
2227        size: S,
2228    ) -> Self {
2229        Self {
2230            buffer,
2231            offset,
2232            size: size.into(),
2233        }
2234    }
2235}
2236
2237#[derive(Debug)]
2238pub struct TextureBinding<'a, T: DynTextureView + ?Sized> {
2239    pub view: &'a T,
2240    pub usage: wgt::TextureUses,
2241}
2242
2243impl<'a, T: DynTextureView + ?Sized> Clone for TextureBinding<'a, T> {
2244    fn clone(&self) -> Self {
2245        TextureBinding {
2246            view: self.view,
2247            usage: self.usage,
2248        }
2249    }
2250}
2251
2252#[derive(Debug)]
2253pub struct ExternalTextureBinding<'a, B: DynBuffer + ?Sized, T: DynTextureView + ?Sized> {
2254    pub planes: [TextureBinding<'a, T>; 3],
2255    pub params: BufferBinding<'a, B>,
2256}
2257
2258impl<'a, B: DynBuffer + ?Sized, T: DynTextureView + ?Sized> Clone
2259    for ExternalTextureBinding<'a, B, T>
2260{
2261    fn clone(&self) -> Self {
2262        ExternalTextureBinding {
2263            planes: self.planes.clone(),
2264            params: self.params.clone(),
2265        }
2266    }
2267}
2268
2269/// cbindgen:ignore
2270#[derive(Clone, Debug)]
2271pub struct BindGroupEntry {
2272    pub binding: u32,
2273    pub resource_index: u32,
2274    pub count: u32,
2275}
2276
2277/// BindGroup descriptor.
2278///
2279/// Valid usage:
2280///. - `entries` has to be sorted by ascending `BindGroupEntry::binding`
2281///. - `entries` has to have the same set of `BindGroupEntry::binding` as `layout`
2282///. - each entry has to be compatible with the `layout`
2283///. - each entry's `BindGroupEntry::resource_index` is within range
2284///    of the corresponding resource array, selected by the relevant
2285///    `BindGroupLayoutEntry`.
2286#[derive(Clone, Debug)]
2287pub struct BindGroupDescriptor<
2288    'a,
2289    Bgl: DynBindGroupLayout + ?Sized,
2290    B: DynBuffer + ?Sized,
2291    S: DynSampler + ?Sized,
2292    T: DynTextureView + ?Sized,
2293    A: DynAccelerationStructure + ?Sized,
2294> {
2295    pub label: Label<'a>,
2296    pub layout: &'a Bgl,
2297    pub buffers: &'a [BufferBinding<'a, B>],
2298    pub samplers: &'a [&'a S],
2299    pub textures: &'a [TextureBinding<'a, T>],
2300    pub entries: &'a [BindGroupEntry],
2301    pub acceleration_structures: &'a [&'a A],
2302    pub external_textures: &'a [ExternalTextureBinding<'a, B, T>],
2303}
2304
2305#[derive(Clone, Debug)]
2306pub struct CommandEncoderDescriptor<'a, Q: DynQueue + ?Sized> {
2307    pub label: Label<'a>,
2308    pub queue: &'a Q,
2309}
2310
2311/// Naga shader module.
2312#[derive(Default)]
2313pub struct NagaShader {
2314    /// Shader module IR.
2315    pub module: Cow<'static, naga::Module>,
2316    /// Analysis information of the module.
2317    pub info: naga::valid::ModuleInfo,
2318    /// Source codes for debug
2319    pub debug_source: Option<DebugSource>,
2320}
2321
2322// Custom implementation avoids the need to generate Debug impl code
2323// for the whole Naga module and info.
2324impl fmt::Debug for NagaShader {
2325    fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
2326        write!(formatter, "Naga shader")
2327    }
2328}
2329
2330/// Shader input.
2331#[allow(clippy::large_enum_variant)]
2332pub enum ShaderInput<'a> {
2333    Naga(NagaShader),
2334    Msl {
2335        shader: &'a str,
2336        entry_point: String,
2337        num_workgroups: (u32, u32, u32),
2338    },
2339    SpirV(&'a [u32]),
2340    Dxil {
2341        shader: &'a [u8],
2342        entry_point: String,
2343        num_workgroups: (u32, u32, u32),
2344    },
2345    Hlsl {
2346        shader: &'a str,
2347        entry_point: String,
2348        num_workgroups: (u32, u32, u32),
2349    },
2350    Glsl {
2351        shader: &'a str,
2352        entry_point: String,
2353        num_workgroups: (u32, u32, u32),
2354    },
2355}
2356
2357pub struct ShaderModuleDescriptor<'a> {
2358    pub label: Label<'a>,
2359
2360    /// # Safety
2361    ///
2362    /// See the documentation for each flag in [`ShaderRuntimeChecks`][src].
2363    ///
2364    /// [src]: wgt::ShaderRuntimeChecks
2365    pub runtime_checks: wgt::ShaderRuntimeChecks,
2366}
2367
2368#[derive(Debug, Clone)]
2369pub struct DebugSource {
2370    pub file_name: Cow<'static, str>,
2371    pub source_code: Cow<'static, str>,
2372}
2373
2374/// Describes a programmable pipeline stage.
2375#[derive(Debug)]
2376pub struct ProgrammableStage<'a, M: DynShaderModule + ?Sized> {
2377    /// The compiled shader module for this stage.
2378    pub module: &'a M,
2379    /// The name of the entry point in the compiled shader. There must be a function with this name
2380    ///  in the shader.
2381    pub entry_point: &'a str,
2382    /// Pipeline constants
2383    pub constants: &'a naga::back::PipelineConstants,
2384    /// Whether workgroup scoped memory will be initialized with zero values for this stage.
2385    ///
2386    /// This is required by the WebGPU spec, but may have overhead which can be avoided
2387    /// for cross-platform applications
2388    pub zero_initialize_workgroup_memory: bool,
2389}
2390
2391impl<M: DynShaderModule + ?Sized> Clone for ProgrammableStage<'_, M> {
2392    fn clone(&self) -> Self {
2393        Self {
2394            module: self.module,
2395            entry_point: self.entry_point,
2396            constants: self.constants,
2397            zero_initialize_workgroup_memory: self.zero_initialize_workgroup_memory,
2398        }
2399    }
2400}
2401
2402/// Describes a compute pipeline.
2403#[derive(Clone, Debug)]
2404pub struct ComputePipelineDescriptor<
2405    'a,
2406    Pl: DynPipelineLayout + ?Sized,
2407    M: DynShaderModule + ?Sized,
2408    Pc: DynPipelineCache + ?Sized,
2409> {
2410    pub label: Label<'a>,
2411    /// The layout of bind groups for this pipeline.
2412    pub layout: &'a Pl,
2413    /// The compiled compute stage and its entry point.
2414    pub stage: ProgrammableStage<'a, M>,
2415    /// The cache which will be used and filled when compiling this pipeline
2416    pub cache: Option<&'a Pc>,
2417}
2418
2419pub struct PipelineCacheDescriptor<'a> {
2420    pub label: Label<'a>,
2421    pub data: Option<&'a [u8]>,
2422}
2423
2424/// Describes how the vertex buffer is interpreted.
2425#[derive(Clone, Debug)]
2426pub struct VertexBufferLayout<'a> {
2427    /// The stride, in bytes, between elements of this buffer.
2428    pub array_stride: wgt::BufferAddress,
2429    /// How often this vertex buffer is "stepped" forward.
2430    pub step_mode: wgt::VertexStepMode,
2431    /// The list of attributes which comprise a single vertex.
2432    pub attributes: &'a [wgt::VertexAttribute],
2433}
2434
2435#[derive(Clone, Debug)]
2436pub enum VertexProcessor<'a, M: DynShaderModule + ?Sized> {
2437    Standard {
2438        /// The format of any vertex buffers used with this pipeline.
2439        vertex_buffers: &'a [VertexBufferLayout<'a>],
2440        /// The vertex stage for this pipeline.
2441        vertex_stage: ProgrammableStage<'a, M>,
2442    },
2443    Mesh {
2444        task_stage: Option<ProgrammableStage<'a, M>>,
2445        mesh_stage: ProgrammableStage<'a, M>,
2446    },
2447}
2448
2449/// Describes a render (graphics) pipeline.
2450#[derive(Clone, Debug)]
2451pub struct RenderPipelineDescriptor<
2452    'a,
2453    Pl: DynPipelineLayout + ?Sized,
2454    M: DynShaderModule + ?Sized,
2455    Pc: DynPipelineCache + ?Sized,
2456> {
2457    pub label: Label<'a>,
2458    /// The layout of bind groups for this pipeline.
2459    pub layout: &'a Pl,
2460    /// The vertex processing state(vertex shader + buffers or task + mesh shaders)
2461    pub vertex_processor: VertexProcessor<'a, M>,
2462    /// The properties of the pipeline at the primitive assembly and rasterization level.
2463    pub primitive: wgt::PrimitiveState,
2464    /// The effect of draw calls on the depth and stencil aspects of the output target, if any.
2465    pub depth_stencil: Option<wgt::DepthStencilState>,
2466    /// The multi-sampling properties of the pipeline.
2467    pub multisample: wgt::MultisampleState,
2468    /// The fragment stage for this pipeline.
2469    pub fragment_stage: Option<ProgrammableStage<'a, M>>,
2470    /// The effect of draw calls on the color aspect of the output target.
2471    pub color_targets: &'a [Option<wgt::ColorTargetState>],
2472    /// If the pipeline will be used with a multiview render pass, this indicates how many array
2473    /// layers the attachments will have.
2474    pub multiview_mask: Option<NonZeroU32>,
2475    /// The cache which will be used and filled when compiling this pipeline
2476    pub cache: Option<&'a Pc>,
2477}
2478
2479#[derive(Debug, Clone)]
2480pub struct SurfaceConfiguration {
2481    /// Maximum number of queued frames. Must be in
2482    /// `SurfaceCapabilities::maximum_frame_latency` range.
2483    pub maximum_frame_latency: u32,
2484    /// Vertical synchronization mode.
2485    pub present_mode: wgt::PresentMode,
2486    /// Alpha composition mode.
2487    pub composite_alpha_mode: wgt::CompositeAlphaMode,
2488    /// Format of the surface textures.
2489    pub format: wgt::TextureFormat,
2490    /// Requested texture extent. Must be in
2491    /// `SurfaceCapabilities::extents` range.
2492    pub extent: wgt::Extent3d,
2493    /// Allowed usage of surface textures,
2494    pub usage: wgt::TextureUses,
2495    /// Allows views of swapchain texture to have a different format
2496    /// than the texture does.
2497    pub view_formats: Vec<wgt::TextureFormat>,
2498}
2499
2500#[derive(Debug, Clone)]
2501pub struct Rect<T> {
2502    pub x: T,
2503    pub y: T,
2504    pub w: T,
2505    pub h: T,
2506}
2507
2508#[derive(Debug, Clone, PartialEq)]
2509pub struct StateTransition<T> {
2510    pub from: T,
2511    pub to: T,
2512}
2513
2514#[derive(Debug, Clone)]
2515pub struct BufferBarrier<'a, B: DynBuffer + ?Sized> {
2516    pub buffer: &'a B,
2517    pub usage: StateTransition<wgt::BufferUses>,
2518}
2519
2520#[derive(Debug, Clone)]
2521pub struct TextureBarrier<'a, T: DynTexture + ?Sized> {
2522    pub texture: &'a T,
2523    pub range: wgt::ImageSubresourceRange,
2524    pub usage: StateTransition<wgt::TextureUses>,
2525}
2526
2527#[derive(Clone, Copy, Debug)]
2528pub struct BufferCopy {
2529    pub src_offset: wgt::BufferAddress,
2530    pub dst_offset: wgt::BufferAddress,
2531    pub size: wgt::BufferSize,
2532}
2533
2534#[derive(Clone, Debug)]
2535pub struct TextureCopyBase {
2536    pub mip_level: u32,
2537    pub array_layer: u32,
2538    /// Origin within a texture.
2539    /// Note: for 1D and 2D textures, Z must be 0.
2540    pub origin: wgt::Origin3d,
2541    pub aspect: FormatAspects,
2542}
2543
2544#[derive(Clone, Copy, Debug)]
2545pub struct CopyExtent {
2546    pub width: u32,
2547    pub height: u32,
2548    pub depth: u32,
2549}
2550
2551impl From<wgt::Extent3d> for CopyExtent {
2552    fn from(value: wgt::Extent3d) -> Self {
2553        let wgt::Extent3d {
2554            width,
2555            height,
2556            depth_or_array_layers,
2557        } = value;
2558        Self {
2559            width,
2560            height,
2561            depth: depth_or_array_layers,
2562        }
2563    }
2564}
2565
2566impl From<CopyExtent> for wgt::Extent3d {
2567    fn from(value: CopyExtent) -> Self {
2568        let CopyExtent {
2569            width,
2570            height,
2571            depth,
2572        } = value;
2573        Self {
2574            width,
2575            height,
2576            depth_or_array_layers: depth,
2577        }
2578    }
2579}
2580
2581#[derive(Clone, Debug)]
2582pub struct TextureCopy {
2583    pub src_base: TextureCopyBase,
2584    pub dst_base: TextureCopyBase,
2585    pub size: CopyExtent,
2586}
2587
2588#[derive(Clone, Debug)]
2589pub struct BufferTextureCopy {
2590    pub buffer_layout: wgt::TexelCopyBufferLayout,
2591    pub texture_base: TextureCopyBase,
2592    pub size: CopyExtent,
2593}
2594
2595#[derive(Clone, Debug)]
2596pub struct Attachment<'a, T: DynTextureView + ?Sized> {
2597    pub view: &'a T,
2598    /// Contains either a single mutating usage as a target,
2599    /// or a valid combination of read-only usages.
2600    pub usage: wgt::TextureUses,
2601}
2602
2603#[derive(Clone, Debug)]
2604pub struct ColorAttachment<'a, T: DynTextureView + ?Sized> {
2605    pub target: Attachment<'a, T>,
2606    pub depth_slice: Option<u32>,
2607    pub resolve_target: Option<Attachment<'a, T>>,
2608    pub ops: AttachmentOps,
2609    pub clear_value: wgt::Color,
2610}
2611
2612#[derive(Clone, Debug)]
2613pub struct DepthStencilAttachment<'a, T: DynTextureView + ?Sized> {
2614    pub target: Attachment<'a, T>,
2615    pub depth_ops: AttachmentOps,
2616    pub stencil_ops: AttachmentOps,
2617    pub clear_value: (f32, u32),
2618}
2619
2620#[derive(Clone, Debug)]
2621pub struct PassTimestampWrites<'a, Q: DynQuerySet + ?Sized> {
2622    pub query_set: &'a Q,
2623    pub beginning_of_pass_write_index: Option<u32>,
2624    pub end_of_pass_write_index: Option<u32>,
2625}
2626
2627#[derive(Clone, Debug)]
2628pub struct RenderPassDescriptor<'a, Q: DynQuerySet + ?Sized, T: DynTextureView + ?Sized> {
2629    pub label: Label<'a>,
2630    pub extent: wgt::Extent3d,
2631    pub sample_count: u32,
2632    pub color_attachments: &'a [Option<ColorAttachment<'a, T>>],
2633    pub depth_stencil_attachment: Option<DepthStencilAttachment<'a, T>>,
2634    pub multiview_mask: Option<NonZeroU32>,
2635    pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2636    pub occlusion_query_set: Option<&'a Q>,
2637}
2638
2639#[derive(Clone, Debug)]
2640pub struct ComputePassDescriptor<'a, Q: DynQuerySet + ?Sized> {
2641    pub label: Label<'a>,
2642    pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2643}
2644
2645#[test]
2646fn test_default_limits() {
2647    let limits = wgt::Limits::default();
2648    assert!(limits.max_bind_groups <= MAX_BIND_GROUPS as u32);
2649}
2650
2651#[derive(Clone, Debug)]
2652pub struct AccelerationStructureDescriptor<'a> {
2653    pub label: Label<'a>,
2654    pub size: wgt::BufferAddress,
2655    pub format: AccelerationStructureFormat,
2656    pub allow_compaction: bool,
2657}
2658
2659#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2660pub enum AccelerationStructureFormat {
2661    TopLevel,
2662    BottomLevel,
2663}
2664
2665#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2666pub enum AccelerationStructureBuildMode {
2667    Build,
2668    Update,
2669}
2670
2671/// Information of the required size for a corresponding entries struct (+ flags)
2672#[derive(Copy, Clone, Debug, Default, Eq, PartialEq)]
2673pub struct AccelerationStructureBuildSizes {
2674    pub acceleration_structure_size: wgt::BufferAddress,
2675    pub update_scratch_size: wgt::BufferAddress,
2676    pub build_scratch_size: wgt::BufferAddress,
2677}
2678
2679/// Updates use source_acceleration_structure if present, else the update will be performed in place.
2680/// For updates, only the data is allowed to change (not the meta data or sizes).
2681#[derive(Clone, Debug)]
2682pub struct BuildAccelerationStructureDescriptor<
2683    'a,
2684    B: DynBuffer + ?Sized,
2685    A: DynAccelerationStructure + ?Sized,
2686> {
2687    pub entries: &'a AccelerationStructureEntries<'a, B>,
2688    pub mode: AccelerationStructureBuildMode,
2689    pub flags: AccelerationStructureBuildFlags,
2690    pub source_acceleration_structure: Option<&'a A>,
2691    pub destination_acceleration_structure: &'a A,
2692    pub scratch_buffer: &'a B,
2693    pub scratch_buffer_offset: wgt::BufferAddress,
2694}
2695
2696/// - All buffers, buffer addresses and offsets will be ignored.
2697/// - The build mode will be ignored.
2698/// - Reducing the amount of Instances, Triangle groups or AABB groups (or the number of Triangles/AABBs in corresponding groups),
2699///   may result in reduced size requirements.
2700/// - Any other change may result in a bigger or smaller size requirement.
2701#[derive(Clone, Debug)]
2702pub struct GetAccelerationStructureBuildSizesDescriptor<'a, B: DynBuffer + ?Sized> {
2703    pub entries: &'a AccelerationStructureEntries<'a, B>,
2704    pub flags: AccelerationStructureBuildFlags,
2705}
2706
2707/// Entries for a single descriptor
2708/// * `Instances` - Multiple instances for a top level acceleration structure
2709/// * `Triangles` - Multiple triangle meshes for a bottom level acceleration structure
2710/// * `AABBs` - List of list of axis aligned bounding boxes for a bottom level acceleration structure
2711#[derive(Debug)]
2712pub enum AccelerationStructureEntries<'a, B: DynBuffer + ?Sized> {
2713    Instances(AccelerationStructureInstances<'a, B>),
2714    Triangles(Vec<AccelerationStructureTriangles<'a, B>>),
2715    AABBs(Vec<AccelerationStructureAABBs<'a, B>>),
2716}
2717
2718/// * `first_vertex` - offset in the vertex buffer (as number of vertices)
2719/// * `indices` - optional index buffer with attributes
2720/// * `transform` - optional transform
2721#[derive(Clone, Debug)]
2722pub struct AccelerationStructureTriangles<'a, B: DynBuffer + ?Sized> {
2723    pub vertex_buffer: Option<&'a B>,
2724    pub vertex_format: wgt::VertexFormat,
2725    pub first_vertex: u32,
2726    pub vertex_count: u32,
2727    pub vertex_stride: wgt::BufferAddress,
2728    pub indices: Option<AccelerationStructureTriangleIndices<'a, B>>,
2729    pub transform: Option<AccelerationStructureTriangleTransform<'a, B>>,
2730    pub flags: AccelerationStructureGeometryFlags,
2731}
2732
2733/// * `offset` - offset in bytes
2734#[derive(Clone, Debug)]
2735pub struct AccelerationStructureAABBs<'a, B: DynBuffer + ?Sized> {
2736    pub buffer: Option<&'a B>,
2737    pub offset: u32,
2738    pub count: u32,
2739    pub stride: wgt::BufferAddress,
2740    pub flags: AccelerationStructureGeometryFlags,
2741}
2742
2743pub struct AccelerationStructureCopy {
2744    pub copy_flags: wgt::AccelerationStructureCopy,
2745    pub type_flags: wgt::AccelerationStructureType,
2746}
2747
2748/// * `offset` - offset in bytes
2749#[derive(Clone, Debug)]
2750pub struct AccelerationStructureInstances<'a, B: DynBuffer + ?Sized> {
2751    pub buffer: Option<&'a B>,
2752    pub offset: u32,
2753    pub count: u32,
2754}
2755
2756/// * `offset` - offset in bytes
2757#[derive(Clone, Debug)]
2758pub struct AccelerationStructureTriangleIndices<'a, B: DynBuffer + ?Sized> {
2759    pub format: wgt::IndexFormat,
2760    pub buffer: Option<&'a B>,
2761    pub offset: u32,
2762    pub count: u32,
2763}
2764
2765/// * `offset` - offset in bytes
2766#[derive(Clone, Debug)]
2767pub struct AccelerationStructureTriangleTransform<'a, B: DynBuffer + ?Sized> {
2768    pub buffer: &'a B,
2769    pub offset: u32,
2770}
2771
2772pub use wgt::AccelerationStructureFlags as AccelerationStructureBuildFlags;
2773pub use wgt::AccelerationStructureGeometryFlags;
2774
2775bitflags::bitflags! {
2776    #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
2777    pub struct AccelerationStructureUses: u8 {
2778        // For blas used as input for tlas
2779        const BUILD_INPUT = 1 << 0;
2780        // Target for acceleration structure build
2781        const BUILD_OUTPUT = 1 << 1;
2782        // Tlas used in a shader
2783        const SHADER_INPUT = 1 << 2;
2784        // Blas used to query compacted size
2785        const QUERY_INPUT = 1 << 3;
2786        // BLAS used as a src for a copy operation
2787        const COPY_SRC = 1 << 4;
2788        // BLAS used as a dst for a copy operation
2789        const COPY_DST = 1 << 5;
2790    }
2791}
2792
2793#[derive(Debug, Clone)]
2794pub struct AccelerationStructureBarrier {
2795    pub usage: StateTransition<AccelerationStructureUses>,
2796}
2797
2798#[derive(Debug, Copy, Clone)]
2799pub struct TlasInstance {
2800    pub transform: [f32; 12],
2801    pub custom_data: u32,
2802    pub mask: u8,
2803    pub blas_address: u64,
2804}