/*! Allocating resource ids, and tracking the resources they refer to.

The `wgpu_core` API uses identifiers of type [`Id<R>`] to refer to
resources of type `R`. For example, [`id::DeviceId`] is an alias for
`Id<markers::Device>`, and [`id::BufferId`] is an alias for
`Id<markers::Buffer>`. `Id` implements `Copy`, `Hash`, `Eq`, `Ord`, and
of course `Debug`.

[`id::DeviceId`]: crate::id::DeviceId
[`id::BufferId`]: crate::id::BufferId

Each `Id` contains not only an index for the resource it denotes but
also a `Backend` indicating which `wgpu` backend it belongs to.

`Id`s also incorporate a generation number, for additional validation.

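By way of orientation, an `Id<R>` conceptually carries the following
pieces of information (an illustrative sketch only; the real type packs
them into a single integer whose encoding is private):

```ignore
// Not the real definition: a conceptual view of what an `Id<R>` encodes.
struct ConceptualId<R> {
    index: u32,             // slot in the corresponding resource registry
    epoch: u32,             // generation number, bumped when a slot is reused
    backend: wgt::Backend,  // which `wgpu` backend the id belongs to
    marker: PhantomData<R>, // ties the id to its resource type
}
```
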
The resources to which identifiers refer are freed explicitly.
Attempting to use an identifier for a resource that has been freed
elicits an error result.

Eventually, we would like to remove numeric IDs from wgpu-core.
See <https://github.com/gfx-rs/wgpu/issues/5121>.

## Assigning ids to resources

The users of `wgpu_core` generally want resource ids to be assigned
in one of two ways:

- Users like `wgpu` want `wgpu_core` to assign ids to resources itself.
  For example, `wgpu` expects to call `Global::device_create_buffer`
  and have the return value indicate the newly created buffer's id.

- Users like Firefox want to allocate ids themselves, and pass
  `Global::device_create_buffer` and friends the id to assign to the new
  resource.

To accommodate either pattern, `wgpu_core` methods that create
resources all expect an `id_in` argument that the caller can use to
specify the id, and they all return the id used. For example, the
declaration of `Global::device_create_buffer` looks like this:

```ignore
impl Global {
    /* ... */
    pub fn device_create_buffer(
        &self,
        device_id: id::DeviceId,
        desc: &resource::BufferDescriptor,
        id_in: Option<id::BufferId>,
    ) -> (id::BufferId, Option<resource::CreateBufferError>) {
        /* ... */
    }
    /* ... */
}
```

Users that want to assign resource ids themselves pass in the id they
want as the `id_in` argument, whereas users that want `wgpu_core`
itself to choose ids always pass `None`. In either case, the id
ultimately assigned is returned as the first element of the tuple.
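
For example, the two calling styles look roughly like this (a sketch;
the `global`, `device_id`, `desc`, and `my_id` variables are hypothetical):

```ignore
// A Firefox-style caller supplies an id it allocated itself.
let (buffer_id, error) = global.device_create_buffer(device_id, &desc, Some(my_id));

// A wgpu-style caller lets wgpu_core choose the id.
let (buffer_id, error) = global.device_create_buffer(device_id, &desc, None);
// In either case, `error` is `Some(..)` if creation failed; see below.
```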

Producing true identifiers from `id_in` values is the job of
[`crate::identity::IdentityManager`]; alternatively, ids may be received
from outside through `Option<Id>` arguments.

## Id allocation and streaming

Perhaps surprisingly, allowing users to assign resource ids themselves
enables major performance improvements in some applications.

The `wgpu_core` API is designed for use by Firefox's [WebGPU]
implementation. For security, web content and GPU use must be kept
segregated in separate processes, with all interaction between them
mediated by an inter-process communication protocol. As web content uses
the WebGPU API, the content process sends messages to the GPU process,
which interacts with the platform's GPU APIs on content's behalf,
occasionally sending results back.

In a classic Rust API, a resource allocation function takes parameters
describing the resource to create, and if creation succeeds, it returns
the resource id in a `Result::Ok` value. However, this design is a poor
fit for the split-process architecture described above: content must wait
for the reply to its buffer-creation message (say) before it can know
which id it can use in the next message that uses that buffer. For a
common usage pattern, the classic Rust design imposes the latency of a
full cross-process round trip.

We can avoid incurring these round-trip latencies simply by letting the
content process assign resource ids itself. With this approach, content
can choose an id for the new buffer, send a message to create the
buffer, and then immediately send the next message operating on that
buffer, since it already knows its id. Allowing content and GPU process
activity to be pipelined greatly improves throughput.
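
Concretely, the content-process side of such a protocol might look
roughly like this (a hypothetical sketch; the `Message` type, `send`
function, and id allocator shown here are illustrative, not part of
`wgpu_core`):

```ignore
// Allocate the id locally, then stream messages without waiting for replies.
let buffer_id = content_ids.allocate_buffer_id();
send(Message::CreateBuffer { id: buffer_id, desc });
// The next message can refer to the buffer immediately: no round trip needed.
send(Message::WriteBuffer { id: buffer_id, data });
```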

To help propagate errors correctly in this style of usage, when resource
creation fails, the id supplied for that resource is marked to indicate
as much, allowing subsequent operations using that id to be properly
flagged as errors as well.

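The registries below implement this with the `Fallible` wrapper, which
stores either the live resource or a marker recording that creation
failed. Roughly (a simplified sketch; see `crate::resource` for the real
definition):

```ignore
enum Fallible<T> {
    Valid(Arc<T>),        // creation succeeded: lookups yield the resource
    Invalid(Arc<String>), // creation failed: lookups report an error for this id
}
```
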
[`Id<R>`]: crate::id::Id
[WebGPU]: https://www.w3.org/TR/webgpu/

## IDs and tracing

As of `wgpu` v27, commands are encoded all at once when
`CommandEncoder::finish` is called, not when the encoding methods are
called for each command. This implies storing a representation of the
commands in memory until `finish` is called. `Arc`s are more suitable
for this purpose than numeric ids. Rather than redundantly store both
`Id`s and `Arc`s, tracing has been changed to work with `Arc`s. The
serialized trace identifies resources by the integer value of
`Arc::as_ptr`. These IDs have the type [`crate::id::PointerId`]. The
trace player uses hash maps to go from `PointerId`s to `Arc`s
when replaying a trace.
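
In other words, the recorded identifier for a resource is simply the
address of its `Arc`, and the player resolves recorded addresses through
a map it maintains (a rough sketch; the `ptr_to_arc` map is hypothetical):

```ignore
// Recording side: identify a resource by the address of its Arc.
let trace_id = Arc::as_ptr(&buffer) as u64;

// Player side: a hash map resolves recorded addresses to live Arcs.
let buffer = ptr_to_arc.get(&trace_id).expect("unknown trace id").clone();
```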

*/

use alloc::sync::Arc;
use core::fmt::Debug;

use crate::{
    binding_model::{BindGroup, BindGroupLayout, PipelineLayout},
    command::{CommandBuffer, CommandEncoder, RenderBundle},
    device::{queue::Queue, Device},
    instance::Adapter,
    pipeline::{ComputePipeline, PipelineCache, RenderPipeline, ShaderModule},
    registry::{Registry, RegistryReport},
    resource::{
        Blas, Buffer, ExternalTexture, Fallible, QuerySet, Sampler, StagingBuffer, Texture,
        TextureView, Tlas,
    },
};

#[derive(Debug, PartialEq, Eq)]
pub struct HubReport {
    pub adapters: RegistryReport,
    pub devices: RegistryReport,
    pub queues: RegistryReport,
    pub pipeline_layouts: RegistryReport,
    pub shader_modules: RegistryReport,
    pub bind_group_layouts: RegistryReport,
    pub bind_groups: RegistryReport,
    pub command_encoders: RegistryReport,
    pub command_buffers: RegistryReport,
    pub render_bundles: RegistryReport,
    pub render_pipelines: RegistryReport,
    pub compute_pipelines: RegistryReport,
    pub pipeline_caches: RegistryReport,
    pub query_sets: RegistryReport,
    pub buffers: RegistryReport,
    pub textures: RegistryReport,
    pub texture_views: RegistryReport,
    pub external_textures: RegistryReport,
    pub samplers: RegistryReport,
}

impl HubReport {
    pub fn is_empty(&self) -> bool {
        self.adapters.is_empty()
    }
}

#[allow(rustdoc::private_intra_doc_links)]
/// All the resources tracked by a [`crate::global::Global`].
///
/// ## Locking
///
/// Each field in `Hub` is a [`Registry`] holding all the values of a
/// particular type of resource, all protected by a single `RwLock`.
/// So for example, to access any [`Buffer`], you must acquire a read
/// lock on the `Hub`'s entire buffers registry. The lock guard
/// gives you access to the `Registry`'s [`Storage`], which you can
/// then index with the buffer's id. (Yes, this design causes
/// contention; see [#2272].)
///
/// But most `wgpu` operations require access to several different
/// kinds of resource, so you often need to hold locks on several
/// different fields of your [`Hub`] simultaneously.
///
/// Within a `Registry`, each resource is stored as an `Arc<T>`, where
/// `T` is the resource type. The `Registry`'s lock only needs to be held
/// while looking up and cloning the `Arc` for a specific resource.
///
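/// A typical access pattern is therefore to hold a registry's lock only
/// long enough to clone the resource's `Arc` (a rough sketch; the method
/// names here are illustrative):
///
/// ```ignore
/// // Clone the Arc out of the registry, then release its lock.
/// let buffer: Arc<Buffer> = hub.buffers.get(buffer_id).get()?;
/// // The registry lock is no longer held; use the buffer freely.
/// ```
///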
/// [`Storage`]: crate::storage::Storage
pub struct Hub {
    pub(crate) adapters: Registry<Arc<Adapter>>,
    pub(crate) devices: Registry<Arc<Device>>,
    pub(crate) queues: Registry<Arc<Queue>>,
    pub(crate) pipeline_layouts: Registry<Fallible<PipelineLayout>>,
    pub(crate) shader_modules: Registry<Fallible<ShaderModule>>,
    pub(crate) bind_group_layouts: Registry<Fallible<BindGroupLayout>>,
    pub(crate) bind_groups: Registry<Fallible<BindGroup>>,
    pub(crate) command_encoders: Registry<Arc<CommandEncoder>>,
    pub(crate) command_buffers: Registry<Arc<CommandBuffer>>,
    pub(crate) render_bundles: Registry<Fallible<RenderBundle>>,
    pub(crate) render_pipelines: Registry<Fallible<RenderPipeline>>,
    pub(crate) compute_pipelines: Registry<Fallible<ComputePipeline>>,
    pub(crate) pipeline_caches: Registry<Fallible<PipelineCache>>,
    pub(crate) query_sets: Registry<Fallible<QuerySet>>,
    pub(crate) buffers: Registry<Fallible<Buffer>>,
    pub(crate) staging_buffers: Registry<StagingBuffer>,
    pub(crate) textures: Registry<Fallible<Texture>>,
    pub(crate) texture_views: Registry<Fallible<TextureView>>,
    pub(crate) external_textures: Registry<Fallible<ExternalTexture>>,
    pub(crate) samplers: Registry<Fallible<Sampler>>,
    pub(crate) blas_s: Registry<Fallible<Blas>>,
    pub(crate) tlas_s: Registry<Fallible<Tlas>>,
}

impl Hub {
    pub(crate) fn new() -> Self {
        Self {
            adapters: Registry::new(),
            devices: Registry::new(),
            queues: Registry::new(),
            pipeline_layouts: Registry::new(),
            shader_modules: Registry::new(),
            bind_group_layouts: Registry::new(),
            bind_groups: Registry::new(),
            command_encoders: Registry::new(),
            command_buffers: Registry::new(),
            render_bundles: Registry::new(),
            render_pipelines: Registry::new(),
            compute_pipelines: Registry::new(),
            pipeline_caches: Registry::new(),
            query_sets: Registry::new(),
            buffers: Registry::new(),
            staging_buffers: Registry::new(),
            textures: Registry::new(),
            texture_views: Registry::new(),
            external_textures: Registry::new(),
            samplers: Registry::new(),
            blas_s: Registry::new(),
            tlas_s: Registry::new(),
        }
    }

    pub fn generate_report(&self) -> HubReport {
        HubReport {
            adapters: self.adapters.generate_report(),
            devices: self.devices.generate_report(),
            queues: self.queues.generate_report(),
            pipeline_layouts: self.pipeline_layouts.generate_report(),
            shader_modules: self.shader_modules.generate_report(),
            bind_group_layouts: self.bind_group_layouts.generate_report(),
            bind_groups: self.bind_groups.generate_report(),
            command_encoders: self.command_encoders.generate_report(),
            command_buffers: self.command_buffers.generate_report(),
            render_bundles: self.render_bundles.generate_report(),
            render_pipelines: self.render_pipelines.generate_report(),
            compute_pipelines: self.compute_pipelines.generate_report(),
            pipeline_caches: self.pipeline_caches.generate_report(),
            query_sets: self.query_sets.generate_report(),
            buffers: self.buffers.generate_report(),
            textures: self.textures.generate_report(),
            texture_views: self.texture_views.generate_report(),
            external_textures: self.external_textures.generate_report(),
            samplers: self.samplers.generate_report(),
        }
    }
}
261}