wgpu_core/hub.rs
/*! Allocating resource ids, and tracking the resources they refer to.

The `wgpu_core` API uses identifiers of type [`Id<R>`] to refer to
resources of type `R`. For example, [`id::DeviceId`] is an alias for
`Id<markers::Device>`, and [`id::BufferId`] is an alias for
`Id<markers::Buffer>`. `Id` implements `Copy`, `Hash`, `Eq`, `Ord`, and
of course `Debug`.

[`id::DeviceId`]: crate::id::DeviceId
[`id::BufferId`]: crate::id::BufferId

Each `Id` contains not only an index for the resource it denotes but
also a `Backend` indicating which `wgpu` backend it belongs to.

`Id`s also incorporate a generation number, for additional validation.

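For illustration, one plausible way such an id could pack its parts
(this is a sketch only; the actual `Id` representation lives in
`crate::id` and may differ):

```ignore
// Sketch: the conceptual pieces of an id, not the real layout.
struct RawIdParts {
    index: u32,   // which slot in the resource storage
    epoch: u32,   // generation number, bumped when a slot is reused
    backend: u8,  // which `wgpu` backend the resource belongs to
}
```
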
The resources to which identifiers refer are freed explicitly.
Attempting to use an identifier for a resource that has been freed
elicits an error result.

## Assigning ids to resources

The users of `wgpu_core` generally want resource ids to be assigned
in one of two ways:

- Users like `wgpu` want `wgpu_core` to assign ids to resources itself.
  For example, `wgpu` expects to call `Global::device_create_buffer`
  and have the return value indicate the newly created buffer's id.

- Users like `player` and Firefox want to allocate ids themselves, and
  pass `Global::device_create_buffer` and friends the id to assign to
  the new resource.

To accommodate either pattern, `wgpu_core` methods that create
resources all expect an `id_in` argument that the caller can use to
specify the id, and they all return the id used. For example, the
declaration of `Global::device_create_buffer` looks like this:

```ignore
impl Global {
    /* ... */
    pub fn device_create_buffer(
        &self,
        device_id: id::DeviceId,
        desc: &resource::BufferDescriptor,
        id_in: Option<id::BufferId>,
    ) -> (id::BufferId, Option<resource::CreateBufferError>) {
        /* ... */
    }
    /* ... */
}
```

Users that want to assign resource ids themselves pass in the id they
want as the `id_in` argument, whereas users that want `wgpu_core`
itself to choose ids always pass `None`. In either case, the id
ultimately assigned is returned as the first element of the tuple.
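
For example, the two calling styles look roughly like this (a sketch;
`desc` and `preallocated_id` are illustrative placeholders):

```ignore
// Let `wgpu_core` choose the id (the `wgpu` style):
let (buffer_id, error) = global.device_create_buffer(device_id, &desc, None);

// Supply an id allocated by the caller (the `player`/Firefox style):
let (buffer_id, error) = global.device_create_buffer(device_id, &desc, Some(preallocated_id));
```
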

Producing true identifiers from `id_in` values is the job of an
[`crate::identity::IdentityManager`]; otherwise, ids are received from
outside through `Option<Id>` arguments.

## Id allocation and streaming

Perhaps surprisingly, allowing users to assign resource ids themselves
enables major performance improvements in some applications.

The `wgpu_core` API is designed for use by Firefox's [WebGPU]
implementation. For security, web content and GPU use must be kept
segregated in separate processes, with all interaction between them
mediated by an inter-process communication protocol. As web content uses
the WebGPU API, the content process sends messages to the GPU process,
which interacts with the platform's GPU APIs on content's behalf,
occasionally sending results back.

In a classic Rust API, a resource allocation function takes parameters
describing the resource to create, and if creation succeeds, it returns
the resource id in a `Result::Ok` value. However, this design is a poor
fit for the split-process design described above: content must wait for
the reply to its buffer-creation message (say) before it can know which
id it can use in the next message that uses that buffer. For a common
usage pattern, the classic Rust design imposes the latency of a full
cross-process round trip.

We can avoid incurring these round-trip latencies simply by letting the
content process assign resource ids itself. With this approach, content
can choose an id for the new buffer, send a message to create the
buffer, and then immediately send the next message operating on that
buffer, since it already knows its id. Allowing content and GPU process
activity to be pipelined greatly improves throughput.

To help propagate errors correctly in this style of usage, when resource
creation fails, the id supplied for that resource is marked to indicate
as much, allowing subsequent operations using that id to be properly
flagged as errors as well.
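
To make that concrete, here is a minimal sketch of the idea, assuming a
`Fallible`-style wrapper like the one stored in the `Hub` registries
below (the real `crate::resource::Fallible` type may differ in detail):

```ignore
// Sketch only: a registry slot records either the created resource or
// the error its creation produced, so a later lookup of a "failed" id
// yields an error to attach to the dependent operation.
enum FallibleSketch<T> {
    Valid(Arc<T>),
    Invalid(String), // e.g. the label of the resource that failed to create
}
```
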

[`Id<R>`]: crate::id::Id
[WebGPU]: https://www.w3.org/TR/webgpu/

*/

use alloc::sync::Arc;
use core::fmt::Debug;

use crate::{
    binding_model::{BindGroup, BindGroupLayout, PipelineLayout},
    command::{CommandBuffer, CommandEncoder, RenderBundle},
    device::{queue::Queue, Device},
    instance::Adapter,
    pipeline::{ComputePipeline, PipelineCache, RenderPipeline, ShaderModule},
    registry::{Registry, RegistryReport},
    resource::{
        Blas, Buffer, ExternalTexture, Fallible, QuerySet, Sampler, StagingBuffer, Texture,
        TextureView, Tlas,
    },
};

#[derive(Debug, PartialEq, Eq)]
pub struct HubReport {
    pub adapters: RegistryReport,
    pub devices: RegistryReport,
    pub queues: RegistryReport,
    pub pipeline_layouts: RegistryReport,
    pub shader_modules: RegistryReport,
    pub bind_group_layouts: RegistryReport,
    pub bind_groups: RegistryReport,
    pub command_encoders: RegistryReport,
    pub command_buffers: RegistryReport,
    pub render_bundles: RegistryReport,
    pub render_pipelines: RegistryReport,
    pub compute_pipelines: RegistryReport,
    pub pipeline_caches: RegistryReport,
    pub query_sets: RegistryReport,
    pub buffers: RegistryReport,
    pub textures: RegistryReport,
    pub texture_views: RegistryReport,
    pub external_textures: RegistryReport,
    pub samplers: RegistryReport,
}

impl HubReport {
    pub fn is_empty(&self) -> bool {
        self.adapters.is_empty()
    }
}

#[allow(rustdoc::private_intra_doc_links)]
/// All the resources tracked by a [`crate::global::Global`].
///
/// ## Locking
///
/// Each field in `Hub` is a [`Registry`] holding all the values of a
/// particular type of resource, all protected by a single `RwLock`.
/// So for example, to access any [`Buffer`], you must acquire a read
/// lock on the `Hub`'s entire buffers registry. The lock guard
/// gives you access to the `Registry`'s [`Storage`], which you can
/// then index with the buffer's id. (Yes, this design causes
/// contention; see [#2272].)
///
/// But most `wgpu` operations require access to several different
/// kinds of resource, so you often need to hold locks on several
/// different fields of your [`Hub`] simultaneously.
///
/// Inside the `Registry`, values are held as `Arc<T>`, where `T` is a
/// resource type. The `Registry` is locked only while it is being
/// accessed to retrieve a specific resource.
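/// Once the `Arc` has been cloned out of the registry, the lock can be
/// dropped. A lookup might look roughly like this (an illustrative
/// sketch, not the exact `Registry` API):
///
/// ```ignore
/// // Hold the buffers registry's lock only long enough to clone the
/// // `Arc` out of it, then release it before using the buffer.
/// let buffer = hub.buffers.get(buffer_id).get()?;
/// ```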
///
/// [`Storage`]: crate::storage::Storage
pub struct Hub {
    pub(crate) adapters: Registry<Arc<Adapter>>,
    pub(crate) devices: Registry<Arc<Device>>,
    pub(crate) queues: Registry<Arc<Queue>>,
    pub(crate) pipeline_layouts: Registry<Fallible<PipelineLayout>>,
    pub(crate) shader_modules: Registry<Fallible<ShaderModule>>,
    pub(crate) bind_group_layouts: Registry<Fallible<BindGroupLayout>>,
    pub(crate) bind_groups: Registry<Fallible<BindGroup>>,
    pub(crate) command_encoders: Registry<Arc<CommandEncoder>>,
    pub(crate) command_buffers: Registry<Arc<CommandBuffer>>,
    pub(crate) render_bundles: Registry<Fallible<RenderBundle>>,
    pub(crate) render_pipelines: Registry<Fallible<RenderPipeline>>,
    pub(crate) compute_pipelines: Registry<Fallible<ComputePipeline>>,
    pub(crate) pipeline_caches: Registry<Fallible<PipelineCache>>,
    pub(crate) query_sets: Registry<Fallible<QuerySet>>,
    pub(crate) buffers: Registry<Fallible<Buffer>>,
    pub(crate) staging_buffers: Registry<StagingBuffer>,
    pub(crate) textures: Registry<Fallible<Texture>>,
    pub(crate) texture_views: Registry<Fallible<TextureView>>,
    pub(crate) external_textures: Registry<Fallible<ExternalTexture>>,
    pub(crate) samplers: Registry<Fallible<Sampler>>,
    pub(crate) blas_s: Registry<Fallible<Blas>>,
    pub(crate) tlas_s: Registry<Fallible<Tlas>>,
}

impl Hub {
    pub(crate) fn new() -> Self {
        Self {
            adapters: Registry::new(),
            devices: Registry::new(),
            queues: Registry::new(),
            pipeline_layouts: Registry::new(),
            shader_modules: Registry::new(),
            bind_group_layouts: Registry::new(),
            bind_groups: Registry::new(),
            command_encoders: Registry::new(),
            command_buffers: Registry::new(),
            render_bundles: Registry::new(),
            render_pipelines: Registry::new(),
            compute_pipelines: Registry::new(),
            pipeline_caches: Registry::new(),
            query_sets: Registry::new(),
            buffers: Registry::new(),
            staging_buffers: Registry::new(),
            textures: Registry::new(),
            texture_views: Registry::new(),
            external_textures: Registry::new(),
            samplers: Registry::new(),
            blas_s: Registry::new(),
            tlas_s: Registry::new(),
        }
    }

    pub fn generate_report(&self) -> HubReport {
        HubReport {
            adapters: self.adapters.generate_report(),
            devices: self.devices.generate_report(),
            queues: self.queues.generate_report(),
            pipeline_layouts: self.pipeline_layouts.generate_report(),
            shader_modules: self.shader_modules.generate_report(),
            bind_group_layouts: self.bind_group_layouts.generate_report(),
            bind_groups: self.bind_groups.generate_report(),
            command_encoders: self.command_encoders.generate_report(),
            command_buffers: self.command_buffers.generate_report(),
            render_bundles: self.render_bundles.generate_report(),
            render_pipelines: self.render_pipelines.generate_report(),
            compute_pipelines: self.compute_pipelines.generate_report(),
            pipeline_caches: self.pipeline_caches.generate_report(),
            query_sets: self.query_sets.generate_report(),
            buffers: self.buffers.generate_report(),
            textures: self.textures.generate_report(),
            texture_views: self.texture_views.generate_report(),
            external_textures: self.external_textures.generate_report(),
            samplers: self.samplers.generate_report(),
        }
    }
245}