// wgpu/api/pipeline_cache.rs
use alloc::vec::Vec;

use crate::*;
/// Handle to a pipeline cache, which is used to accelerate
/// creating [`RenderPipeline`]s and [`ComputePipeline`]s
/// in subsequent executions.
///
/// This reuse is only applicable for the same or similar devices.
/// See [`util::pipeline_cache_key`] for some details and a suggested workflow.
///
/// Created using [`Device::create_pipeline_cache`].
///
/// # Background
///
/// In most GPU drivers, shader code must be converted into machine code
/// which can be executed on the GPU.
/// Generating this machine code can require a lot of computation.
/// Pipeline caches allow this computation to be reused between executions
/// of the program.
/// This can be very useful for reducing program startup time.
///
/// Note that most desktop GPU drivers will manage their own caches,
/// meaning that little advantage can be gained from this on those platforms.
/// However, on some platforms, especially Android, drivers leave this to the
/// application to implement.
///
/// Unfortunately, drivers do not expose whether they manage their own caches.
/// Some reasonable policies for applications to use are:
/// - Manage their own pipeline cache on all platforms
/// - Only manage pipeline caches on Android
///
/// # Usage
///
/// This is used as [`RenderPipelineDescriptor::cache`] or [`ComputePipelineDescriptor::cache`].
/// It is valid to use this resource when creating multiple pipelines, in
/// which case it will likely cache each of those pipelines.
/// It is also valid to create a new cache for each pipeline.
///
/// This resource is most useful when the data produced from it (using
/// [`PipelineCache::get_data`]) is persisted.
/// Care should be taken that pipeline caches are only used for the same device,
/// as pipeline caches from incompatible devices are unlikely to provide any advantage.
/// [`util::pipeline_cache_key`] can be used as a file/directory name to help ensure that.
///
/// It is recommended to store pipeline caches atomically. If persisting to disk,
/// this can usually be achieved by creating a temporary file, then moving/[renaming]
/// the temporary file over the existing cache.
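///
/// A minimal sketch of that workflow (illustrative only: the `example` wrapper
/// and path handling are not part of the `wgpu` API, and error handling is
/// omitted for brevity):
///
/// ```no_run
/// # fn example(device: &wgpu::Device, cache_path: &std::path::Path) {
/// // Load previously persisted cache data, if any.
/// let data = std::fs::read(cache_path).ok();
/// // SAFETY: the data must have been produced by `PipelineCache::get_data`
/// // on a compatible device (e.g. keyed by `util::pipeline_cache_key`).
/// let cache = unsafe {
///     device.create_pipeline_cache(&wgpu::PipelineCacheDescriptor {
///         label: Some("pipeline cache"),
///         data: data.as_deref(),
///         // Fall back to an empty cache if the stored data is unusable.
///         fallback: true,
///     })
/// };
/// // ... create pipelines with `cache: Some(&cache)` in their descriptors ...
///
/// // Persist the updated cache atomically: write a temporary file, then
/// // rename it over the existing cache.
/// if let Some(new_data) = cache.get_data() {
///     let tmp = cache_path.with_extension("tmp");
///     std::fs::write(&tmp, &new_data).unwrap();
///     std::fs::rename(&tmp, cache_path).unwrap();
/// }
/// # }
/// ```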
///
/// # Storage Usage
///
/// There is not currently an API available to reduce the size of a cache.
/// This is due to limitations in the underlying graphics APIs used.
/// This is especially impactful if your application is updated, since
/// previous caches then go unused but still occupy storage.
///
/// One option to work around this is to regenerate the cache.
/// That is, create the pipelines which your program uses
/// with the stored cache data, then recreate the *same* pipelines
/// using a new cache, which your application then stores.
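///
/// A sketch of that regeneration step (illustrative only; `create_all_pipelines`
/// stands in for however your application builds its pipelines):
///
/// ```no_run
/// # fn regenerate(device: &wgpu::Device, old_data: &[u8]) -> Option<Vec<u8>> {
/// # fn create_all_pipelines(_device: &wgpu::Device, _cache: &wgpu::PipelineCache) {}
/// // SAFETY: `old_data` came from `PipelineCache::get_data` on a compatible device.
/// let old_cache = unsafe {
///     device.create_pipeline_cache(&wgpu::PipelineCacheDescriptor {
///         label: None,
///         data: Some(old_data),
///         fallback: true,
///     })
/// };
/// // Warm pipeline compilation from the stored data.
/// create_all_pipelines(device, &old_cache);
///
/// // An empty cache: after pipeline creation it holds only the pipelines
/// // your current application version actually uses.
/// let new_cache = unsafe {
///     device.create_pipeline_cache(&wgpu::PipelineCacheDescriptor {
///         label: None,
///         data: None,
///         fallback: true,
///     })
/// };
/// create_all_pipelines(device, &new_cache);
/// // Persist this data in place of the old cache file.
/// new_cache.get_data()
/// # }
/// ```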
///
/// # Implementations
///
/// This resource currently only works on the following backends:
/// - Vulkan
///
/// This type is unique to the Rust API of `wgpu`.
///
/// [renaming]: std::fs::rename
#[derive(Debug, Clone)]
pub struct PipelineCache {
    pub(crate) inner: crate::dispatch::DispatchPipelineCache,
}

#[cfg(send_sync)]
static_assertions::assert_impl_all!(PipelineCache: Send, Sync);

crate::cmp::impl_eq_ord_hash_proxy!(PipelineCache => .inner);
impl PipelineCache {
    /// Get the data associated with this pipeline cache.
    ///
    /// The data format is an implementation detail of `wgpu`.
    /// The only defined operation on this data is setting it as the `data` field
    /// of a [`PipelineCacheDescriptor`], which is then passed to
    /// [`Device::create_pipeline_cache`].
    ///
    /// This function is unique to the Rust API of `wgpu`.
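    ///
    /// A round-trip sketch (illustrative only; error handling omitted):
    ///
    /// ```no_run
    /// # fn roundtrip(device: &wgpu::Device, cache: &wgpu::PipelineCache) {
    /// if let Some(data) = cache.get_data() {
    ///     // SAFETY: `data` was produced by `get_data` on this same device.
    ///     let _reloaded = unsafe {
    ///         device.create_pipeline_cache(&wgpu::PipelineCacheDescriptor {
    ///             label: None,
    ///             data: Some(&data),
    ///             fallback: false,
    ///         })
    ///     };
    /// }
    /// # }
    /// ```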
    pub fn get_data(&self) -> Option<Vec<u8>> {
        self.inner.get_data()
    }

    #[cfg(custom)]
    /// Returns the custom implementation of `PipelineCache` (if a custom backend
    /// is in use and the inner type is `T`)
    pub fn as_custom<T: custom::PipelineCacheInterface>(&self) -> Option<&T> {
        self.inner.as_custom()
    }
}
96}