use std::{sync::Arc, thread};
use crate::*;

/// Handle to a pipeline cache, which is used to accelerate
/// creating [`RenderPipeline`]s and [`ComputePipeline`]s
/// in subsequent executions.
///
/// This reuse is only applicable for the same or similar devices.
/// See [`util::pipeline_cache_key`] for some details.
///
/// # Background
///
/// In most GPU drivers, shader code must be converted into machine code
/// that can be executed on the GPU.
/// Generating this machine code can require a lot of computation.
/// Pipeline caches allow this computation to be reused between executions
/// of the program.
/// This can be very useful for reducing program startup time.
///
/// Note that most desktop GPU drivers will manage their own caches,
/// meaning that little advantage can be gained from this on those platforms.
/// However, on some platforms, especially Android, drivers leave this to the
/// application to implement.
///
/// Unfortunately, drivers do not expose whether they manage their own caches.
/// Some reasonable policies for applications to use are:
/// - Manage their own pipeline cache on all platforms
/// - Only manage pipeline caches on Android
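///
/// For example, under the second policy an application might gate all of its
/// cache handling on the target platform. A minimal sketch:
///
/// ```no_run
/// // Desktop drivers generally cache pipelines themselves, whereas Android
/// // drivers generally leave it to the application.
/// let manage_own_cache = cfg!(target_os = "android");
/// ```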
///
/// # Usage
///
/// It is valid to use this resource when creating multiple pipelines, in
/// which case it will likely cache each of those pipelines.
/// It is also valid to create a new cache for each pipeline.
///
/// This resource is most useful when the data produced from it (using
/// [`PipelineCache::get_data`]) is persisted.
/// Care should be taken that pipeline caches are only used for the same device,
/// as pipeline caches from mismatched devices are unlikely to provide any advantage.
/// [`util::pipeline_cache_key`] can be used as a file/directory name to help ensure that.
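///
/// A sketch of loading such persisted data at startup, assuming the file was
/// written by a previous run of the same application (as required by the
/// safety contract of [`Device::create_pipeline_cache`]; file names and
/// labels here are illustrative):
///
/// ```no_run
/// # fn load(adapter: &wgpu::Adapter, device: &wgpu::Device) {
/// let key = wgpu::util::pipeline_cache_key(&adapter.get_info())
///     .expect("pipeline caching should be supported on this backend");
/// // A missing or unreadable file just means starting with an empty cache.
/// let cache_data = std::fs::read(std::path::Path::new("caches").join(&key)).ok();
/// let cache = unsafe {
///     device.create_pipeline_cache(&wgpu::PipelineCacheDescriptor {
///         label: Some("pipeline-cache"),
///         data: cache_data.as_deref(),
///         fallback: true,
///     })
/// };
/// // Pass `Some(&cache)` as the `cache` field of each pipeline descriptor.
/// # }
/// ```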
///
/// It is recommended to store pipeline caches atomically. If persisting to disk,
/// this can usually be achieved by creating a temporary file, then moving/[renaming]
/// the temporary file over the existing cache.
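///
/// A minimal sketch of that pattern, assuming the cache lives at some
/// application-chosen `cache_path`:
///
/// ```no_run
/// # fn save(cache: &wgpu::PipelineCache, cache_path: &std::path::Path) -> std::io::Result<()> {
/// if let Some(data) = cache.get_data() {
///     let temp_path = cache_path.with_extension("tmp");
///     // Write the new data to a temporary file first, so a crash mid-write
///     // cannot leave a truncated cache behind ...
///     std::fs::write(&temp_path, &data)?;
///     // ... then atomically replace the old cache with the new one.
///     std::fs::rename(&temp_path, cache_path)?;
/// }
/// # Ok(())
/// # }
/// ```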
///
/// # Storage Usage
///
/// There is not currently an API available to reduce the size of a cache.
/// This is due to limitations in the underlying graphics APIs used.
/// This is especially impactful when your application is updated, since the
/// caches stored by previous versions are no longer used but still take up space.
///
/// One option to work around this is to regenerate the cache.
/// That is, create the pipelines which your program uses
/// with the stored cache data, then recreate the *same* pipelines
/// using a new cache, which your application then stores.
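///
/// A sketch of that regeneration step (the pipeline descriptors themselves
/// are elided; see the usage example above for how a cache is created and
/// passed to pipelines):
///
/// ```no_run
/// # fn regenerate(device: &wgpu::Device) -> Option<Vec<u8>> {
/// // Build the same pipelines again, but against a fresh, empty cache.
/// let fresh_cache = unsafe {
///     device.create_pipeline_cache(&wgpu::PipelineCacheDescriptor {
///         label: Some("fresh-pipeline-cache"),
///         data: None,
///         fallback: true,
///     })
/// };
/// // ... create each pipeline with `cache: Some(&fresh_cache)` here ...
/// // Persist this in place of the previously stored data.
/// fresh_cache.get_data()
/// # }
/// ```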
///
/// # Implementations
///
/// This resource currently only works on the following backends:
/// - Vulkan
///
/// This type is unique to the Rust API of `wgpu`.
///
/// [renaming]: std::fs::rename
#[derive(Debug)]
pub struct PipelineCache {
    pub(crate) context: Arc<C>,
    pub(crate) data: Box<Data>,
}

#[cfg(send_sync)]
static_assertions::assert_impl_all!(PipelineCache: Send, Sync);

impl PipelineCache {
    /// Get the data associated with this pipeline cache.
    /// The data format is an implementation detail of `wgpu`.
    /// The only defined operation on this data is setting it as the `data` field
    /// on [`PipelineCacheDescriptor`], which is then passed to
    /// [`Device::create_pipeline_cache`].
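    ///
    /// A sketch of the intended round trip (persistence elided):
    ///
    /// ```no_run
    /// # fn example(device: &wgpu::Device, cache: &wgpu::PipelineCache) {
    /// let data = cache.get_data();
    /// // On a later run, hand the bytes back unchanged:
    /// let _restored = unsafe {
    ///     device.create_pipeline_cache(&wgpu::PipelineCacheDescriptor {
    ///         label: None,
    ///         data: data.as_deref(),
    ///         fallback: true,
    ///     })
    /// };
    /// # }
    /// ```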
    ///
    /// This function is unique to the Rust API of `wgpu`.
    pub fn get_data(&self) -> Option<Vec<u8>> {
        self.context.pipeline_cache_get_data(self.data.as_ref())
    }
}

impl Drop for PipelineCache {
    fn drop(&mut self) {
        // Skip the backend call while unwinding from an earlier panic, since
        // a second panic during unwinding would abort the process.
        if !thread::panicking() {
            self.context.pipeline_cache_drop(self.data.as_ref());
        }
    }
}