Struct wgpu::util::StagingBelt

pub struct StagingBelt {
    chunk_size: BufferAddress,
    active_chunks: Vec<Chunk>,
    closed_chunks: Vec<Chunk>,
    free_chunks: Vec<Chunk>,
    sender: Exclusive<Sender<Chunk>>,
    receiver: Exclusive<Receiver<Chunk>>,
}

Efficiently performs many buffer writes by sharing and reusing temporary buffers.

Internally it uses a ring-buffer of staging buffers that are sub-allocated. Its advantage over Queue::write_buffer_with() is that the individual allocations are cheaper; StagingBelt is most useful when you are writing very many small pieces of data. It can be understood as a sort of arena allocator.

Using a staging belt is slightly complicated, and generally goes as follows (a minimal sketch appears after the list):

  1. Use StagingBelt::write_buffer() or StagingBelt::allocate() to allocate buffer slices, then write your data to them.
  2. Call StagingBelt::finish().
  3. Submit all command encoders that were used in step 1.
  4. Call StagingBelt::recall().
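
A minimal sketch of these steps, assuming a device, a queue, and a destination buffer created with BufferUsages::COPY_DST already exist (the function and the chunk size below are illustrative, not part of the API):

use wgpu::util::StagingBelt;

fn upload(device: &wgpu::Device, queue: &wgpu::Queue, target: &wgpu::Buffer) {
    let mut belt = StagingBelt::new(1024); // chunk size in bytes
    let mut encoder =
        device.create_command_encoder(&wgpu::CommandEncoderDescriptor { label: None });

    // 1. Allocate a belt slice targeting `target` and write the data into it.
    let data = [0u8; 16];
    belt.write_buffer(
        &mut encoder,
        target,
        0, // destination offset
        wgpu::BufferSize::new(data.len() as u64).unwrap(),
        device,
    )
    .copy_from_slice(&data);

    // 2. Close the belt's active staging buffers for this submission.
    belt.finish();

    // 3. Submit the encoder that recorded the copy.
    queue.submit(Some(encoder.finish()));

    // 4. Allow the staging buffers to be reused once the GPU is done with them.
    belt.recall();
}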

Fields

chunk_size: BufferAddress

active_chunks: Vec<Chunk>

Chunks into which we are accumulating data to be transferred.

closed_chunks: Vec<Chunk>

Chunks that have scheduled transfers already; they are unmapped and some command encoder has one or more commands with them as source.

free_chunks: Vec<Chunk>

Chunks that are back from the GPU and ready to be mapped for write and put into active_chunks.

sender: Exclusive<Sender<Chunk>>

When closed chunks are mapped again, the map callback sends them here.

receiver: Exclusive<Receiver<Chunk>>

Free chunks are received here to be put on self.free_chunks.

Implementations

impl StagingBelt

pub fn new(chunk_size: BufferAddress) -> Self

Create a new staging belt.

The chunk_size is the unit of internal buffer allocation; writes will be sub-allocated within each chunk. Therefore, for optimal use of memory, the chunk size should be:

  * larger than the largest single StagingBelt::write_buffer() or StagingBelt::allocate() operation you expect to perform, so that individual writes do not force dedicated buffer allocations;
  * no larger than necessary, since each chunk occupies memory until it has been recalled and reused.
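
For example (a sketch with arbitrary numbers), a belt used for many writes of at most a few hundred bytes each might use a chunk large enough to hold all of one submission's writes:

use wgpu::util::StagingBelt;

// Illustrative only: the writes are small, so a 64 KiB chunk holds many of
// them before another chunk has to be allocated.
let belt = StagingBelt::new(64 * 1024);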

pub fn write_buffer(
    &mut self,
    encoder: &mut CommandEncoder,
    target: &Buffer,
    offset: BufferAddress,
    size: BufferSize,
    device: &Device,
) -> BufferViewMut<'_>

Allocate a staging belt slice of size to be copied into the target buffer at the specified offset.

The upload will be placed into the provided command encoder. This encoder must be submitted after StagingBelt::finish() is called and before StagingBelt::recall() is called.

If the size is greater than the size of any free internal buffer, a new buffer will be allocated for it. Therefore, the chunk_size passed to StagingBelt::new() should ideally be larger than every such size.
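
For instance (a sketch; `objects`, holding a destination buffer and 64 bytes of already-packed data per entry, is a hypothetical input), the belt suits loops that record many small uploads through a single encoder:

fn upload_all(
    belt: &mut wgpu::util::StagingBelt,
    encoder: &mut wgpu::CommandEncoder,
    device: &wgpu::Device,
    objects: &[(wgpu::Buffer, [u8; 64])],
) {
    for (buffer, data) in objects {
        // Each call sub-allocates from the belt's currently active chunk.
        belt.write_buffer(
            encoder,
            buffer,
            0,
            wgpu::BufferSize::new(data.len() as u64).unwrap(),
            device,
        )
        .copy_from_slice(data);
    }
    // StagingBelt::finish() is called once after all writes, and the encoder
    // is submitted after that.
}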

pub fn allocate(
    &mut self,
    size: BufferSize,
    alignment: BufferSize,
    device: &Device,
) -> BufferSlice<'_>

Allocate a staging belt slice with the given size and alignment and return it.

To use this slice, call BufferSlice::get_mapped_range_mut() and write your data into that BufferViewMut. (The view must be dropped before StagingBelt::finish() is called.)

You can then record your own GPU commands to perform with the slice, such as copying it to a texture or executing a compute shader that reads it (whereas StagingBelt::write_buffer() can only write to other buffers). All commands involving this slice must be submitted after StagingBelt::finish() is called and before StagingBelt::recall() is called.

If the size is greater than the space available in any free internal buffer, a new buffer will be allocated for it. Therefore, the chunk_size passed to StagingBelt::new() should ideally be larger than every such size.

The chosen slice will be positioned within the buffer at a multiple of alignment, which may be used to meet alignment requirements for the operation you wish to perform with the slice. This does not necessarily affect the alignment of the BufferViewMut.
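
A sketch of this, assuming a belt and device exist and using a hypothetical 256-byte payload; the commands that read from the slice would be recorded afterwards:

fn stage_bytes(
    belt: &mut wgpu::util::StagingBelt,
    device: &wgpu::Device,
    payload: &[u8; 256],
) {
    let slice = belt.allocate(
        wgpu::BufferSize::new(payload.len() as u64).unwrap(),
        wgpu::BufferSize::new(wgpu::COPY_BUFFER_ALIGNMENT).unwrap(),
        device,
    );
    {
        // The mapped view must be dropped before StagingBelt::finish() is called.
        let mut view = slice.get_mapped_range_mut();
        view.copy_from_slice(payload);
    }
    // Record the GPU commands that read from `slice` here; submit them after
    // StagingBelt::finish() and before StagingBelt::recall().
}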

pub fn finish(&mut self)

Prepare currently mapped buffers for use in a submission.

This must be called before the command encoder(s) provided to StagingBelt::write_buffer() are submitted.

At this point, all the partially used staging buffers are closed (cannot be used for further writes) until after StagingBelt::recall() is called and the GPU is done copying the data from them.

pub fn recall(&mut self)

Recall all of the closed buffers back to be reused.

This must only be called after the command encoder(s) provided to StagingBelt::write_buffer() are submitted. Additional calls are harmless. Not calling this as soon as possible may result in increased buffer memory usage.

fn receive_chunks(&mut self)

Move all chunks that the GPU is done with (and are now mapped again) from self.receiver to self.free_chunks.

Trait Implementations

impl Debug for StagingBelt

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> Downcast<T> for T

fn downcast(&self) -> &T

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> Upcast<T> for T

fn upcast(&self) -> Option<&T>

impl<T> WasmNotSend for T
where T: Send,

impl<T> WasmNotSendSync for T

impl<T> WasmNotSync for T
where T: Sync,