
Struct Queue

Source
pub struct Queue {
    pub(crate) inner: DispatchQueue,
}

Handle to a command queue on a device.

A Queue executes recorded CommandBuffer objects and provides convenience methods for writing to buffers and textures. It can be created along with a Device by calling Adapter::request_device.

Corresponds to WebGPU GPUQueue.

Fields§

§inner: DispatchQueue

Implementations§

Source§

impl Queue

Source

pub fn from_custom<T: QueueInterface>(queue: T) -> Self

Available on custom only.

Creates a Queue from a custom implementation.

Source

pub fn write_buffer(&self, buffer: &Buffer, offset: BufferAddress, data: &[u8])

Copies the bytes of data into buffer starting at offset.

The data must be written fully in-bounds, that is, offset + data.len() <= buffer.len().

§Performance considerations
  • Calls to write_buffer() do not submit the transfer to the GPU immediately. They begin GPU execution only on the next call to Queue::submit(), just before the explicitly submitted commands. To get a set of scheduled transfers started immediately, it’s fine to call submit with no command buffers at all:

    queue.write_buffer(&buffer, 0, &data);
    queue.submit([]);

    However, data will be immediately copied into staging memory, so the caller may discard it any time after this call completes.

  • Consider using Queue::write_buffer_with() instead. That method lets you prepare your data directly within the staging memory, rather than first placing it in a separate [u8] to be copied. That is, queue.write_buffer(b, offset, data) is approximately equivalent to queue.write_buffer_with(b, offset, data.len()).copy_from_slice(data), so use write_buffer_with() if you can do something smarter than that copy_from_slice() (see the sketch after this list). However, for small values (e.g. a typical uniform buffer whose contents come from a struct), there will likely be no difference, since the compiler can optimize out the unnecessary copy regardless.

  • Currently on native platforms, for both of these methods, the staging memory will be a new allocation. This will then be released after the next submission finishes. To entirely avoid short-lived allocations, you might be able to use StagingBelt, or buffers you explicitly create, map, and unmap yourself.
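
A compilable sketch of the approximate equivalence described above (hedged; `buffer` is assumed to be a Buffer created with the COPY_DST usage, and `queue` a Queue):

    let data: [u8; 4] = [1, 2, 3, 4];

    // Upload via write_buffer: `data` is copied into staging memory immediately.
    queue.write_buffer(&buffer, 0, &data);

    // Roughly equivalent upload via write_buffer_with: assemble the bytes
    // directly in the staging memory instead of copying a finished slice.
    if let Some(mut view) = queue.write_buffer_with(
        &buffer,
        0,
        wgpu::BufferSize::new(data.len() as u64).expect("size must be non-zero"),
    ) {
        view.copy_from_slice(&data);
    } // dropping the view schedules the copy into `buffer`

    // Neither transfer starts until the next submit; an empty submit kicks them off.
    queue.submit([]);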

Source

pub fn write_buffer_with<'a>(&'a self, buffer: &'a Buffer, offset: BufferAddress, size: BufferSize) -> Option<QueueWriteBufferView<'a>>

Prepares to write data to a buffer via a mapped staging buffer.

This operation allocates a temporary buffer and then returns a QueueWriteBufferView, which

  • dereferences to a [u8] of length size, and
  • when dropped, schedules a copy of its contents into buffer at offset.

Therefore, this obtains the same result as Queue::write_buffer(), but may allow you to skip one allocation and one copy of your data, if you are able to assemble your data directly into the returned QueueWriteBufferView instead of into a separate allocation like a Vec first.

The data must be written fully in-bounds, that is, offset + size <= buffer.len().

§Performance considerations
  • For small data not separately heap-allocated, there is no advantage to this over Queue::write_buffer().

  • Reading from the returned view may be slow, and will not yield the current contents of buffer. You should treat it as “write-only”.

  • Dropping the QueueWriteBufferView does not submit the transfer to the GPU immediately. The transfer begins only on the next call to Queue::submit() after the view is dropped, just before the explicitly submitted commands. To get a set of scheduled transfers started immediately, it’s fine to call queue.submit([]) with no command buffers at all.

  • Currently on native platforms, the staging memory will be a new allocation, which will then be released after the next submission finishes. To entirely avoid short-lived allocations, you might be able to use StagingBelt, or buffers you explicitly create, map, and unmap yourself.
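
A minimal sketch of assembling data in place, per the notes above (hedged; `buffer` is assumed to be at least 256 bytes long and created with the COPY_DST usage):

    // Generate 256 bytes straight into the staging memory, with no intermediate Vec<u8>.
    // Treat the view as write-only; reading it back may be slow.
    if let Some(mut view) = queue.write_buffer_with(
        &buffer,
        0,
        wgpu::BufferSize::new(256).unwrap(),
    ) {
        for (i, byte) in view.iter_mut().enumerate() {
            *byte = i as u8;
        }
    } // the copy into `buffer` is scheduled when the view is dropped
    queue.submit([]); // optional: start the scheduled transfer immediately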

Source

pub fn write_texture(&self, texture: TexelCopyTextureInfo<'_>, data: &[u8], data_layout: TexelCopyBufferLayout, size: Extent3d)

Copies the bytes of data into a texture.

  • data contains the texels to be written, which must be in the same format as the texture.
  • data_layout describes the memory layout of data, which need not have tightly packed rows.
  • texture specifies the texture to write into, and the location within the texture (coordinate offset, mip level) that will be overwritten.
  • size is the size, in texels, of the region to be written.

This method fails if size overruns the size of texture, or if data is too short.

§Performance considerations

This operation has the same performance considerations as Queue::write_buffer(); see its documentation for details.

However, since there is no “mapped texture” like a mapped buffer, alternate techniques for writing to textures will generally consist of first copying the data to a buffer, then using CommandEncoder::copy_buffer_to_texture(), or in some cases a compute shader, to copy texels from that buffer to the texture.
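
A minimal sketch (hedged; `texture` is assumed to be a 256×256 Rgba8Unorm 2D texture created with the COPY_DST usage):

    let (width, height) = (256u32, 256u32);
    let data = vec![0u8; (width * height * 4) as usize]; // 4 bytes per RGBA8 texel

    queue.write_texture(
        wgpu::TexelCopyTextureInfo {
            texture: &texture,
            mip_level: 0,
            origin: wgpu::Origin3d::ZERO,
            aspect: wgpu::TextureAspect::All,
        },
        &data,
        wgpu::TexelCopyBufferLayout {
            offset: 0,
            bytes_per_row: Some(width * 4), // rows are tightly packed here
            rows_per_image: Some(height),
        },
        wgpu::Extent3d {
            width,
            height,
            depth_or_array_layers: 1,
        },
    );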

Source

pub fn submit<I: IntoIterator<Item = CommandBuffer>>(&self, command_buffers: I) -> SubmissionIndex

Submits a series of finished command buffers for execution.
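
A minimal sketch (hedged; `device`, `queue`, and two buffers `src_buffer`/`dst_buffer` of at least 256 bytes with suitable COPY_SRC/COPY_DST usages are assumed to exist):

    let mut encoder =
        device.create_command_encoder(&wgpu::CommandEncoderDescriptor { label: Some("upload") });
    encoder.copy_buffer_to_buffer(&src_buffer, 0, &dst_buffer, 0, 256u64);

    // finish() produces a CommandBuffer; submit() accepts any IntoIterator of them.
    let submission_index = queue.submit([encoder.finish()]);
    let _ = submission_index; // can be used with polling APIs to wait for this submission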

Source

pub fn get_timestamp_period(&self) -> f32

Gets the number of nanoseconds each tick of a timestamp query represents.

Returns zero if timestamp queries are unsupported.

Timestamp values are expressed in nanoseconds on WebGPU (see <https://gpuweb.github.io/gpuweb/#timestamp>). Therefore, this is always 1.0 on the web, but on wgpu-core a manual conversion is required.
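
A minimal sketch of converting raw timestamp-query ticks into nanoseconds (hedged; `start_ticks` and `end_ticks` are hypothetical u64 values read back from a resolved timestamp query buffer):

    let period = queue.get_timestamp_period(); // ns per tick; 0.0 if timestamps are unsupported
    if period > 0.0 {
        let elapsed_ns = (end_ticks - start_ticks) as f64 * period as f64;
        println!("GPU pass took {elapsed_ns} ns");
    }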

Source

pub fn on_submitted_work_done(&self, callback: impl FnOnce() + Send + 'static)

Registers a callback that is invoked when the previous call to submit finishes running on the GPU. The callback being called implies that all mapped-buffer callbacks registered before this call will have been called.

For the callback to complete, either queue.submit(..), instance.poll_all(..), or device.poll(..) must be called elsewhere in the runtime, possibly integrated into an event loop or run on a separate thread.

The callback will be called on the thread that first calls one of the above functions after the GPU work has completed. There are no restrictions on the code you can run in the callback; however, on native, the polling call will not return until the callback does, so prefer keeping callbacks short and using them to set flags, send messages, etc.
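
A minimal sketch of the flag/message pattern described above (hedged; assumes a wgpu version where Device::poll takes wgpu::PollType, and that `device` and `queue` already exist):

    use std::sync::mpsc;

    let (tx, rx) = mpsc::channel();

    queue.submit([]); // whatever submission you want to wait on
    queue.on_submitted_work_done(move || {
        // Keep the callback short: just signal completion.
        let _ = tx.send(());
    });

    // Something must drive the callback; on native, blocking-poll the device.
    let _ = device.poll(wgpu::PollType::Wait);
    let _ = rx.recv();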

Source

pub unsafe fn as_hal<A: HalApi, F: FnOnce(Option<&A::Queue>) -> R, R>(&self, hal_queue_callback: F) -> R

Available on wgpu_core only.

Returns the inner hal Queue using a callback. The hal queue will be None if the backend type argument does not match this wgpu Queue.

§Safety
  • The raw handle obtained from the hal Queue must not be manually destroyed
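
A hedged sketch of inspecting the backend queue (assumes the Vulkan backend and that wgpu-hal is re-exported as wgpu::hal with the appropriate features enabled):

    unsafe {
        queue.as_hal::<wgpu::hal::api::Vulkan, _, _>(|hal_queue| {
            match hal_queue {
                Some(hal_queue) => {
                    // Backend-specific inspection goes here. Do not destroy any
                    // raw handles obtained from `hal_queue`; wgpu still owns them.
                    let _ = hal_queue;
                }
                None => {
                    // The running backend is not Vulkan.
                }
            }
        });
    }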

Trait Implementations§

Source§

impl Clone for Queue

Source§

fn clone(&self) -> Queue

Returns a copy of the value. Read more
1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
Source§

impl Debug for Queue

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl Hash for Queue

Source§

fn hash<H: Hasher>(&self, state: &mut H)

Feeds this value into the given Hasher. Read more
1.3.0 · Source§

fn hash_slice<H>(data: &[Self], state: &mut H)
where H: Hasher, Self: Sized,

Feeds a slice of this type into the given Hasher. Read more
Source§

impl Ord for Queue

Source§

fn cmp(&self, other: &Self) -> Ordering

This method returns an Ordering between self and other. Read more
1.21.0 · Source§

fn max(self, other: Self) -> Self
where Self: Sized,

Compares and returns the maximum of two values. Read more
1.21.0 · Source§

fn min(self, other: Self) -> Self
where Self: Sized,

Compares and returns the minimum of two values. Read more
1.50.0 · Source§

fn clamp(self, min: Self, max: Self) -> Self
where Self: Sized,

Restrict a value to a certain interval. Read more
Source§

impl PartialEq for Queue

Source§

fn eq(&self, other: &Self) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
Source§

impl PartialOrd for Queue

Source§

fn partial_cmp(&self, other: &Self) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
1.0.0 · Source§

fn lt(&self, other: &Rhs) -> bool

Tests less than (for self and other) and is used by the < operator. Read more
1.0.0 · Source§

fn le(&self, other: &Rhs) -> bool

Tests less than or equal to (for self and other) and is used by the <= operator. Read more
1.0.0 · Source§

fn gt(&self, other: &Rhs) -> bool

Tests greater than (for self and other) and is used by the > operator. Read more
1.0.0 · Source§

fn ge(&self, other: &Rhs) -> bool

Tests greater than or equal to (for self and other) and is used by the >= operator. Read more
Source§

impl Eq for Queue

Auto Trait Implementations§

§

impl Freeze for Queue

§

impl !RefUnwindSafe for Queue

§

impl Send for Queue

§

impl Sync for Queue

§

impl Unpin for Queue

§

impl !UnwindSafe for Queue

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> CloneToUninit for T
where T: Clone,

Source§

unsafe fn clone_to_uninit(&self, dst: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dst. Read more
§

impl<Q, K> Comparable<K> for Q
where Q: Ord + ?Sized, K: Borrow<Q> + ?Sized,

§

fn compare(&self, key: &K) -> Ordering

Compare self to key and return their ordering.
§

impl<T> Downcast<T> for T

§

fn downcast(&self) -> &T

§

impl<Q, K> Equivalent<K> for Q
where Q: Eq + ?Sized, K: Borrow<Q> + ?Sized,

§

fn equivalent(&self, key: &K) -> bool

Checks if this value is equivalent to the given key. Read more
§

impl<Q, K> Equivalent<K> for Q
where Q: Eq + ?Sized, K: Borrow<Q> + ?Sized,

§

fn equivalent(&self, key: &K) -> bool

Compare self to key and return true if they are equal.
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> ToOwned for T
where T: Clone,

Source§

type Owned = T

The resulting type after obtaining ownership.
Source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
Source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
§

impl<T> Upcast<T> for T

§

fn upcast(&self) -> Option<&T>

Source§

impl<T> CommonTraits for T
where T: Any + Debug + WasmNotSendSync,

Source§

impl<T> WasmNotSend for T
where T: Send,

Source§

impl<T> WasmNotSendSync for T

Source§

impl<T> WasmNotSync for T
where T: Sync,