pub struct Queue {
pub(crate) inner: DispatchQueue,
}
Handle to a command queue on a device.
A Queue executes recorded CommandBuffer objects and provides convenience methods for writing to buffers and textures. It can be created along with a Device by calling Adapter::request_device.
Corresponds to WebGPU GPUQueue.
Fields§
§inner: DispatchQueue
Implementations§
impl Queue
pub fn from_custom<T: QueueInterface>(queue: T) -> Self
Available on custom only.
Creates a Queue from a custom implementation.
pub fn write_buffer(&self, buffer: &Buffer, offset: BufferAddress, data: &[u8])
Copies the bytes of data into buffer starting at offset.
The data must be written fully in-bounds, that is, offset + data.len() <= buffer.len().
§Performance considerations
- Calls to write_buffer() do not submit the transfer to the GPU immediately. They begin GPU execution only on the next call to Queue::submit(), just before the explicitly submitted commands. To get a set of scheduled transfers started immediately, it’s fine to call submit with no command buffers at all:
  queue.write_buffer(&buffer, 0, &data); queue.submit([]);
  However, data will be immediately copied into staging memory, so the caller may discard it any time after this call completes.
- Consider using Queue::write_buffer_with() instead. That method allows you to prepare your data directly within the staging memory, rather than first placing it in a separate [u8] to be copied. That is, queue.write_buffer(b, offset, data) is approximately equivalent to queue.write_buffer_with(b, offset, data.len()).copy_from_slice(data), so use write_buffer_with() if you can do something smarter than that copy_from_slice(). However, for small values (e.g. a typical uniform buffer whose contents come from a struct), there will likely be no difference, since the compiler will be able to optimize out unnecessary copies regardless.
- Currently on native platforms, for both of these methods, the staging memory will be a new allocation. This will then be released after the next submission finishes. To entirely avoid short-lived allocations, you might be able to use StagingBelt, or buffers you explicitly create, map, and unmap yourself.
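The in-bounds rule above can be checked up front with plain integer arithmetic. A minimal sketch, not part of the wgpu API (wgpu performs this validation itself); the function name is illustrative:

```rust
/// Returns true if a write of `data_len` bytes at `offset` stays within a
/// buffer of `buffer_len` bytes, i.e. `offset + data_len <= buffer_len`.
/// Uses checked arithmetic so `offset + data_len` cannot silently overflow.
fn write_in_bounds(buffer_len: u64, offset: u64, data_len: u64) -> bool {
    offset
        .checked_add(data_len)
        .map_or(false, |end| end <= buffer_len)
}
```

For example, writing 256 bytes at offset 0 of a 256-byte buffer is in bounds, but the same write at offset 1 is not.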
pub fn write_buffer_with<'a>(
&'a self,
buffer: &'a Buffer,
offset: BufferAddress,
size: BufferSize,
) -> Option<QueueWriteBufferView<'a>>
Prepares to write data to a buffer via a mapped staging buffer.
This operation allocates a temporary buffer and then returns a QueueWriteBufferView, which
- dereferences to a [u8] of length size, and
- when dropped, schedules a copy of its contents into buffer at offset.
Therefore, this obtains the same result as Queue::write_buffer(), but may allow you to skip one allocation and one copy of your data, if you are able to assemble your data directly into the returned QueueWriteBufferView instead of into a separate allocation like a Vec first.
The data must be written fully in-bounds, that is, offset + size <= buffer.len().
§Performance considerations
- For small data not separately heap-allocated, there is no advantage of this over Queue::write_buffer().
- Reading from the returned view may be slow, and will not yield the current contents of buffer. You should treat it as “write-only”.
- Dropping the QueueWriteBufferView does not submit the transfer to the GPU immediately. The transfer begins only on the next call to Queue::submit() after the view is dropped, just before the explicitly submitted commands. To get a set of scheduled transfers started immediately, it’s fine to call queue.submit([]) with no command buffers at all.
- Currently on native platforms, the staging memory will be a new allocation, which will then be released after the next submission finishes. To entirely avoid short-lived allocations, you might be able to use StagingBelt, or buffers you explicitly create, map, and unmap yourself.
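The pattern write_buffer_with enables — assembling data in place rather than building it in a separate allocation and copying — can be illustrated without a GPU. In this sketch a plain &mut [u8] stands in for the dereferenced QueueWriteBufferView; the names are illustrative, not part of the wgpu API:

```rust
/// Fills a staging slice in place. `view` is a stand-in for the `&mut [u8]`
/// obtained by dereferencing a `QueueWriteBufferView`; writing into it
/// directly avoids first assembling the bytes in a separate `Vec<u8>` and
/// then copying them over with `copy_from_slice`.
fn fill_staging(view: &mut [u8]) {
    for (i, byte) in view.iter_mut().enumerate() {
        // Assemble the data directly in (simulated) staging memory.
        *byte = i as u8;
    }
}
```

Compare this with building a `Vec<u8>` of the same contents and copying it in: the result is identical, but the intermediate allocation and copy are gone.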
pub fn write_texture(
&self,
texture: TexelCopyTextureInfo<'_>,
data: &[u8],
data_layout: TexelCopyBufferLayout,
size: Extent3d,
)
Copies the bytes of data into a texture.
- data contains the texels to be written, which must be in the same format as the texture.
- data_layout describes the memory layout of data, which does not necessarily have to have tightly packed rows.
- texture specifies the texture to write into, and the location within the texture (coordinate offset, mip level) that will be overwritten.
- size is the size, in texels, of the region to be written.
This method fails if size overruns the size of texture, or if data is too short.
§Performance considerations
This operation has the same performance considerations as Queue::write_buffer(); see its documentation for details.
However, since there is no “mapped texture” like a mapped buffer,
alternate techniques for writing to textures will generally consist of first copying
the data to a buffer, then using CommandEncoder::copy_buffer_to_texture()
, or in
some cases a compute shader, to copy texels from that buffer to the texture.
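The amount of data a write_texture call consumes follows from data_layout and size. A plain-Rust sketch of the minimum length data must have, following the WebGPU rule that every layer and row except the last only needs its stride, while the final row only needs its tightly packed bytes. This is an illustration under those assumptions, not wgpu's actual validation code, and the parameter names are hypothetical:

```rust
/// Minimum length `data` must have for a texture write, given the
/// caller-supplied layout. `bytes_per_row` may exceed the tightly packed
/// row size (rows need not be tightly packed); `offset` is the byte offset
/// into `data` where the texel data begins.
fn required_data_len(
    offset: u64,
    bytes_per_row: u64,
    rows_per_image: u64,
    depth_or_array_layers: u64,
    bytes_per_block: u64,
    blocks_per_row: u64,
) -> u64 {
    // All layers but the last occupy `bytes_per_row * rows_per_image` bytes.
    let full_layers = depth_or_array_layers.saturating_sub(1) * bytes_per_row * rows_per_image;
    // Within the last layer, all rows but the last occupy `bytes_per_row`.
    let full_rows = rows_per_image.saturating_sub(1) * bytes_per_row;
    // The final row only needs its tightly packed texel bytes.
    let last_row = bytes_per_block * blocks_per_row;
    offset + full_layers + full_rows + last_row
}
```

For a 4×4 RGBA8 write (4 bytes per texel) with a 256-byte row stride, the last row needs only 16 bytes, so the required length is 3 × 256 + 16 = 784 bytes rather than 4 × 256.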
pub fn submit<I: IntoIterator<Item = CommandBuffer>>(
&self,
command_buffers: I,
) -> SubmissionIndex
Submits a series of finished command buffers for execution.
pub fn get_timestamp_period(&self) -> f32
Returns the number of nanoseconds each tick of a timestamp query represents.
Returns zero if timestamp queries are unsupported.
Timestamp values are represented in nanoseconds on WebGPU; see <https://gpuweb.github.io/gpuweb/#timestamp>. Therefore, this is always 1.0 on the web, but on wgpu-core a manual conversion is required.
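The manual conversion on native amounts to multiplying a tick delta by the period. A minimal sketch, assuming ticks come from resolved timestamp queries; ticks_to_nanos is an illustrative helper, not part of wgpu:

```rust
/// Converts a timestamp-query tick delta to nanoseconds using the period
/// reported by `Queue::get_timestamp_period()`. On the web the period is
/// always 1.0; on native backends it is whatever the driver reports.
fn ticks_to_nanos(start_ticks: u64, end_ticks: u64, period: f32) -> f64 {
    end_ticks.wrapping_sub(start_ticks) as f64 * period as f64
}
```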
pub fn on_submitted_work_done(&self, callback: impl FnOnce() + Send + 'static)
Registers a callback to be invoked when the previous call to submit finishes running on the GPU. This callback being called implies that all mapped buffer callbacks which were registered before this call will have been called.
For the callback to complete, either queue.submit(..)
, instance.poll_all(..)
, or device.poll(..)
must be called elsewhere in the runtime, possibly integrated into an event loop or run on a separate thread.
The callback will be called on the thread that first calls one of the above functions after the GPU work has completed. There are no restrictions on the code you can run in the callback; however, on native, the call to the function will not complete until the callback returns, so prefer keeping callbacks short and using them to set flags, send messages, etc.
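The "keep the callback short, just set a flag" advice can be sketched with std primitives. Here run_callback stands in for the submit/poll call that eventually fires the callback on some thread; nothing below touches a real queue, and both function names are illustrative:

```rust
use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};

/// Stand-in for the submit()/poll() call that eventually invokes an
/// `on_submitted_work_done` callback once GPU work has completed.
fn run_callback(callback: impl FnOnce() + Send + 'static) {
    callback();
}

/// Builds a completion flag plus a short callback (matching the
/// `FnOnce() + Send + 'static` bound) that does nothing but set the flag;
/// the real work happens elsewhere, after the flag is observed.
fn make_done_flag() -> (Arc<AtomicBool>, impl FnOnce() + Send + 'static) {
    let done = Arc::new(AtomicBool::new(false));
    let flag = done.clone();
    (done, move || flag.store(true, Ordering::Release))
}
```

Because the native call does not return until the callback does, a one-line atomic store like this keeps the polling thread responsive.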
pub unsafe fn as_hal<A: HalApi, F: FnOnce(Option<&A::Queue>) -> R, R>(
&self,
hal_queue_callback: F,
) -> R
Available on wgpu_core only.
Returns the inner hal Queue using a callback. The hal queue will be None if the backend type argument does not match this wgpu Queue.
§Safety
- The raw handle obtained from the hal Queue must not be manually destroyed
Trait Implementations§
impl Ord for Queue
impl PartialOrd for Queue
impl Eq for Queue
Auto Trait Implementations§
impl Freeze for Queue
impl !RefUnwindSafe for Queue
impl Send for Queue
impl Sync for Queue
impl Unpin for Queue
impl !UnwindSafe for Queue
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
impl<Q, K> Comparable<K> for Q
impl<Q, K> Equivalent<K> for Q
fn equivalent(&self, key: &K) -> bool
impl<Q, K> Equivalent<K> for Q
fn equivalent(&self, key: &K) -> bool
Compare self to key and return true if they are equal.