Trait wgpu_hal::Device

pub trait Device: WasmNotSendSync {
    type A: Api;

    // 42 required methods and 3 provided methods are elided here;
    // see the “Required Methods” and “Provided Methods” sections below.
}

A connection to a GPU and a pool of resources to use with it.

A wgpu-hal Device represents an open connection to a specific graphics processor, controlled via the backend Device::A. A Device is mostly used for creating resources. Each Device has an associated Queue used for command submission.

On Vulkan a Device corresponds to a logical device (VkDevice). Other backends don’t have an exact analog: for example, ID3D12Devices and MTLDevices are owned by the backends’ wgpu_hal::Adapter implementations, and shared by all wgpu_hal::Devices created from that Adapter.

A Device’s life cycle is generally as follows (a code sketch appears after this list):

  1. Obtain a Device and its associated Queue by calling Adapter::open.

    Alternatively, the backend-specific types that implement Adapter often have methods for creating a wgpu-hal Device from a platform-specific handle. For example, vulkan::Adapter::device_from_raw can create a vulkan::Device from an ash::Device.

  2. Create resources to use on the device by calling methods like Device::create_texture or Device::create_shader_module.

  3. Call Device::create_command_encoder to obtain a CommandEncoder, which you can use to build CommandBuffers holding commands to be executed on the GPU.

  4. Call Queue::submit on the Device’s associated Queue to submit CommandBuffers for execution on the GPU. If needed, call Device::wait to wait for them to finish execution.

  5. Free resources with methods like Device::destroy_texture or Device::destroy_shader_module.

  6. Shut down the device by calling Device::exit.
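
A minimal sketch of this life cycle, generic over a backend A, is shown below. The descriptors are taken as parameters because building them is verbose but backend-independent. The exact shape of the Queue::submit call (command buffers, surface textures, and a (fence, value) pair to signal) is an assumption here, so check the Queue trait of the version you are using.

use wgpu_hal::{Api, CommandEncoder as _, Device as _, DeviceError, Queue as _};

// A hedged sketch of steps 2–6 above, generic over a backend `A`.
unsafe fn run_once<A: Api>(
    device: A::Device,
    queue: A::Queue,
    texture_desc: &wgpu_hal::TextureDescriptor<'_>,
    encoder_desc: &wgpu_hal::CommandEncoderDescriptor<'_, A>,
) -> Result<(), DeviceError> {
    // 2. Create resources.
    let texture = device.create_texture(texture_desc)?;

    // 3. Record commands with a CommandEncoder.
    let mut encoder = device.create_command_encoder(encoder_desc)?;
    encoder.begin_encoding(Some("example pass"))?;
    // ... record passes and transfers here ...
    let cmd_buf = encoder.end_encoding()?;

    // 4. Submit, asking the queue to signal `fence` with value 1, then wait.
    //    The shape of `submit` is an assumption; consult the Queue trait.
    let mut fence = device.create_fence()?;
    queue.submit(&[&cmd_buf], &[], (&mut fence, 1))?;
    device.wait(&fence, 1, u32::MAX)?;

    // 5. Free resources once the GPU is done with them.
    encoder.reset_all(std::iter::once(cmd_buf));
    device.destroy_command_encoder(encoder);
    device.destroy_texture(texture);
    device.destroy_fence(fence);

    // 6. Shut down the device.
    device.exit(queue);
    Ok(())
}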

Safety

As with other wgpu-hal APIs, validation is the caller’s responsibility. Here are the general requirements for all Device methods:

  • Any resource passed to a Device method must have been created by that Device. For example, a Texture passed to Device::destroy_texture must have been created with the Device passed as self.

  • Resources may not be destroyed if they are used by any submitted command buffers that have not yet finished execution.

Required Associated Types

type A: Api

Required Methods

unsafe fn exit(self, queue: <Self::A as Api>::Queue)

Exit connection to this logical device.

unsafe fn create_buffer( &self, desc: &BufferDescriptor<'_> ) -> Result<<Self::A as Api>::Buffer, DeviceError>

Creates a new buffer.

The initial usage is BufferUses::empty().
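
As a hedged illustration, the helper below creates a small staging buffer that the CPU can map for writing and the GPU can copy from. The descriptor field names (label, size, usage, memory_flags) and the MAP_WRITE/COPY_SRC usage bits reflect the author's understanding of this crate's BufferDescriptor and BufferUses; treat them as assumptions.

use wgpu_hal::{Api, BufferDescriptor, BufferUses, Device, DeviceError, MemoryFlags};

// Hypothetical helper: create a `size`-byte CPU-writable staging buffer.
unsafe fn create_staging<D: Device>(
    device: &D,
    size: u64,
) -> Result<<D::A as Api>::Buffer, DeviceError> {
    device.create_buffer(&BufferDescriptor {
        label: Some("staging"),
        size,
        usage: BufferUses::MAP_WRITE | BufferUses::COPY_SRC,
        memory_flags: MemoryFlags::empty(),
    })
}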

unsafe fn destroy_buffer(&self, buffer: <Self::A as Api>::Buffer)

Free buffer and any GPU resources it owns.

Note that backends are allowed to allocate GPU memory for buffers from allocation pools, and this call is permitted to simply return buffer’s storage to that pool, without making it available to other applications.

Safety
  • The given buffer must not currently be mapped.

unsafe fn map_buffer( &self, buffer: &<Self::A as Api>::Buffer, range: MemoryRange ) -> Result<BufferMapping, DeviceError>

Return a pointer to CPU memory mapping the contents of buffer.

Buffer mappings are persistent: the buffer may remain mapped on the CPU while the GPU reads or writes to it. (Note that wgpu_core does not use this feature: when a wgpu_core::Buffer is unmapped, the underlying wgpu_hal buffer is also unmapped.)

If this function returns Ok(mapping), then:

  • mapping.ptr is the CPU address of the start of the mapped memory.

  • If mapping.is_coherent is true, then CPU writes to the mapped memory are immediately visible on the GPU, and vice versa.

Safety
  • The given buffer must have been created with the MAP_READ or MAP_WRITE flags set in BufferDescriptor::usage.

  • The given range must fall within the size of buffer.

  • The caller must avoid data races between the CPU and the GPU. A data race is any pair of accesses to a particular byte, one of which is a write, that are not ordered with respect to each other by some sort of synchronization operation.

  • If this function returns Ok(mapping) and mapping.is_coherent is false, then:

    • Every CPU write to a mapped byte followed by a GPU read of that byte must have at least one call to Device::flush_mapped_ranges covering that byte that occurs between those two accesses.

    • Every GPU write to a mapped byte followed by a CPU read of that byte must have at least one call to Device::invalidate_mapped_ranges covering that byte that occurs between those two accesses.

    Note that the data race rule above requires that all such access pairs be ordered, so it is meaningful to talk about what must occur “between” them.

  • Zero-sized mappings are not allowed.

  • The returned BufferMapping::ptr must not be used after a call to Device::unmap_buffer.
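
The sketch below follows these rules for the CPU-write case: map, copy, flush if the mapping is not coherent, then unmap. It assumes BufferMapping::ptr is a NonNull<u8>; upholding the safety requirements listed above (MAP_WRITE usage, range within the buffer, no races with the GPU) remains the caller's responsibility.

use std::{iter, ptr};
use wgpu_hal::{Api, Device, DeviceError, MemoryRange};

// Hedged sketch: copy `data` into a mapped region of `buf`, flushing when
// the mapping is not coherent. `range` must cover at least `data.len()` bytes.
unsafe fn write_mapped<D: Device>(
    device: &D,
    buf: &<D::A as Api>::Buffer,
    range: MemoryRange,
    data: &[u8],
) -> Result<(), DeviceError> {
    let mapping = device.map_buffer(buf, range.clone())?;
    ptr::copy_nonoverlapping(data.as_ptr(), mapping.ptr.as_ptr(), data.len());
    if !mapping.is_coherent {
        // CPU write followed by a GPU read: flush the written range.
        device.flush_mapped_ranges(buf, iter::once(range));
    }
    device.unmap_buffer(buf);
    Ok(())
}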

unsafe fn unmap_buffer(&self, buffer: &<Self::A as Api>::Buffer)

Remove the mapping established by the last call to Device::map_buffer.

Safety
  • The given buffer must be currently mapped.

unsafe fn flush_mapped_ranges<I>( &self, buffer: &<Self::A as Api>::Buffer, ranges: I )
where I: Iterator<Item = MemoryRange>,

Indicate that CPU writes to mapped buffer memory should be made visible to the GPU.

Safety
  • The given buffer must be currently mapped.

  • All ranges produced by ranges must fall within buffer’s size.

unsafe fn invalidate_mapped_ranges<I>( &self, buffer: &<Self::A as Api>::Buffer, ranges: I )
where I: Iterator<Item = MemoryRange>,

Indicate that GPU writes to mapped buffer memory should be made visible to the CPU.

Safety
  • The given buffer must be currently mapped.

  • All ranges produced by ranges must fall within buffer’s size.
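
A counterpart sketch for the read-back direction: when the GPU has written to a mapped, non-coherent buffer, invalidate the range before the CPU reads it. It assumes the GPU work is already complete (for example, after Device::wait) and that BufferMapping::ptr is a NonNull<u8>.

use std::{iter, ptr};
use wgpu_hal::{Api, Device, DeviceError, MemoryRange};

// Hedged sketch: read GPU-written bytes back through a persistent mapping.
// `range` must cover at least `out.len()` bytes.
unsafe fn read_mapped<D: Device>(
    device: &D,
    buf: &<D::A as Api>::Buffer,
    range: MemoryRange,
    out: &mut [u8],
) -> Result<(), DeviceError> {
    let mapping = device.map_buffer(buf, range.clone())?;
    if !mapping.is_coherent {
        // GPU write followed by a CPU read: invalidate first.
        device.invalidate_mapped_ranges(buf, iter::once(range));
    }
    ptr::copy_nonoverlapping(mapping.ptr.as_ptr(), out.as_mut_ptr(), out.len());
    device.unmap_buffer(buf);
    Ok(())
}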

unsafe fn create_texture( &self, desc: &TextureDescriptor<'_> ) -> Result<<Self::A as Api>::Texture, DeviceError>

Creates a new texture.

The initial usage for all subresources is TextureUses::UNINITIALIZED.

unsafe fn destroy_texture(&self, texture: <Self::A as Api>::Texture)

unsafe fn create_texture_view( &self, texture: &<Self::A as Api>::Texture, desc: &TextureViewDescriptor<'_> ) -> Result<<Self::A as Api>::TextureView, DeviceError>

unsafe fn destroy_texture_view(&self, view: <Self::A as Api>::TextureView)

unsafe fn create_sampler( &self, desc: &SamplerDescriptor<'_> ) -> Result<<Self::A as Api>::Sampler, DeviceError>

unsafe fn destroy_sampler(&self, sampler: <Self::A as Api>::Sampler)

unsafe fn create_command_encoder( &self, desc: &CommandEncoderDescriptor<'_, Self::A> ) -> Result<<Self::A as Api>::CommandEncoder, DeviceError>

Create a fresh CommandEncoder.

The new CommandEncoder is in the “closed” state.

unsafe fn destroy_command_encoder(&self, pool: <Self::A as Api>::CommandEncoder)

unsafe fn create_bind_group_layout( &self, desc: &BindGroupLayoutDescriptor<'_> ) -> Result<<Self::A as Api>::BindGroupLayout, DeviceError>

Creates a bind group layout.

unsafe fn destroy_bind_group_layout( &self, bg_layout: <Self::A as Api>::BindGroupLayout )

unsafe fn create_pipeline_layout( &self, desc: &PipelineLayoutDescriptor<'_, Self::A> ) -> Result<<Self::A as Api>::PipelineLayout, DeviceError>

unsafe fn destroy_pipeline_layout( &self, pipeline_layout: <Self::A as Api>::PipelineLayout )

unsafe fn create_bind_group( &self, desc: &BindGroupDescriptor<'_, Self::A> ) -> Result<<Self::A as Api>::BindGroup, DeviceError>

unsafe fn destroy_bind_group(&self, group: <Self::A as Api>::BindGroup)

unsafe fn create_shader_module( &self, desc: &ShaderModuleDescriptor<'_>, shader: ShaderInput<'_> ) -> Result<<Self::A as Api>::ShaderModule, ShaderError>

unsafe fn destroy_shader_module(&self, module: <Self::A as Api>::ShaderModule)

unsafe fn create_render_pipeline( &self, desc: &RenderPipelineDescriptor<'_, Self::A> ) -> Result<<Self::A as Api>::RenderPipeline, PipelineError>

unsafe fn destroy_render_pipeline( &self, pipeline: <Self::A as Api>::RenderPipeline )

unsafe fn create_compute_pipeline( &self, desc: &ComputePipelineDescriptor<'_, Self::A> ) -> Result<<Self::A as Api>::ComputePipeline, PipelineError>

unsafe fn destroy_compute_pipeline( &self, pipeline: <Self::A as Api>::ComputePipeline )

unsafe fn create_pipeline_cache( &self, desc: &PipelineCacheDescriptor<'_> ) -> Result<<Self::A as Api>::PipelineCache, PipelineCacheError>

unsafe fn destroy_pipeline_cache(&self, cache: <Self::A as Api>::PipelineCache)

unsafe fn create_query_set( &self, desc: &QuerySetDescriptor<Label<'_>> ) -> Result<<Self::A as Api>::QuerySet, DeviceError>

unsafe fn destroy_query_set(&self, set: <Self::A as Api>::QuerySet)

unsafe fn create_fence(&self) -> Result<<Self::A as Api>::Fence, DeviceError>

unsafe fn destroy_fence(&self, fence: <Self::A as Api>::Fence)

unsafe fn get_fence_value( &self, fence: &<Self::A as Api>::Fence ) -> Result<FenceValue, DeviceError>

unsafe fn wait( &self, fence: &<Self::A as Api>::Fence, value: FenceValue, timeout_ms: u32 ) -> Result<bool, DeviceError>

Wait for fence to reach value.

Operations like Queue::submit can accept a Fence and a FenceValue to store in it, so you can use this wait function to wait for a given queue submission to finish execution.

The value argument must be a value that some actual operation you have already presented to the device is going to store in fence. You cannot wait for values yet to be submitted. (This restriction accommodates implementations like the vulkan backend’s FencePool that must allocate a distinct synchronization object for each fence value one is able to wait for.)

Calling wait with a lower FenceValue than fence’s current value returns immediately.
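
A hedged sketch of waiting on a submission: the loop below retries until the fence reaches value, assuming wait returns Ok(true) once the value is reached and Ok(false) when the timeout elapses. The timeout constant and the progress message are illustrative only.

use wgpu_hal::{Api, Device, DeviceError, FenceValue};

// Hypothetical helper: block until `fence` reaches `value`, where `value`
// was previously handed to Queue::submit together with this fence.
unsafe fn wait_for_submission<D: Device>(
    device: &D,
    fence: &<D::A as Api>::Fence,
    value: FenceValue,
) -> Result<(), DeviceError> {
    const TIMEOUT_MS: u32 = 1_000;
    while !device.wait(fence, value, TIMEOUT_MS)? {
        // Timed out: report progress and keep waiting.
        let current = device.get_fence_value(fence)?;
        eprintln!("fence at {current}, waiting for {value}");
    }
    Ok(())
}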

unsafe fn start_capture(&self) -> bool

unsafe fn stop_capture(&self)

unsafe fn create_acceleration_structure( &self, desc: &AccelerationStructureDescriptor<'_> ) -> Result<<Self::A as Api>::AccelerationStructure, DeviceError>

unsafe fn get_acceleration_structure_build_sizes( &self, desc: &GetAccelerationStructureBuildSizesDescriptor<'_, Self::A> ) -> AccelerationStructureBuildSizes

unsafe fn get_acceleration_structure_device_address( &self, acceleration_structure: &<Self::A as Api>::AccelerationStructure ) -> BufferAddress

unsafe fn destroy_acceleration_structure( &self, acceleration_structure: <Self::A as Api>::AccelerationStructure )

fn get_internal_counters(&self) -> HalCounters

Provided Methods

fn pipeline_cache_validation_key(&self) -> Option<[u8; 16]>

unsafe fn pipeline_cache_get_data( &self, cache: &<Self::A as Api>::PipelineCache ) -> Option<Vec<u8>>

fn generate_allocator_report(&self) -> Option<AllocatorReport>

Object Safety

This trait is not object safe.

Implementors

impl Device for Context

type A = Api

impl Device for wgpu_hal::gles::Device

Available on gles only.

type A = Api

impl Device for wgpu_hal::vulkan::Device

Available on vulkan only.

type A = Api