naga/back/hlsl/mod.rs
/*!
Backend for [HLSL][hlsl] (High-Level Shading Language).

# Supported shader model versions:
- 5.0
- 5.1
- 6.0

# Layout of values in `uniform` buffers

WGSL's ["Internal Layout of Values"][ilov] rules specify how each WGSL
type should be stored in `uniform` and `storage` buffers. The HLSL we
generate must access values in that form, even when it is not what
HLSL would use normally.

Matching the WGSL memory layout is a concern only for `uniform`
variables. WGSL `storage` buffers are translated as HLSL
`ByteAddressBuffers`, for which we generate `Load` and `Store` method
calls with explicit byte offsets. WGSL pipeline inputs must be scalars
or vectors; they cannot be matrices, which is where the interesting
problems arise. However, when an affected type appears in a struct
definition, the transformations described here are applied without
consideration of where the struct is used.

Access to storage buffers is implemented in `storage.rs`. Access to
uniform buffers is implemented where applicable in `writer.rs`.

## Row- and column-major ordering for matrices

WGSL specifies that matrices in uniform buffers are stored in
column-major order. This matches HLSL's default, so one might expect
things to be straightforward. Unfortunately, WGSL and HLSL disagree on
what indexing a matrix means: in WGSL, `m[i]` retrieves the `i`'th
*column* of `m`, whereas in HLSL it retrieves the `i`'th *row*. We
want to avoid translating `m[i]` into some complicated reassembly of a
vector from individually fetched components, so this is a problem.

However, with a bit of trickery, it is possible to use HLSL's `m[i]`
as the translation of WGSL's `m[i]`:

- We declare all matrices in uniform buffers in HLSL with the
  `row_major` qualifier, and transpose the row and column counts: a
  WGSL `mat3x4<f32>`, say, becomes an HLSL `row_major float3x4`. (Note
  that WGSL and HLSL type names put the row and column in reverse
  order.) Since the HLSL type is the transpose of how WebGPU directs
  the user to store the data, HLSL will load all matrices transposed.

- Since matrices are transposed, an HLSL indexing expression retrieves
  the "columns" of the intended WGSL value, as desired.
- For vector-matrix multiplication, since `mul(transpose(m), v)` is
  equivalent to `mul(v, m)` (note the reversal of the arguments), and
  `mul(v, transpose(m))` is equivalent to `mul(m, v)`, we can
  translate WGSL `m * v` and `v * m` to HLSL by simply reversing the
  arguments to `mul`, as sketched below.

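For example, given WGSL `m: mat3x4<f32>`, `v: vec3<f32>`, and `w: vec4<f32>`,
the intended translation looks roughly like this (a sketch of the idea, not
the exact text the backend emits):

```ignore
// WGSL                    // generated HLSL (m declared as row_major float3x4)
let c = m[2];              // float4 c = m[2];
let a = m * v;             // float4 a = mul(v, m);
let b = w * m;             // float3 b = mul(m, w);
```
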
## Padding in two-row matrices

An HLSL `row_major floatKx2` matrix has padding between its rows that
the WGSL `matKx2<f32>` matrix it represents does not. HLSL stores all
matrix rows [aligned on 16-byte boundaries][16bb], whereas WGSL says
that the columns of a `matKx2<f32>` need only be [aligned as required
for `vec2<f32>`][ilov], which is [eight-byte alignment][8bb].

To compensate for this, any time a `matKx2<f32>` appears in a WGSL
`uniform` value or as part of a struct/array, we actually emit `K`
separate `float2` members, and assemble/disassemble the matrix from its
columns (in WGSL; rows in HLSL) upon load and store.

For example, the following WGSL struct type:

```ignore
struct Baz {
    m: mat3x2<f32>,
}
```

is rendered as the HLSL struct type:

```ignore
struct Baz {
    float2 m_0; float2 m_1; float2 m_2;
};
```

The `wrapped_struct_matrix` functions in `help.rs` generate HLSL
helper functions to access such members, converting between the stored
form and the HLSL matrix types appropriately. For example, for reading
the member `m` of the `Baz` struct above, we emit:

```ignore
float3x2 GetMatmOnBaz(Baz obj) {
    return float3x2(obj.m_0, obj.m_1, obj.m_2);
}
```

We also emit an analogous `Set` function, as well as functions for
accessing individual columns by dynamic index.
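
As a sketch, the corresponding `Set` helper would look roughly like the
following (the exact names and signatures naga emits may differ; this only
illustrates the reassembly of the stored `float2` members):

```ignore
void SetMatmOnBaz(inout Baz obj, float3x2 mat) {
    obj.m_0 = mat[0];
    obj.m_1 = mat[1];
    obj.m_2 = mat[2];
}
```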

## Sampler Handling

Due to limitations in how sampler heaps work in D3D12, we need to access samplers
through a layer of indirection. Instead of directly binding samplers, we bind the entire
sampler heap as both a standard and a comparison sampler heap. We then use a sampler
index buffer for each bind group. This buffer is accessed in the shader to get the actual
sampler index within the heap. See the wgpu_hal dx12 backend documentation for more
information.
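
A rough sketch of the indirection in the generated HLSL (the names, array
sizes, and register assignments here are illustrative, not the exact output):

```ignore
// Bound once for the whole pipeline:
SamplerState nagaSamplerHeap[2048] : register(s0, space0);
SamplerComparisonState nagaComparisonSamplerHeap[2048] : register(s0, space1);
// One index buffer per bind group:
StructuredBuffer<uint> nagaGroup0SamplerIndexArray : register(t0, space255);

// A sampler use from bind group 0 whose BindTarget register is 3 becomes:
//   tex.Sample(nagaSamplerHeap[nagaGroup0SamplerIndexArray[3]], uv)
```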

# External textures

Support for [`crate::ImageClass::External`] textures is implemented by lowering
each external texture global variable to 3 `Texture2D<float4>`s and a `cbuffer`
of type `NagaExternalTextureParams`. This provides up to 3 planes of texture
data (for example single-planar RGBA, or separate Y, Cb, and Cr planes), along
with a parameters buffer describing how to handle those planes correctly. The
bind target to use for each of these globals is specified via
[`Options::external_texture_binding_map`].

External textures are supported by WGSL's `textureDimensions()`,
`textureLoad()`, and `textureSampleBaseClampToEdge()` built-in functions. These
are implemented using helper functions. See the following functions for how
these are generated:
 * `Writer::write_wrapped_image_query_function`
 * `Writer::write_wrapped_image_load_function`
 * `Writer::write_wrapped_image_sample_function`

Ideally the set of global variables could be wrapped in a single struct that
could conveniently be passed around. But, alas, HLSL does not allow structs to
have `Texture2D` members. Fortunately, however, external textures can only be
used as arguments to either built-in or user-defined functions. We therefore
expand any external texture function argument to four consecutive arguments (3
textures and the params struct) when declaring user-defined functions, and
ensure our built-in function implementations take the same arguments. Then,
whenever we need to emit an external texture in `Writer::write_expr`, which
fortunately can only ever be for a global variable or function argument, we
simply emit the variable name of each of the three textures and the parameters
struct in a comma-separated list. This won't win any awards for elegance, but
it works for our purposes.
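
As a rough sketch (all names and registers here are illustrative), an external
texture global and a user-defined function taking one expand to something like:

```ignore
Texture2D<float4> ext_tex_plane0 : register(t0);
Texture2D<float4> ext_tex_plane1 : register(t1);
Texture2D<float4> ext_tex_plane2 : register(t2);
cbuffer ext_tex_params_block : register(b0) { NagaExternalTextureParams ext_tex_params; }

// WGSL `fn f(t: texture_external) -> vec4<f32>` becomes roughly:
float4 f(Texture2D<float4> t_plane0, Texture2D<float4> t_plane1,
         Texture2D<float4> t_plane2, NagaExternalTextureParams t_params) { /* ... */ }

// and a call site `f(ext_tex)` becomes:
//   f(ext_tex_plane0, ext_tex_plane1, ext_tex_plane2, ext_tex_params)
```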

[hlsl]: https://docs.microsoft.com/en-us/windows/win32/direct3dhlsl/dx-graphics-hlsl
[ilov]: https://gpuweb.github.io/gpuweb/wgsl/#internal-value-layout
[16bb]: https://github.com/microsoft/DirectXShaderCompiler/wiki/Buffer-Packing#constant-buffer-packing
[8bb]: https://gpuweb.github.io/gpuweb/wgsl/#alignment-and-size
*/

mod conv;
mod help;
mod keywords;
mod ray;
mod storage;
mod writer;

use alloc::{string::String, vec::Vec};
use core::fmt::Error as FmtError;

use thiserror::Error;

use crate::{back, ir, proc};

/// Direct3D 12 binding information for a global variable.
///
/// This type provides the HLSL-specific information Naga needs to declare and
/// access an HLSL global variable that cannot be derived from the `Module`
/// itself.
///
/// An HLSL global variable declaration includes details that the Direct3D API
/// will use to refer to it. For example:
///
///     RWByteAddressBuffer s_sasm : register(u0, space2);
///
/// This defines a global `s_sasm` that a Direct3D root signature would refer to
/// as register `0` in register space `2` in a `UAV` descriptor range. Naga can
/// infer the register's descriptor range type from the variable's address class
/// (writable [`Storage`] variables are implemented by Direct3D Unordered Access
/// Views, the `u` register type), but the register number and register space
/// must be supplied by the user.
///
/// The [`back::hlsl::Options`] structure provides `BindTarget`s for various
/// situations in which Naga may need to generate an HLSL global variable, like
/// [`binding_map`] for Naga global variables, or [`push_constants_target`] for
/// a module's sole [`PushConstant`] variable. See those fields' documentation
/// for details.
///
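/// As a sketch of how this maps to Rust (the values correspond to the example
/// declaration above and are illustrative only):
///
/// ```ignore
/// let target = BindTarget {
///     space: 2,
///     register: 0,
///     binding_array_size: None,
///     dynamic_storage_buffer_offsets_index: None,
///     restrict_indexing: false,
/// };
/// ```
///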
/// [`Storage`]: crate::ir::AddressSpace::Storage
/// [`back::hlsl::Options`]: Options
/// [`binding_map`]: Options::binding_map
/// [`push_constants_target`]: Options::push_constants_target
/// [`PushConstant`]: crate::ir::AddressSpace::PushConstant
#[derive(Copy, Clone, Debug, Default, PartialEq, Eq, Hash)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
pub struct BindTarget {
    pub space: u8,
    /// For regular bindings this is the register number.
    ///
    /// For sampler bindings, this is the index to use into the bind group's sampler index buffer.
    pub register: u32,
    /// If the binding is an unsized binding array, this overrides the size.
    pub binding_array_size: Option<u32>,
    /// This is the index in the buffer at [`Options::dynamic_storage_buffer_offsets_targets`].
    pub dynamic_storage_buffer_offsets_index: Option<u32>,
    /// This is a hint that we need to restrict indexing of vectors, matrices and arrays.
    ///
    /// If [`Options::restrict_indexing`] is also `true`, we will restrict indexing.
    #[cfg_attr(any(feature = "serialize", feature = "deserialize"), serde(default))]
    pub restrict_indexing: bool,
}

#[derive(Clone, Debug, Default, PartialEq, Eq, Hash)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
/// BindTarget for dynamic storage buffer offsets.
pub struct OffsetsBindTarget {
    pub space: u8,
    pub register: u32,
    pub size: u32,
}

#[cfg(any(feature = "serialize", feature = "deserialize"))]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
struct BindingMapSerialization {
    resource_binding: crate::ResourceBinding,
    bind_target: BindTarget,
}

#[cfg(feature = "deserialize")]
fn deserialize_binding_map<'de, D>(deserializer: D) -> Result<BindingMap, D::Error>
where
    D: serde::Deserializer<'de>,
{
    use serde::Deserialize;

    let vec = Vec::<BindingMapSerialization>::deserialize(deserializer)?;
    let mut map = BindingMap::default();
    for item in vec {
        map.insert(item.resource_binding, item.bind_target);
    }
    Ok(map)
}

// Using `BTreeMap` instead of `HashMap` so that the map itself can be hashed.
pub type BindingMap = alloc::collections::BTreeMap<crate::ResourceBinding, BindTarget>;

/// An HLSL shader model version.
#[allow(non_snake_case, non_camel_case_types)]
#[derive(Copy, Clone, Debug, Hash, Eq, PartialEq, PartialOrd)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
pub enum ShaderModel {
    V5_0,
    V5_1,
    V6_0,
    V6_1,
    V6_2,
    V6_3,
    V6_4,
    V6_5,
    V6_6,
    V6_7,
}

impl ShaderModel {
    pub const fn to_str(self) -> &'static str {
        match self {
            Self::V5_0 => "5_0",
            Self::V5_1 => "5_1",
            Self::V6_0 => "6_0",
            Self::V6_1 => "6_1",
            Self::V6_2 => "6_2",
            Self::V6_3 => "6_3",
            Self::V6_4 => "6_4",
            Self::V6_5 => "6_5",
            Self::V6_6 => "6_6",
            Self::V6_7 => "6_7",
        }
    }
}

impl crate::ShaderStage {
    pub const fn to_hlsl_str(self) -> &'static str {
        match self {
            Self::Vertex => "vs",
            Self::Fragment => "ps",
            Self::Compute => "cs",
            Self::Task | Self::Mesh => unreachable!(),
        }
    }
}

impl crate::ImageDimension {
    const fn to_hlsl_str(self) -> &'static str {
        match self {
            Self::D1 => "1D",
            Self::D2 => "2D",
            Self::D3 => "3D",
            Self::Cube => "Cube",
        }
    }
}

#[derive(Clone, Copy, Debug, Hash, Eq, Ord, PartialEq, PartialOrd)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
pub struct SamplerIndexBufferKey {
    pub group: u32,
}

#[derive(Clone, Debug, Hash, PartialEq, Eq)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
#[cfg_attr(feature = "deserialize", serde(default))]
pub struct SamplerHeapBindTargets {
    pub standard_samplers: BindTarget,
    pub comparison_samplers: BindTarget,
}

impl Default for SamplerHeapBindTargets {
    fn default() -> Self {
        Self {
            standard_samplers: BindTarget {
                space: 0,
                register: 0,
                binding_array_size: None,
                dynamic_storage_buffer_offsets_index: None,
                restrict_indexing: false,
            },
            comparison_samplers: BindTarget {
                space: 1,
                register: 0,
                binding_array_size: None,
                dynamic_storage_buffer_offsets_index: None,
                restrict_indexing: false,
            },
        }
    }
}

#[cfg(any(feature = "serialize", feature = "deserialize"))]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
struct SamplerIndexBufferBindingSerialization {
    group: u32,
    bind_target: BindTarget,
}

#[cfg(feature = "deserialize")]
fn deserialize_sampler_index_buffer_bindings<'de, D>(
    deserializer: D,
) -> Result<SamplerIndexBufferBindingMap, D::Error>
where
    D: serde::Deserializer<'de>,
{
    use serde::Deserialize;

    let vec = Vec::<SamplerIndexBufferBindingSerialization>::deserialize(deserializer)?;
    let mut map = SamplerIndexBufferBindingMap::default();
    for item in vec {
        map.insert(
            SamplerIndexBufferKey { group: item.group },
            item.bind_target,
        );
    }
    Ok(map)
}

// We use a BTreeMap here so that we can hash it.
pub type SamplerIndexBufferBindingMap =
    alloc::collections::BTreeMap<SamplerIndexBufferKey, BindTarget>;

#[cfg(any(feature = "serialize", feature = "deserialize"))]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
struct DynamicStorageBufferOffsetTargetSerialization {
    index: u32,
    bind_target: OffsetsBindTarget,
}

#[cfg(feature = "deserialize")]
fn deserialize_storage_buffer_offsets<'de, D>(
    deserializer: D,
) -> Result<DynamicStorageBufferOffsetsTargets, D::Error>
where
    D: serde::Deserializer<'de>,
{
    use serde::Deserialize;

    let vec = Vec::<DynamicStorageBufferOffsetTargetSerialization>::deserialize(deserializer)?;
    let mut map = DynamicStorageBufferOffsetsTargets::default();
    for item in vec {
        map.insert(item.index, item.bind_target);
    }
    Ok(map)
}

pub type DynamicStorageBufferOffsetsTargets = alloc::collections::BTreeMap<u32, OffsetsBindTarget>;

/// HLSL binding information for a Naga [`External`] image global variable.
///
/// See the module documentation's section on [External textures][mod] for details.
///
/// [`External`]: crate::ir::ImageClass::External
/// [mod]: #external-textures
#[derive(Copy, Clone, Debug, Default, PartialEq, Eq, Hash)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
pub struct ExternalTextureBindTarget {
    /// HLSL binding information for the individual plane textures.
    ///
    /// Each of these should refer to an HLSL `Texture2D<float4>` holding one
    /// plane of data for the external texture. The exact meaning of each plane
    /// varies at runtime depending on where the external texture's data
    /// originated.
    pub planes: [BindTarget; 3],

    /// HLSL binding information for a buffer holding the sampling parameters.
    ///
    /// This should refer to a cbuffer of type `NagaExternalTextureParams` that
    /// the code Naga generates for `textureSampleBaseClampToEdge` consults to
    /// decide how to combine the data in [`planes`] to get the result required
    /// by the spec.
    ///
    /// [`planes`]: Self::planes
    pub params: BindTarget,
}

#[cfg(any(feature = "serialize", feature = "deserialize"))]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
struct ExternalTextureBindingMapSerialization {
    resource_binding: crate::ResourceBinding,
    bind_target: ExternalTextureBindTarget,
}

#[cfg(feature = "deserialize")]
fn deserialize_external_texture_binding_map<'de, D>(
    deserializer: D,
) -> Result<ExternalTextureBindingMap, D::Error>
where
    D: serde::Deserializer<'de>,
{
    use serde::Deserialize;

    let vec = Vec::<ExternalTextureBindingMapSerialization>::deserialize(deserializer)?;
    let mut map = ExternalTextureBindingMap::default();
    for item in vec {
        map.insert(item.resource_binding, item.bind_target);
    }
    Ok(map)
}

pub type ExternalTextureBindingMap =
    alloc::collections::BTreeMap<crate::ResourceBinding, ExternalTextureBindTarget>;

/// Shorthand result used internally by the backend
type BackendResult = Result<(), Error>;

#[derive(Clone, Debug, PartialEq, thiserror::Error)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
pub enum EntryPointError {
    #[error("mapping of {0:?} is missing")]
    MissingBinding(crate::ResourceBinding),
}

/// Configuration used in the [`Writer`].
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
#[cfg_attr(feature = "deserialize", serde(default))]
pub struct Options {
    /// The HLSL shader model to be used.
    pub shader_model: ShaderModel,

    /// HLSL binding information for each Naga global variable.
    ///
    /// This maps each Naga [`GlobalVariable`]'s [`ResourceBinding`] to a
    /// [`BindTarget`] specifying its register number and space, along with
    /// other details necessary to generate a full HLSL declaration for it,
    /// or to access its value.
    ///
    /// This must provide a [`BindTarget`] for every [`GlobalVariable`] in the
    /// [`Module`] that has a [`binding`].
    ///
    /// [`GlobalVariable`]: crate::ir::GlobalVariable
    /// [`ResourceBinding`]: crate::ir::ResourceBinding
    /// [`Module`]: crate::ir::Module
    /// [`binding`]: crate::ir::GlobalVariable::binding
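    ///
    /// A minimal sketch of populating this map (the group, binding, and
    /// register values are illustrative only):
    ///
    /// ```ignore
    /// let mut binding_map = BindingMap::default();
    /// binding_map.insert(
    ///     ResourceBinding { group: 0, binding: 0 },
    ///     BindTarget { space: 0, register: 0, ..Default::default() },
    /// );
    /// let options = Options { binding_map, ..Default::default() };
    /// ```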
    #[cfg_attr(
        feature = "deserialize",
        serde(deserialize_with = "deserialize_binding_map")
    )]
    pub binding_map: BindingMap,

    /// Don't panic on missing bindings; instead, generate any HLSL.
    pub fake_missing_bindings: bool,
    /// Add special constants to `SV_VertexIndex` and `SV_InstanceIndex`,
    /// to make them work like in Vulkan/Metal, with the help of the host.
    pub special_constants_binding: Option<BindTarget>,

    /// HLSL binding information for the [`PushConstant`] global, if present.
    ///
    /// If a module contains a global in the [`PushConstant`] address space, the
    /// `dx12` backend stores its value directly in the root signature as a
    /// series of [`D3D12_ROOT_PARAMETER_TYPE_32BIT_CONSTANTS`], whose binding
    /// information is given here.
    ///
    /// [`PushConstant`]: crate::ir::AddressSpace::PushConstant
    /// [`D3D12_ROOT_PARAMETER_TYPE_32BIT_CONSTANTS`]: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ne-d3d12-d3d12_root_parameter_type
    pub push_constants_target: Option<BindTarget>,

    /// HLSL binding information for the sampler heap and comparison sampler heap.
    pub sampler_heap_target: SamplerHeapBindTargets,

    /// Mapping of each bind group's sampler index buffer to a bind target.
    #[cfg_attr(
        feature = "deserialize",
        serde(deserialize_with = "deserialize_sampler_index_buffer_bindings")
    )]
    pub sampler_buffer_binding_map: SamplerIndexBufferBindingMap,
    /// Bind target for dynamic storage buffer offsets.
    #[cfg_attr(
        feature = "deserialize",
        serde(deserialize_with = "deserialize_storage_buffer_offsets")
    )]
    pub dynamic_storage_buffer_offsets_targets: DynamicStorageBufferOffsetsTargets,

    /// HLSL binding information for [`External`] image global variables.
    ///
    /// See [`ExternalTextureBindTarget`] for details.
    ///
    /// [`External`]: crate::ir::ImageClass::External
    #[cfg_attr(
        feature = "deserialize",
        serde(deserialize_with = "deserialize_external_texture_binding_map")
    )]
    pub external_texture_binding_map: ExternalTextureBindingMap,

    /// Should workgroup variables be zero initialized (by polyfilling)?
    pub zero_initialize_workgroup_memory: bool,
    /// Should we restrict indexing of vectors, matrices and arrays?
    pub restrict_indexing: bool,
    /// If set, loops will have code injected into them, forcing the compiler
    /// to think the number of iterations is bounded.
    pub force_loop_bounding: bool,
}

impl Default for Options {
    fn default() -> Self {
        Options {
            shader_model: ShaderModel::V5_1,
            binding_map: BindingMap::default(),
            fake_missing_bindings: true,
            special_constants_binding: None,
            sampler_heap_target: SamplerHeapBindTargets::default(),
            sampler_buffer_binding_map: alloc::collections::BTreeMap::default(),
            push_constants_target: None,
            dynamic_storage_buffer_offsets_targets: alloc::collections::BTreeMap::new(),
            external_texture_binding_map: ExternalTextureBindingMap::default(),
            zero_initialize_workgroup_memory: true,
            restrict_indexing: true,
            force_loop_bounding: true,
        }
    }
}

impl Options {
    fn resolve_resource_binding(
        &self,
        res_binding: &crate::ResourceBinding,
    ) -> Result<BindTarget, EntryPointError> {
        match self.binding_map.get(res_binding) {
            Some(target) => Ok(*target),
            None if self.fake_missing_bindings => Ok(BindTarget {
                space: res_binding.group as u8,
                register: res_binding.binding,
                binding_array_size: None,
                dynamic_storage_buffer_offsets_index: None,
                restrict_indexing: false,
            }),
            None => Err(EntryPointError::MissingBinding(*res_binding)),
        }
    }

    fn resolve_external_texture_resource_binding(
        &self,
        res_binding: &crate::ResourceBinding,
    ) -> Result<ExternalTextureBindTarget, EntryPointError> {
        match self.external_texture_binding_map.get(res_binding) {
            Some(target) => Ok(*target),
            None if self.fake_missing_bindings => {
                let fake = BindTarget {
                    space: res_binding.group as u8,
                    register: res_binding.binding,
                    binding_array_size: None,
                    dynamic_storage_buffer_offsets_index: None,
                    restrict_indexing: false,
                };
                Ok(ExternalTextureBindTarget {
                    planes: [fake, fake, fake],
                    params: fake,
                })
            }
            None => Err(EntryPointError::MissingBinding(*res_binding)),
        }
    }
}

/// Reflection info for entry point names.
#[derive(Default)]
pub struct ReflectionInfo {
    /// Mapping of the entry point names.
    ///
    /// Each item in the array corresponds to an entry point index. The real
    /// entry point name may be different if one of the reserved words is used.
    ///
    /// Note: Some entry points may fail translation because of missing bindings.
    pub entry_point_names: Vec<Result<String, EntryPointError>>,
}

/// A subset of options that are meant to be changed per pipeline.
#[derive(Debug, Default, Clone)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
#[cfg_attr(feature = "deserialize", serde(default))]
pub struct PipelineOptions {
    /// The entry point to write.
    ///
    /// Entry points are identified by a shader stage specification,
    /// and a name.
    ///
    /// If `None`, all entry points will be written. If `Some` and the entry
    /// point is not found, an error will be returned while writing.
    pub entry_point: Option<(ir::ShaderStage, String)>,
}

#[derive(Error, Debug)]
pub enum Error {
    #[error(transparent)]
    IoError(#[from] FmtError),
    #[error("A scalar with an unsupported width was requested: {0:?}")]
    UnsupportedScalar(crate::Scalar),
    #[error("{0}")]
    Unimplemented(String), // TODO: Error used only during development
    #[error("{0}")]
    Custom(String),
    #[error("overrides should not be present at this stage")]
    Override,
    #[error(transparent)]
    ResolveArraySizeError(#[from] proc::ResolveArraySizeError),
    #[error("entry point with stage {0:?} and name '{1}' not found")]
    EntryPointNotFound(ir::ShaderStage, String),
}

#[derive(PartialEq, Eq, Hash)]
enum WrappedType {
    ZeroValue(help::WrappedZeroValue),
    ArrayLength(help::WrappedArrayLength),
    ImageSample(help::WrappedImageSample),
    ImageQuery(help::WrappedImageQuery),
    ImageLoad(help::WrappedImageLoad),
    ImageLoadScalar(crate::Scalar),
    Constructor(help::WrappedConstructor),
    StructMatrixAccess(help::WrappedStructMatrixAccess),
    MatCx2(help::WrappedMatCx2),
    Math(help::WrappedMath),
    UnaryOp(help::WrappedUnaryOp),
    BinaryOp(help::WrappedBinaryOp),
    Cast(help::WrappedCast),
}

#[derive(Default)]
struct Wrapped {
    types: crate::FastHashSet<WrappedType>,
    /// If true, the sampler heaps have been written out.
    sampler_heaps: bool,
    /// Mapping from [`SamplerIndexBufferKey`] to the name the namer returned.
    sampler_index_buffers: crate::FastHashMap<SamplerIndexBufferKey, String>,
}

impl Wrapped {
    fn insert(&mut self, r#type: WrappedType) -> bool {
        self.types.insert(r#type)
    }

    fn clear(&mut self) {
        self.types.clear();
    }
}

/// A fragment entry point to be considered when generating HLSL for the output interface of vertex
/// entry points.
///
/// This is provided as an optional parameter to [`Writer::write`].
///
/// If this is provided, vertex outputs will be removed if they are not inputs of this fragment
/// entry point. This is necessary for generating correct HLSL when some of the vertex shader
/// outputs are not consumed by the fragment shader.
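///
/// A brief usage sketch (the `module`, `module_info`, and `writer` values are
/// assumed to already exist; names and exact arguments here are illustrative):
///
/// ```ignore
/// // Trim vertex outputs that the fragment entry point "fs_main" does not read.
/// let frag_ep = FragmentEntryPoint::new(&module, "fs_main")
///     .expect("`fs_main` should be a fragment entry point");
/// let reflection_info = writer.write(&module, &module_info, Some(&frag_ep))?;
/// ```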
pub struct FragmentEntryPoint<'a> {
    module: &'a crate::Module,
    func: &'a crate::Function,
}

impl<'a> FragmentEntryPoint<'a> {
    /// Returns `None` if the entry point with the provided name can't be found or isn't a fragment
    /// entry point.
    pub fn new(module: &'a crate::Module, ep_name: &'a str) -> Option<Self> {
        module
            .entry_points
            .iter()
            .find(|ep| ep.name == ep_name)
            .filter(|ep| ep.stage == crate::ShaderStage::Fragment)
            .map(|ep| Self {
                module,
                func: &ep.function,
            })
    }
}

pub struct Writer<'a, W> {
    out: W,
    names: crate::FastHashMap<proc::NameKey, String>,
    namer: proc::Namer,
    /// HLSL backend options
    options: &'a Options,
    /// Per-stage backend options
    pipeline_options: &'a PipelineOptions,
    /// Information about entry point arguments and result types.
    entry_point_io: crate::FastHashMap<usize, writer::EntryPointInterface>,
    /// Set of expressions that have associated temporary variables
    named_expressions: crate::NamedExpressions,
    wrapped: Wrapped,
    written_committed_intersection: bool,
    written_candidate_intersection: bool,
    continue_ctx: back::continue_forward::ContinueCtx,

    /// A reference to some part of a global variable, lowered to a series of
    /// byte offset calculations.
    ///
    /// See the [`storage`] module for background on why we need this.
    ///
    /// Each [`SubAccess`] in the vector is a lowering of some [`Access`] or
    /// [`AccessIndex`] expression to the level of byte strides and offsets. See
    /// [`SubAccess`] for details.
    ///
    /// This field is a member of [`Writer`] solely to allow re-use of
    /// the `Vec`'s dynamic allocation. The value is no longer needed
    /// once HLSL for the access has been generated.
    ///
    /// [`Storage`]: crate::AddressSpace::Storage
    /// [`SubAccess`]: storage::SubAccess
    /// [`Access`]: crate::Expression::Access
    /// [`AccessIndex`]: crate::Expression::AccessIndex
    temp_access_chain: Vec<storage::SubAccess>,
    need_bake_expressions: back::NeedBakeExpressions,