naga/back/hlsl/mod.rs

/*!
Backend for [HLSL][hlsl] (High-Level Shading Language).

# Supported shader model versions:
- 5.0
- 5.1
- 6.0

# Layout of values in `uniform` buffers

WGSL's ["Internal Layout of Values"][ilov] rules specify how each WGSL
type should be stored in `uniform` and `storage` buffers. The HLSL we
generate must access values in that form, even when it is not what
HLSL would use normally.

Matching the WGSL memory layout is a concern only for `uniform`
variables. WGSL `storage` buffers are translated as HLSL
`ByteAddressBuffers`, for which we generate `Load` and `Store` method
calls with explicit byte offsets. WGSL pipeline inputs must be scalars
or vectors; they cannot be matrices, which are where the interesting
problems arise. However, when an affected type appears in a struct
definition, the transformations described here are applied without
consideration of where the struct is used.

Access to storage buffers is implemented in `storage.rs`. Access to
uniform buffers is implemented where applicable in `writer.rs`.

## Row- and column-major ordering for matrices

WGSL specifies that matrices in uniform buffers are stored in
column-major order. This matches HLSL's default, so one might expect
things to be straightforward. Unfortunately, WGSL and HLSL disagree on
what indexing a matrix means: in WGSL, `m[i]` retrieves the `i`'th
*column* of `m`, whereas in HLSL it retrieves the `i`'th *row*. We
want to avoid translating `m[i]` into some complicated reassembly of a
vector from individually fetched components, so this is a problem.

However, with a bit of trickery, it is possible to use HLSL's `m[i]`
as the translation of WGSL's `m[i]`:

- We declare all matrices in uniform buffers in HLSL with the
  `row_major` qualifier, and transpose the row and column counts: a
  WGSL `mat3x4<f32>`, say, becomes an HLSL `row_major float3x4`. (Note
  that WGSL and HLSL type names put the row and column in reverse
  order.) Since the HLSL type is the transpose of how WebGPU directs
  the user to store the data, HLSL will load all matrices transposed.

- Since matrices are transposed, an HLSL indexing expression retrieves
  the "columns" of the intended WGSL value, as desired.

- For vector-matrix multiplication, since `mul(transpose(m), v)` is
  equivalent to `mul(v, m)` (note the reversal of the arguments), and
  `mul(v, transpose(m))` is equivalent to `mul(m, v)`, we can
  translate WGSL `m * v` and `v * m` to HLSL by simply reversing the
  arguments to `mul`, as the sketch below illustrates.

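Putting these pieces together, a uniform matrix and its uses translate
roughly as follows. This is an illustrative sketch rather than verbatim
backend output:

```ignore
// WGSL source, with `m` in a uniform buffer:
//     m: mat3x4<f32>
//     let c: vec4<f32> = m[i];    // the i'th column
//     let a: vec4<f32> = m * v;   // v: vec3<f32>
//     let b: vec3<f32> = w * m;   // w: vec4<f32>

// HLSL translation:
row_major float3x4 m;   // declared transposed, so HLSL loads the transpose of the WGSL value
float4 c = m[i];        // HLSL row i of the transpose is WGSL column i
float4 a = mul(v, m);   // WGSL `m * v`, arguments reversed
float3 b = mul(m, w);   // WGSL `w * m`, arguments reversed
```
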
## Padding in two-row matrices

An HLSL `row_major floatKx2` matrix has padding between its rows that
the WGSL `matKx2<f32>` matrix it represents does not. HLSL stores all
matrix rows [aligned on 16-byte boundaries][16bb], whereas WGSL says
that the columns of a `matKx2<f32>` need only be [aligned as required
for `vec2<f32>`][ilov], which is [eight-byte alignment][8bb].

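As a worked example of the mismatch, the three `vec2` pieces of a
`mat3x2<f32>` (WGSL columns; HLSL rows, after the transposition described
earlier) start at these byte offsets:

```ignore
//                       piece 0   piece 1   piece 2
// WGSL uniform layout:     0         8        16
// HLSL cbuffer layout:     0        16        32
```
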
To compensate for this, any time a `matKx2<f32>` appears in a WGSL
`uniform` value or as part of a struct/array, we actually emit `K`
separate `float2` members, and assemble/disassemble the matrix from its
columns (in WGSL; rows in HLSL) upon load and store.

For example, the following WGSL struct type:

```ignore
struct Baz {
    m: mat3x2<f32>,
}
```

is rendered as the HLSL struct type:

```ignore
struct Baz {
    float2 m_0; float2 m_1; float2 m_2;
};
```

The `wrapped_struct_matrix` functions in `help.rs` generate HLSL
helper functions to access such members, converting between the stored
form and the HLSL matrix types appropriately. For example, for reading
the member `m` of the `Baz` struct above, we emit:

```ignore
float3x2 GetMatmOnBaz(Baz obj) {
    return float3x2(obj.m_0, obj.m_1, obj.m_2);
}
```

We also emit an analogous `Set` function, as well as functions for
accessing individual columns by dynamic index.

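As a sketch only (the real helpers are generated in `help.rs`, and their
exact names, signatures, and bodies may differ), the setter and a
dynamic-index accessor have roughly this shape:

```ignore
void SetMatmOnBaz(inout Baz obj, float3x2 mat) {
    obj.m_0 = mat[0];
    obj.m_1 = mat[1];
    obj.m_2 = mat[2];
}

float2 GetMatmOnBazAtIndex(Baz obj, uint idx) {
    switch (idx) {
        case 0:  { return obj.m_0; }
        case 1:  { return obj.m_1; }
        default: { return obj.m_2; }
    }
}
```
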
## Sampler Handling

Due to limitations in how sampler heaps work in D3D12, we need to access samplers
through a layer of indirection. Instead of directly binding samplers, we bind the entire
sampler heap as both a standard and a comparison sampler heap. We then use a sampler
index buffer for each bind group. This buffer is accessed in the shader to get the actual
sampler index within the heap. See the wgpu_hal dx12 backend documentation for more
information.

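In rough terms, sampling goes through two levels of indirection: the bind
group's sampler index buffer yields an index into the globally bound sampler
heap. The names and register assignments below are made up purely to
illustrate the shape of the generated code:

```ignore
// Bound once for the whole pipeline:
SamplerState nagaSamplerHeap[2048] : register(s0, space0);
SamplerComparisonState nagaComparisonSamplerHeap[2048] : register(s0, space1);
// One sampler index buffer per bind group:
StructuredBuffer<uint> nagaGroup0SamplerIndexArray : register(t0, space255);

// Then, given some texture `tex` and coordinates `uv`, sampling through the
// sampler at index 3 of bind group 0's index buffer looks like:
float4 color = tex.Sample(nagaSamplerHeap[nagaGroup0SamplerIndexArray[3]], uv);
```
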
# External textures

Support for [`crate::ImageClass::External`] textures is implemented by lowering
each external texture global variable to 3 `Texture2D<float4>`s and a `cbuffer`
of type `NagaExternalTextureParams`. This provides up to 3 planes of texture
data (for example, single-plane RGBA, or separate Y, Cb, and Cr planes) and a
parameters buffer describing how to handle those planes correctly. The bind
target to use for each of these globals is specified via
[`Options::external_texture_binding_map`].

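For a single external texture global, the emitted declarations therefore look
something like the following sketch (the names and registers are illustrative;
the real ones come from [`Options::external_texture_binding_map`] and the
namer):

```ignore
Texture2D<float4> ext_tex_plane_0 : register(t0);
Texture2D<float4> ext_tex_plane_1 : register(t1);
Texture2D<float4> ext_tex_plane_2 : register(t2);
cbuffer ext_tex_params_block : register(b0) {
    NagaExternalTextureParams ext_tex_params;
}
```
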
External textures are supported by WGSL's `textureDimensions()`,
`textureLoad()`, and `textureSampleBaseClampToEdge()` built-in functions. These
are implemented using helper functions. See the following functions for how
these are generated:
 * `Writer::write_wrapped_image_query_function`
 * `Writer::write_wrapped_image_load_function`
 * `Writer::write_wrapped_image_sample_function`

Ideally the set of global variables could be wrapped in a single struct that
could conveniently be passed around. But, alas, HLSL does not allow structs to
have `Texture2D` members. Fortunately, however, external textures can only be
used as arguments to either built-in or user-defined functions. We therefore
expand any external texture function argument to four consecutive arguments (3
textures and the params struct) when declaring user-defined functions, and
ensure our built-in function implementations take the same arguments. Then,
whenever we need to emit an external texture in `Writer::write_expr`, which
fortunately can only ever be for a global variable or function argument, we
simply emit the variable name of each of the three textures and the parameters
struct in a comma-separated list. This won't win any awards for elegance, but
it works for our purposes.

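As a sketch of the argument expansion (again with illustrative names), a
user-defined WGSL function taking a `texture_external` parameter is declared
and called along these lines:

```ignore
// WGSL:
//     fn sample_it(t: texture_external, coords: vec2<f32>) -> vec4<f32>

// HLSL: the single external-texture parameter becomes four parameters...
float4 sample_it(
    Texture2D<float4> t_plane_0,
    Texture2D<float4> t_plane_1,
    Texture2D<float4> t_plane_2,
    NagaExternalTextureParams t_params,
    float2 coords
) { /* ... */ }

// ...and each call site passes the corresponding globals in a comma-separated list:
float4 result = sample_it(ext_tex_plane_0, ext_tex_plane_1,
                          ext_tex_plane_2, ext_tex_params, coords);
```
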
[hlsl]: https://docs.microsoft.com/en-us/windows/win32/direct3dhlsl/dx-graphics-hlsl
[ilov]: https://gpuweb.github.io/gpuweb/wgsl/#internal-value-layout
[16bb]: https://github.com/microsoft/DirectXShaderCompiler/wiki/Buffer-Packing#constant-buffer-packing
[8bb]: https://gpuweb.github.io/gpuweb/wgsl/#alignment-and-size
*/

mod conv;
mod help;
mod keywords;
mod ray;
mod storage;
mod writer;

use alloc::{string::String, vec::Vec};
use core::fmt::Error as FmtError;

use thiserror::Error;

use crate::{back, ir, proc};

/// Direct3D 12 binding information for a global variable.
///
/// This type provides the HLSL-specific information Naga needs to declare and
/// access an HLSL global variable that cannot be derived from the `Module`
/// itself.
///
/// An HLSL global variable declaration includes details that the Direct3D API
/// will use to refer to it. For example:
///
///    RWByteAddressBuffer s_sasm : register(u0, space2);
///
/// This defines a global `s_sasm` that a Direct3D root signature would refer to
/// as register `0` in register space `2` in a `UAV` descriptor range. Naga can
/// infer the register's descriptor range type from the variable's address class
/// (writable [`Storage`] variables are implemented by Direct3D Unordered Access
/// Views, the `u` register type), but the register number and register space
/// must be supplied by the user.
///
/// The [`back::hlsl::Options`] structure provides `BindTarget`s for various
/// situations in which Naga may need to generate an HLSL global variable, like
/// [`binding_map`] for Naga global variables, or [`immediates_target`] for
/// a module's sole [`Immediate`] variable. See those fields' documentation
/// for details.
///
/// [`Storage`]: crate::ir::AddressSpace::Storage
/// [`back::hlsl::Options`]: Options
/// [`binding_map`]: Options::binding_map
/// [`immediates_target`]: Options::immediates_target
/// [`Immediate`]: crate::ir::AddressSpace::Immediate
#[derive(Copy, Clone, Debug, Default, PartialEq, Eq, Hash)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
pub struct BindTarget {
    pub space: u8,
    /// For regular bindings this is the register number.
    ///
    /// For sampler bindings, this is the index to use into the bind group's sampler index buffer.
    pub register: u32,
    /// If the binding is an unsized binding array, this overrides the size.
    pub binding_array_size: Option<u32>,
    /// This is the index in the buffer at [`Options::dynamic_storage_buffer_offsets_targets`].
    pub dynamic_storage_buffer_offsets_index: Option<u32>,
    /// This is a hint that we need to restrict indexing of vectors, matrices and arrays.
    ///
    /// If [`Options::restrict_indexing`] is also `true`, we will restrict indexing.
    #[cfg_attr(any(feature = "serialize", feature = "deserialize"), serde(default))]
    pub restrict_indexing: bool,
}

#[derive(Clone, Debug, Default, PartialEq, Eq, Hash)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
/// BindTarget for dynamic storage buffer offsets
pub struct OffsetsBindTarget {
    pub space: u8,
    pub register: u32,
    pub size: u32,
}

#[cfg(feature = "deserialize")]
#[derive(serde::Deserialize)]
struct BindingMapSerialization {
    resource_binding: crate::ResourceBinding,
    bind_target: BindTarget,
}

#[cfg(feature = "deserialize")]
fn deserialize_binding_map<'de, D>(deserializer: D) -> Result<BindingMap, D::Error>
where
    D: serde::Deserializer<'de>,
{
    use serde::Deserialize;

    let vec = Vec::<BindingMapSerialization>::deserialize(deserializer)?;
    let mut map = BindingMap::default();
    for item in vec {
        map.insert(item.resource_binding, item.bind_target);
    }
    Ok(map)
}

// Using `BTreeMap` instead of `HashMap` so that the map itself can be hashed.
pub type BindingMap = alloc::collections::BTreeMap<crate::ResourceBinding, BindTarget>;

/// An HLSL shader model version.
#[allow(non_snake_case, non_camel_case_types)]
#[derive(Copy, Clone, Debug, Hash, Eq, PartialEq, PartialOrd)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
pub enum ShaderModel {
    V5_0,
    V5_1,
    V6_0,
    V6_1,
    V6_2,
    V6_3,
    V6_4,
    V6_5,
    V6_6,
    V6_7,
}

impl ShaderModel {
    pub const fn to_str(self) -> &'static str {
        match self {
            Self::V5_0 => "5_0",
            Self::V5_1 => "5_1",
            Self::V6_0 => "6_0",
            Self::V6_1 => "6_1",
            Self::V6_2 => "6_2",
            Self::V6_3 => "6_3",
            Self::V6_4 => "6_4",
            Self::V6_5 => "6_5",
            Self::V6_6 => "6_6",
            Self::V6_7 => "6_7",
        }
    }
}

impl crate::ShaderStage {
    pub const fn to_hlsl_str(self) -> &'static str {
        match self {
            Self::Vertex => "vs",
            Self::Fragment => "ps",
            Self::Compute => "cs",
            Self::Task => "as",
            Self::Mesh => "ms",
        }
    }
}

impl crate::ImageDimension {
    const fn to_hlsl_str(self) -> &'static str {
        match self {
            Self::D1 => "1D",
            Self::D2 => "2D",
            Self::D3 => "3D",
            Self::Cube => "Cube",
        }
    }
}

#[derive(Clone, Copy, Debug, Hash, Eq, Ord, PartialEq, PartialOrd)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
pub struct SamplerIndexBufferKey {
    pub group: u32,
}

#[derive(Clone, Debug, Hash, PartialEq, Eq)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
#[cfg_attr(feature = "deserialize", serde(default))]
pub struct SamplerHeapBindTargets {
    pub standard_samplers: BindTarget,
    pub comparison_samplers: BindTarget,
}

impl Default for SamplerHeapBindTargets {
    fn default() -> Self {
        Self {
            standard_samplers: BindTarget {
                space: 0,
                register: 0,
                binding_array_size: None,
                dynamic_storage_buffer_offsets_index: None,
                restrict_indexing: false,
            },
            comparison_samplers: BindTarget {
                space: 1,
                register: 0,
                binding_array_size: None,
                dynamic_storage_buffer_offsets_index: None,
                restrict_indexing: false,
            },
        }
    }
}

#[cfg(feature = "deserialize")]
#[derive(serde::Deserialize)]
struct SamplerIndexBufferBindingSerialization {
    group: u32,
    bind_target: BindTarget,
}

#[cfg(feature = "deserialize")]
fn deserialize_sampler_index_buffer_bindings<'de, D>(
    deserializer: D,
) -> Result<SamplerIndexBufferBindingMap, D::Error>
where
    D: serde::Deserializer<'de>,
{
    use serde::Deserialize;

    let vec = Vec::<SamplerIndexBufferBindingSerialization>::deserialize(deserializer)?;
    let mut map = SamplerIndexBufferBindingMap::default();
    for item in vec {
        map.insert(
            SamplerIndexBufferKey { group: item.group },
            item.bind_target,
        );
    }
    Ok(map)
}

// We use a BTreeMap here so that we can hash it.
pub type SamplerIndexBufferBindingMap =
    alloc::collections::BTreeMap<SamplerIndexBufferKey, BindTarget>;

#[cfg(feature = "deserialize")]
#[derive(serde::Deserialize)]
struct DynamicStorageBufferOffsetTargetSerialization {
    index: u32,
    bind_target: OffsetsBindTarget,
}

#[cfg(feature = "deserialize")]
fn deserialize_storage_buffer_offsets<'de, D>(
    deserializer: D,
) -> Result<DynamicStorageBufferOffsetsTargets, D::Error>
where
    D: serde::Deserializer<'de>,
{
    use serde::Deserialize;

    let vec = Vec::<DynamicStorageBufferOffsetTargetSerialization>::deserialize(deserializer)?;
    let mut map = DynamicStorageBufferOffsetsTargets::default();
    for item in vec {
        map.insert(item.index, item.bind_target);
    }
    Ok(map)
}

pub type DynamicStorageBufferOffsetsTargets = alloc::collections::BTreeMap<u32, OffsetsBindTarget>;

/// HLSL binding information for a Naga [`External`] image global variable.
///
/// See the module documentation's section on [External textures][mod] for details.
///
/// [`External`]: crate::ir::ImageClass::External
/// [mod]: #external-textures
#[derive(Copy, Clone, Debug, Default, PartialEq, Eq, Hash)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
pub struct ExternalTextureBindTarget {
    /// HLSL binding information for the individual plane textures.
    ///
    /// Each of these should refer to an HLSL `Texture2D<float4>` holding one
    /// plane of data for the external texture. The exact meaning of each plane
    /// varies at runtime depending on where the external texture's data
    /// originated.
    pub planes: [BindTarget; 3],

    /// HLSL binding information for a buffer holding the sampling parameters.
    ///
    /// This should refer to a cbuffer of type `NagaExternalTextureParams`,
    /// which the code Naga generates for `textureSampleBaseClampToEdge`
    /// consults to decide how to combine the data in [`planes`] to get the
    /// result required by the spec.
    ///
    /// [`planes`]: Self::planes
    pub params: BindTarget,
}

#[cfg(feature = "deserialize")]
#[derive(serde::Deserialize)]
struct ExternalTextureBindingMapSerialization {
    resource_binding: crate::ResourceBinding,
    bind_target: ExternalTextureBindTarget,
}

#[cfg(feature = "deserialize")]
fn deserialize_external_texture_binding_map<'de, D>(
    deserializer: D,
) -> Result<ExternalTextureBindingMap, D::Error>
where
    D: serde::Deserializer<'de>,
{
    use serde::Deserialize;

    let vec = Vec::<ExternalTextureBindingMapSerialization>::deserialize(deserializer)?;
    let mut map = ExternalTextureBindingMap::default();
    for item in vec {
        map.insert(item.resource_binding, item.bind_target);
    }
    Ok(map)
}

pub type ExternalTextureBindingMap =
    alloc::collections::BTreeMap<crate::ResourceBinding, ExternalTextureBindTarget>;

/// Shorthand result used internally by the backend
type BackendResult = Result<(), Error>;

#[derive(Clone, Debug, PartialEq, thiserror::Error)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
pub enum EntryPointError {
    #[error("mapping of {0:?} is missing")]
    MissingBinding(crate::ResourceBinding),
}

/// Configuration used in the [`Writer`].
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
#[cfg_attr(feature = "deserialize", serde(default))]
pub struct Options {
    /// The HLSL shader model to be used.
    pub shader_model: ShaderModel,

    /// HLSL binding information for each Naga global variable.
    ///
    /// This maps Naga [`GlobalVariable`]'s [`ResourceBinding`]s to a
    /// [`BindTarget`] specifying its register number and space, along with
    /// other details necessary to generate a full HLSL declaration for it,
    /// or to access its value.
    ///
    /// This must provide a [`BindTarget`] for every [`GlobalVariable`] in the
    /// [`Module`] that has a [`binding`].
    ///
    /// [`GlobalVariable`]: crate::ir::GlobalVariable
    /// [`ResourceBinding`]: crate::ir::ResourceBinding
    /// [`Module`]: crate::ir::Module
    /// [`binding`]: crate::ir::GlobalVariable::binding
    #[cfg_attr(
        feature = "deserialize",
        serde(deserialize_with = "deserialize_binding_map")
    )]
    pub binding_map: BindingMap,

    /// Don't panic on missing bindings; instead, generate whatever HLSL we can.
    pub fake_missing_bindings: bool,
    /// Add special constants to adjust `SV_VertexID` and `SV_InstanceID`, to
    /// make them work like in Vulkan/Metal, with the help of the host.
    pub special_constants_binding: Option<BindTarget>,

    /// HLSL binding information for the [`Immediate`] global, if present.
    ///
    /// If a module contains a global in the [`Immediate`] address space, the
    /// `dx12` backend stores its value directly in the root signature as a
    /// series of [`D3D12_ROOT_PARAMETER_TYPE_32BIT_CONSTANTS`], whose binding
    /// information is given here.
    ///
    /// [`Immediate`]: crate::ir::AddressSpace::Immediate
    /// [`D3D12_ROOT_PARAMETER_TYPE_32BIT_CONSTANTS`]: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ne-d3d12-d3d12_root_parameter_type
    pub immediates_target: Option<BindTarget>,

    /// HLSL binding information for the sampler heap and comparison sampler heap.
    pub sampler_heap_target: SamplerHeapBindTargets,

    /// Mapping of each bind group's sampler index buffer to a bind target.
    #[cfg_attr(
        feature = "deserialize",
        serde(deserialize_with = "deserialize_sampler_index_buffer_bindings")
    )]
    pub sampler_buffer_binding_map: SamplerIndexBufferBindingMap,
    /// Bind target for dynamic storage buffer offsets
    #[cfg_attr(
        feature = "deserialize",
        serde(deserialize_with = "deserialize_storage_buffer_offsets")
    )]
    pub dynamic_storage_buffer_offsets_targets: DynamicStorageBufferOffsetsTargets,
    /// HLSL binding information for [`External`] image global variables.
    ///
    /// See [`ExternalTextureBindTarget`] for details.
    ///
    /// [`External`]: crate::ir::ImageClass::External
    #[cfg_attr(
        feature = "deserialize",
        serde(deserialize_with = "deserialize_external_texture_binding_map")
    )]
    pub external_texture_binding_map: ExternalTextureBindingMap,

    /// Should workgroup variables be zero initialized (by polyfilling)?
    pub zero_initialize_workgroup_memory: bool,
    /// Should we restrict indexing of vectors, matrices and arrays?
    pub restrict_indexing: bool,
    /// If set, loops will have code injected into them, forcing the compiler
    /// to think the number of iterations is bounded.
    pub force_loop_bounding: bool,
    /// If set, ray queries will get a variable to track their state, to
    /// prevent misuse.
    pub ray_query_initialization_tracking: bool,
}

impl Default for Options {
    fn default() -> Self {
        Options {
            shader_model: ShaderModel::V5_1,
            binding_map: BindingMap::default(),
            fake_missing_bindings: true,
            special_constants_binding: None,
            sampler_heap_target: SamplerHeapBindTargets::default(),
            sampler_buffer_binding_map: alloc::collections::BTreeMap::default(),
            immediates_target: None,
            dynamic_storage_buffer_offsets_targets: alloc::collections::BTreeMap::new(),
            external_texture_binding_map: ExternalTextureBindingMap::default(),
            zero_initialize_workgroup_memory: true,
            restrict_indexing: true,
            force_loop_bounding: true,
            ray_query_initialization_tracking: true,
        }
    }
}

impl Options {
    fn resolve_resource_binding(
        &self,
        res_binding: &crate::ResourceBinding,
    ) -> Result<BindTarget, EntryPointError> {
        match self.binding_map.get(res_binding) {
            Some(target) => Ok(*target),
            None if self.fake_missing_bindings => Ok(BindTarget {
                space: res_binding.group as u8,
                register: res_binding.binding,
                binding_array_size: None,
                dynamic_storage_buffer_offsets_index: None,
                restrict_indexing: false,
            }),
            None => Err(EntryPointError::MissingBinding(*res_binding)),
        }
    }

    fn resolve_external_texture_resource_binding(
        &self,
        res_binding: &crate::ResourceBinding,
    ) -> Result<ExternalTextureBindTarget, EntryPointError> {
        match self.external_texture_binding_map.get(res_binding) {
            Some(target) => Ok(*target),
            None if self.fake_missing_bindings => {
                let fake = BindTarget {
                    space: res_binding.group as u8,
                    register: res_binding.binding,
                    binding_array_size: None,
                    dynamic_storage_buffer_offsets_index: None,
                    restrict_indexing: false,
                };
                Ok(ExternalTextureBindTarget {
                    planes: [fake, fake, fake],
                    params: fake,
                })
            }
            None => Err(EntryPointError::MissingBinding(*res_binding)),
        }
    }
}

/// Reflection info for entry point names.
#[derive(Default)]
pub struct ReflectionInfo {
    /// Mapping of the entry point names.
    ///
    /// Each item in the array corresponds to an entry point index. The real
    /// entry point name may be different if one of the reserved words is used.
    ///
    /// Note: Some entry points may fail translation because of missing bindings.
    pub entry_point_names: Vec<Result<String, EntryPointError>>,
}

/// A subset of options that are meant to be changed per pipeline.
#[derive(Debug, Default, Clone)]
#[cfg_attr(feature = "serialize", derive(serde::Serialize))]
#[cfg_attr(feature = "deserialize", derive(serde::Deserialize))]
#[cfg_attr(feature = "deserialize", serde(default))]
pub struct PipelineOptions {
    /// The entry point to write.
    ///
    /// Entry points are identified by a shader stage specification,
    /// and a name.
    ///
    /// If `None`, all entry points will be written. If `Some` and the entry
    /// point is not found, an error will be thrown while writing.
    pub entry_point: Option<(ir::ShaderStage, String)>,
}

#[derive(Error, Debug)]
pub enum Error {
    #[error(transparent)]
    IoError(#[from] FmtError),
    #[error("A scalar with an unsupported width was requested: {0:?}")]
    UnsupportedScalar(crate::Scalar),
    #[error("{0}")]
    Unimplemented(String), // TODO: Error used only during development
    #[error("{0}")]
    Custom(String),
    #[error("overrides should not be present at this stage")]
    Override,
    #[error(transparent)]
    ResolveArraySizeError(#[from] proc::ResolveArraySizeError),
    #[error("entry point with stage {0:?} and name '{1}' not found")]
    EntryPointNotFound(ir::ShaderStage, String),
    #[error("requires shader model {1:?} for reason: {0}")]
    ShaderModelTooLow(String, ShaderModel),
}

#[derive(PartialEq, Eq, Hash)]
enum WrappedType {
    ZeroValue(help::WrappedZeroValue),
    ArrayLength(help::WrappedArrayLength),
    ImageSample(help::WrappedImageSample),
    ImageQuery(help::WrappedImageQuery),
    ImageLoad(help::WrappedImageLoad),
    ImageLoadScalar(crate::Scalar),
    Constructor(help::WrappedConstructor),
    StructMatrixAccess(help::WrappedStructMatrixAccess),
    MatCx2(help::WrappedMatCx2),
    Math(help::WrappedMath),
    UnaryOp(help::WrappedUnaryOp),
    BinaryOp(help::WrappedBinaryOp),
    Cast(help::WrappedCast),
}

#[derive(Default)]
struct Wrapped {
    types: crate::FastHashSet<WrappedType>,
    /// If true, the sampler heaps have been written out.
    sampler_heaps: bool,
    /// Mapping from `SamplerIndexBufferKey` to the name the namer returned.
    sampler_index_buffers: crate::FastHashMap<SamplerIndexBufferKey, String>,
}

impl Wrapped {
    fn insert(&mut self, r#type: WrappedType) -> bool {
        self.types.insert(r#type)
    }

    fn clear(&mut self) {
        self.types.clear();
    }
}

/// A fragment entry point to be considered when generating HLSL for the output interface of vertex
/// entry points.
///
/// This is provided as an optional parameter to [`Writer::write`].
///
/// If this is provided, vertex outputs will be removed if they are not inputs of this fragment
/// entry point. This is necessary for generating correct HLSL when some of the vertex shader
/// outputs are not consumed by the fragment shader.
pub struct FragmentEntryPoint<'a> {
    module: &'a crate::Module,
    func: &'a crate::Function,
}

impl<'a> FragmentEntryPoint<'a> {
    /// Returns `None` if the entry point with the provided name can't be found or isn't a fragment
    /// entry point.
    pub fn new(module: &'a crate::Module, ep_name: &'a str) -> Option<Self> {
        module
            .entry_points
            .iter()
            .find(|ep| ep.name == ep_name)
            .filter(|ep| ep.stage == crate::ShaderStage::Fragment)
            .map(|ep| Self {
                module,
                func: &ep.function,
            })
    }
}

pub struct Writer<'a, W> {
    out: W,
    names: crate::FastHashMap<proc::NameKey, String>,
    namer: proc::Namer,
    /// HLSL backend options
    options: &'a Options,
    /// Per-stage backend options
    pipeline_options: &'a PipelineOptions,
    /// Information about entry point arguments and result types.
    entry_point_io: crate::FastHashMap<usize, writer::EntryPointInterface>,
    /// Set of expressions that have associated temporary variables
    named_expressions: crate::NamedExpressions,
    wrapped: Wrapped,
    written_committed_intersection: bool,
    written_candidate_intersection: bool,
    continue_ctx: back::continue_forward::ContinueCtx,

    /// A reference to some part of a global variable, lowered to a series of
    /// byte offset calculations.
    ///
    /// See the [`storage`] module for background on why we need this.
    ///
    /// Each [`SubAccess`] in the vector is a lowering of some [`Access`] or
    /// [`AccessIndex`] expression to the level of byte strides and offsets. See
    /// [`SubAccess`] for details.
    ///
    /// This field is a member of [`Writer`] solely to allow re-use of
    /// the `Vec`'s dynamic allocation. The value is no longer needed
    /// once HLSL for the access has been generated.
    ///
    /// [`Storage`]: crate::AddressSpace::Storage
    /// [`SubAccess`]: storage::SubAccess
    /// [`Access`]: crate::Expression::Access
    /// [`AccessIndex`]: crate::Expression::AccessIndex
    temp_access_chain: Vec<storage::SubAccess>,
    need_bake_expressions: back::NeedBakeExpressions,
758}