D3D12: fix shader model comparison
D3D12_SHADER_MODEL is encoded as 0xMm, where M is the major version and m is the minor version. Dawn decodes D3D12_SHADER_MODEL into a custom shader model format Mm, which is a decimal value, so comparing the decoded decimal value directly against a D3D12_SHADER_MODEL enum value is meaningless.

Bug: dawn:426
Change-Id: I3eb9a2a1392307616a5ac4d0aa49790bcc363629
Reviewed-on: https://dawn-review.googlesource.com/c/dawn/+/27300
Reviewed-by: Corentin Wallez <cwallez@chromium.org>
Reviewed-by: Jiawei Shao <jiawei.shao@intel.com>
Commit-Queue: Xinghua Cao <xinghua.cao@intel.com>
parent 8575cb3ec7
commit 8c012e8796
```diff
@@ -98,7 +98,7 @@ namespace dawn_native { namespace d3d12 {
         D3D12_FEATURE_DATA_D3D12_OPTIONS4 featureData4 = {};
         if (SUCCEEDED(adapter.GetDevice()->CheckFeatureSupport(
                 D3D12_FEATURE_D3D12_OPTIONS4, &featureData4, sizeof(featureData4)))) {
-            info.supportsShaderFloat16 = info.shaderModel >= D3D_SHADER_MODEL_6_2 &&
+            info.supportsShaderFloat16 = driverShaderModel >= D3D_SHADER_MODEL_6_2 &&
                                          featureData4.Native16BitShaderOpsSupported;
         }
```