Ben Clayton 1a1b5278d5 tint/transform: Inline HLSL uniform / storage buffers
Change the DecomposeMemoryAccess transform to behave more like the DirectVariableAccess transform: it now inlines the access to the buffer variable into the load / store helper functions instead of passing the array down.

This avoids large array copies observed with FXC, which can have *severe* performance costs.
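
For illustration, a minimal hand-written sketch of the difference for a uniform buffer access. This is not actual Tint output; the names (cbuffer_u, u, load_vec4, u_load_vec4) and helper signatures are assumed:

cbuffer cbuffer_u : register(b0, space0) {
  uint4 u[4];
};

// Before: the decomposed uniform buffer array is passed by value into the
// helper, which FXC lowers to a copy of the whole uint4 array at each call.
float4 load_vec4(uint4 arr[4], uint offset) {
  const uint index = offset / 16u;
  return asfloat(arr[index]);
}

float4 old_style() {
  return load_vec4(u, 16u);
}

// After: the helper reads the global cbuffer array directly, so no array is
// copied at the call site.
float4 u_load_vec4(uint offset) {
  const uint index = offset / 16u;
  return asfloat(u[index]);
}

float4 new_style() {
  return u_load_vec4(16u);
}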

Fixed: tint:1819
Change-Id: I52eb3f908813f72ab9da446743e24a2637158309
Reviewed-on: https://dawn-review.googlesource.com/c/dawn/+/121460
Kokoro: Kokoro <noreply+kokoro@google.com>
Auto-Submit: Ben Clayton <bclayton@google.com>
Reviewed-by: James Price <jrprice@google.com>
Commit-Queue: James Price <jrprice@google.com>
2023-02-24 17:16:55 +00:00

HLSL

RWByteAddressBuffer sb_rw : register(u0, space0);

struct atomic_compare_exchange_weak_ret_type {
  int old_value;
  bool exchanged;
};

atomic_compare_exchange_weak_ret_type sb_rwatomicCompareExchangeWeak(uint offset, int compare, int value) {
  atomic_compare_exchange_weak_ret_type result = (atomic_compare_exchange_weak_ret_type)0;
  sb_rw.InterlockedCompareExchange(offset, compare, value, result.old_value);
  result.exchanged = result.old_value == compare;
  return result;
}

void atomicCompareExchangeWeak_1bd40a() {
  atomic_compare_exchange_weak_ret_type res = sb_rwatomicCompareExchangeWeak(0u, 1, 1);
}

void fragment_main() {
  atomicCompareExchangeWeak_1bd40a();
  return;
}

[numthreads(1, 1, 1)]
void compute_main() {
  atomicCompareExchangeWeak_1bd40a();
  return;
}
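
Note that in the output above the helper reads the global sb_rw directly; nothing is passed down. For comparison, a rough sketch of what the pre-change output for the same test might have looked like, with the buffer handed to the helper as a parameter (the helper name and parameter names here are assumed, not taken from real Tint output):

atomic_compare_exchange_weak_ret_type tint_atomicCompareExchangeWeak(RWByteAddressBuffer sb, uint offset, int compare, int value) {
  atomic_compare_exchange_weak_ret_type result = (atomic_compare_exchange_weak_ret_type)0;
  // The buffer is a parameter here rather than a direct reference to the global.
  sb.InterlockedCompareExchange(offset, compare, value, result.old_value);
  result.exchanged = result.old_value == compare;
  return result;
}

void atomicCompareExchangeWeak_1bd40a_old() {
  atomic_compare_exchange_weak_ret_type res = tint_atomicCompareExchangeWeak(sb_rw, 0u, 1, 1);
}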