Introduce SubresourceStorage (1/N)

This CL adds the start of the implementation of a SubresourceStorage<T>
container class that stores per-subresource state in a compressed
fashion. Only the getter methods and the Update() modifying method are
added because they are the first step necessary to test the behavior of
SubresourceStorage.
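
As an illustration, typical usage of the container looks like the sketch
below. This is not code from this CL: wgpu::TextureUsage is just one
possible T, and `range`, `layerCount` and `levelCount` are placeholders.

    SubresourceStorage<wgpu::TextureUsage> usages(
        Aspect::Color, layerCount, levelCount, wgpu::TextureUsage::None);

    // Record a usage over a range; only as much of the storage as needed
    // gets decompressed to remember it.
    usages.Update(range, [](const SubresourceRange&, wgpu::TextureUsage* data) {
        *data |= wgpu::TextureUsage::Sampled;
    });

    // Observe the state; the closure runs once per constant-valued range.
    usages.Iterate([](const SubresourceRange& r, const wgpu::TextureUsage& data) {
        // All subresources in `r` share the value `data`.
    });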

Subsequent CLs will:
 - add the Merge() operation
 - move the per-aspect storage to be inlined and avoid allocation of
mData and mLayerCompressed if possible
 - use the container where applicable in dawn_native
 - (maybe) move clear-state tracking into the backends as part of barrier
tracking

Bug: dawn:441

Change-Id: Ic93e5af16dd705b260424f05e4dc3e0c9f6fbd0a
Reviewed-on: https://dawn-review.googlesource.com/c/dawn/+/34464
Commit-Queue: Corentin Wallez <cwallez@chromium.org>
Reviewed-by: Ben Clayton <bclayton@google.com>


@@ -265,6 +265,7 @@ source_set("dawn_native_sources") {
"StagingBuffer.h",
"Subresource.cpp",
"Subresource.h",
"SubresourceStorage.h",
"Surface.cpp",
"Surface.h",
"SwapChain.cpp",


@@ -152,6 +152,7 @@ target_sources(dawn_native PRIVATE
"StagingBuffer.h"
"Subresource.cpp"
"Subresource.h"
"SubresourceStorage.h"
"Surface.cpp"
"Surface.h"
"SwapChain.cpp"


@@ -0,0 +1,419 @@
// Copyright 2020 The Dawn Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#ifndef DAWNNATIVE_SUBRESOURCESTORAGE_H_
#define DAWNNATIVE_SUBRESOURCESTORAGE_H_
#include "common/Assert.h"
#include "dawn_native/EnumMaskIterator.h"
#include "dawn_native/Subresource.h"
#include <array>
#include <limits>
#include <memory>
#include <vector>
namespace dawn_native {
// SubresourceStorage<T> acts like a simple map from subresource (aspect, layer, level) to a
// value of type T except that it tries to compress similar subresources so that algorithms
// can act on a whole range of subresources at once if they have the same state.
//
// For example a very common case to optimize for is the tracking of the usage of texture
// subresources inside a render pass: the vast majority of texture views will select the whole
// texture while a small minority will select a sub-range. We want to optimize the common case
// by setting and checking a single "usage" value when a full subresource is used but at the
// same time allow per-subresource data when needed.
//
// Another example is barrier tracking per-subresource in the backends: it will often happen
// that during texture upload each mip level will have a different "barrier state". However
// when the texture is fully uploaded and after it is used for sampling (with a full view) for
// the first time, the barrier state will likely be the same across all the subresources.
// That's why some form of "recompression" of subresource state must be possible.
//
// In order to keep the implementation details private and to avoid iterator-hell, this
// container uses a more functional approach of calling a closure on the interesting ranges.
// This is for example how to look at the state of all subresources.
//
// subresources.Iterate([](const SubresourceRange& range, const T& data) {
// // Do something with the knowledge that all the subresources in `range` have value
// // `data`.
// });
//
// SubresourceStorage internally tracks compression state per aspect and then per layer of each
// aspect. This means that a 2-aspect texture can have the following compression state:
//
// - Aspect 0 is fully compressed.
// - Aspect 1 is partially compressed:
// - Aspect 1 layer 3 is decompressed.
// - Aspect 1 layer 0-2 and 4-42 are compressed.
//
// A useful model to reason about SubresourceStorage is to represent it as a tree:
//
// - SubresourceStorage is the root.
// |-> Nodes 1 deep represent each aspect. If an aspect is compressed, its node doesn't have
// any children because the data is constant across all of the subtree.
// |-> Nodes 2 deep represent layers (for uncompressed aspects). If a layer is compressed,
// its node doesn't have any children because the data is constant across all of the
// subtree.
// |-> Nodes 3 deep represent individual mip levels (for uncompressed layers).
//
// The concept of recompression is the removal of all child nodes of a non-leaf node when the
// data is constant across them. Decompression is the addition of child nodes to a leaf node
// and copying of its data to all its children.
//
// The choice of having a second level of compression per array layer is to optimize for
// the common case where render or transfer operations update specific layers of a texture
// while the rest is untouched. It seems much less likely that there would be operations
// that touch the Nth mip of every array layer of a 2D array texture without touching the
// others.
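//
// As an illustration (with made-up values): starting from an aspect fully compressed to
// a value A, updating a single (layer, level) subresource to a value B leaves the tree
// as:
//
// - Aspect: decompressed
// |-> Layer 0: decompressed, levels [B, A, A, ...]
// |-> Layers 1 and up: compressed, value A
//
// A later Update() writing the same constant to every subresource would let each layer,
// and then the whole aspect, recompress back to a single value.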
//
// T must be a copyable type that supports equality comparison with ==.
//
// TODO(cwallez@chromium.org): Add the Merge() operation.
// TODO(cwallez@chromium.org): Inline the storage for aspects to avoid allocating when
// possible.
template <typename T>
class SubresourceStorage {
public:
// Creates the storage with the given "dimensions" and all subresources starting with the
// initial value.
SubresourceStorage(Aspect aspects,
uint32_t arrayLayerCount,
uint32_t mipLevelCount,
T initialValue = {});
// Returns the data for a single subresource. Note that the reference returned might be the
// same for multiple subresources.
const T& Get(Aspect aspect, uint32_t arrayLayer, uint32_t mipLevel) const;
// Given an iterateFunc that's a function or function-like object that can be called with
// arguments of type (const SubresourceRange& range, const T& data) and returns void,
// calls it with aggregate ranges if possible, such that each subresource is part of
// exactly one of the ranges iterateFunc is called with (and obviously data is the value
// stored for that subresource). For example:
//
// subresources.Iterate([&](const SubresourceRange& range, const T& data) {
// // Do something with range and data.
// });
template <typename F>
void Iterate(F&& iterateFunc) const;
// Given an updateFunc that's a function or function-like object that can be called with
// arguments of type (const SubresourceRange& range, T* data) and returns void,
// calls it with ranges that in aggregate form `range`, passing for each of the
// sub-ranges a pointer used to modify the value for that sub-range. For example:
//
// subresources.Update(view->GetRange(), [](const SubresourceRange&, T* data) {
// *data |= wgpu::TextureUsage::Stuff;
// });
//
// /!\ WARNING: updateFunc should never use `range` to compute the update to `data`,
// otherwise the code is likely to break when compression happens. `range` should only be
// used for side effects, like computing a Vulkan pipeline barrier.
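//
// For example (illustrative only, not code from dawn_native), this breaks that rule
// because the value written depends on how the storage happens to be compressed when
// updateFunc is called:
//
// subresources.Update(view->GetRange(), [](const SubresourceRange& range, T* data) {
// *data = static_cast<T>(range.baseMipLevel); // BAD: depends on compression.
// });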
template <typename F>
void Update(const SubresourceRange& range, F&& updateFunc);
// Other operations to consider:
//
// - Merge(Range, SubresourceStorage<U>, mergeFunc) that takes the values from the other
// storage and modifies the value of the current storage with it.
// - UpdateTo(Range, T) that updates the range to a constant value.
// Methods to query the internal state of SubresourceStorage for testing.
Aspect GetAspectsForTesting() const;
uint32_t GetArrayLayerCountForTesting() const;
uint32_t GetMipLevelCountForTesting() const;
bool IsAspectCompressedForTesting(Aspect aspect) const;
bool IsLayerCompressedForTesting(Aspect aspect, uint32_t layer) const;
private:
void DecompressAspect(uint32_t aspectIndex);
void RecompressAspect(uint32_t aspectIndex);
void DecompressLayer(uint32_t aspectIndex, uint32_t layer);
void RecompressLayer(uint32_t aspectIndex, uint32_t layer);
SubresourceRange GetFullLayerRange(Aspect aspect, uint32_t layer) const;
bool& LayerCompressed(uint32_t aspectIndex, uint32_t layerIndex);
bool LayerCompressed(uint32_t aspectIndex, uint32_t layerIndex) const;
T& Data(uint32_t aspectIndex, uint32_t layerIndex = 0, uint32_t levelIndex = 0);
const T& Data(uint32_t aspectIndex, uint32_t layerIndex = 0, uint32_t levelIndex = 0) const;
Aspect mAspects;
uint8_t mMipLevelCount;
uint16_t mArrayLayerCount;
// Invariant: if an aspect is marked compressed, then all its layers are marked as
// compressed.
static constexpr size_t kMaxAspects = 2;
std::array<bool, kMaxAspects> mAspectCompressed;
// Indexed as mLayerCompressed[aspectIndex * mArrayLayerCount + layer].
std::unique_ptr<bool[]> mLayerCompressed;
// Indexed as mData[(aspectIndex * mArrayLayerCount + layer) * mMipLevelCount + level].
// The data for a compressed aspect is stored in the slot for (aspect, 0, 0). Similarly
// the data for a compressed layer of an aspect is in the slot for (aspect, layer, 0).
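// For example (illustrative): with 2 aspects, 4 layers and 3 levels, the fully
// decompressed data for (aspectIndex 1, layer 2, level 1) lives at index
// (1 * 4 + 2) * 3 + 1 = 19.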
std::unique_ptr<T[]> mData;
};
template <typename T>
SubresourceStorage<T>::SubresourceStorage(Aspect aspects,
uint32_t arrayLayerCount,
uint32_t mipLevelCount,
T initialValue)
: mAspects(aspects), mMipLevelCount(mipLevelCount), mArrayLayerCount(arrayLayerCount) {
ASSERT(arrayLayerCount <= std::numeric_limits<decltype(mArrayLayerCount)>::max());
ASSERT(mipLevelCount <= std::numeric_limits<decltype(mMipLevelCount)>::max());
uint32_t aspectCount = GetAspectCount(aspects);
ASSERT(aspectCount <= kMaxAspects);
mLayerCompressed = std::make_unique<bool[]>(aspectCount * mArrayLayerCount);
mData = std::make_unique<T[]>(aspectCount * mArrayLayerCount * mMipLevelCount);
for (uint32_t aspectIndex = 0; aspectIndex < aspectCount; aspectIndex++) {
mAspectCompressed[aspectIndex] = true;
Data(aspectIndex) = initialValue;
}
for (uint32_t layerIndex = 0; layerIndex < aspectCount * mArrayLayerCount; layerIndex++) {
mLayerCompressed[layerIndex] = true;
}
}
template <typename T>
template <typename F>
void SubresourceStorage<T>::Update(const SubresourceRange& range, F&& updateFunc) {
bool fullLayers = range.baseMipLevel == 0 && range.levelCount == mMipLevelCount;
bool fullAspects =
range.baseArrayLayer == 0 && range.layerCount == mArrayLayerCount && fullLayers;
for (Aspect aspect : IterateEnumMask(range.aspects)) {
uint32_t aspectIndex = GetAspectIndex(aspect);
// Call the updateFunc once for the whole aspect if possible, or decompress and fall
// back to per-layer handling.
if (mAspectCompressed[aspectIndex]) {
if (fullAspects) {
SubresourceRange updateRange =
SubresourceRange::MakeFull(aspect, mArrayLayerCount, mMipLevelCount);
updateFunc(updateRange, &Data(aspectIndex));
continue;
}
DecompressAspect(aspectIndex);
}
uint32_t layerEnd = range.baseArrayLayer + range.layerCount;
for (uint32_t layer = range.baseArrayLayer; layer < layerEnd; layer++) {
// Call the updateFunc once for the whole layer if possible, or decompress and
// fall back to per-level handling.
if (LayerCompressed(aspectIndex, layer)) {
if (fullLayers) {
SubresourceRange updateRange = GetFullLayerRange(aspect, layer);
updateFunc(updateRange, &Data(aspectIndex, layer));
continue;
}
DecompressLayer(aspectIndex, layer);
}
// Worst case: call updateFunc per level.
uint32_t levelEnd = range.baseMipLevel + range.levelCount;
for (uint32_t level = range.baseMipLevel; level < levelEnd; level++) {
SubresourceRange updateRange =
SubresourceRange::MakeSingle(aspect, layer, level);
updateFunc(updateRange, &Data(aspectIndex, layer, level));
}
// If the range has fullLayers then it is likely we can recompress after the calls
// to updateFunc (this branch is skipped if updateFunc was called for the whole
// layer).
if (fullLayers) {
RecompressLayer(aspectIndex, layer);
}
}
// If the range has fullAspects then it is likely we can recompress after the calls to
// updateFunc (this branch is skipped if updateFunc was called for the whole aspect).
if (fullAspects) {
RecompressAspect(aspectIndex);
}
}
}
template <typename T>
template <typename F>
void SubresourceStorage<T>::Iterate(F&& iterateFunc) const {
for (Aspect aspect : IterateEnumMask(mAspects)) {
uint32_t aspectIndex = GetAspectIndex(aspect);
// Fastest path, call iterateFunc on the whole aspect at once.
if (mAspectCompressed[aspectIndex]) {
SubresourceRange range =
SubresourceRange::MakeFull(aspect, mArrayLayerCount, mMipLevelCount);
iterateFunc(range, Data(aspectIndex));
continue;
}
for (uint32_t layer = 0; layer < mArrayLayerCount; layer++) {
// Fast path, call iterateFunc on the whole array layer at once.
if (LayerCompressed(aspectIndex, layer)) {
SubresourceRange range = GetFullLayerRange(aspect, layer);
iterateFunc(range, Data(aspectIndex, layer));
continue;
}
// Slow path, call iterateFunc for each mip level.
for (uint32_t level = 0; level < mMipLevelCount; level++) {
SubresourceRange range = SubresourceRange::MakeSingle(aspect, layer, level);
iterateFunc(range, Data(aspectIndex, layer, level));
}
}
}
}
template <typename T>
const T& SubresourceStorage<T>::Get(Aspect aspect,
uint32_t arrayLayer,
uint32_t mipLevel) const {
uint32_t aspectIndex = GetAspectIndex(aspect);
ASSERT(aspectIndex < GetAspectCount(mAspects));
ASSERT(arrayLayer < mArrayLayerCount);
ASSERT(mipLevel < mMipLevelCount);
// Fastest path, the aspect is compressed!
if (mAspectCompressed[aspectIndex]) {
return Data(aspectIndex);
}
// Fast path, the array layer is compressed.
if (LayerCompressed(aspectIndex, arrayLayer)) {
return Data(aspectIndex, arrayLayer);
}
return Data(aspectIndex, arrayLayer, mipLevel);
}
template <typename T>
Aspect SubresourceStorage<T>::GetAspectsForTesting() const {
return mAspects;
}
template <typename T>
uint32_t SubresourceStorage<T>::GetArrayLayerCountForTesting() const {
return mArrayLayerCount;
}
template <typename T>
uint32_t SubresourceStorage<T>::GetMipLevelCountForTesting() const {
return mMipLevelCount;
}
template <typename T>
bool SubresourceStorage<T>::IsAspectCompressedForTesting(Aspect aspect) const {
return mAspectCompressed[GetAspectIndex(aspect)];
}
template <typename T>
bool SubresourceStorage<T>::IsLayerCompressedForTesting(Aspect aspect, uint32_t layer) const {
return mLayerCompressed[GetAspectIndex(aspect) * mArrayLayerCount + layer];
}
template <typename T>
void SubresourceStorage<T>::DecompressAspect(uint32_t aspectIndex) {
ASSERT(mAspectCompressed[aspectIndex]);
ASSERT(LayerCompressed(aspectIndex, 0));
for (uint32_t layer = 1; layer < mArrayLayerCount; layer++) {
Data(aspectIndex, layer) = Data(aspectIndex);
ASSERT(LayerCompressed(aspectIndex, layer));
}
mAspectCompressed[aspectIndex] = false;
}
template <typename T>
void SubresourceStorage<T>::RecompressAspect(uint32_t aspectIndex) {
ASSERT(!mAspectCompressed[aspectIndex]);
// Layer 0's data slot is used as the reference value below, but layer 0 itself may
// still hold distinct per-level values, so it must be compressed as well for the
// aspect to recompress.
if (!LayerCompressed(aspectIndex, 0)) {
return;
}
for (uint32_t layer = 1; layer < mArrayLayerCount; layer++) {
if (Data(aspectIndex, layer) != Data(aspectIndex) ||
!LayerCompressed(aspectIndex, layer)) {
return;
}
}
mAspectCompressed[aspectIndex] = true;
}
template <typename T>
void SubresourceStorage<T>::DecompressLayer(uint32_t aspectIndex, uint32_t layer) {
ASSERT(LayerCompressed(aspectIndex, layer));
ASSERT(!mAspectCompressed[aspectIndex]);
for (uint32_t level = 1; level < mMipLevelCount; level++) {
Data(aspectIndex, layer, level) = Data(aspectIndex, layer);
}
LayerCompressed(aspectIndex, layer) = false;
}
template <typename T>
void SubresourceStorage<T>::RecompressLayer(uint32_t aspectIndex, uint32_t layer) {
ASSERT(!LayerCompressed(aspectIndex, layer));
ASSERT(!mAspectCompressed[aspectIndex]);
for (uint32_t level = 1; level < mMipLevelCount; level++) {
if (Data(aspectIndex, layer, level) != Data(aspectIndex, layer)) {
return;
}
}
LayerCompressed(aspectIndex, layer) = true;
}
template <typename T>
SubresourceRange SubresourceStorage<T>::GetFullLayerRange(Aspect aspect, uint32_t layer) const {
return {aspect, {layer, 1}, {0, mMipLevelCount}};
}
template <typename T>
bool& SubresourceStorage<T>::LayerCompressed(uint32_t aspectIndex, uint32_t layer) {
return mLayerCompressed[aspectIndex * mArrayLayerCount + layer];
}
template <typename T>
bool SubresourceStorage<T>::LayerCompressed(uint32_t aspectIndex, uint32_t layer) const {
return mLayerCompressed[aspectIndex * mArrayLayerCount + layer];
}
template <typename T>
T& SubresourceStorage<T>::Data(uint32_t aspectIndex, uint32_t layer, uint32_t level) {
return mData[(aspectIndex * mArrayLayerCount + layer) * mMipLevelCount + level];
}
template <typename T>
const T& SubresourceStorage<T>::Data(uint32_t aspectIndex,
uint32_t layer,
uint32_t level) const {
return mData[(aspectIndex * mArrayLayerCount + layer) * mMipLevelCount + level];
}
} // namespace dawn_native
#endif // DAWNNATIVE_SUBRESOURCESTORAGE_H_


@@ -177,6 +177,7 @@ test("dawn_unittests") {
"unittests/SerialQueueTests.cpp",
"unittests/SlabAllocatorTests.cpp",
"unittests/StackContainerTests.cpp",
"unittests/SubresourceStorageTests.cpp",
"unittests/SystemUtilsTests.cpp",
"unittests/ToBackendTests.cpp",
"unittests/TypedIntegerTests.cpp",


@@ -0,0 +1,453 @@
// Copyright 2020 The Dawn Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <gtest/gtest.h>
#include "dawn_native/SubresourceStorage.h"
#include "common/Log.h"
using namespace dawn_native;
// A fake class that replicates the behavior of SubresourceStorage but without any compression.
// It is used to compare the results of operations on SubresourceStorage against the "ground
// truth" of FakeStorage.
template <typename T>
struct FakeStorage {
FakeStorage(Aspect aspects,
uint32_t arrayLayerCount,
uint32_t mipLevelCount,
T initialValue = {})
: mAspects(aspects),
mArrayLayerCount(arrayLayerCount),
mMipLevelCount(mipLevelCount),
mData(GetAspectCount(aspects) * arrayLayerCount * mipLevelCount, initialValue) {
}
template <typename F>
void Update(const SubresourceRange& range, F&& updateFunc) {
for (Aspect aspect : IterateEnumMask(range.aspects)) {
for (uint32_t layer = range.baseArrayLayer;
layer < range.baseArrayLayer + range.layerCount; layer++) {
for (uint32_t level = range.baseMipLevel;
level < range.baseMipLevel + range.levelCount; level++) {
SubresourceRange updateRange = SubresourceRange::MakeSingle(aspect, layer, level);
updateFunc(updateRange, &mData[GetDataIndex(aspect, layer, level)]);
}
}
}
}
const T& Get(Aspect aspect, uint32_t arrayLayer, uint32_t mipLevel) const {
return mData[GetDataIndex(aspect, arrayLayer, mipLevel)];
}
size_t GetDataIndex(Aspect aspect, uint32_t layer, uint32_t level) const {
uint32_t aspectIndex = GetAspectIndex(aspect);
return level + mMipLevelCount * (layer + mArrayLayerCount * aspectIndex);
}
// Method that checks that this and `real` have exactly the same content. It does so by
// looping over all subresources and calling Get() (hence testing Get()). It also calls
// Iterate(), checking that every subresource is mentioned exactly once and that its content
// is correct (hence testing Iterate()).
// Its implementation requires the RangeTracker below, which itself needs FakeStorage<int>,
// so it cannot be defined inline with the other methods.
void CheckSameAs(const SubresourceStorage<T>& real);
Aspect mAspects;
uint32_t mArrayLayerCount;
uint32_t mMipLevelCount;
std::vector<T> mData;
};
// Tracks a set of ranges that have been seen and can assert that in aggregate they make
// exactly a single range (and that each subresource was seen only once).
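// For example, a minimal sketch of how it is used by CheckSameAs() below (`fullRange`
// standing for the full range of the storage):
//
// RangeTracker tracker(storage);
// storage.Iterate([&](const SubresourceRange& range, const T&) { tracker.Track(range); });
// tracker.CheckTrackedExactly(fullRange);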
struct RangeTracker {
template <typename T>
RangeTracker(const SubresourceStorage<T>& s)
: mTracked(s.GetAspectsForTesting(),
s.GetArrayLayerCountForTesting(),
s.GetMipLevelCountForTesting(),
0) {
}
void Track(const SubresourceRange& range) {
// Add +1 to the subresources tracked.
mTracked.Update(range, [](const SubresourceRange&, uint32_t* counter) {
ASSERT_EQ(*counter, 0u);
*counter += 1;
});
}
void CheckTrackedExactly(const SubresourceRange& range) {
// Check that all subresources in the range were tracked once and set the counter back to 0.
mTracked.Update(range, [](const SubresourceRange&, uint32_t* counter) {
ASSERT_EQ(*counter, 1u);
*counter = 0;
});
// Now all subresources should be at 0.
for (int counter : mTracked.mData) {
ASSERT_EQ(counter, 0);
}
}
FakeStorage<uint32_t> mTracked;
};
template <typename T>
void FakeStorage<T>::CheckSameAs(const SubresourceStorage<T>& real) {
EXPECT_EQ(real.GetAspectsForTesting(), mAspects);
EXPECT_EQ(real.GetArrayLayerCountForTesting(), mArrayLayerCount);
EXPECT_EQ(real.GetMipLevelCountForTesting(), mMipLevelCount);
RangeTracker tracker(real);
real.Iterate([&](const SubresourceRange& range, const T& data) {
// Check that the range is valid for the storage's dimensions.
EXPECT_TRUE(IsSubset(range.aspects, mAspects));
EXPECT_LT(range.baseArrayLayer, mArrayLayerCount);
EXPECT_LE(range.baseArrayLayer + range.layerCount, mArrayLayerCount);
EXPECT_LT(range.baseMipLevel, mMipLevelCount);
EXPECT_LE(range.baseMipLevel + range.levelCount, mMipLevelCount);
for (Aspect aspect : IterateEnumMask(range.aspects)) {
for (uint32_t layer = range.baseArrayLayer;
layer < range.baseArrayLayer + range.layerCount; layer++) {
for (uint32_t level = range.baseMipLevel;
level < range.baseMipLevel + range.levelCount; level++) {
ASSERT_EQ(data, Get(aspect, layer, level));
ASSERT_EQ(data, real.Get(aspect, layer, level));
}
}
}
tracker.Track(range);
});
tracker.CheckTrackedExactly(
SubresourceRange::MakeFull(mAspects, mArrayLayerCount, mMipLevelCount));
}
template <typename T>
void CheckAspectCompressed(const SubresourceStorage<T>& s, Aspect aspect, bool expected) {
ASSERT(HasOneBit(aspect));
uint32_t levelCount = s.GetMipLevelCountForTesting();
uint32_t layerCount = s.GetArrayLayerCountForTesting();
bool seen = false;
s.Iterate([&](const SubresourceRange& range, const T&) {
if (range.aspects == aspect && range.layerCount == layerCount &&
range.levelCount == levelCount && range.baseArrayLayer == 0 &&
range.baseMipLevel == 0) {
seen = true;
}
});
ASSERT_EQ(seen, expected);
// Check that the internal state of SubresourceStorage matches what we expect.
// If an aspect is compressed, all its layers should be internally tagged as compressed.
ASSERT_EQ(s.IsAspectCompressedForTesting(aspect), expected);
if (expected) {
for (uint32_t layer = 0; layer < s.GetArrayLayerCountForTesting(); layer++) {
ASSERT_TRUE(s.IsLayerCompressedForTesting(aspect, layer));
}
}
}
template <typename T>
void CheckLayerCompressed(const SubresourceStorage<T>& s,
Aspect aspect,
uint32_t layer,
bool expected) {
ASSERT(HasOneBit(aspect));
uint32_t levelCount = s.GetMipLevelCountForTesting();
bool seen = false;
s.Iterate([&](const SubresourceRange& range, const T&) {
if (range.aspects == aspect && range.layerCount == 1 && range.levelCount == levelCount &&
range.baseArrayLayer == layer && range.baseMipLevel == 0) {
seen = true;
}
});
ASSERT_EQ(seen, expected);
ASSERT_EQ(s.IsLayerCompressedForTesting(aspect, layer), expected);
}
struct SmallData {
uint32_t value = 0xF00;
};
bool operator==(const SmallData& a, const SmallData& b) {
return a.value == b.value;
}
// Test that the default value is correctly set.
TEST(SubresourceStorageTest, DefaultValue) {
// Test setting no default value for a primitive type.
{
SubresourceStorage<int> s(Aspect::Color, 3, 5);
EXPECT_EQ(s.Get(Aspect::Color, 1, 2), 0);
FakeStorage<int> f(Aspect::Color, 3, 5);
f.CheckSameAs(s);
}
// Test setting a default value for a primitive type.
{
SubresourceStorage<int> s(Aspect::Color, 3, 5, 42);
EXPECT_EQ(s.Get(Aspect::Color, 1, 2), 42);
FakeStorage<int> f(Aspect::Color, 3, 5, 42);
f.CheckSameAs(s);
}
// Test setting no default value for a type with a default constructor.
{
SubresourceStorage<SmallData> s(Aspect::Color, 3, 5);
EXPECT_EQ(s.Get(Aspect::Color, 1, 2).value, 0xF00u);
FakeStorage<SmallData> f(Aspect::Color, 3, 5);
f.CheckSameAs(s);
}
// Test setting a default value for a type with a default constructor.
{
SubresourceStorage<SmallData> s(Aspect::Color, 3, 5, {007u});
EXPECT_EQ(s.Get(Aspect::Color, 1, 2).value, 007u);
FakeStorage<SmallData> f(Aspect::Color, 3, 5, {007u});
f.CheckSameAs(s);
}
}
// The tests for Update() all follow the same pattern of setting up a real and a fake storage then
// performing one or multiple Update()s on them and checking:
// - They have the same content.
// - The Update() range was correct.
// - The aspects and layers have the expected "compressed" status.
// Calls Update() on both the real storage and the fake storage, but intercepts the calls to
// updateFunc made by the real storage to check that their range arguments aggregate to
// exactly the update range.
template <typename T, typename F>
void CallUpdateOnBoth(SubresourceStorage<T>* s,
FakeStorage<T>* f,
const SubresourceRange& range,
F&& updateFunc) {
RangeTracker tracker(*s);
s->Update(range, [&](const SubresourceRange& range, T* data) {
tracker.Track(range);
updateFunc(range, data);
});
f->Update(range, updateFunc);
tracker.CheckTrackedExactly(range);
f->CheckSameAs(*s);
}
// Test updating a single subresource on a single-aspect storage.
TEST(SubresourceStorageTest, SingleSubresourceUpdateSingleAspect) {
SubresourceStorage<int> s(Aspect::Color, 5, 7);
FakeStorage<int> f(Aspect::Color, 5, 7);
// Update a single subresource.
SubresourceRange range = SubresourceRange::MakeSingle(Aspect::Color, 3, 2);
CallUpdateOnBoth(&s, &f, range, [](const SubresourceRange&, int* data) { *data += 1; });
CheckAspectCompressed(s, Aspect::Color, false);
CheckLayerCompressed(s, Aspect::Color, 2, true);
CheckLayerCompressed(s, Aspect::Color, 3, false);
CheckLayerCompressed(s, Aspect::Color, 4, true);
}
// Test updating a single subresource on a multi-aspect storage.
TEST(SubresourceStorageTest, SingleSubresourceUpdateMultiAspect) {
SubresourceStorage<int> s(Aspect::Depth | Aspect::Stencil, 5, 3);
FakeStorage<int> f(Aspect::Depth | Aspect::Stencil, 5, 3);
SubresourceRange range = SubresourceRange::MakeSingle(Aspect::Stencil, 1, 2);
CallUpdateOnBoth(&s, &f, range, [](const SubresourceRange&, int* data) { *data += 1; });
CheckAspectCompressed(s, Aspect::Depth, true);
CheckAspectCompressed(s, Aspect::Stencil, false);
CheckLayerCompressed(s, Aspect::Stencil, 0, true);
CheckLayerCompressed(s, Aspect::Stencil, 1, false);
CheckLayerCompressed(s, Aspect::Stencil, 2, true);
}
// Test updating as a stipple pattern on one of two aspects then updating it completely.
TEST(SubresourceStorageTest, UpdateStipple) {
const uint32_t kLayers = 10;
const uint32_t kLevels = 7;
SubresourceStorage<int> s(Aspect::Depth | Aspect::Stencil, kLayers, kLevels);
FakeStorage<int> f(Aspect::Depth | Aspect::Stencil, kLayers, kLevels);
// Update with a stipple.
for (uint32_t layer = 0; layer < kLayers; layer++) {
for (uint32_t level = 0; level < kLevels; level++) {
if ((layer + level) % 2 == 0) {
SubresourceRange range = SubresourceRange::MakeSingle(Aspect::Depth, layer, level);
CallUpdateOnBoth(&s, &f, range,
[](const SubresourceRange&, int* data) { *data += 17; });
}
}
}
// The depth aspect should be fully decompressed while the stencil aspect stayed compressed.
CheckAspectCompressed(s, Aspect::Stencil, true);
CheckAspectCompressed(s, Aspect::Depth, false);
for (uint32_t layer = 0; layer < kLayers; layer++) {
CheckLayerCompressed(s, Aspect::Depth, layer, false);
}
// Update completely with a single value. Recompression should happen!
{
SubresourceRange fullRange =
SubresourceRange::MakeFull(Aspect::Depth | Aspect::Stencil, kLayers, kLevels);
CallUpdateOnBoth(&s, &f, fullRange, [](const SubresourceRange&, int* data) { *data = 31; });
}
CheckAspectCompressed(s, Aspect::Depth, true);
CheckAspectCompressed(s, Aspect::Stencil, true);
}
// Test updating as a crossing band pattern:
// - The first band is full layers [2, 3] on both aspects.
// - The second band is full mips [5, 6] on both aspects.
// Then updating completely.
TEST(SubresourceStorageTest, UpdateTwoBand) {
const uint32_t kLayers = 5;
const uint32_t kLevels = 9;
SubresourceStorage<int> s(Aspect::Depth | Aspect::Stencil, kLayers, kLevels);
FakeStorage<int> f(Aspect::Depth | Aspect::Stencil, kLayers, kLevels);
// Update the two bands
{
SubresourceRange range(Aspect::Depth | Aspect::Stencil, {2, 2}, {0, kLevels});
CallUpdateOnBoth(&s, &f, range, [](const SubresourceRange&, int* data) { *data += 3; });
}
// The layers were fully updated so they should stay compressed.
CheckLayerCompressed(s, Aspect::Depth, 2, true);
CheckLayerCompressed(s, Aspect::Depth, 3, true);
CheckLayerCompressed(s, Aspect::Stencil, 2, true);
CheckLayerCompressed(s, Aspect::Stencil, 3, true);
{
SubresourceRange range(Aspect::Depth | Aspect::Stencil, {0, kLayers}, {5, 2});
CallUpdateOnBoth(&s, &f, range, [](const SubresourceRange&, int* data) { *data *= 3; });
}
// The layers had to be decompressed.
CheckLayerCompressed(s, Aspect::Depth, 2, false);
CheckLayerCompressed(s, Aspect::Depth, 3, false);
CheckLayerCompressed(s, Aspect::Stencil, 2, false);
CheckLayerCompressed(s, Aspect::Stencil, 3, false);
// Update completely. Since a constant value isn't written, recompression shouldn't happen.
{
SubresourceRange fullRange =
SubresourceRange::MakeFull(Aspect::Depth | Aspect::Stencil, kLayers, kLevels);
CallUpdateOnBoth(&s, &f, fullRange,
[](const SubresourceRange&, int* data) { *data += 12; });
}
CheckAspectCompressed(s, Aspect::Depth, false);
CheckAspectCompressed(s, Aspect::Stencil, false);
}
// Test updating with extremal subresources:
// - First the two extremal subresources: the last mip of layer 0 and mip 0 of the last layer.
// - Then half of the array layers in full.
// - Then updating completely.
TEST(SubresourceStorageTest, UpdateExtremas) {
const uint32_t kLayers = 6;
const uint32_t kLevels = 4;
SubresourceStorage<int> s(Aspect::Color, kLayers, kLevels);
FakeStorage<int> f(Aspect::Color, kLayers, kLevels);
// Update the two extrema
{
SubresourceRange range = SubresourceRange::MakeSingle(Aspect::Color, 0, kLevels - 1);
CallUpdateOnBoth(&s, &f, range, [](const SubresourceRange&, int* data) { *data += 3; });
}
{
SubresourceRange range = SubresourceRange::MakeSingle(Aspect::Color, kLayers - 1, 0);
CallUpdateOnBoth(&s, &f, range, [](const SubresourceRange&, int* data) { *data *= 3; });
}
CheckLayerCompressed(s, Aspect::Color, 0, false);
CheckLayerCompressed(s, Aspect::Color, 1, true);
CheckLayerCompressed(s, Aspect::Color, kLayers - 2, true);
CheckLayerCompressed(s, Aspect::Color, kLayers - 1, false);
// Update half of the layers in full with constant values. Some recompression should happen.
{
SubresourceRange range(Aspect::Color, {0, kLayers / 2}, {0, kLevels});
CallUpdateOnBoth(&s, &f, range, [](const SubresourceRange&, int* data) { *data = 123; });
}
CheckLayerCompressed(s, Aspect::Color, 0, true);
CheckLayerCompressed(s, Aspect::Color, 1, true);
CheckLayerCompressed(s, Aspect::Color, kLayers - 1, false);
// Update completely. Recompression should happen!
{
SubresourceRange fullRange = SubresourceRange::MakeFull(Aspect::Color, kLayers, kLevels);
CallUpdateOnBoth(&s, &f, fullRange, [](const SubresourceRange&, int* data) { *data = 35; });
}
CheckAspectCompressed(s, Aspect::Color, true);
}
// A regression test for an issue found while reworking the implementation where RecompressAspect
// didn't correctly check that each layer was compressed but only that their 0th value was
// the same.
TEST(SubresourceStorageTest, UpdateLevel0sHappenToMatch) {
SubresourceStorage<int> s(Aspect::Color, 2, 2);
FakeStorage<int> f(Aspect::Color, 2, 2);
// Update the 0th mip levels to some value; this should decompress the aspect and both layers.
{
SubresourceRange range(Aspect::Color, {0, 2}, {0, 1});
CallUpdateOnBoth(&s, &f, range, [](const SubresourceRange&, int* data) { *data = 17; });
}
CheckAspectCompressed(s, Aspect::Color, false);
CheckLayerCompressed(s, Aspect::Color, 0, false);
CheckLayerCompressed(s, Aspect::Color, 1, false);
// Update the whole resource by doing +1. The aspects and layers should stay decompressed.
{
SubresourceRange range = SubresourceRange::MakeFull(Aspect::Color, 2, 2);
CallUpdateOnBoth(&s, &f, range, [](const SubresourceRange&, int* data) { *data += 1; });
}
CheckAspectCompressed(s, Aspect::Color, false);
CheckLayerCompressed(s, Aspect::Color, 0, false);
CheckLayerCompressed(s, Aspect::Color, 1, false);
}
// Bugs found while testing:
// - mLayerCompressed not initialized to true.
// - DecompressLayer setting Compressed to true instead of false.
// - Get() checking for !compressed instead of compressed for the early exit.
// - ASSERT in RecompressLayers was inverted.
// - Two != being converted to == during a rework.