SDF is now encoded with inorm8 and inorm16

master
Marc Gilleron 2022-02-26 22:51:29 +00:00
parent 07d5f1fb2a
commit 493b93a051
16 changed files with 358 additions and 85 deletions

View File

@ -17,12 +17,14 @@ Godot 4 is required from this version.
- Added `gi_mode` to terrain nodes to choose how they behave with Godot's global illumination
- Added `FastNoise2` for faster SIMD noise
- Added experimental support functions to help setting up basic multiplayer with `VoxelTerrain` (might change in the future)
- Improved support for 64-bit floats
- `VoxelGeneratorGraph`: added support for outputting to the TYPE channel, allowing use with `VoxelMesherBlocky`
- `VoxelGeneratorGraph`: editor: unconnected inputs show their default value directly on the node
- `VoxelGeneratorGraph`: editor: allow to change the axes on preview nodes 3D slices
- `VoxelGeneratorGraph`: editor: replace existing connection if dragging from/to an input port having one already
- Smooth voxels
- SDF data is now encoded with `inorm8` and `inorm16`, instead of an arbitrary version of `unorm8` and `unorm16`. Migration code is in place to load old save files, but *do a backup before running your project with the new version*.
- `VoxelLodTerrain`: added *experimental* `full_load_mode`, in which all edited data is loaded at once, allowing any area to be edited anytime. Useful for some fixed-size volumes.
- `VoxelLodTerrain`: Editor: added option to show octree nodes in editor
- `VoxelToolLodTerrain`: added *experimental* `do_sphere_async`, an alternative version of `do_sphere` which defers the task on threads to reduce stutter if the affected area is big.

View File

@ -1,6 +1,9 @@
Voxel block format
====================
!!! warn
This document is about an old version of the format. You may check the most recent version.
Version: 2
This page describes the binary format used by default in this module to serialize voxel blocks to files, network or databases.

View File

@ -0,0 +1,122 @@
Voxel block format
====================
Version: 3
This page describes the binary format used by default in this module to serialize voxel blocks to files, network or databases.
### Changes from version 2
- The second channel (at index 1) was already used for SDF data, but the format didn't dictate anything particular about it. It is now expected to contain SDF. It also used an arbitrary fixed-point encoding; it now uses `inorm8` or `inorm16` depending on depth.
- Compression format `1` is deprecated.
- Moved compression wrapper to its own specification.
Specification
----------------
### Endianness
By default, little-endian.
### Compressed container
A block is usually serialized within a compressed data container.
This is the format provided by the `VoxelBlockSerializer` utility class. If you don't use compression, the layout will correspond to `BlockData` described in the next listing, and won't have this wrapper.
See [Compressed container format](#compressed-container) for specification.
### Block format
It starts with version number `3` in one byte, then some metadata and the actual voxels.
!!! note
The size and formats are present to make the format standalone. When used within a chunked container like region files, it is recommended to check if they match the format expected for the volume as a whole.
```
BlockData
- version: uint8_t
- size_x: uint16_t
- size_y: uint16_t
- size_z: uint16_t
- channels[8]
- metadata*
- epilogue
```
### Channels
Block data starts with exactly 8 channels one after the other, each with the following structure:
```
Channel
- format: uint8_t (low nibble = compression, high nibble = depth)
- data
```
`format` contains both compression and bit depth, respectively known as `VoxelBuffer::Compression` and `VoxelBuffer::Depth` enums. The low nibble contains compression, and the high nibble contains depth. Depending on those values, `data` will be different.
Depth can be 0 (8-bit), 1 (16-bit), 2 (32-bit) or 3 (64-bit).
If compression is `COMPRESSION_NONE` (0), `data` will be an array of N*S bytes, where N is the number of voxels inside a block, multiplied by the number of bytes corresponding to the bit depth. For example, a block of size 16x16x16 and a channel of 32-bit depth will have `16*16*16*4` bytes to load from the file into this channel.
The 3D indexing of that data is in order `ZXY`.
If compression is `COMPRESSION_UNIFORM` (1), the data will be a single voxel value, which means all voxels in the block have that same value. Unused channels will always use this mode. The value spans the same number of bytes defined by the depth.
Other compression values are invalid.
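As a sketch, decoding the `format` byte and sizing a channel's payload could look like the following (helper names are hypothetical, not part of the module's API):

```cpp
#include <cassert>
#include <cstdint>

// Low nibble = compression mode, high nibble = depth index (hypothetical helpers).
inline uint8_t format_get_compression(uint8_t format) {
    return format & 0xf;
}

inline uint8_t format_get_depth_index(uint8_t format) {
    return (format >> 4) & 0xf;
}

// Bytes per voxel for a depth index: 0 => 8-bit, 1 => 16-bit, 2 => 32-bit, 3 => 64-bit.
inline uint32_t bytes_per_voxel(uint8_t depth_index) {
    return 1u << depth_index;
}
```

For example, a 16x16x16 block with a 32-bit channel (depth index 2) and `COMPRESSION_NONE` carries `16*16*16*4` bytes of data, while `COMPRESSION_UNIFORM` carries only 4.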
#### SDF channel
The second channel (at index 1) is used for SDF data. If depth is 8 or 16 bits, it contains fixed-point values encoded as `inorm8` or `inorm16`. These represent numbers in the range [-1..1].
To obtain a `float` from an `int8`, use `max(i / 127.0, -1.0)`.
To obtain a `float` from an `int16`, use `max(i / 32767.0, -1.0)`.
For 32-bit depth, regular `float` values are used.
For 64-bit depth, regular `double` values are used.
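The conversions above can be sketched directly (these mirror the `s8_to_snorm`/`s16_to_snorm` helpers added elsewhere in this commit; note the division must be done in floating point):

```cpp
#include <algorithm>
#include <cstdint>

// int8 -> float in [-1..1]. -128 has no exact counterpart and clamps to -1.
inline float s8_to_snorm(int8_t v) {
    return std::max(v / 127.f, -1.f);
}

// int16 -> float in [-1..1]. -32768 has no exact counterpart and clamps to -1.
inline float s16_to_snorm(int16_t v) {
    return std::max(v / 32767.f, -1.f);
}
```

This encoding has an exact representation for 0, which matters for SDF data since 0 is the isosurface.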
### Metadata
After all channel information, block data can contain metadata. Blocks that don't contain any will only have a fixed number of bytes left (the epilogue) before reaching the end of the data to read. If more bytes remain, the block contains metadata.
```
Metadata
- metadata_size: uint32_t
- block_metadata
- voxel_metadata[*]
```
It starts with one 32-bit unsigned integer representing the total size of all metadata there is to read. That data comes in two groups: one for the whole block, and one per voxel.
Block metadata is one Godot `Variant`, encoded using the `encode_variant` method of the engine.
Voxel metadata immediately follows. It is a sequence of the following data structures, which must be read until a total of `metadata_size` bytes have been read from the beginning:
```
VoxelMetadata
- x: uint16_t
- y: uint16_t
- z: uint16_t
- data
```
`x`, `y` and `z` indicate which voxel the data corresponds to. `data` is also a `Variant`, encoded the same way as described earlier. This results in an associative collection between voxel positions relative to the block and their corresponding metadata.
### Epilogue
At the very end, block data finishes with a sequence of 4 bytes, which once read into a `uint32_t` integer must match the value `0x900df00d`. If that condition isn't fulfilled, the block must be assumed corrupted.
!!! note
On little-endian architectures (like desktop), binary editors will not show the epilogue as `0x900df00d`, but as `0x0df00d90` instead.
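A minimal sketch of validating the epilogue (the helper name is hypothetical; the magic value is the `0x900df00d` constant from the spec, read as little-endian):

```cpp
#include <cassert>
#include <cstdint>

// Reads the 4-byte epilogue as a little-endian uint32 and checks the magic.
// Returns false if the block should be considered corrupted.
inline bool check_epilogue(const uint8_t *p) {
    const uint32_t magic = static_cast<uint32_t>(p[0]) |
            (static_cast<uint32_t>(p[1]) << 8) |
            (static_cast<uint32_t>(p[2]) << 16) |
            (static_cast<uint32_t>(p[3]) << 24);
    return magic == 0x900df00d;
}
```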
Current Issues
----------------
### Endianness
The format is intended to use little-endian, however the engine's implementation does not fully guarantee this.
Godot's `encode_variant` doesn't seem to account for endianness across architectures, so it's possible this becomes a problem in the future and gets changed to a custom format.
The implementation of block channels with depth greater than 8-bit currently doesn't consider this either. This might be refined in a later iteration.
This will become important to address if voxel games require communication between mobile and desktop.

View File

@ -0,0 +1,29 @@
Compressed data format
========================
Some custom formats used in this engine can be wrapped in a compressed container.
Specification
----------------
### Endianness
By default, little-endian.
### Compressed container
```
CompressedData
- uint8_t format
- data
```
Compressed data starts with one byte. Depending on its value, what follows is different.
- `0`: no compression. The following bytes can be read directly. This is rarely used, mostly for debugging.
- `1`: LZ4_BE compression, *deprecated*. The next big-endian 32-bit unsigned integer is the size of the decompressed data, and the following bytes are compressed data using LZ4 default parameters.
- `2`: LZ4 compression. The next little-endian 32-bit unsigned integer is the size of the decompressed data, and the following bytes are compressed data using LZ4 default parameters. This is the default mode.
!!! note
Depending on the type of data, knowing its decompressed size may be important when parsing it later.
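A sketch of parsing the container header described above (struct and function names are hypothetical; only the wire layout comes from the spec):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical result of parsing a compressed-container header.
struct ContainerHeader {
    uint8_t format; // 0 = none, 1 = LZ4_BE (deprecated), 2 = LZ4
    uint32_t decompressed_size; // only meaningful for formats 1 and 2
    size_t payload_offset; // where the payload bytes start
};

inline bool parse_container_header(const uint8_t *p, size_t len, ContainerHeader &out) {
    if (len < 1) {
        return false;
    }
    out.format = p[0];
    if (out.format == 0) {
        // No compression: payload follows the format byte directly.
        out.decompressed_size = 0;
        out.payload_offset = 1;
        return true;
    }
    if (len < 5) {
        return false;
    }
    if (out.format == 1) {
        // Deprecated: decompressed size is big-endian.
        out.decompressed_size = (static_cast<uint32_t>(p[1]) << 24) |
                (static_cast<uint32_t>(p[2]) << 16) |
                (static_cast<uint32_t>(p[3]) << 8) | p[4];
    } else if (out.format == 2) {
        // Default mode: decompressed size is little-endian.
        out.decompressed_size = static_cast<uint32_t>(p[1]) |
                (static_cast<uint32_t>(p[2]) << 8) |
                (static_cast<uint32_t>(p[3]) << 16) |
                (static_cast<uint32_t>(p[4]) << 24);
    } else {
        return false; // Unknown format value
    }
    out.payload_offset = 5;
    return true;
}
```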

View File

@ -9,11 +9,7 @@ Specification
### Compressed container
A block is usually serialized as compressed data.
Compressed data starts with one byte. Depending on its value, what follows is different.
- 0: no compression. Following bytes can be read as block format directly. This is rarely used and could be for debugging.
- 1: LZ4 compression. The next big-endian 32-bit unsigned integer is the size of the decompressed data, and following bytes are compressed data using LZ4 default parameters. This mode is used by default.
See [Compressed container format](#compressed-container) for specification.
### Binary data

View File

@ -17,8 +17,8 @@ template <typename Op, typename Shape>
struct SdfOperation16bit {
Op op;
Shape shape;
inline uint16_t operator()(Vector3i pos, uint16_t sdf) const {
return norm_to_u16(op(u16_to_norm(sdf), shape(Vector3(pos))));
inline int16_t operator()(Vector3i pos, int16_t sdf) const {
return snorm_to_s16(op(s16_to_snorm(sdf), shape(Vector3(pos))));
}
};

View File

@ -368,7 +368,7 @@ void VoxelToolLodTerrain::_set_voxel(Vector3i pos, uint64_t v) {
void VoxelToolLodTerrain::_set_voxel_f(Vector3i pos, float v) {
ERR_FAIL_COND(_terrain == nullptr);
// TODO Format should be accessible from terrain
_terrain->try_set_voxel_without_update(pos, _channel, norm_to_u16(v));
_terrain->try_set_voxel_without_update(pos, _channel, snorm_to_s16(v));
}
void VoxelToolLodTerrain::_post_edit(const Box3i &box) {

View File

@ -458,12 +458,12 @@ static void fill_zx_sdf_slice(const VoxelGraphRuntime::Buffer &sdf_buffer, Voxel
switch (channel_depth) {
case VoxelBufferInternal::DEPTH_8_BIT:
fill_zx_sdf_slice(
channel_bytes, sdf_scale, rmin, rmax, ry, x_stride, sdf_buffer.data, buffer_size, norm_to_u8);
channel_bytes, sdf_scale, rmin, rmax, ry, x_stride, sdf_buffer.data, buffer_size, snorm_to_s8);
break;
case VoxelBufferInternal::DEPTH_16_BIT:
fill_zx_sdf_slice(channel_bytes.reinterpret_cast_to<uint16_t>(), sdf_scale, rmin, rmax, ry, x_stride,
sdf_buffer.data, buffer_size, norm_to_u16);
sdf_buffer.data, buffer_size, snorm_to_s16);
break;
case VoxelBufferInternal::DEPTH_32_BIT:

View File

@ -12,6 +12,8 @@ namespace gd {
class VoxelBuffer;
}
// Non-encoded, generic voxel value.
// (Voxels stored inside VoxelBuffers are encoded to take less space)
union VoxelSingleValue {
uint64_t i;
float f;

View File

@ -108,11 +108,11 @@ inline Vector3i dir_to_prev_vec(uint8_t dir) {
}
inline float sdf_as_float(uint8_t v) {
return -u8_to_norm(v);
return -s8_to_snorm_noclamp(v);
}
inline float sdf_as_float(uint16_t v) {
return -u16_to_norm(v);
return -s16_to_snorm_noclamp(v);
}
inline float sdf_as_float(float v) {

View File

@ -153,6 +153,7 @@ void register_voxel_types() {
PRINT_VERBOSE(String("Size of Node: {0}").format(varray((int)sizeof(Node))));
PRINT_VERBOSE(String("Size of Node3D: {0}").format(varray((int)sizeof(Node3D))));
PRINT_VERBOSE(String("Size of gd::VoxelBuffer: {0}").format(varray((int)sizeof(gd::VoxelBuffer))));
PRINT_VERBOSE(String("Size of VoxelBufferInternal: {0}").format(varray((int)sizeof(VoxelBufferInternal))));
PRINT_VERBOSE(String("Size of VoxelMeshBlock: {0}").format(varray((int)sizeof(VoxelMeshBlock))));
PRINT_VERBOSE(String("Size of VoxelTerrain: {0}").format(varray((int)sizeof(VoxelTerrain))));
PRINT_VERBOSE(String("Size of VoxelLodTerrain: {0}").format(varray((int)sizeof(VoxelLodTerrain))));

View File

@ -92,40 +92,56 @@ void fill_3d_region_zxy(Span<T> dst, Vector3i dst_size, Vector3i dst_min, Vector
}
}
// TODO Switch to using GPU format inorm16 for these conversions
// The current ones seem to work but aren't really correct
// https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#fundamentals-fixedconv
// Converts an int8 value into a float in the range [-1..1], which includes an exact value for 0.
// -128 is one value of the int8 which will not have a corresponding result, it will be clamped to -1.
inline float s8_to_snorm(int8_t v) {
return math::max(v / 127.f, -1.f);
}
inline float s8_to_snorm_noclamp(int8_t v) {
return v / 127.f;
}
inline float u8_to_norm(uint8_t v) {
// Converts a float value in the range [-1..1] to an int8.
// The float will be clamped if it lies outside of the expected range.
inline int8_t snorm_to_s8(float v) {
return math::clamp(v, -1.f, 1.f) * 127;
}
// Converts an int16 value into a float in the range [-1..1], which includes an exact value for 0.
// -32768 is one value of the int16 which will not have a corresponding result, it will be clamped to -1.
inline float s16_to_snorm(int16_t v) {
return math::max(v / 32767.f, -1.f);
}
inline float s16_to_snorm_noclamp(int16_t v) {
return v / 32767.f;
}
// Converts a float value in the range [-1..1] to an int16.
// The float will be clamped if it lies outside of the expected range.
inline int16_t snorm_to_s16(float v) {
return math::clamp(v, -1.f, 1.f) * 32767;
}
namespace legacy {
inline float u8_to_snorm(uint8_t v) {
return (static_cast<float>(v) - 0x7f) * constants::INV_0x7f;
}
inline float u16_to_norm(uint16_t v) {
inline float u16_to_snorm(uint16_t v) {
return (static_cast<float>(v) - 0x7fff) * constants::INV_0x7fff;
}
inline uint8_t norm_to_u8(float v) {
inline uint8_t snorm_to_u8(float v) {
return zylann::math::clamp(static_cast<int>(128.f * v + 128.f), 0, 0xff);
}
inline uint16_t norm_to_u16(float v) {
inline uint16_t snorm_to_u16(float v) {
return zylann::math::clamp(static_cast<int>(0x8000 * v + 0x8000), 0, 0xffff);
}
/*static inline float quantized_u8_to_real(uint8_t v) {
return u8_to_norm(v) * constants::QUANTIZED_SDF_8_BITS_SCALE_INV;
}
static inline float quantized_u16_to_real(uint8_t v) {
return u8_to_norm(v) * constants::QUANTIZED_SDF_16_BITS_SCALE_INV;
}
static inline uint8_t real_to_quantized_u8(float v) {
return norm_to_u8(v * constants::QUANTIZED_SDF_8_BITS_SCALE);
}
static inline uint16_t real_to_quantized_u16(float v) {
return norm_to_u16(v * constants::QUANTIZED_SDF_16_BITS_SCALE);
}*/
} // namespace legacy
inline FixedArray<uint8_t, 4> decode_weights_from_packed_u16(uint16_t packed_weights) {
FixedArray<uint8_t, 4> weights;

View File

@ -31,25 +31,25 @@ inline void free_channel_data(uint8_t *data, uint32_t size) {
#endif
}
uint64_t g_depth_max_values[] = {
0xff, // 8
0xffff, // 16
0xffffffff, // 32
0xffffffffffffffff // 64
};
// uint64_t g_depth_max_values[] = {
// 0xff, // 8
// 0xffff, // 16
// 0xffffffff, // 32
// 0xffffffffffffffff // 64
// };
inline uint64_t get_max_value_for_depth(VoxelBufferInternal::Depth d) {
CRASH_COND(d < 0 || d >= VoxelBufferInternal::DEPTH_COUNT);
return g_depth_max_values[d];
}
// inline uint64_t get_max_value_for_depth(VoxelBufferInternal::Depth d) {
// CRASH_COND(d < 0 || d >= VoxelBufferInternal::DEPTH_COUNT);
// return g_depth_max_values[d];
// }
inline uint64_t clamp_value_for_depth(uint64_t value, VoxelBufferInternal::Depth d) {
const uint64_t max_val = get_max_value_for_depth(d);
if (value >= max_val) {
return max_val;
}
return value;
}
// inline uint64_t clamp_value_for_depth(uint64_t value, VoxelBufferInternal::Depth d) {
// const uint64_t max_val = get_max_value_for_depth(d);
// if (value >= max_val) {
// return max_val;
// }
// return value;
// }
static_assert(sizeof(uint32_t) == sizeof(float), "uint32_t and float cannot be marshalled back and forth");
static_assert(sizeof(uint64_t) == sizeof(double), "uint64_t and double cannot be marshalled back and forth");
@ -57,10 +57,10 @@ static_assert(sizeof(uint64_t) == sizeof(double), "uint64_t and double cannot be
inline uint64_t real_to_raw_voxel(real_t value, VoxelBufferInternal::Depth depth) {
switch (depth) {
case VoxelBufferInternal::DEPTH_8_BIT:
return norm_to_u8(value);
return snorm_to_s8(value);
case VoxelBufferInternal::DEPTH_16_BIT:
return norm_to_u16(value);
return snorm_to_s16(value);
case VoxelBufferInternal::DEPTH_32_BIT: {
MarshallFloat m;
@ -82,10 +82,10 @@ inline real_t raw_voxel_to_real(uint64_t value, VoxelBufferInternal::Depth depth
// Depths below 32 are normalized between -1 and 1
switch (depth) {
case VoxelBufferInternal::DEPTH_8_BIT:
return u8_to_norm(value);
return s8_to_snorm(value);
case VoxelBufferInternal::DEPTH_16_BIT:
return u16_to_norm(value);
return s16_to_snorm(value);
case VoxelBufferInternal::DEPTH_32_BIT: {
MarshallFloat m;
@ -112,7 +112,7 @@ VoxelBufferInternal::VoxelBufferInternal() {
// 16-bit is better on average to handle large worlds
_channels[CHANNEL_SDF].depth = DEFAULT_SDF_CHANNEL_DEPTH;
_channels[CHANNEL_SDF].defval = 0xffff;
_channels[CHANNEL_SDF].defval = snorm_to_s16(1.f);
_channels[CHANNEL_INDICES].depth = DEPTH_16_BIT;
_channels[CHANNEL_INDICES].defval = encode_indices_to_packed_u16(0, 1, 2, 3);
@ -184,7 +184,7 @@ void VoxelBufferInternal::clear_channel(Channel &channel, uint64_t clear_value)
if (channel.data != nullptr) {
delete_channel(channel);
}
channel.defval = clamp_value_for_depth(clear_value, channel.depth);
channel.defval = clear_value;
}
void VoxelBufferInternal::clear_channel_f(unsigned int channel_index, real_t clear_value) {
@ -195,7 +195,7 @@ void VoxelBufferInternal::clear_channel_f(unsigned int channel_index, real_t cle
void VoxelBufferInternal::set_default_values(FixedArray<uint64_t, VoxelBufferInternal::MAX_CHANNELS> values) {
for (unsigned int i = 0; i < MAX_CHANNELS; ++i) {
_channels[i].defval = clamp_value_for_depth(values[i], _channels[i].depth);
_channels[i].defval = values[i];
}
}
@ -237,7 +237,6 @@ void VoxelBufferInternal::set_voxel(uint64_t value, int x, int y, int z, unsigne
Channel &channel = _channels[channel_index];
value = clamp_value_for_depth(value, channel.depth);
bool do_set = true;
if (channel.data == nullptr) {
@ -254,6 +253,9 @@ void VoxelBufferInternal::set_voxel(uint64_t value, int x, int y, int z, unsigne
switch (channel.depth) {
case DEPTH_8_BIT:
// Note, if the value is negative, it may be in the range supported by int8_t.
// This use case might exist for SDF data, although it is preferable to use `set_voxel_f`.
// Similar for higher depths.
channel.data[i] = value;
break;
@ -291,8 +293,6 @@ void VoxelBufferInternal::fill(uint64_t defval, unsigned int channel_index) {
Channel &channel = _channels[channel_index];
defval = clamp_value_for_depth(defval, channel.depth);
if (channel.data == nullptr) {
// Channel is already optimized and uniform
if (channel.defval == defval) {
@ -351,7 +351,6 @@ void VoxelBufferInternal::fill_area(uint64_t defval, Vector3i min, Vector3i max,
}
Channel &channel = _channels[channel_index];
defval = clamp_value_for_depth(defval, channel.depth);
if (channel.data == nullptr) {
if (channel.defval == defval) {
@ -761,7 +760,6 @@ void VoxelBufferInternal::set_channel_depth(unsigned int channel_index, Depth ne
WARN_PRINT("Changing VoxelBuffer depth with present data, this will reset the channel");
delete_channel(channel_index);
}
channel.defval = clamp_value_for_depth(channel.defval, new_depth);
channel.depth = new_depth;
}

View File

@ -216,6 +216,7 @@ struct VoxelDataLodMap {
// It is possible to unlock it after we are done querying the map.
RWLock map_lock;
};
// Each LOD works in a set of coordinates where voxels span twice as much space as in the previous LOD
FixedArray<Lod, constants::MAX_LOD> lods;
unsigned int lod_count = 1;
};

View File

@ -4,6 +4,7 @@
#include "../util/macros.h"
#include "../util/math/vector3i.h"
#include "../util/profiling.h"
#include "../util/serialization.h"
#include "compressed_data.h"
#include <core/io/marshalls.h>
@ -15,7 +16,7 @@
namespace zylann::voxel {
namespace BlockSerializer {
const uint8_t BLOCK_FORMAT_VERSION = 2;
const uint8_t BLOCK_FORMAT_VERSION = 3;
const unsigned int BLOCK_TRAILING_MAGIC = 0x900df00d;
const unsigned int BLOCK_TRAILING_MAGIC_SIZE = 4;
const unsigned int BLOCK_METADATA_HEADER_SIZE = sizeof(uint32_t);
@ -299,6 +300,94 @@ SerializeResult serialize(const VoxelBufferInternal &voxel_buffer) {
return SerializeResult(dst_data, true);
}
bool migrate_v2_to_v3(Span<const uint8_t> p_data, std::vector<uint8_t> &dst) {
// In v2, SDF data was using a legacy arbitrary formula to encode fixed-point numbers.
// In v3, it now uses inorm8 and inorm16.
// Serialized size does not change.
// Constants used at the time of this version
const unsigned int channel_count = 8;
const unsigned int sdf_channel_index = 2;
const unsigned int no_compression = 0;
const unsigned int uniform_compression = 1;
dst.resize(p_data.size());
memcpy(dst.data(), p_data.data(), p_data.size());
MemoryReader mr(p_data, ENDIANESS_LITTLE_ENDIAN);
const uint8_t rv = mr.get_8(); // version
CRASH_COND(rv != 2);
dst[0] = 3;
const unsigned short size_x = mr.get_16(); // size_x
const unsigned short size_y = mr.get_16(); // size_y
const unsigned short size_z = mr.get_16(); // size_z
const unsigned int volume = size_x * size_y * size_z;
for (unsigned int channel_index = 0; channel_index < channel_count; ++channel_index) {
const uint8_t fmt = mr.get_8();
const uint8_t compression_value = fmt & 0xf;
const uint8_t depth_value = (fmt >> 4) & 0xf;
ERR_FAIL_INDEX_V(compression_value, 2, false);
ERR_FAIL_INDEX_V(depth_value, 4, false);
const unsigned int voxel_size = 1 << depth_value;
if (channel_index == sdf_channel_index) {
ByteSpanWithPosition dst2(to_span(dst), mr.pos);
MemoryWriterExistingBuffer mw(dst2, ENDIANESS_LITTLE_ENDIAN);
if (compression_value == no_compression) {
switch (depth_value) {
case 0:
for (unsigned int i = 0; i < volume; ++i) {
mw.store_8(snorm_to_s8(legacy::u8_to_snorm(mr.get_8())));
}
break;
case 1:
for (unsigned int i = 0; i < volume; ++i) {
mw.store_16(snorm_to_s16(legacy::u16_to_snorm(mr.get_16())));
}
break;
case 2:
case 3:
// Depths above 16bit use floats, just skip them
mr.pos += voxel_size * volume;
break;
}
} else if (compression_value == uniform_compression) {
switch (depth_value) {
case 0:
mw.store_8(snorm_to_s8(legacy::u8_to_snorm(mr.get_8())));
break;
case 1:
mw.store_16(snorm_to_s16(legacy::u16_to_snorm(mr.get_16())));
break;
case 2:
case 3:
// Depths above 16bit use floats, just skip them
mr.pos += voxel_size;
break;
}
}
} else {
// Skip
if (compression_value == no_compression) {
mr.pos += voxel_size * volume;
} else if (compression_value == uniform_compression) {
mr.pos += voxel_size;
}
}
}
return true;
}
bool deserialize(Span<const uint8_t> p_data, VoxelBufferInternal &out_voxel_buffer) {
VOXEL_PROFILE_SCOPE();
@ -313,28 +402,22 @@ bool deserialize(Span<const uint8_t> p_data, VoxelBufferInternal &out_voxel_buff
ERR_FAIL_COND_V(f->open_custom(p_data.data(), p_data.size()) != OK, false);
const uint8_t format_version = f->get_8();
if (format_version < 2) {
// In version 1, the first thing coming in block data is the compression value of the first channel.
// At the time, there were only 2 values this could take: 0 and 1.
// So we can recognize blocks using this old format and seek back.
// Formats before 2 also did not contain bit depth, they only had compression, leaving high nibble to 0.
// This means version 2 will read only 8-bit depth from the old block.
// "Fortunately", the old format also did not properly serialize formats using more than 8 bits.
// So we are kinda set to migrate without much changes, by assuming the block is already formatted properly.
f->seek(f->get_position() - 1);
WARN_PRINT("Reading block format_version < 2. Attempting to migrate.");
if (format_version == 2) {
std::vector<uint8_t> migrated_data;
ERR_FAIL_COND_V(!migrate_v2_to_v3(p_data, migrated_data), false);
return deserialize(to_span_const(migrated_data), out_voxel_buffer);
} else {
ERR_FAIL_COND_V(format_version != BLOCK_FORMAT_VERSION, false);
const unsigned int size_x = f->get_16();
const unsigned int size_y = f->get_16();
const unsigned int size_z = f->get_16();
out_voxel_buffer.create(Vector3i(size_x, size_y, size_z));
}
const unsigned int size_x = f->get_16();
const unsigned int size_y = f->get_16();
const unsigned int size_z = f->get_16();
out_voxel_buffer.create(Vector3i(size_x, size_y, size_z));
for (unsigned int channel_index = 0; channel_index < VoxelBufferInternal::MAX_CHANNELS; ++channel_index) {
const uint8_t fmt = f->get_8();
const uint8_t compression_value = fmt & 0xf;

View File

@ -20,15 +20,16 @@ inline Endianess get_platform_endianess() {
// TODO In C++20 we'll be able to use std::endian
}
struct MemoryWriter {
std::vector<uint8_t> &data;
template <typename Container_T>
struct MemoryWriterTemplate {
Container_T &data;
// Using network-order by default
// TODO Apparently big-endian is dead
// I chose it originally to match "network byte order",
// but as I read comments about it there seems to be no reason to continue using it. Needs a version increment.
Endianess endianess = ENDIANESS_BIG_ENDIAN;
MemoryWriter(std::vector<uint8_t> &p_data, Endianess p_endianess) : data(p_data), endianess(p_endianess) {}
MemoryWriterTemplate(Container_T &p_data, Endianess p_endianess) : data(p_data), endianess(p_endianess) {}
inline void store_8(uint8_t v) {
data.push_back(v);
@ -68,6 +69,25 @@ struct MemoryWriter {
}
};
// Default
typedef MemoryWriterTemplate<std::vector<uint8_t>> MemoryWriter;
struct ByteSpanWithPosition {
Span<uint8_t> data;
size_t pos = 0;
ByteSpanWithPosition(Span<uint8_t> p_data, size_t initial_pos) : data(p_data), pos(initial_pos) {}
inline void push_back(uint8_t v) {
#ifdef DEBUG_ENABLED
CRASH_COND(pos == data.size());
#endif
data[pos++] = v;
}
};
typedef MemoryWriterTemplate<ByteSpanWithPosition> MemoryWriterExistingBuffer;
struct MemoryReader {
Span<const uint8_t> data;
size_t pos = 0;
@ -77,12 +97,12 @@ struct MemoryReader {
MemoryReader(Span<const uint8_t> p_data, Endianess p_endianess) : data(p_data), endianess(p_endianess) {}
inline uint8_t get_8() {
ERR_FAIL_COND_V(pos >= data.size(), 0);
//ERR_FAIL_COND_V(pos >= data.size(), 0);
return data[pos++];
}
inline uint16_t get_16() {
ERR_FAIL_COND_V(pos + 1 >= data.size(), 0);
//ERR_FAIL_COND_V(pos + 1 >= data.size(), 0);
uint16_t v;
if (endianess == ENDIANESS_BIG_ENDIAN) {
v = (static_cast<uint16_t>(data[pos]) << 8) | data[pos + 1];
@ -94,7 +114,7 @@ struct MemoryReader {
}
inline uint32_t get_32() {
ERR_FAIL_COND_V(pos + 3 >= data.size(), 0);
//ERR_FAIL_COND_V(pos + 3 >= data.size(), 0);
uint32_t v;
if (endianess == ENDIANESS_BIG_ENDIAN) {
v = //