Similar to the listener, separate containers are provided atomically for the
mixer thread to apply updates without needing to block, and a free-list is used
to reuse container objects.
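As a rough illustration of that layout, here is a minimal sketch pairing an
atomically-swapped "pending update" pointer with a lock-free free-list of
containers. The SlotProps/EffectSlot names and fields are invented for the
sketch, not the library's actual structs.

    /* Hypothetical layout for the update-container scheme; the real
     * struct and field names differ. */
    #include <stdatomic.h>

    typedef struct SlotProps {
        /* Snapshot of the user-facing properties the mixer should apply. */
        float Gain;
        int   AuxSendAuto;
        void *EffectState;               /* effect state for the mixer to use */

        struct SlotProps *_Atomic next;  /* link for the lock-free free-list */
    } SlotProps;

    typedef struct EffectSlot {
        SlotProps *_Atomic Update;    /* latest pending update, NULL once consumed */
        SlotProps *_Atomic FreeList;  /* containers available for reuse */
    } EffectSlot;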
A couple of things to note. First, the lock is still used when the effect state's
deviceUpdate method is called to prevent asynchronous calls to reset the device
from interfering. This can be fixed by using the list lock in ALc.c instead.
Secondly, old effect states aren't immediately deleted when the effect type
changes (the actual type, not just its properties). This is because the mixer
thread is intended to be real-time safe, and so can't be freeing anything. They
are cleared away when updates reuse the container they were kept in, and they
don't incur any extra processing cost, but there may be cases where the memory
is kept around until the effect slot is deleted.
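A minimal sketch of the mixer-thread side, reusing the hypothetical
SlotProps/EffectSlot types above, shows how the consumed container keeps hold
of the old effect state instead of freeing it on the real-time thread:

    #include <stdatomic.h>

    /* Hypothetical mixer-side view of the slot's active parameters. */
    typedef struct MixerSlotState {
        float Gain;
        void *EffectState;
    } MixerSlotState;

    static void ApplySlotUpdate(EffectSlot *slot, MixerSlotState *active)
    {
        /* Atomically take any pending update; NULL means nothing new. */
        SlotProps *props = atomic_exchange(&slot->Update, NULL);
        if(!props) return;

        /* Swap in the new parameters and effect state. */
        void *oldstate = active->EffectState;
        active->Gain        = props->Gain;
        active->EffectState = props->EffectState;

        /* Park the old state in the container rather than freeing it here;
         * a non-real-time thread cleans it up when the container is reused. */
        props->EffectState = oldstate;

        /* Push the container back onto the free-list (lock-free LIFO). */
        SlotProps *head = atomic_load(&slot->FreeList);
        do {
            atomic_store(&props->next, head);
        } while(!atomic_compare_exchange_weak(&slot->FreeList, &head, props));
    }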
This uses a separate container, swapped in with atomic pointer exchanges, to
provide the relevant properties to the internal update method. A free-list
avoids accumulating too many individual containers.
This allows the mixer to update the internal listener properties without
requiring the lock to protect against async updates. It also allows concurrent
read access to the user-facing property values, even the multi-value ones (e.g.
the vectors).
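For the producer side, a comparable sketch (again with made-up
ListenerData/ListenerProps names) of how an API call might snapshot the values
and publish them, recycling containers through the free-list. In this sketch
only the non-mixer thread pops from the free-list, which keeps the lock-free
pop simple:

    #include <stdatomic.h>
    #include <stdlib.h>

    /* Plain snapshot of the user-facing listener values. */
    typedef struct ListenerData {
        float Position[3], Velocity[3];
        float OrientAt[3], OrientUp[3];
        float Gain;
    } ListenerData;

    typedef struct ListenerProps {
        ListenerData Data;
        struct ListenerProps *_Atomic next;  /* free-list link */
    } ListenerProps;

    typedef struct Listener {
        ListenerProps *_Atomic Update;    /* pending update for the mixer */
        ListenerProps *_Atomic FreeList;  /* containers available for reuse */
    } Listener;

    static void PublishListenerUpdate(Listener *listener, const ListenerData *data)
    {
        /* Pop a container off the free-list, or allocate a fresh one. */
        ListenerProps *props = atomic_load(&listener->FreeList);
        while(props && !atomic_compare_exchange_weak(&listener->FreeList, &props,
                                                     atomic_load(&props->next)))
            ;
        if(!props) props = calloc(1, sizeof(*props));

        /* Fill in the snapshot, then swap it in for the mixer to pick up. */
        props->Data = *data;
        ListenerProps *old = atomic_exchange(&listener->Update, props);

        /* If an older update was never applied, recycle it immediately. */
        if(old)
        {
            ListenerProps *head = atomic_load(&listener->FreeList);
            do {
                atomic_store(&old->next, head);
            } while(!atomic_compare_exchange_weak(&listener->FreeList, &head, old));
        }
    }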
Unfortunately they conflict with AL_EXT_SOURCE_RADIUS, as AL_SOURCE_RADIUS and
AL_BYTE_RW_OFFSETS_SOFT share the same source property value. A replacement for
AL_SOFT_buffer_samples will eventually be made.
Instead of looping over all the coefficients for each channel and multiplying,
when we know only one will have a non-zero factor (as with ambisonic mixing
buffers), just index the one with the non-zero factor directly.
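Roughly, the difference looks like the following sketch (illustrative names
and buffer layout, not the actual mixer code):

    #include <stddef.h>

    #define MAX_OUTPUT_CHANNELS 16

    typedef struct MixGains {
        float Coeffs[MAX_OUTPUT_CHANNELS]; /* full coefficient vector */
        int   Index;  /* output channel holding the single non-zero factor */
        float Gain;   /* that factor */
    } MixGains;

    /* General case: multiply-accumulate the input into every output channel,
     * even though most factors may be zero. */
    static void MixFull(float **out, const float *in, size_t len,
                        const MixGains *gains, size_t numout)
    {
        for(size_t c = 0;c < numout;c++)
        {
            const float g = gains->Coeffs[c];
            for(size_t i = 0;i < len;i++)
                out[c][i] += in[i] * g;
        }
    }

    /* Single-coefficient case: index the one non-zero factor directly. */
    static void MixOne(float **out, const float *in, size_t len,
                       const MixGains *gains)
    {
        float *dst = out[gains->Index];
        const float g = gains->Gain;
        for(size_t i = 0;i < len;i++)
            dst[i] += in[i] * g;
    }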
Could really do with some optimizations to the mixing gain calculations. For
ambisonic targets, the coefficients will only have one non-zero entry for each
output, so the double loop is unnecessarily wasteful. Similarly, most uses
won't need a full height encoding either, so a horizontal-only or mixed-order
target could reduce the number of channels.
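For reference, the usual channel-count relations behind that last point,
assuming the standard ambisonic conventions:

    /* Standard ambisonic channel counts for a given order n:
     *   full-sphere (with height): (n+1)^2 channels
     *   horizontal-only:            2n+1   channels
     * e.g. third order needs 16 channels with height but only 7 without. */
    static inline int AmbiChannelsFull(int order)  { return (order+1)*(order+1); }
    static inline int AmbiChannelsHoriz(int order) { return 2*order + 1; }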
This uses a virtual B-Format buffer for mixing, and then uses a dual-band
decoder for improved positional quality. This currently only works with first-
order output since first-order input (from the AL_EXT_BFORMAT extension) would
not sound correct when fed through a second- or third-order decoder.
This also does not currently implement near-field compensation since near-field
rendering effects are not implemented.
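A minimal dual-band decode sketch follows, assuming a simple complementary
one-pole crossover; the real decoder's band splitter and matrices are more
sophisticated, and the names here are illustrative only:

    #include <stddef.h>

    #define AMBI_CHANNELS 4  /* first-order B-Format: W, X, Y, Z */
    #define MAX_SAMPLES   1024

    /* Simple complementary split: lo is a one-pole low-pass, hi is the
     * residual (in - lo), so lo + hi reconstructs the input exactly. */
    typedef struct BandSplitter {
        float coeff;  /* one-pole coefficient from the crossover frequency */
        float z1;     /* filter state */
    } BandSplitter;

    static void BandSplit(BandSplitter *bs, const float *in,
                          float *lo, float *hi, size_t len)
    {
        for(size_t i = 0;i < len;i++)
        {
            bs->z1 += bs->coeff * (in[i] - bs->z1);
            lo[i] = bs->z1;
            hi[i] = in[i] - bs->z1;
        }
    }

    /* Decode the virtual B-Format buffer: each ambisonic channel is split
     * into two bands, and each band is weighted by its own decode matrix
     * before accumulating into the speaker (or virtual-speaker) feeds. */
    static void DualBandDecode(BandSplitter splitters[AMBI_CHANNELS],
                               const float *ambi[AMBI_CHANNELS],
                               const float (*lfmatrix)[AMBI_CHANNELS],
                               const float (*hfmatrix)[AMBI_CHANNELS],
                               float **speakers, size_t numspeakers, size_t len)
    {
        float lo[MAX_SAMPLES], hi[MAX_SAMPLES];  /* assumes len <= MAX_SAMPLES */

        for(size_t c = 0;c < AMBI_CHANNELS;c++)
        {
            BandSplit(&splitters[c], ambi[c], lo, hi, len);

            for(size_t s = 0;s < numspeakers;s++)
            {
                const float lg = lfmatrix[s][c];
                const float hg = hfmatrix[s][c];
                for(size_t i = 0;i < len;i++)
                    speakers[s][i] += lo[i]*lg + hi[i]*hg;
            }
        }
    }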
There were phase issues caused by applying HRTF directly to the B-Format
channels, since the HRIR delays were all averaged, which removed the
inter-aural time-delay and with it significant spatial information.
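To illustrate the distinction, here is a hypothetical sketch (the Hrir type
and delay handling are invented, not the library's actual HRTF mixer) of
rendering a decoded virtual-speaker feed through its own HRIR. The distinct
per-ear delays are what preserve the inter-aural time-delay, whereas filtering
the B-Format channels directly forces one averaged delay per channel:

    #include <stddef.h>

    #define HRIR_LENGTH 32

    typedef struct Hrir {
        float left[HRIR_LENGTH], right[HRIR_LENGTH];
        size_t ldelay, rdelay;  /* distinct per-ear onset delays, in samples */
    } Hrir;

    /* Accumulate one virtual speaker's feed into the binaural output using
     * that direction's HRIR. The output buffers must have room for
     * len + max delay + HRIR_LENGTH samples. */
    static void MixHrtfSpeaker(float *outl, float *outr, const float *feed,
                               size_t len, const Hrir *hrir)
    {
        for(size_t i = 0;i < len;i++)
        {
            for(size_t j = 0;j < HRIR_LENGTH;j++)
            {
                outl[i + hrir->ldelay + j] += feed[i] * hrir->left[j];
                outr[i + hrir->rdelay + j] += feed[i] * hrir->right[j];
            }
        }
    }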