It's currently the only implementation, so there's no point in storing it as
a function pointer in the filter struct. Even if there were SIMD versions,
the selection would be global, not per-instance.
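
A minimal sketch of the change, using hypothetical names rather than the
library's actual filter types: the per-instance process pointer is dropped
from the struct and callers invoke the single implementation directly.

    /* Hypothetical sketch; FilterState and FilterProcess are illustrative
     * names. The function pointer member that used to live in the struct
     * is removed, since there is only one implementation and any future
     * SIMD variant would be selected globally, not per filter instance.
     */
    typedef struct FilterState {
        float b0, b1, b2, a1, a2; /* biquad coefficients */
        float x1, x2, y1, y2;     /* input/output history */
    } FilterState;

    static inline float FilterProcess(FilterState *f, float in)
    {
        float out = in*f->b0 + f->x1*f->b1 + f->x2*f->b2
                             - f->y1*f->a1 - f->y2*f->a2;
        f->x2 = f->x1; f->x1 = in;
        f->y2 = f->y1; f->y1 = out;
        return out;
    }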
The decorrelator line was merely acting as an extension of the main delay
line anyway, with the second delay line tap (for late reverb) copying
attenuated samples into the decorrelator line that was then tapped off of.
So just extend the main delay line and offset the decorrelator taps to be
relative to the late reverb tap.
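
A rough sketch of the resulting layout, with hypothetical names: there is
one delay line, and the decorrelator reads become additional offsets behind
the late reverb tap.

    /* Hypothetical sketch; DelayLine, ReverbState and the field names are
     * illustrative. The separate decorrelator line is gone; its taps are
     * now offsets applied relative to the late reverb tap of the single,
     * extended delay line.
     */
    typedef struct DelayLine {
        unsigned int Mask;   /* power-of-two length minus one */
        float *Line;
    } DelayLine;

    static inline float DelayOut(const DelayLine *d, unsigned int offset)
    {
        return d->Line[offset & d->Mask];
    }

    typedef struct ReverbState {
        DelayLine Delay;             /* single, extended delay line */
        unsigned int LateDelayTap;   /* offset of the late reverb tap */
        unsigned int DecoTap[3];     /* decorrelator offsets past that tap */
    } ReverbState;

    static void ReadDecorrelated(const ReverbState *st, unsigned int base,
                                 float out[3])
    {
        /* Position of the late reverb tap for this sample. */
        unsigned int late = base - st->LateDelayTap;
        for(int i = 0;i < 3;i++)
            out[i] = DelayOut(&st->Delay, late - st->DecoTap[i]);
    }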
Ideally the band-pass should probably happen closer to the output, like the
gain does. However, doing that would require 16 filters (4 early + 4 late
output channels, each with a low-pass and a high-pass filter), compared to
the two needed to do it on the input.
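
For illustration, a sketch of the input-side filtering (FilterState and
FilterProcess are the hypothetical biquad helpers from the sketch above):
the single reverb input needs only one low-pass and one high-pass filter,
versus a pair for each of the 8 output channels.

    /* Illustrative only; the helpers are assumed, not the library's API. */
    typedef struct FilterState FilterState;
    float FilterProcess(FilterState *f, float in);

    static float BandPassInput(FilterState *lp, FilterState *hp, float in)
    {
        float sample = FilterProcess(lp, in); /* attenuate highs */
        return FilterProcess(hp, sample);     /* attenuate lows  */
    }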
Currently incomplete, as second- and third-order output will not correctly
handle B-Format input buffers. A standalone up-sampler will be needed, similar
to the high-quality decoder.
Also, output is ACN ordering with SN3D normalization. A config option will
eventually be provided to change this if desired.
Similar to the listener, update containers are handed to the mixer thread
atomically so it can apply property updates without needing to block, and a
free-list is used to reuse the container objects.
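
A minimal sketch of the idea using C11 atomics (the struct and field names
are hypothetical, not the library's): the mixer atomically exchanges out any
pending update container, applies it, and pushes the container onto a
free-list for reuse.

    /* Hypothetical sketch using C11 atomics; names are illustrative. */
    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct SlotProps {
        float Gain;                      /* ...and other properties... */
        _Atomic(struct SlotProps*) next; /* free-list link */
    } SlotProps;

    typedef struct EffectSlot {
        _Atomic(SlotProps*) Update;      /* pending update, or NULL */
        _Atomic(SlotProps*) FreeList;    /* containers ready for reuse */
        float Gain;                      /* active parameters */
    } EffectSlot;

    /* Mixer thread: grab any pending update without blocking, apply it,
     * then return the container to the free-list for the application
     * thread to reuse.
     */
    static void ApplyPendingUpdate(EffectSlot *slot)
    {
        SlotProps *props = atomic_exchange(&slot->Update, NULL);
        if(!props) return;

        slot->Gain = props->Gain;        /* ...copy the rest likewise... */

        SlotProps *head = atomic_load(&slot->FreeList);
        do {
            atomic_store(&props->next, head);
        } while(!atomic_compare_exchange_weak(&slot->FreeList, &head, props));
    }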
A couple of things to note. First, the lock is still used when the effect
state's deviceUpdate method is called, to prevent asynchronous calls that
reset the device from interfering. This can be fixed by using the list lock
in ALc.c instead. Second, old effect states aren't immediately deleted when
the effect type changes (the actual type, not just its properties). This is
because the mixer thread is intended to be real-time safe and so can't free
anything. The old states are cleared away when updates reuse the container
they were kept in, and they don't incur any extra processing cost, but there
may be cases where the memory is kept around until the effect slot is
deleted.
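
A hedged sketch of that cleanup path (again with hypothetical names): the
replaced effect state rides along in the container and is only destroyed on
the application thread, when the container is popped off the free-list for
reuse, never on the mixer thread.

    /* Hypothetical sketch; EffectState, SlotProps and the helpers are
     * illustrative. Freeing happens only here, on the application thread,
     * when a container is reused; the real-time mixer never frees memory.
     */
    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct EffectState EffectState;
    void EffectState_Destroy(EffectState *state);   /* assumed destructor */

    typedef struct SlotProps {
        EffectState *State;              /* new state, or the replaced one */
        _Atomic(struct SlotProps*) next;
    } SlotProps;

    static SlotProps *GetUpdateContainer(_Atomic(SlotProps*) *freelist)
    {
        SlotProps *props = atomic_load(freelist);
        while(props && !atomic_compare_exchange_weak(freelist, &props,
                            atomic_load(&props->next)))
            ;
        if(!props)
            return calloc(1, sizeof(*props));

        if(props->State)
        {
            /* The mixer left the old effect state here when the type
             * changed; it can be destroyed now, outside the real-time path.
             */
            EffectState_Destroy(props->State);
            props->State = NULL;
        }
        return props;
    }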
The real FrontCenter output is used if it exists; if it doesn't, it's
unlikely the dry buffer will have one (and even if it does, it won't be any
better than panning).
For ambisonic mixing buffers, where we know only one coefficient per channel
will have a non-zero factor, just index that one coefficient directly
instead of looping over all the coefficients with multiplies for each
channel.
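
A short sketch of the shortcut, with illustrative names: for an ambisonic
mixing buffer the coefficient array has a single non-zero entry, so the
per-channel loop collapses to one indexed accumulate.

    /* Illustrative sketch; names are not the library's. 'lidx' is the index
     * of the one non-zero coefficient and 'gain' is its value, so the usual
     * loop over every output channel reduces to a single accumulate.
     */
    static void MixAmbiDirect(float *restrict OutBuffer[], int lidx,
                              float gain, const float *restrict data,
                              int SamplesToDo)
    {
        /* Full form for comparison:
         *   for(c = 0;c < NumChannels;c++)
         *       for(i = 0;i < SamplesToDo;i++)
         *           OutBuffer[c][i] += coeffs[c]*data[i];
         * where coeffs[c] is zero for every channel except lidx.
         */
        for(int i = 0;i < SamplesToDo;i++)
            OutBuffer[lidx][i] += gain*data[i];
    }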
This is less than ideal, but matching each reverb line to a speaker with
surround sound output is way too loud without the ambient volume scaling
offered by the "direct" panning.
This uses a virtual B-Format buffer for mixing, and then uses a dual-band
decoder for improved positional quality. This currently only works with first-
order output since first-order input (from the AL_EXT_BFORMAT extension) would
not sound correct when fed through a second- or third-order decoder.
This also does not currently implement near-field compensation since near-field
rendering effects are not implemented.
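
A rough sketch of a dual-band first-order decode under these assumptions
(the band splitter and matrix names are hypothetical, not the library's
actual decoder interface): each B-Format channel is split into low and high
bands, each band is decoded with its own matrix, and the results are summed
per speaker.

    /* Hypothetical sketch; BandSplitter, BandSplit and the matrices are
     * illustrative only.
     */
    #define NUM_AMBI_CHANS  4     /* first-order: W, Y, Z, X (ACN) */
    #define NUM_SPEAKERS    8     /* example output count */
    #define MAX_BLOCK       256   /* assumed max samples per call */

    typedef struct BandSplitter BandSplitter;
    /* Assumed helper: splits 'in' into low- and high-frequency bands. */
    void BandSplit(BandSplitter *bs, float *lo, float *hi,
                   const float *in, int count);

    static void DualBandDecode(BandSplitter *splitters,
                               const float LfMtx[NUM_SPEAKERS][NUM_AMBI_CHANS],
                               const float HfMtx[NUM_SPEAKERS][NUM_AMBI_CHANS],
                               float *const *ambi, float *const *out,
                               int count)
    {
        float lo[MAX_BLOCK], hi[MAX_BLOCK];
        for(int c = 0;c < NUM_AMBI_CHANS;c++)
        {
            BandSplit(&splitters[c], lo, hi, ambi[c], count);
            for(int s = 0;s < NUM_SPEAKERS;s++)
            {
                const float lg = LfMtx[s][c], hg = HfMtx[s][c];
                for(int i = 0;i < count;i++)
                    out[s][i] += lg*lo[i] + hg*hi[i];
            }
        }
    }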
This is pretty hacky. Since HRTF normally renders to B-Format with two
"extra" channels for the real stereo output, the panning interpolates
between a panned reverb channel in the B-Format buffer and two non-panned
reverb channels on the stereo output, based on the panning vector length.
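
A hedged sketch of that interpolation with hypothetical names: the panned
B-Format contribution is scaled by the panning vector length, and the two
non-panned stereo contributions are scaled by the remainder.

    /* Hypothetical sketch; the coefficient layout and the equal split
     * between the two stereo channels are assumptions for illustration.
     */
    static void PanReverbHrtf(const float panned[4], float veclen,
                              float gain, float ambigains[4],
                              float stereogains[2])
    {
        /* Panned contribution into the B-Format channels, scaled by the
         * panning vector length (0 = unpanned, 1 = fully panned).
         */
        for(int i = 0;i < 4;i++)
            ambigains[i] = gain*veclen*panned[i];

        /* Non-panned contribution into the two "extra" real stereo
         * outputs, split equally between left and right.
         */
        stereogains[0] = gain*(1.0f - veclen)*0.5f;
        stereogains[1] = gain*(1.0f - veclen)*0.5f;
    }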