Rather than creating an effect-specific buffer that gets passed along as a
property, the buffer is set on the effect state when the effect state is created,
the device is updated, or the buffer is changed. The buffer can only be set
while the effect slot isn't playing, so it won't be changed or updated while
the mixer is processing the effect state.
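
A rough sketch of the idea follows; EffectState, EffectSlot, and the member
names are invented stand-ins for illustration, not the actual OpenAL Soft
types:

    // Illustrative sketch only; type and member names are invented.
    struct EffectBuffer;

    struct EffectState {
        EffectBuffer *mBuffer{nullptr};

        // Called when the state is created, the device is updated, or the
        // buffer is changed -- never while the slot is playing.
        void setBuffer(EffectBuffer *buffer) { mBuffer = buffer; }

        void process()
        {
            // The mixer can read mBuffer here without extra synchronization,
            // since setBuffer is only ever called while the slot isn't
            // playing.
        }
    };

    struct EffectSlot {
        bool mPlaying{false};
        EffectState *mState{nullptr};

        bool setBuffer(EffectBuffer *buffer)
        {
            if(mPlaying)
                return false; // refuse to swap the buffer while processing
            mState->setBuffer(buffer);
            return true;
        }
    };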
A newly-created effect slot is in the AL_INITIAL state, in which processing is
stopped; it automatically becomes AL_PLAYING after an AL_EFFECTSLOT_EFFECT
value (including AL_EFFECT_NULL or 0) is successfully set. Calling Play[v] or
Stop[v] will set the effect slot to AL_PLAYING or AL_STOPPED respectively.
While stopped, the effect won't produce audio and will not be processed.
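
For illustration, the lifecycle could look like the following at the API
level. alGenAuxiliaryEffectSlots, alAuxiliaryEffectSloti, and
AL_EFFECTSLOT_EFFECT are standard EFX; the Play/Stop entry points are written
as alAuxiliaryEffectSlotPlaySOFT/alAuxiliaryEffectSlotStopSOFT purely as
assumed names for the Play[v]/Stop[v] calls described above:

    /* Illustration only; the Play/Stop function names are assumed stand-ins
     * for the Play[v]/Stop[v] calls mentioned above. */
    ALuint slot{};
    alGenAuxiliaryEffectSlots(1, &slot);  /* new slot: AL_INITIAL, not processing */

    /* Successfully setting an effect -- even no effect at all -- moves an
     * AL_INITIAL slot to AL_PLAYING automatically. */
    alAuxiliaryEffectSloti(slot, AL_EFFECTSLOT_EFFECT, 0 /* AL_EFFECT_NULL */);

    alAuxiliaryEffectSlotStopSOFT(slot);  /* AL_STOPPED: no audio, not processed */
    alAuxiliaryEffectSlotPlaySOFT(slot);  /* resume processing: AL_PLAYING */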
Rather than allocating a full 8 channels for each voice, allocate only the
channels a voice actually needs, since the vast majority will only use 1 or 2.
The voice channel data is relatively big since it needs to hold HRTF
coefficients and history, so this allows increasing the maximum number of
buffer channels without an obscene memory increase.
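
A minimal sketch of the allocation change, using invented type names and
arbitrary HRTF sizes rather than the real voice structures:

    // Illustrative sketch only; names and sizes are invented.
    #include <array>
    #include <cstddef>
    #include <memory>

    constexpr std::size_t HrtfIrSize{128};
    constexpr std::size_t HrtfHistoryLength{256};

    struct VoiceChannelData {
        // HRTF coefficients and delay history make each channel's state
        // comparatively large.
        std::array<float,HrtfIrSize> hrtfCoeffs{};
        std::array<float,HrtfHistoryLength> hrtfHistory{};
    };

    struct Voice {
        // Before: std::array<VoiceChannelData,8> mChans; -- 8 channels'
        // worth of HRTF state per voice, even for the mono/stereo common
        // case.  After: size the allocation to the channels actually used,
        // so raising the channel limit doesn't multiply every voice's size.
        std::unique_ptr<VoiceChannelData[]> mChans;
        std::size_t mNumChannels{0};

        void prepare(std::size_t num_channels)
        {
            mChans = std::make_unique<VoiceChannelData[]>(num_channels);
            mNumChannels = num_channels;
        }
    };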
When starting a voice, the source ID was set before its first update struct was
provided, creating a small window where a listener or effect slot update could
force a voice to update without it having any valid properties to update with.
Supplying the update struct first would create a different race, where the
mixer could see a voice without a source but with an update struct, causing the
update struct to be 'freed' without being applied.
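
A condensed sketch of the two broken orderings, with invented Voice fields
and function names, marking where each window opens:

    // Illustrative only; field and function names are invented.
    #include <atomic>

    struct VoiceProps;

    struct Voice {
        std::atomic<unsigned int> mSourceID{0u};
        std::atomic<VoiceProps*> mUpdate{nullptr};
    };

    // Ordering A: source ID first.  Between the two stores, a listener or
    // effect slot change can force this voice to update even though it has
    // no valid properties yet.
    void start_voice_a(Voice *voice, unsigned int sid, VoiceProps *props)
    {
        voice->mSourceID.store(sid, std::memory_order_relaxed);
        /* <- window: voice has a source but no props */
        voice->mUpdate.store(props, std::memory_order_release);
    }

    // Ordering B: update struct first.  Between the two stores, the mixer
    // can see a source-less voice holding a pending update and "free" the
    // struct without ever applying it.
    void start_voice_b(Voice *voice, unsigned int sid, VoiceProps *props)
    {
        voice->mUpdate.store(props, std::memory_order_release);
        /* <- window: mixer sees no source, discards the pending update */
        voice->mSourceID.store(sid, std::memory_order_release);
    }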
The fix here is to provide the update struct before setting the source ID, and
change the mixer to ignore update structs for voices without a source ID. This
can pseudo-orphan the updates that get set on a voice just as it stops,
leaving the struct unusable until the voice is used again or gets deleted,
which will clear it. But it allows the update struct to stay in place and get
applied once the voice gets a source ID.
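
A sketch of the fix, continuing the invented Voice/VoiceProps names from the
sketch above: store the update struct before the source ID, and have the
mixer leave any pending update untouched while the voice has no source ID.

    // Assumed helper, not a real function: applies then recycles the props.
    void apply_props(Voice *voice, VoiceProps *props);

    void start_voice_fixed(Voice *voice, unsigned int sid, VoiceProps *props)
    {
        // Update struct first, then the source ID, so the voice never looks
        // startable without valid properties.
        voice->mUpdate.store(props, std::memory_order_relaxed);
        voice->mSourceID.store(sid, std::memory_order_release);
    }

    void mixer_process_voice(Voice *voice)
    {
        const unsigned int sid{voice->mSourceID.load(std::memory_order_acquire)};
        if(sid == 0)
            return; // no source: leave any pending update in place

        VoiceProps *props{voice->mUpdate.exchange(nullptr,
            std::memory_order_acq_rel)};
        if(props)
            apply_props(voice, props);
    }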