/******************************************************************************
    Copyright (C) 2015 by Hugh Bailey <obs.jim@gmail.com>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#include <inttypes.h>
#include "obs-internal.h"
#include "util/util_uint64.h"
struct ts_info {
	uint64_t start;
	uint64_t end;
};

#define DEBUG_AUDIO 0
#define DEBUG_LAGGED_AUDIO 0
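/* cap on dynamically added buffering, in ticks of AUDIO_OUTPUT_FRAMES frames
 * (assuming the usual 1024-frame tick at 48 kHz, roughly 960 ms) */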
#define MAX_BUFFERING_TICKS 45
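
/* enumeration callback: adds each source in the tree to the audio render
 * order exactly once, taking a reference so the source stays alive for the
 * remainder of the tick */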
static void push_audio_tree(obs_source_t *parent, obs_source_t *source, void *p)
{
	struct obs_core_audio *audio = p;

	if (da_find(audio->render_order, &source, 0) == DARRAY_INVALID) {
		obs_source_t *s = obs_source_get_ref(source);
		if (s)
			da_push_back(audio->render_order, &s);
	}

	UNUSED_PARAMETER(parent);
}
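
/* converts a duration in nanoseconds to a frame count at the given sample
 * rate (frames = t * sample_rate / 10^9, e.g. 1000000000 ns at 48000 Hz is
 * 48000 frames); util_mul_div64 keeps the intermediate multiply from
 * overflowing 64 bits */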
static inline size_t convert_time_to_frames(size_t sample_rate, uint64_t t)
{
	return (size_t)util_mul_div64(t, sample_rate, 1000000000ULL);
}
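
/* adds one tick's worth of a source's rendered audio into every mix buffer.
 * a source whose timestamp lies outside [ts->start, ts->end) is skipped; one
 * that starts partway into the tick is mixed from the matching frame offset */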
static inline void mix_audio(struct audio_output_data *mixes,
			     obs_source_t *source, size_t channels,
			     size_t sample_rate, struct ts_info *ts)
{
	size_t total_floats = AUDIO_OUTPUT_FRAMES;
	size_t start_point = 0;

	if (source->audio_ts < ts->start || ts->end <= source->audio_ts)
		return;

	if (source->audio_ts != ts->start) {
		start_point = convert_time_to_frames(
			sample_rate, source->audio_ts - ts->start);
		if (start_point == AUDIO_OUTPUT_FRAMES)
			return;

		total_floats -= start_point;
	}

	for (size_t mix_idx = 0; mix_idx < MAX_AUDIO_MIXES; mix_idx++) {
		for (size_t ch = 0; ch < channels; ch++) {
			register float *mix = mixes[mix_idx].data[ch];
			register float *aud =
				source->audio_output_buf[mix_idx][ch];
			register float *end;

			mix += start_point;
			end = aud + total_floats;

			while (aud < end)
				*(mix++) += *(aud++);
		}
	}
}
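
/* called when a source lags behind the current tick at max buffering: drops
 * just enough queued audio to bring the source's timestamp up to start_ts.
 * returns true if the source is back in sync; otherwise marks it pending and
 * clears its timing so the timestamps can be rebuilt */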
static bool ignore_audio(obs_source_t *source, size_t channels,
			 size_t sample_rate, uint64_t start_ts)
{
	size_t num_floats = source->audio_input_buf[0].size / sizeof(float);
	const char *name = obs_source_get_name(source);

	if (!source->audio_ts && num_floats) {
#if DEBUG_LAGGED_AUDIO == 1
		blog(LOG_DEBUG, "[src: %s] no timestamp, but audio available?",
		     name);
#endif
		for (size_t ch = 0; ch < channels; ch++)
			circlebuf_pop_front(&source->audio_input_buf[ch], NULL,
					    source->audio_input_buf[0].size);
		source->last_audio_input_buf_size = 0;
		return false;
	}

	if (num_floats) {
		/* round up the number of samples to drop */
		size_t drop = util_mul_div64(start_ts - source->audio_ts - 1,
					     sample_rate, 1000000000ULL) +
			      1;
		if (drop > num_floats)
			drop = num_floats;

#if DEBUG_LAGGED_AUDIO == 1
		blog(LOG_DEBUG,
		     "[src: %s] ignored %" PRIu64 "/%" PRIu64 " samples", name,
		     (uint64_t)drop, (uint64_t)num_floats);
#endif
		for (size_t ch = 0; ch < channels; ch++)
			circlebuf_pop_front(&source->audio_input_buf[ch], NULL,
					    drop * sizeof(float));

		source->last_audio_input_buf_size = 0;
		source->audio_ts +=
			util_mul_div64(drop, 1000000000ULL, sample_rate);
		blog(LOG_DEBUG, "[src: %s] ts lag after ignoring: %" PRIu64,
		     name, start_ts - source->audio_ts);

		/* rounding error, adjust */
		if (source->audio_ts == (start_ts - 1))
			source->audio_ts = start_ts;

		/* source is back in sync */
		if (source->audio_ts >= start_ts)
			return true;
	} else {
#if DEBUG_LAGGED_AUDIO == 1
		blog(LOG_DEBUG, "[src: %s] no samples to ignore! ts = %" PRIu64,
		     name, source->audio_ts);
#endif
	}

	if (!source->audio_pending || num_floats) {
		blog(LOG_WARNING,
		     "Source %s audio is lagging (over by %.02f ms) "
		     "at max audio buffering. Restarting source audio.",
		     name, (start_ts - source->audio_ts) / 1000000.);
	}

	source->audio_pending = true;
	source->audio_ts = 0;
	/* tell the timestamp adjustment code in source_output_audio_data to
	 * reset everything, and hopefully fix the timestamps */
	source->timing_set = false;
	return false;
}
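
/* detects a source whose audio has stopped: if the pending buffer size is
 * unchanged since the last tick, waits one extra tick (pending_stop), then
 * clears the queued audio and resets the source's audio timestamp */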
static bool discard_if_stopped(obs_source_t *source, size_t channels)
{
	size_t last_size;
	size_t size;

	last_size = source->last_audio_input_buf_size;
	size = source->audio_input_buf[0].size;

	if (!size)
		return false;

	/* if perpetually pending data, it means the audio has stopped,
	 * so clear the audio data */
	if (last_size == size) {
		if (!source->pending_stop) {
			source->pending_stop = true;
#if DEBUG_AUDIO == 1
			blog(LOG_DEBUG, "doing pending stop trick: '%s'",
			     source->context.name);
#endif
			return false;
		}

		for (size_t ch = 0; ch < channels; ch++)
			circlebuf_pop_front(&source->audio_input_buf[ch], NULL,
					    source->audio_input_buf[ch].size);

		source->pending_stop = false;
		source->audio_ts = 0;
		source->last_audio_input_buf_size = 0;
#if DEBUG_AUDIO == 1
		blog(LOG_DEBUG, "source audio data appears to have "
				"stopped, clearing");
#endif
		return true;
	} else {
		source->last_audio_input_buf_size = size;
		return false;
	}
}
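
/* byte size of one full tick of float samples for a single channel */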
#define MAX_AUDIO_SIZE (AUDIO_OUTPUT_FRAMES * sizeof(float))
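
/* pops this tick's consumed audio off the front of a source's circular input
 * buffers and advances its timestamp to ts->end; sources with an audio_render
 * callback (composite sources) handle their own audio, so only their
 * timestamp is reset */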
static inline void discard_audio(struct obs_core_audio *audio,
				 obs_source_t *source, size_t channels,
				 size_t sample_rate, struct ts_info *ts)
{
	size_t total_floats = AUDIO_OUTPUT_FRAMES;
	size_t size;
	/* debug assert only */
	UNUSED_PARAMETER(audio);

#if DEBUG_AUDIO == 1
	bool is_audio_source = source->info.output_flags & OBS_SOURCE_AUDIO;
#endif

	if (source->info.audio_render) {
		source->audio_ts = 0;
		return;
	}

	if (ts->end <= source->audio_ts) {
#if DEBUG_AUDIO == 1
		blog(LOG_DEBUG,
		     "can't discard, source "
		     "timestamp (%" PRIu64 ") >= "
		     "end timestamp (%" PRIu64 ")",
		     source->audio_ts, ts->end);
#endif
		return;
	}

	if (source->audio_ts < (ts->start - 1)) {
		if (source->audio_pending &&
		    source->audio_input_buf[0].size < MAX_AUDIO_SIZE &&
		    discard_if_stopped(source, channels))
			return;

#if DEBUG_AUDIO == 1
		if (is_audio_source) {
			blog(LOG_DEBUG,
			     "can't discard, source "
			     "timestamp (%" PRIu64 ") < "
			     "start timestamp (%" PRIu64 ")",
			     source->audio_ts, ts->start);
		}

		/* ignore_audio should have already run and marked this source
		 * pending, unless we *just* added buffering */
		assert(audio->total_buffering_ticks < MAX_BUFFERING_TICKS ||
		       source->audio_pending || !source->audio_ts ||
		       audio->buffering_wait_ticks);
#endif
		return;
	}

	if (source->audio_ts != ts->start &&
	    source->audio_ts != (ts->start - 1)) {
		size_t start_point = convert_time_to_frames(
			sample_rate, source->audio_ts - ts->start);
		if (start_point == AUDIO_OUTPUT_FRAMES) {
#if DEBUG_AUDIO == 1
			if (is_audio_source)
				blog(LOG_DEBUG, "can't discard, start point is "
						"at audio frame count");
#endif
			return;
		}

		total_floats -= start_point;
	}

	size = total_floats * sizeof(float);

	if (source->audio_input_buf[0].size < size) {
		if (discard_if_stopped(source, channels))
			return;

#if DEBUG_AUDIO == 1
		if (is_audio_source)
			blog(LOG_DEBUG, "can't discard, data still pending");
#endif
		source->audio_ts = ts->end;
		return;
	}

	for (size_t ch = 0; ch < channels; ch++)
		circlebuf_pop_front(&source->audio_input_buf[ch], NULL, size);

	source->last_audio_input_buf_size = 0;

#if DEBUG_AUDIO == 1
	if (is_audio_source)
		blog(LOG_DEBUG, "audio discarded, new ts: %" PRIu64, ts->end);
#endif

	source->pending_stop = false;
	source->audio_ts = ts->end;
}
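
/* grows global audio buffering when the minimum source timestamp (min_ts)
 * falls behind the current tick. the shortfall is converted to whole ticks,
 * rounding up, and capped at MAX_BUFFERING_TICKS; e.g. assuming 1024-frame
 * ticks at 48 kHz, a 50 ms shortfall is 2400 frames, i.e. 3 ticks (~64 ms) */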
static void add_audio_buffering(struct obs_core_audio *audio,
|
2018-09-20 23:36:20 +02:00
|
|
|
size_t sample_rate, struct ts_info *ts,
|
|
|
|
uint64_t min_ts, const char *buffering_name)
|
libobs: Implement new audio subsystem
The new audio subsystem fixes two issues:
- First Primary issue it fixes is the ability for parent sources to
intercept the audio of child sources, and do custom processing on
them. The main reason for this was the ability to do custom
cross-fading in transitions, but it's also useful for things such as
side-chain effects, applying audio effects to entire scenes, applying
scene-specific audio filters on sub-sources, and other such
possibilities.
- The secondary issue that needed fixing was audio buffering.
Previously, audio buffering was always a fixed buffer size, so it
would always have exactly a certain number of milliseconds of audio
buffering (and thus output delay). Instead, it now dynamically
increases audio buffering only as necessary, minimizing output delay,
and removing the need for users to have to worry about an audio
buffering setting.
The new design makes it so that audio from the leaves of the scene graph
flow to the root nodes, and can be intercepted by parent sources. Each
audio source handles its own buffering, and each audio tick a specific
number of audio frames are popped from the front of the circular buffer
on each audio source. Composite sources (such as scenes) can access the
audio for child sources and do custom processing or mixing on that
audio. Composite sources use the audio_render callback of sources to do
synchronous or deferred audio processing per audio tick. Things like
scenes now mix audio from their sub-sources.
2015-12-20 03:06:35 -08:00
|
|
|
{
|
|
|
|
struct ts_info new_ts;
|
|
|
|
uint64_t offset;
|
|
|
|
uint64_t frames;
|
2016-02-04 11:45:36 -08:00
|
|
|
size_t total_ms;
|
|
|
|
size_t ms;
	int ticks;

	if (audio->total_buffering_ticks == MAX_BUFFERING_TICKS)
		return;

	if (!audio->buffering_wait_ticks)
		audio->buffered_ts = ts->start;

	offset = ts->start - min_ts;
	frames = ns_to_audio_frames(sample_rate, offset);
	ticks = (int)((frames + AUDIO_OUTPUT_FRAMES - 1) / AUDIO_OUTPUT_FRAMES);
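	/* e.g. (illustrative figures, assuming AUDIO_OUTPUT_FRAMES is 1024):
	 * at a 48 kHz sample rate, a source lagging 30 ms behind gives
	 * offset = 30,000,000 ns -> frames = 1440 ->
	 * ticks = ceil(1440 / 1024) = 2, i.e. ~42 ms of added buffering */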

	audio->total_buffering_ticks += ticks;

	if (audio->total_buffering_ticks >= MAX_BUFFERING_TICKS) {
		ticks -= audio->total_buffering_ticks - MAX_BUFFERING_TICKS;
		audio->total_buffering_ticks = MAX_BUFFERING_TICKS;
		blog(LOG_WARNING, "Max audio buffering reached!");
	}

	ms = ticks * AUDIO_OUTPUT_FRAMES * 1000 / sample_rate;
	total_ms = audio->total_buffering_ticks * AUDIO_OUTPUT_FRAMES * 1000 /
		   sample_rate;

	blog(LOG_INFO,
	     "adding %d milliseconds of audio buffering, total "
	     "audio buffering is now %d milliseconds"
	     " (source: %s)\n",
	     (int)ms, (int)total_ms, buffering_name);

#if DEBUG_AUDIO == 1
	blog(LOG_DEBUG,
	     "min_ts (%" PRIu64 ") < start timestamp "
	     "(%" PRIu64 ")",
	     min_ts, ts->start);
	blog(LOG_DEBUG, "old buffered ts: %" PRIu64 "-%" PRIu64, ts->start,
	     ts->end);
#endif

	new_ts.start =
		audio->buffered_ts -
		audio_frames_to_ns(sample_rate, audio->buffering_wait_ticks *
							AUDIO_OUTPUT_FRAMES);

	while (ticks--) {
		int cur_ticks = ++audio->buffering_wait_ticks;

		new_ts.end = new_ts.start;
		new_ts.start =
			audio->buffered_ts -
			audio_frames_to_ns(sample_rate,
					   cur_ticks * AUDIO_OUTPUT_FRAMES);

#if DEBUG_AUDIO == 1
		blog(LOG_DEBUG, "add buffered ts: %" PRIu64 "-%" PRIu64,
		     new_ts.start, new_ts.end);
#endif

		circlebuf_push_front(&audio->buffered_timestamps, &new_ts,
				     sizeof(new_ts));
	}

	*ts = new_ts;
}
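
/* returns true if the source's circular input buffer cannot supply the
 * rest of the current tick, marking the source audio_pending so it is
 * skipped by mixing and by min_ts calculation; sources that provide audio
 * via audio_render buffer internally and are exempt */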
static bool audio_buffer_insuffient(struct obs_source *source,
				    size_t sample_rate, uint64_t min_ts)
{
	size_t total_floats = AUDIO_OUTPUT_FRAMES;
	size_t size;

	if (source->info.audio_render || source->audio_pending ||
	    !source->audio_ts) {
		return false;
	}

	if (source->audio_ts != min_ts && source->audio_ts != (min_ts - 1)) {
		size_t start_point = convert_time_to_frames(
			sample_rate, source->audio_ts - min_ts);
		if (start_point >= AUDIO_OUTPUT_FRAMES)
			return false;

		total_floats -= start_point;
	}

	size = total_floats * sizeof(float);

	if (source->audio_input_buf[0].size < size) {
		source->audio_pending = true;
		return true;
	}

	return false;
}
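
/* finds the lowest valid timestamp among all audio sources, updating
 * *min_ts in place, and returns the name of the source holding it (the
 * source that would force buffering), or NULL if no source lags */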
static inline const char *find_min_ts(struct obs_core_data *data,
				      uint64_t *min_ts)
{
	obs_source_t *buffering_source = NULL;

	struct obs_source *source = data->first_audio_source;
	while (source) {
		if (!source->audio_pending && source->audio_ts &&
		    source->audio_ts < *min_ts) {
			*min_ts = source->audio_ts;
			buffering_source = source;
		}

		source = (struct obs_source *)source->next_audio_source;
	}

	return buffering_source ? obs_source_get_name(buffering_source) : NULL;
}
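
/* marks every source whose buffered audio cannot satisfy min_ts as
 * audio_pending; returns true if any source was marked, meaning the
 * minimum timestamp must be recalculated without those sources */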
static inline bool mark_invalid_sources(struct obs_core_data *data,
					size_t sample_rate, uint64_t min_ts)
{
	bool recalculate = false;

	struct obs_source *source = data->first_audio_source;
	while (source) {
		recalculate |=
			audio_buffer_insuffient(source, sample_rate, min_ts);
		source = (struct obs_source *)source->next_audio_source;
	}

	return recalculate;
}
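
/* computes the minimum audio timestamp across sources, retrying once
 * after invalid (insufficiently buffered) sources have been marked so
 * that they cannot drag min_ts backwards */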
static inline const char *calc_min_ts(struct obs_core_data *data,
				      size_t sample_rate, uint64_t *min_ts)
{
	const char *buffering_name = find_min_ts(data, min_ts);
	if (mark_invalid_sources(data, sample_rate, *min_ts))
		buffering_name = find_min_ts(data, min_ts);
	return buffering_name;
}

static inline void release_audio_sources(struct obs_core_audio *audio)
{
	for (size_t i = 0; i < audio->render_order.num; i++)
		obs_source_release(audio->render_order.array[i]);
}
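
/* per-tick audio callback (hooked up elsewhere as the audio output's input
 * callback): builds the render order from the active source tree, renders
 * each source, computes the minimum timestamp, adds buffering if a source
 * has fallen behind, mixes root nodes into the output buffers, then
 * discards the consumed tick from every source.  returns false while
 * output is delayed for buffering, true once the mix is valid */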
bool audio_callback(void *param, uint64_t start_ts_in, uint64_t end_ts_in,
		    uint64_t *out_ts, uint32_t mixers,
		    struct audio_output_data *mixes)
{
	struct obs_core_data *data = &obs->data;
	struct obs_core_audio *audio = &obs->audio;
	struct obs_source *source;
	size_t sample_rate = audio_output_get_sample_rate(audio->audio);
	size_t channels = audio_output_get_channels(audio->audio);
	struct ts_info ts = {start_ts_in, end_ts_in};
	size_t audio_size;
	uint64_t min_ts;

	da_resize(audio->render_order, 0);
	da_resize(audio->root_nodes, 0);

	circlebuf_push_back(&audio->buffered_timestamps, &ts, sizeof(ts));
	circlebuf_peek_front(&audio->buffered_timestamps, &ts, sizeof(ts));
	min_ts = ts.start;

	audio_size = AUDIO_OUTPUT_FRAMES * sizeof(float);

#if DEBUG_AUDIO == 1
	blog(LOG_DEBUG, "ts %llu-%llu", ts.start, ts.end);
#endif

	/* ------------------------------------------------ */
	/* build audio render order
	 * NOTE: these are source channels, not audio channels */
	for (uint32_t i = 0; i < MAX_CHANNELS; i++) {
		obs_source_t *source = obs_get_output_source(i);
		if (source) {
			obs_source_enum_active_tree(source, push_audio_tree,
						    audio);
			push_audio_tree(NULL, source, audio);
			da_push_back(audio->root_nodes, &source);
			obs_source_release(source);
		}
	}

	pthread_mutex_lock(&data->audio_sources_mutex);

	source = data->first_audio_source;
	while (source) {
		push_audio_tree(NULL, source, audio);
		source = (struct obs_source *)source->next_audio_source;
	}

	pthread_mutex_unlock(&data->audio_sources_mutex);

	/* ------------------------------------------------ */
	/* render audio data */
	for (size_t i = 0; i < audio->render_order.num; i++) {
		obs_source_t *source = audio->render_order.array[i];
		obs_source_audio_render(source, mixers, channels, sample_rate,
					audio_size);

		/* guard against lagging audio sources: df4eb82 fixed source
		 * timestamps that perpetually lagged, but once max buffering
		 * is reached, lagging sources can still break the mixing
		 * tree.  intermediate stages assume the lowest incoming
		 * timestamp is the base of the current tick, so a lagged
		 * timestamp propagates up the tree until mix_audio drops the
		 * source entirely - at the top level a transition covering
		 * all inputs, glitching audio globally.  while buffering can
		 * still grow, the mix is retried and the error corrected,
		 * but at max buffering the error reaches the output.  so
		 * catch laggy sources immediately after rendering and drop
		 * just enough audio to bring them back in sync, or mark them
		 * pending if not enough audio is available, so later mixing
		 * stages never see out-of-sync timestamps */

		/* if a source has gone backward in time and we can no
		 * longer buffer, drop some or all of its audio */
		if (audio->total_buffering_ticks == MAX_BUFFERING_TICKS &&
		    source->audio_ts < ts.start) {
			if (source->info.audio_render) {
				blog(LOG_DEBUG,
				     "render audio source %s timestamp has "
				     "gone backwards",
				     obs_source_get_name(source));

				/* just avoid further damage */
				source->audio_pending = true;
#if DEBUG_AUDIO == 1
				/* this should really be fixed */
				assert(false);
#endif
			} else {
				pthread_mutex_lock(&source->audio_buf_mutex);
				bool rerender = ignore_audio(source, channels,
							     sample_rate,
							     ts.start);
				pthread_mutex_unlock(&source->audio_buf_mutex);

				/* if we (potentially) recovered, re-render */
				if (rerender)
					obs_source_audio_render(source, mixers,
								channels,
								sample_rate,
								audio_size);
			}
		}
	}

	/* ------------------------------------------------ */
	/* get minimum audio timestamp */
	pthread_mutex_lock(&data->audio_sources_mutex);
	const char *buffering_name = calc_min_ts(data, sample_rate, &min_ts);
	pthread_mutex_unlock(&data->audio_sources_mutex);

	/* ------------------------------------------------ */
	/* if a source has gone backward in time, buffer */
	if (min_ts < ts.start)
		add_audio_buffering(audio, sample_rate, &ts, min_ts,
				    buffering_name);
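
	/* NOTE: add_audio_buffering() rewinds ts to the new front of
	 * buffered_timestamps, so mixing below operates on the delayed
	 * window rather than on the tick that just arrived */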

	/* ------------------------------------------------ */
	/* mix audio */
	if (!audio->buffering_wait_ticks) {
		for (size_t i = 0; i < audio->root_nodes.num; i++) {
			obs_source_t *source = audio->root_nodes.array[i];

			if (source->audio_pending)
				continue;

			pthread_mutex_lock(&source->audio_buf_mutex);

			if (source->audio_output_buf[0][0] && source->audio_ts)
				mix_audio(mixes, source, channels, sample_rate,
					  &ts);

			pthread_mutex_unlock(&source->audio_buf_mutex);
		}
	}

	/* ------------------------------------------------ */
	/* discard audio */
	pthread_mutex_lock(&data->audio_sources_mutex);

	source = data->first_audio_source;
	while (source) {
		pthread_mutex_lock(&source->audio_buf_mutex);
		discard_audio(audio, source, channels, sample_rate, &ts);
		pthread_mutex_unlock(&source->audio_buf_mutex);

		source = (struct obs_source *)source->next_audio_source;
	}

	pthread_mutex_unlock(&data->audio_sources_mutex);

	/* ------------------------------------------------ */
	/* release audio sources */
	release_audio_sources(audio);

	circlebuf_pop_front(&audio->buffered_timestamps, NULL, sizeof(ts));

	*out_ts = ts.start;

	if (audio->buffering_wait_ticks) {
		audio->buffering_wait_ticks--;
		return false;
	}

	UNUSED_PARAMETER(param);
	return true;
}