Transition sources are implemented by registering a source type as
OBS_SOURCE_TYPE_TRANSITION. They're automatically marked as video
composite sources, and video_render/audio_render callbacks must be set
when registering the source. get_width and get_height callbacks are
unused for these types of sources, as transitions automatically handle
width/height behind the scenes with the transition settings.
In the video_render callback, the helper function
obs_transition_video_render is used to assist in automatically
processing and rendering the transitioning video. A render callback is
passed to the function, which in turn is given the to/from textures that
are automatically rendered in the back-end.
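A minimal sketch of what this could look like for a simple fade
transition (the struct, effect technique, and parameter names are
illustrative, not part of the API):

#include <obs-module.h>

struct fade_data {
    obs_source_t *source;     /* the transition source itself */
    gs_effect_t  *effect;     /* effect containing a "Fade" technique */
    gs_eparam_t  *a_param;    /* "from" texture parameter */
    gs_eparam_t  *b_param;    /* "to" texture parameter */
    gs_eparam_t  *fade_param; /* transition position parameter */
};

/* Receives the from/to textures ('a' and 'b') rendered by the back-end,
 * plus the transition position 't' in the range [0, 1] */
static void fade_callback(void *data, gs_texture_t *a, gs_texture_t *b,
        float t, uint32_t cx, uint32_t cy)
{
    struct fade_data *fade = data;

    gs_effect_set_texture(fade->a_param, a);
    gs_effect_set_texture(fade->b_param, b);
    gs_effect_set_float(fade->fade_param, t);

    while (gs_effect_loop(fade->effect, "Fade"))
        gs_draw_sprite(NULL, 0, cx, cy);
}

static void fade_video_render(void *data, gs_effect_t *effect)
{
    struct fade_data *fade = data;
    obs_transition_video_render(fade->source, fade_callback);
    UNUSED_PARAMETER(effect);
}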
Similarly, in the audio_render callback, the helper function
obs_transition_audio_render is used to assist in automatically
processing and rendering the audio. Two mix callbacks are used to
handle how the source/destination sources are mixed together. To ensure
the best possible quality, audio processing is per-sample.
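Continuing the fade sketch above, the audio_render callback can defer
the mixing to the helper with two simple crossfade curves (a linear
crossfade, purely illustrative):

/* Per-sample volume of the "from" source at transition position t */
static float mix_a(void *data, float t)
{
    UNUSED_PARAMETER(data);
    return 1.0f - t;
}

/* Per-sample volume of the "to" source at transition position t */
static float mix_b(void *data, float t)
{
    UNUSED_PARAMETER(data);
    return t;
}

static bool fade_audio_render(void *data, uint64_t *ts_out,
        struct obs_source_audio_mix *audio, uint32_t mixers,
        size_t channels, size_t sample_rate)
{
    struct fade_data *fade = data;
    return obs_transition_audio_render(fade->source, ts_out, audio,
            mixers, channels, sample_rate, mix_a, mix_b);
}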
Transitions can be set to automatically resize, or they can be set to
have a fixed size. Sources within transitions can be made to scale to
the transition size (with or without aspect ratio), or to not scale
unless they're bigger than the transition. They can have a specific
alignment within the transition, or simply default to top-left.
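As an illustration, a front-end could configure these behaviors with
the transition setters along these lines (setter and enum names as I
understand them; the values are arbitrary):

#include <obs.h>

/* 'transition' is an already-created transition source */
static void configure_transition(obs_source_t *transition)
{
    /* give the transition a fixed 1920x1080 size */
    obs_transition_set_size(transition, 1920, 1080);

    /* scale child sources to the transition size, preserving aspect */
    obs_transition_set_scale_type(transition, OBS_TRANSITION_SCALE_ASPECT);

    /* center child sources within the transition */
    obs_transition_set_alignment(transition, OBS_ALIGN_CENTER);
}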
These features are implemented for the purpose of extending transitions
to also act as "switch" sources later, where you can switch to/from two
different sources using the transition animation.
Planned (but not yet implemented and lower priority) features:
- "Switch" transitions which allow the ability to switch back and forth
between two sources with a transitioning animation without discarding
the references
- Easing options to allow the option to transition with a bezier or
custom curve
- Manual transitioning to allow the front-end/user to manually control
the transition offset
(Note: test and UI are also modified by this commit)
API Changed (removed "enum obs_source_type type" parameter):
-------------------------
obs_source_get_display_name
obs_source_create
obs_get_source_output_flags
obs_get_source_defaults
obs_get_source_properties
Removes the "type" parameter from these functions. The "type" parameter
really doesn't serve much of a purpose being a parameter in any of these
cases, the type is just to indicate what it's used for.
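For illustration, a call such as the following (source id and variable
names are arbitrary) simply loses the leading type argument:

obs_data_t   *settings = obs_data_create();
obs_source_t *source;

/* Before: the type had to be passed alongside the id
 * source = obs_source_create(OBS_SOURCE_TYPE_INPUT, "color_source",
 *                            "my color", settings, NULL); */

/* After: the id alone is enough */
source = obs_source_create("color_source", "my color", settings, NULL);

obs_data_release(settings);
obs_source_release(source);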
The new audio subsystem fixes two issues:
- The primary issue it fixes is the ability for parent sources to
intercept the audio of child sources, and do custom processing on
them. The main reason for this was the ability to do custom
cross-fading in transitions, but it's also useful for things such as
side-chain effects, applying audio effects to entire scenes, applying
scene-specific audio filters on sub-sources, and other such
possibilities.
- The secondary issue that needed fixing was audio buffering.
Previously, audio buffering was always a fixed buffer size, so it
would always have exactly a certain number of milliseconds of audio
buffering (and thus output delay). Instead, it now dynamically
increases audio buffering only as necessary, minimizing output delay,
and removing the need for users to have to worry about an audio
buffering setting.
The new design makes it so that audio from the leaves of the scene graph
flow to the root nodes, and can be intercepted by parent sources. Each
audio source handles its own buffering, and each audio tick a specific
number of audio frames are popped from the front of the circular buffer
on each audio source. Composite sources (such as scenes) can access the
audio for child sources and do custom processing or mixing on that
audio. Composite sources use the audio_render callback of sources to do
synchronous or deferred audio processing per audio tick. Things like
scenes now mix audio from their sub-sources.
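As a rough sketch (not the actual scene implementation), a composite
source's audio_render callback might pull a child's audio for the
current tick and mix it into the buffers it was given. The struct and
gain handling below are illustrative, and the helpers used
(obs_source_audio_pending, obs_source_get_audio_mix,
obs_source_get_audio_timestamp) reflect my reading of the API:

#include <obs.h>

struct composite_data {
    obs_source_t *child; /* a single child source, for simplicity */
    float gain;
};

static bool composite_audio_render(void *data, uint64_t *ts_out,
        struct obs_source_audio_mix *audio_output, uint32_t mixers,
        size_t channels, size_t sample_rate)
{
    struct composite_data *cd = data;
    struct obs_source_audio_mix child_audio;

    if (!cd->child || obs_source_audio_pending(cd->child))
        return false;

    obs_source_get_audio_mix(cd->child, &child_audio);

    for (size_t mix = 0; mix < MAX_AUDIO_MIXES; mix++) {
        if ((mixers & (1 << mix)) == 0)
            continue;

        for (size_t ch = 0; ch < channels; ch++) {
            float *out = audio_output->output[mix].data[ch];
            float *in  = child_audio.output[mix].data[ch];

            if (!out || !in)
                continue;

            /* custom processing goes here; a plain gain for example */
            for (size_t i = 0; i < AUDIO_OUTPUT_FRAMES; i++)
                out[i] += in[i] * cd->gain;
        }
    }

    *ts_out = obs_source_get_audio_timestamp(cd->child);
    UNUSED_PARAMETER(sample_rate);
    return true;
}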
(Note: This commit breaks UI compilation. Skip if bisecting)
Adds a means of saving specific sources that the front-end chooses,
rather than being forced to use the now-removed "user list".
(Note: This commit breaks UI compilation. Skip if bisecting)
API Removed:
------------------------
obs_add_source
API Changed:
------------------------
obs_source_remove: Now just marks/signals a source for removal
The concept of "user sources" is flawed: it was something that the
front-end was forced to deal with if it wanted to automate source
saving/loading, and often it had to code around it. That's not how
saving/loading should work: a front-end should be allowed to manage
lists of sources in the way it explicitly chooses, and it should be able
to choose which sources it wants to save/load.
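For example, a front-end could keep its own array of tracked sources
and serialize only those (a sketch assuming the obs_save_source helper
that returns the serialized data of a single source):

#include <obs.h>

static obs_data_array_t *save_tracked_sources(obs_source_t **sources,
        size_t count)
{
    obs_data_array_t *array = obs_data_array_create();

    for (size_t i = 0; i < count; i++) {
        obs_data_t *source_data = obs_save_source(sources[i]);
        obs_data_array_push_back(array, source_data);
        obs_data_release(source_data);
    }

    return array; /* the front-end writes this array wherever it chooses */
}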
This function (obs_add_source) was removed even though the browser
plugin was still using it on macOS, so it is being put back temporarily
while the browser plugin is modified to no longer require it.
API removed:
--------------------
gs_effect_t *obs_get_default_effect(void);
gs_effect_t *obs_get_default_rect_effect(void);
gs_effect_t *obs_get_opaque_effect(void);
gs_effect_t *obs_get_solid_effect(void);
gs_effect_t *obs_get_bicubic_effect(void);
gs_effect_t *obs_get_lanczos_effect(void);
gs_effect_t *obs_get_bilinear_lowres_effect(void);
API added:
--------------------
gs_effect_t *obs_get_base_effect(enum obs_base_effect effect);
Summary:
--------------------
Combines multiple near-identical functions into a single function with
an enum parameter.
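Illustrative usage (the enum member names shown are my reading of
enum obs_base_effect):

/* Before: one getter per effect
 * gs_effect_t *effect = obs_get_default_effect(); */

/* After: a single getter selected by an enum value */
gs_effect_t *effect  = obs_get_base_effect(OBS_EFFECT_DEFAULT);
gs_effect_t *bicubic = obs_get_base_effect(OBS_EFFECT_BICUBIC);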
This allows private data to be associated with the object type
definition structures. Such private data can be useful for things like
plugin wrappers for other languages, or for providing dynamically
generated object types.
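For instance, a plugin wrapper could attach a per-type context to the
type definition structure (assuming the structures gain
type_data/free_type_data fields for this purpose; the other required
source callbacks are omitted):

#include <obs-module.h>

struct wrapper_type_context {
    void *script_state; /* e.g. state owned by a scripting runtime */
};

static void free_wrapper_context(void *type_data)
{
    bfree(type_data);
}

struct obs_source_info wrapped_source = {
    .id             = "wrapped_source",
    .type           = OBS_SOURCE_TYPE_INPUT,
    .output_flags   = OBS_SOURCE_VIDEO,
    /* private data associated with the type definition itself;
     * allocated (e.g. with bzalloc) before obs_register_source */
    .type_data      = NULL,
    .free_type_data = free_wrapper_context,
};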
API Changed:
---------------------------
From:
- bool obs_startup(const char *locale, profiler_name_store_t *store);
To:
- bool obs_startup(const char *locale, const char *module_config_path,
profiler_name_store_t *store);
Summary:
---------------------------
This allows plugin modules to store plugin-specific configuration data
(rather than only allowing objects to store configuration data). This
will be useful for things like caching data, for example looking up
ingests from a remote server and storing them (rather than storing them
locally), or caching
font data (so it doesn't have to build a font cache each time), among
other things.
Also adds a module-specific directory for the UI
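Illustrative usage on both sides of the API (the path and file names
are made up, and obs_module_config_path is the module-side helper as I
understand it):

/* Front-end: tell libobs where modules may store their config data */
if (!obs_startup("en-US", "/home/user/.config/my-frontend/plugin_config",
        NULL))
    return false;

/* Plugin module: build a path inside its own config directory
 * (the returned string must be freed with bfree) */
char *cache_path = obs_module_config_path("ingests.json");
/* ... read or write the cached ingest list here ... */
bfree(cache_path);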
Due to all the threads in libobs it wouldn't be safe to make that
parameter reconfigurable after libobs is initialized without adding
even more synchronization. On the other hand, adding a function to set
the name store before calling obs_startup would solve the problem of
passing a name store into libobs, but it could lead to more complicated
semantics for obs_get_profiler_name_store (e.g., should it always return
the current name store even if libobs isn't initialized until someone
calls set_name_store(NULL)? Should obs_shutdown call
set_name_store(NULL)?). Passing it as an obs_startup parameter avoids
these (and hopefully other) potential misunderstandings.
(Non-compiling commit: windowless-context branch)
API Changed:
---------------------
Removed functions:
- obs_add_draw_callback
- obs_remove_draw_callback
- obs_resize
- obs_preview_set_enabled
- obs_preview_enabled
Removed member variables from struct obs_video_info:
- window_width
- window_height
- window
Summary:
---------------------
Changes the core libobs API to not be dependent upon a main window/view.
If you wish to draw to a window/view, use an obs_display object to
handle it.
This allows the use of libobs without requiring a window to be present
on the system. This also prunes code that had to be needlessly
duplicated to handle the "main" window.
If you don't need to see what's displayed, then this is particularly
useful for two reasons:
1. It reduces the number of draw/present calls
2. It can prevent issues with certain hardware setups where rendering on
a monitor hooked up to a separate card can experience slowdowns
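A sketch of how a front-end would draw a preview through an obs_display
instead (window handle setup is platform-specific and omitted;
obs_display_create takes only the gs_init_data as of this change, and
obs_render_main_view is used here as the draw helper available at the
time):

#include <obs.h>

static void render_preview(void *param, uint32_t cx, uint32_t cy)
{
    /* draw whatever is needed here, e.g. the main view */
    obs_render_main_view();
    UNUSED_PARAMETER(param);
    UNUSED_PARAMETER(cx);
    UNUSED_PARAMETER(cy);
}

static obs_display_t *create_preview_display(void)
{
    struct gs_init_data info = {0};
    info.cx       = 1280;
    info.cy       = 720;
    info.format   = GS_RGBA;
    info.zsformat = GS_ZS_NONE;
    /* info.window = ...; platform-specific window handle */

    obs_display_t *display = obs_display_create(&info);
    if (display)
        obs_display_add_draw_callback(display, render_preview, NULL);
    return display;
}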
The "last" filter that's rendered is technically at filter index 0, so
enumeration needs to be from the last index in the list to the first
index in the list.
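In other words, enumeration walks the internal filter array backwards,
roughly like this (the 'filters' darray is illustrative shorthand for
the source's internal list):

/* index 0 holds the filter that is rendered last, so walk backwards */
for (size_t i = source->filters.num; i > 0; i--) {
    obs_source_t *filter = source->filters.array[i - 1];
    callback(source, filter, param);
}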
Core API functions changed:
-----------------------------
EXPORT bool obs_reset_audio(struct audio_output_info *aoi);
EXPORT bool obs_get_audio_info(struct audio_output_info *aoi);
To:
-----------------------------
EXPORT bool obs_reset_audio(const struct obs_audio_info *oai);
EXPORT bool obs_get_audio_info(struct obs_audio_info *oai);
Core structure added:
-----------------------------
struct obs_audio_info {
    uint32_t samples_per_sec;
    enum speaker_layout speakers;
    uint64_t buffer_ms;
};
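Example of resetting audio with the new structure (values illustrative):

#include <obs.h>

static bool reset_audio_48khz_stereo(void)
{
    struct obs_audio_info oai = {
        .samples_per_sec = 48000,
        .speakers        = SPEAKERS_STEREO,
        .buffer_ms       = 1000,
    };

    return obs_reset_audio(&oai);
}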
Non-interleaved (planar) floating point output is standard with audio
filtering, so to prevent audio filters from having to worry about
different audio format implementations and for the sake of consistency
between user interfaces, make it so that audio is always set to
non-interleaved floating point output.
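Concretely, a filter_audio callback can treat each channel plane as a
contiguous float buffer without checking the format; a minimal gain
sketch (struct name illustrative):

#include <obs.h>

struct gain_data {
    size_t channels;
    float  gain;
};

static struct obs_audio_data *gain_filter_audio(void *data,
        struct obs_audio_data *audio)
{
    struct gain_data *g = data;

    /* audio is always non-interleaved float: one float plane per channel */
    for (size_t ch = 0; ch < g->channels; ch++) {
        float *samples = (float *)audio->data[ch];
        if (!samples)
            continue;

        for (size_t i = 0; i < audio->frames; i++)
            samples[i] *= g->gain;
    }

    return audio;
}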
The main view does not need to worry about hiding/deactivating sources
when it's being freed here: when the obs context is shutting down in
this section of obs, all sources are being freed anyway, so there's no
need to deactivate/hide them.
API changed:
--------------------------
void obs_output_set_audio_encoder(
        obs_output_t *output,
        obs_encoder_t *encoder);
obs_encoder_t *obs_output_get_audio_encoder(
        const obs_output_t *output);
obs_encoder_t *obs_audio_encoder_create(
        const char *id,
        const char *name,
        obs_data_t *settings);
Changed to:
--------------------------
/* 'idx' specifies the track index of the output */
void obs_output_set_audio_encoder(
        obs_output_t *output,
        obs_encoder_t *encoder,
        size_t idx);
/* 'idx' specifies the track index of the output */
obs_encoder_t *obs_output_get_audio_encoder(
        const obs_output_t *output,
        size_t idx);
/* 'mixer_idx' specifies the mixer index to capture audio from */
obs_encoder_t *obs_audio_encoder_create(
        const char *id,
        const char *name,
        obs_data_t *settings,
        size_t mixer_idx);
Overview
--------------------------
This feature allows multiple audio mixers to be used at a time. This
capability was added with surprisingly little extra
overhead. Audio will not be mixed unless it's assigned to a specific
mixer, and mixers will not mix unless they have an active mix
connection.
Mostly this will be useful for being able to separate out specific audio
for recording versus streaming, but will also be useful for certain
streaming services that support multiple audio streams via RTMP.
I didn't want to use a variable number of mixers due to the desire to
reduce heap allocations, so currently I set the limit to 4 simultaneous
mixers; this number can be increased later if needed, but honestly I
feel like it's just the right number to use.
Sources:
Sources can now specify which audio mixers their audio is mixed to; this
can be a single mixer or multiple mixers at a time. The
obs_source_set_audio_mixers function sets the audio mixers (as a
bitmask) that an audio source applies to. For example, 0xF would mean
that the source applies
to all four mixers.
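For example (assuming 'source' and 'music' are existing sources):

/* send this source's audio only to mixers 1 and 2 (bits 0 and 1) */
obs_source_set_audio_mixers(source, (1 << 0) | (1 << 1));

/* send the music source to all four mixers */
obs_source_set_audio_mixers(music, 0xF);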
Audio Encoders:
Audio encoders now must specify which specific audio mixer they use when
they encode audio data.
Outputs:
Outputs that use encoders can now support multiple audio tracks at once
if they have the OBS_OUTPUT_MULTI_TRACK capability flag set. This is
mostly only useful for certain types of RTMP transmissions, though it
may later be useful for file formats that support multiple audio tracks
as well.
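Putting the pieces together, a front-end could create one audio encoder
per mixer and attach each to its own track on a multi-track output,
roughly like this (the encoder id, names, and the existing 'output'
variable are illustrative; signatures follow the changed API above):

/* encoders capturing mixer 0 (e.g. full mix) and mixer 1 (e.g. mic only) */
obs_encoder_t *track1 = obs_audio_encoder_create("ffmpeg_aac",
        "track 1 audio", NULL, 0);
obs_encoder_t *track2 = obs_audio_encoder_create("ffmpeg_aac",
        "track 2 audio", NULL, 1);

/* attach each encoder to its own audio track on the output */
obs_output_set_audio_encoder(output, track1, 0);
obs_output_set_audio_encoder(output, track2, 1);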
A slightly refactored version of R1CH's crash handler. It adds crash
handling for Windows, providing stack traces of all threads and a list
of all loaded modules, and it also shows the processor, Windows version,
and current libobs version.