This feature allows a user to delay an output (as long as the output
itself supports it). Needless to say, this is intended for live
streams, where users may want to delay their streams to prevent stream
sniping, cheating, and other such things.
The design this time was a bit more elaborate, but still simple: the
user can now schedule stops/starts without having to wait for the
stream itself to stop before being able to take any action.
Optionally, they can also forcibly stop the stream (and its delay) in
case something happens which they might not want to be streamed.
Additionally, a new option was added to preserve the stream cutoff
point on disconnections/reconnections, so that if you get disconnected
while streaming, when it reconnects, it will reconnect right at the
point where it left off. This will probably be quite useful for a
number of applications in addition to regular delay, such as setting
the delay to 1 second and then using this feature to keep a critical
stream, for example a tournament stream, from getting any of its
stream data cut off. However, using this feature will of course cause
the stream data to buffer and increase delay (and memory usage) while
it's in the process of reconnecting.
API Changed:
---------------------------
From:
- bool obs_startup(const char *locale, profiler_name_store_t *store);
To:
- bool obs_startup(const char *locale, const char *module_config_path,
                   profiler_name_store_t *store);
Summary:
---------------------------
This allows plugin modules to store plugin-specific configuration data
(rather than only allowing objects to store configuration data). This
will be useful for things like caching data, for example looking up and
storing ingests from remote (rather than storing locally), or caching
font data (so it doesn't have to build a font cache each time), among
other things.
Also adds a module-specific directory for the UI.
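As a rough illustration (the locale, path, and file name here are
placeholders, and obs_module_config_path is assumed to be the
module-side helper for this directory):

if (!obs_startup("en-US", "path/to/module-config", NULL))
	return false;

/* Inside a module, plugin-specific data can then be cached under the
 * module config directory: */
char *cache_file = obs_module_config_path("ingests.json");
/* ... read/write the cache file ... */
bfree(cache_file);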
Due to all the threads in libobs it wouldn't be safe to make that
parameter reconfigurable after libobs is initialized without adding
even more synchronization. On the other hand, adding a function to set
the name store before calling obs_startup would solve the problem of
passing a name store into libobs, but it could lead to more
complicated semantics for obs_get_profiler_name_store (e.g., should it
always return the current name store even if libobs isn't initialized,
until someone calls set_name_store(NULL)? Should obs_shutdown call
set_name_store(NULL)?). Passing it as an obs_startup parameter avoids
these (and hopefully other) potential misunderstandings.
(Non-compiling commit: windowless-context branch)
API Changed:
---------------------
Removed functions:
- obs_add_draw_callback
- obs_remove_draw_callback
- obs_resize
- obs_preview_set_enabled
- obs_preview_enabled
Removed member variables from struct obs_video_info:
- window_width
- window_height
- window
Summary:
---------------------
Changes the core libobs API to not be dependent upon a main window/view.
If you wish to draw to a window/view, use an obs_display object to
handle it.
This allows the use of libobs without requiring a window to be present
on the system. This also prunes code that had to be needlessly
duplicated to handle the "main" window.
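For illustration, a hedged sketch of drawing to a window through an
obs_display (the gs_init_data setup is abbreviated and
platform-specific, and draw_preview is a hypothetical callback):

static void draw_preview(void *param, uint32_t cx, uint32_t cy)
{
	/* render whatever sources/scenes should appear in the window */
	UNUSED_PARAMETER(param);
	UNUSED_PARAMETER(cx);
	UNUSED_PARAMETER(cy);
}

obs_display_t *create_preview_display(uint32_t width, uint32_t height)
{
	struct gs_init_data info = {0};
	info.cx       = width;
	info.cy       = height;
	info.format   = GS_RGBA;
	info.zsformat = GS_ZS_NONE;
	/* info.window is filled with the platform-specific window handle */

	obs_display_t *display = obs_display_create(&info);
	obs_display_add_draw_callback(display, draw_preview, NULL);
	return display;
}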
Intentionally breaks compilation when trying to compile the specific
merged commits within the windowless-context branch. This is meant to
be used in conjunction with a merge commit so that bisecting will never
see any non-compiling commits.
In case the encoder has to use a different sample rate (because the
requested sample rate is unsupported), we need an API function to get
the sample rate that the encoder is actually running at.
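For example (obs_encoder_get_sample_rate is assumed to be the accessor
in question):

/* May differ from the requested rate if that rate was unsupported. */
uint32_t actual_rate = obs_encoder_get_sample_rate(encoder);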
Allows hinting to encoders which format should be used.
This is particularly useful if libobs is currently operating in planar
4:4:4, but you want to force an encoder used for streaming to convert to
NV12 to prevent streaming issues.
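For example (obs_encoder_set_preferred_video_format is assumed to be
the hinting function):

/* Hint the streaming encoder to convert to NV12 even though libobs
 * itself may be operating in planar 4:4:4. */
obs_encoder_set_preferred_video_format(stream_encoder, VIDEO_FORMAT_NV12);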
If you don't need to see what's displayed, then this is particularly
useful for two reasons:
1. It reduces the number of draw/present calls
2. It can prevent issues with certain hardware setups where rendering on
a monitor hooked up to a separate card can experience slowdowns
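For example (assuming obs_display_set_enabled is the relevant toggle):

/* Skip drawing/presenting for this display while it isn't visible. */
obs_display_set_enabled(display, false);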
obs_source_process_filter tried to do everything in a single function,
but the problem is that effect parameters would not properly be
accounted for due to the way it internally draws. It was therefore
necessary to split the function into two: you first call
obs_source_process_filter_begin, then you set your effect parameters,
then you finally call obs_source_process_filter_end. This ensures that
the effect parameters are set when the filter is drawn.
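A hedged sketch of the resulting pattern inside a filter's
video_render callback (the my_filter struct, its fields, and the
effect parameter are illustrative):

static void my_filter_video_render(void *data, gs_effect_t *effect)
{
	struct my_filter *filter = data;

	obs_source_process_filter_begin(filter->context, GS_RGBA,
			OBS_ALLOW_DIRECT_RENDERING);

	/* Set effect parameters between begin and end so they are in
	 * place when the filter is actually drawn. */
	gs_effect_set_float(filter->gamma_param, filter->gamma);

	obs_source_process_filter_end(filter->context, filter->effect,
			filter->width, filter->height);

	UNUSED_PARAMETER(effect);
}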
These functions are primarily for use with filters: filters need to be
able to get the width/height of a target source without necessarily
getting the post-filtered dimensions.
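For example, a filter can query its target's pre-filtered size
(assuming obs_source_get_base_width/obs_source_get_base_height are the
functions in question):

obs_source_t *target = obs_filter_get_target(filter);
uint32_t width  = obs_source_get_base_width(target);
uint32_t height = obs_source_get_base_height(target);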
When a frame is processed by a filter, it comes directly from the
source's video frame cache. However, if a filter is using or
processing those frames for whatever reason, there is no guarantee
that the frames will persist during processing; frames could be
deallocated unexpectedly, for example when the resolution or format
changes.
So the solution is to implement simple reference counting for the frames
so that the frames will exist until they have been released by any
source or filter that's using them.
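A minimal sketch of the resulting usage (process_frame is a
hypothetical helper; the get/release pair is assumed to follow the
usual libobs naming):

struct obs_source_frame *frame = obs_source_get_frame(source);
if (frame) {
	/* The frame stays valid until released, even if the source's
	 * resolution or format changes in the meantime. */
	process_frame(frame);
	obs_source_release_frame(source, frame);
}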
Core API functions changed:
-----------------------------
EXPORT bool obs_reset_audio(struct audio_output_info *aoi);
EXPORT bool obs_get_audio_info(struct audio_output_info *aoi);
To:
-----------------------------
EXPORT bool obs_reset_audio(const struct obs_audio_info *oai);
EXPORT bool obs_get_audio_info(struct obs_audio_info *oai);
Core structure added:
-----------------------------
struct obs_audio_info {
	uint32_t            samples_per_sec;
	enum speaker_layout speakers;
	uint64_t            buffer_ms;
};
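For example, resetting audio with the new structure (the values shown
are typical defaults, not requirements):

struct obs_audio_info oai = {
	.samples_per_sec = 44100,
	.speakers        = SPEAKERS_STEREO,
	.buffer_ms       = 1000,
};

if (!obs_reset_audio(&oai))
	blog(LOG_ERROR, "audio reset failed");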
Non-interleaved (planar) floating point output is standard with audio
filtering, so to prevent audio filters from having to worry about
different audio format implementations and for the sake of consistency
between user interfaces, make it so that audio is always set to
non-interleaved floating point output.
API changed from:
------------------------
EXPORT void obs_service_apply_encoder_settings(obs_service_t *service,
		obs_encoder_t *video_encoder,
		obs_encoder_t *audio_encoder);

void obs_service_info::apply_encoder_settings(void *data,
		obs_encoder_t *video_encoder,
		obs_encoder_t *audio_encoder);
To:
------------------------
EXPORT void obs_service_apply_encoder_settings(obs_service_t *service,
		obs_data_t *video_encoder_settings,
		obs_data_t *audio_encoder_settings);

void obs_service_info::apply_encoder_settings(void *data,
		obs_data_t *video_encoder_settings,
		obs_data_t *audio_encoder_settings);
These changes make it so that instead of an encoder potentially being
updated more than once with different settings, these functions will
be called with the specific settings being used, and the settings will
be updated according to what's required by the service.
This fixes that design flaw and ensures that there's no case where
obs_encoder_update is called with settings that don't have the
service-specific settings applied.
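A hedged sketch of how an output might now apply the service-specific
settings before starting (assuming obs_encoder_get_settings returns a
new reference that must be released):

obs_data_t *video_settings = obs_encoder_get_settings(video_encoder);
obs_data_t *audio_settings = obs_encoder_get_settings(audio_encoder);

obs_service_apply_encoder_settings(service, video_settings,
		audio_settings);

obs_data_release(video_settings);
obs_data_release(audio_settings);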
The obs_source_get_width and obs_source_get_height functions need to
account for filters that may change their width/height, such as a crop
filter or something similar.
As a side effect of this commit, because these functions need to lock
the filter mutex, these functions can no longer be used with a
const-qualified obs_source_t pointer.
obs_source_inc_showing and obs_source_dec_showing are used to indicate
that a source is showing or no longer being shown in a particular area
of the program outside of the main render view.
One could use an obs_view, but I felt it was unnecessary: using an
obs_view just to display a single source feels somewhat superfluous.
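For example, around a hypothetical secondary preview:

/* the secondary preview starts displaying the source */
obs_source_inc_showing(source);

/* ... later, when the preview is closed ... */
obs_source_dec_showing(source);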
Instead of having services automatically apply encoder settings on
initialization (whether the output wants them or not), make it
something that must be explicitly called by the developer. There are
cases where the developer may not wish to apply the service-specific
settings, or may wish to override them for whatever reason.
Before: obs_service_gettype
After:  obs_service_get_type
It seems there was an API function that was missed when we were doing
our big API consistency update. Unsquishes obs_service_gettype to
obs_service_get_type.
API changed:
--------------------------
void obs_output_set_audio_encoder(
		obs_output_t *output,
		obs_encoder_t *encoder);

obs_encoder_t *obs_output_get_audio_encoder(
		const obs_output_t *output);

obs_encoder_t *obs_audio_encoder_create(
		const char *id,
		const char *name,
		obs_data_t *settings);
Changed to:
--------------------------
/* 'idx' specifies the track index of the output */
void obs_output_set_audio_encoder(
		obs_output_t *output,
		obs_encoder_t *encoder,
		size_t idx);

/* 'idx' specifies the track index of the output */
obs_encoder_t *obs_output_get_audio_encoder(
		const obs_output_t *output,
		size_t idx);

/* 'mixer_idx' specifies the mixer index to capture audio from */
obs_encoder_t *obs_audio_encoder_create(
		const char *id,
		const char *name,
		obs_data_t *settings,
		size_t mixer_idx);
Overview
--------------------------
This feature allows multiple audio mixers to be used at a time. This
capability was added with surprisingly little extra overhead. Audio
will not be mixed unless it's assigned to a specific mixer, and mixers
will not mix unless they have an active mix connection.
Mostly this will be useful for separating out specific audio for
recording versus streaming, but it will also be useful for certain
streaming services that support multiple audio streams via RTMP.
I didn't want to use a variable number of mixers due to the desire to
reduce heap allocations, so currently I set the limit to 4
simultaneous mixers; this number can be increased later if needed, but
honestly I feel like it's just the right number to use.
Sources:
Sources can now specify which audio mixers their audio is mixed to;
this can be a single mixer or multiple mixers at a time. The
obs_source_set_audio_mixers function sets the mixers (as a bitmask) to
which an audio source applies; for example, 0xF would mean that the
source applies to all four mixers. (See the sketch after the Outputs
section below.)
Audio Encoders:
Audio encoders now must specify which specific audio mixer they use when
they encode audio data.
Outputs:
Outputs that use encoders can now support multiple audio tracks at
once if they have the OBS_OUTPUT_MULTI_TRACK capability flag set. This
is mostly only useful for certain types of RTMP transmissions, though
it may also be useful later on for file formats that support multiple
audio tracks.
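Putting the pieces together, a hedged sketch of wiring two mixers to
two output tracks ("ffmpeg_aac" and the encoder names are
illustrative):

/* Capture audio from mixers 1 and 2 (indices 0 and 1). */
obs_encoder_t *track1 = obs_audio_encoder_create("ffmpeg_aac",
		"track 1 audio", NULL, 0);
obs_encoder_t *track2 = obs_audio_encoder_create("ffmpeg_aac",
		"track 2 audio", NULL, 1);

/* Assign them to the output's audio tracks. */
obs_output_set_audio_encoder(output, track1, 0);
obs_output_set_audio_encoder(output, track2, 1);

/* Route a source's audio to both mixers (bitmask 0x1 | 0x2). */
obs_source_set_audio_mixers(source, 0x1 | 0x2);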
The comment says "these are different", but doesn't state why.
Actually, I should really rename the output flags so they're not flags,
but instead just "caps", because that's really all that they are.
If an async video source stopped its video for whatever reason, it
would get stuck on the last frame that was played. This was
particularly awkward when I wanted to give the user the ability to
deactivate a source such as a webcam, because it would get stuck on
the last frame.