On outputs that use already-active video/audio encoders, the audio
pruning that syncs up audio packets with video packets doesn't always
get called (for example, if the video pruning function was called
instead). Always prune excess starting audio packets.
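A hedged sketch of the pruning step (the struct and helper names here
are illustrative, not the actual libobs interleaved-packet code):
leading audio packets whose timestamps precede the first video packet
are discarded so output always starts with a matched video/audio pair.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct packet {
    bool is_audio;
    int64_t dts_usec;
};

struct packet_array {
    struct packet *array;
    size_t num;
};

static void prune_starting_audio(struct packet_array *pkts,
        int64_t first_video_dts_usec)
{
    size_t prune = 0;

    /* count leading audio packets older than the first video packet */
    while (prune < pkts->num && pkts->array[prune].is_audio &&
           pkts->array[prune].dts_usec < first_video_dts_usec)
        prune++;

    /* shift the remaining packets down, discarding the early audio */
    if (prune) {
        memmove(pkts->array, pkts->array + prune,
                (pkts->num - prune) * sizeof(struct packet));
        pkts->num -= prune;
    }
}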
From MSDN: "The behavior of the least significant bit of the return value
is retained strictly for compatibility with 16-bit Windows applications
(which are non-preemptive) and should not be relied upon."
This caused problems with hotkeys firing if the user pressed a hotkey's
key in another application and then pressed the modifier keys at any
later time. OBS would then think the hotkey's key had just been pressed
based on the was_down behavior and incorrectly trigger a hotkey event.
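A minimal sketch of the resulting rule (not the exact OBS hotkey code):
only the high bit of GetAsyncKeyState, which reports whether the key is
down at the time of the call, should be consulted; the least
significant bit is ignored entirely.

#include <windows.h>
#include <stdbool.h>

static bool key_currently_down(int vkey)
{
    /* 0x8000 = key is down right now; the LSB ("was pressed since
     * last call") is the bit MSDN says not to rely on */
    return (GetAsyncKeyState(vkey) & 0x8000) != 0;
}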
Fixes 0000443.
If audio buffering is very high, the audio packets built up in the
interleaved buffer would have timestamps significantly earlier than the
first video packet, causing the offset between the starting video/audio
packet pairs to be significantly off and leading to desync.
This issue was not spotted until recently because it only happens when
streaming/recording with the same encoders while audio buffering is
very high.
The source shouldn't be inserted into obs->data.first_audio_source until it's
fully initialized, or other threads will access source->control and
dereference an uninitialized pointer.
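A minimal sketch of the corrected ordering, using stand-in types rather
than the real obs internals: the source becomes visible on the list
only after every field, including its control/refcount data, is valid.

#include <pthread.h>

struct control; /* refcount data; must be valid before publishing */

struct source {
    struct control *control;
    struct source *next_audio_source;
};

static struct source *first_audio_source;
static pthread_mutex_t audio_sources_mutex = PTHREAD_MUTEX_INITIALIZER;

static void link_audio_source(struct source *source)
{
    /* all initialization, including source->control, happens
     * before this point */
    pthread_mutex_lock(&audio_sources_mutex);
    source->next_audio_source = first_audio_source;
    first_audio_source = source;
    pthread_mutex_unlock(&audio_sources_mutex);
}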
This is a band-aid solution to allow creating temporary services
without logging them and to keep them out of enumeration functions.
This is a band-aid solution -- 'master obs context lists' should not be
kept by the core. Logging of object creation/destruction should also be
controlled by the front-end instead of the core.
This patch fixes a specific crash where, if the user gave a filter the
same name as an input source that already existed in the system, the
scene item loading code could find the filter instead of the source and
mistakenly use it as the scene item's source directly. This would cause
a crash when trying to render that filter as a regular source.
Marking filters as private is a temporary and simple workaround rather
than a proper solution. Filters are currently not meant to be found via
the main enumeration/search functions, which is a design flaw (a lack
of consistency). In future major API revisions of libobs, filters
should be reworked to act as sources, ideally with the sources they
filter as sub-sources.
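A minimal sketch of what the private flag buys (the internal list and
field names here are assumptions, not the actual libobs code): the main
enumeration path simply skips context objects flagged private, so
filters and temporary objects are never handed out as top-level
sources.

void obs_enum_sources(bool (*enum_proc)(void *, obs_source_t *),
        void *param)
{
    for (obs_source_t *src = obs->data.first_source; src;
            src = src->next) {
        if (src->context.is_private)
            continue; /* private: invisible to enumeration */
        if (!enum_proc(param, src))
            break;
    }
}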
Additionally, the concept of "private context objects" and "primary
lists of context objects" in the back-end should probably also be
removed, allowing the front-end (or optional separate API layers) to
control all primary lists of obs context objects. These minor issues
ultimately stem from API design flaws which need to be corrected.
This crash happened when a filter was mistakenly used as a regular
source due to an unrelated bug in filter code and scene loading code.
The filter and the source it belonged to both had the same name, and
the source loading code found the filter and mistakenly used it as the
source instead of the actual source with that name.
Determines whether an obs object was created successfully. If a plugin
that's used for a saved object is removed (e.g. a third-party plugin),
its data will become invalid, but the object can often still be created
for the sake of preserving user settings. However, these objects can
sometimes cause problems if they're actually used (such as using them
for transitions).
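A hedged usage sketch; "obs_obj_valid" here is a placeholder name for
the check described above, not necessarily the real symbol:

static obs_source_t *get_usable_source(const char *name)
{
    obs_source_t *src = obs_get_source_by_name(name);

    if (src && !obs_obj_valid(src)) { /* placeholder name */
        obs_source_release(src); /* created, but plugin data missing */
        return NULL;
    }
    return src;
}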
(Note: Also modified the obs-ffmpeg plugin module)
Allows frame data to pass 8-bit grayscale images (Y800 color format).
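A minimal sketch of outputting such a frame from an async source
(assumes obs.h and util/platform.h; the buffer and dimensions are
assumed to come from the capture device):

static void output_gray_frame(obs_source_t *source, uint8_t *gray_pixels,
        uint32_t width, uint32_t height)
{
    struct obs_source_frame frame = {0};

    frame.format      = VIDEO_FORMAT_Y800;
    frame.width       = width;
    frame.height      = height;
    frame.data[0]     = gray_pixels; /* single plane, 1 byte per pixel */
    frame.linesize[0] = width;
    frame.timestamp   = os_gettime_ns();

    obs_source_output_video(source, &frame);
}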
Closes jp9000/obs-studio#515
Adds deinterlacing API functions. Both standard and 2x variants are
supported. Deinterlacing is set via obs_source_set_deinterlace_mode and
obs_source_set_deinterlace_field_order.
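A minimal usage sketch of the new API, enabling 2x yadif on a named
source:

static void enable_yadif_2x(const char *source_name)
{
    obs_source_t *src = obs_get_source_by_name(source_name);

    if (src) {
        obs_source_set_deinterlace_mode(src,
                OBS_DEINTERLACE_MODE_YADIF_2X);
        obs_source_set_deinterlace_field_order(src,
                OBS_DEINTERLACE_FIELD_ORDER_TOP);
        obs_source_release(src);
    }
}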
This was implemented into the core itself because deinterlacing should
happen before effect filters are processed, but after async filters are
processed. If this were added as a filter, there would be the
possibility of a different filter being processed before deinterlacing,
which could mess with the result. It was also a bit easier to implement
this way because deinterlacing may need access to the previous async
frame.
Effects were split into separate files to reduce load time (especially
for the yadif shaders, which take a significant amount of time to
compile).
Instead of just updating the async texture variables directly in the
source, allow the async texture variables to be passed via function
parameters, making it possible to pass more than one frame to more than
one texture.
This code is primarily intended to be used to upload/convert the
"previous" async frame for the deinterlacer (if necessary).
Creates an effect and assigns it to the target variable, but only if
the variable's current value is null. This will be used for the
deinterlacing effects to prevent having to compile the shaders unless
they're actually being used.
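A minimal sketch of the pattern (the helper name is an assumption; the
call must happen inside the graphics context):

static gs_effect_t *load_effect_once(gs_effect_t **effect, const char *file)
{
    /* compile only on first use; subsequent calls are no-ops */
    if (!*effect)
        *effect = gs_effect_create_from_file(file, NULL);

    return *effect;
}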
(Note: This commit also modifies obs-filters and text-freetype2)
This simplifies writing of effects. DrawMatrix is no longer necessary
because there are no sources that require drawing with a color matrix
other than async sources, and async sources are automatically processed
and don't defer their initial render stage to filters.
When the #include directive is encountered in the C lexer preprocessor,
the files being included need to be resolved relative to the directory
of the file in which the #include was used.
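A minimal sketch of that resolution step (helper name illustrative):
the included file's path is built from the directory portion of the
including file's path.

#include <stdlib.h>
#include <string.h>

static char *resolve_include_path(const char *including_file,
        const char *included_name)
{
    const char *slash = strrchr(including_file, '/');
    size_t dir_len = slash ? (size_t)(slash - including_file + 1) : 0;
    char *path = malloc(dir_len + strlen(included_name) + 1);

    if (path) {
        memcpy(path, including_file, dir_len);
        strcpy(path + dir_len, included_name);
    }
    return path;
}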
(Note: This commit also changes the UI)
Changed:
-------------------
void obs_load_sources(obs_data_array_t *sources_list);
To:
-------------------
void obs_load_sources(obs_data_array_t *sources_list,
		obs_source_load_cb callback, void *private_data);
Signals should never be required for a function to work properly. The
"source_load" signal was required by the obs_load_sources function, but
it's meant more for loading private data in the settings, not for
general loading of sources.
This changes it so that a callback is explicitly required to load the
sources.
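A minimal usage sketch of the new signature (the callback body is
illustrative):

static void on_source_load(void *private_data, obs_source_t *source)
{
    /* front-end-specific per-source loading goes here */
    UNUSED_PARAMETER(private_data);
    UNUSED_PARAMETER(source);
}

static void load_all(obs_data_array_t *sources_list)
{
    obs_load_sources(sources_list, on_source_load, NULL);
}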
The default buffering time for audio was always 1 second before the
audio subsystem was changed, and it was always more than sufficient as
a maximum audio buffering time.
Under certain circumstances, the timing_adjust variable would cause
line 1161 to trigger over and over again. The "loop detection" code
incorrectly treated any timestamp that was simply below the expected
value as a jump. After that, the timing_adjust variable would be set
for the frame again, then the audio would see it as a jump again, and
those two things would continue endlessly. This caused stuttering,
particularly with certain devices (elgato/lgp/hdpvr) where the
audio/video data are decoded and sent at varying/unpredictable times.
To fix this issue, timestamps below the expected value should not be
detected as jumps; only values that deviate from the expected value by
more than MAX_TS_VAR (maximum timestamp variance) should be.
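A minimal sketch of the corrected check (the 2-second constant is
assumed to mirror the MAX_TS_VAR definition): a timestamp only counts
as a jump when it deviates from the expected value by more than
MAX_TS_VAR in either direction, not merely when it falls below it.

#include <stdbool.h>
#include <stdint.h>

#define MAX_TS_VAR 2000000000ULL /* maximum timestamp variance (ns) */

static inline bool ts_is_jump(uint64_t ts, uint64_t expected)
{
    uint64_t diff = ts < expected ? expected - ts : ts - expected;
    return diff > MAX_TS_VAR;
}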