This is a band-aid solution to allow creating temporary services
without logging them and to keep them out of enumeration functions.
It is a band-aid because 'master obs context lists' should not be kept
by the core in the first place; logging of object creation/destruction
should also be controlled by the front-end instead of the core.
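For illustration, a front-end might create such a service like this (a
minimal sketch; obs_service_create_private is the call this change
adds as I understand it, and the "rtmp_custom" id and settings are
placeholders):
-------------------
obs_data_t *settings = obs_data_create();
obs_data_set_string(settings, "server", "rtmp://example.com/live");

/* private: not logged, not added to the core's enumeration lists */
obs_service_t *service = obs_service_create_private(
		"rtmp_custom", "temporary service", settings);

obs_data_release(settings);
/* ... use the service, then release it ... */
obs_service_release(service);
-------------------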
This patch fixes a specific crash: if the user gave a filter the same
name as an input source that already existed in the system, the scene
item loading code could find the filter with that name instead of the
source and mistakenly use it as the scene item's source directly.
This would cause a crash when trying to render that filter as a
regular source.
Marking filters as private is a simple, temporary workaround for the
problem. Filters are currently not meant to be found via the main
enumeration/search functions, which is a design flaw (lack of
consistency). In future major API revisions of libobs, filters should
be reworked to act as sources, ideally with the sources they filter as
sub-sources.
Additionally, the concepts of "private context objects" and "primary
lists of context objects" in the back-end should probably also be
removed, allowing the front-end (or optional separate API layers) to
control all primary lists of obs context objects. These minor issues
ultimately stem from API design flaws which need to be corrected.
This crash happened when a filter was mistakenly used as a regular
source due to an unrelated bug in filter code and scene loading code.
The filter and the source it belonged to both had the same name, and
the source loading code found the filter and mistakenly used it as the
source instead of the actual source with that name.
Determines whether an obs object was created successfully. If a
plugin used for a saved object is removed (e.g. a third-party plugin),
the object's data becomes invalid, but the object can often still be
created for the sake of preserving user settings. However, such
objects can cause problems if they're actually used (such as using
them for transitions).
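For illustration, a front-end could guard against such objects roughly
like this (a sketch; obs_obj_invalid is the validity check as I recall
it, so treat the exact name as an assumption):
-------------------
obs_source_t *transition = obs_load_source(data);

/* obs_obj_invalid (assumed name): true if the object's plugin was
 * missing, i.e. the object only exists to preserve user settings */
if (obs_obj_invalid(transition)) {
	obs_source_release(transition);
	return;
}
-------------------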
(Note: Also modified the obs-ffmpeg plugin module)
Allows frame data to pass 8-bit grayscale images (Y800 color format).
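A minimal sketch of an async source outputting a Y800 frame (width,
height, gray_pixels, and source are placeholders):
-------------------
struct obs_source_frame frame = {0};

frame.format      = VIDEO_FORMAT_Y800; /* 8-bit grayscale */
frame.width       = width;
frame.height      = height;
frame.data[0]     = gray_pixels;       /* one byte per pixel */
frame.linesize[0] = width;
frame.timestamp   = os_gettime_ns();

obs_source_output_video(source, &frame);
-------------------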
Closes jp9000/obs-studio#515
Adds deinterlacing API functions. Both standard and 2x variants are
supported. Deinterlacing is set via obs_source_set_deinterlace_mode and
obs_source_set_deinterlace_field_order.
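Typical usage from a front-end looks like this (the enum values shown
are the ones added by this change, as I recall them):
-------------------
/* enable 2x yadif deinterlacing on a source, top field first */
obs_source_set_deinterlace_mode(source, OBS_DEINTERLACE_MODE_YADIF_2X);
obs_source_set_deinterlace_field_order(source,
		OBS_DEINTERLACE_FIELD_ORDER_TOP);
-------------------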
This was implemented in the core itself because deinterlacing should
happen before effect filters are processed, but after async filters
are processed. If this were added as a filter, a different filter
could be processed before deinterlacing, which could interfere with
the result. It was also a bit easier to implement this way because
deinterlacing may need access to the previous async frame.
Effects were split into separate files to reduce load time (especially
for the yadif shaders, which take a significant amount of time to
compile).
Instead of just updating the async texture variables directly in the
source, allow them to be passed via function parameters, making it
possible to upload more than one frame to more than one texture.
This code is primarily intended to be used to upload/convert the
"previous" async frame for the deinterlacer (if necessary).
Creates an effect and assigns it to the target variable only if that
variable's current value is null. This will be used for the
deinterlacing effects to prevent having to compile the shaders unless
they're actually being used.
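The pattern is essentially lazy initialization (a sketch; the helper
name is an assumption):
-------------------
gs_effect_t *obs_load_effect(gs_effect_t **effect, const char *file)
{
	/* compile only on first use -- the yadif shaders are expensive
	 * to compile, so skip them unless deinterlacing is enabled */
	if (!*effect)
		*effect = gs_effect_create_from_file(file, NULL);

	return *effect;
}
-------------------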
(Note: This commit also modifies obs-filters and text-freetype2)
This simplifies writing of effects. DrawMatrix is no longer necessary
because there are no sources that require drawing with a color matrix
other than async sources, and async sources are automatically processed
and don't defer their initial render stage to filters.
When the #include directive is encountered in the C lexer
preprocessor, the files being included need to be resolved relative to
the directory of the file that used the include.
(Note: This commit also changes the UI)
Changed:
-------------------
void obs_load_sources(obs_data_array_t *sources_list);
To:
-------------------
void obs_load_sources(obs_data_array_t *sources_list,
		obs_source_load_cb callback, void *private_data);
Signals should never be required for a function to work properly. The
"source_load" signal was required for the obs_load_sources function,
but it's meant more for loading private data in the settings, not for
general loading of sources.
This changes it so that a callback is explicitly required to load the
sources.
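A front-end now loads sources roughly like this (a sketch; it assumes
the callback receives the private data pointer and each loaded
source):
-------------------
static void on_source_load(void *private_data, obs_source_t *source)
{
	/* the front-end decides what "loading" means here, e.g.
	 * logging the source or adding it to its own lists */
	blog(LOG_INFO, "loaded '%s'", obs_source_get_name(source));
	UNUSED_PARAMETER(private_data);
}

/* ... */
obs_load_sources(sources_array, on_source_load, NULL);
-------------------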
The default buffering time for audio was always 1 second before the
audio subsystem was changed, and it was always more than sufficient,
so use it as the maximum audio buffering time.
Under certain circumstances, the timing_adjust variable would cause
line 1161 to continually trigger over and over again. The "loop
detection" code incorrectly treated any timestamp that was simply
below the expected value as a jump. After that, the timing_adjust
variable would be set for the frame again, the audio would see it as a
jump again after that, and those two things would continue endlessly.
This would cause stuttering, particularly with certain devices
(elgato/lgp/hdpvr) where the audio/video data are decoded and sent at
varying/unpredictable times.
To fix this issue, it should not detect values below the expected
timestamp as jumps, but instead should only do so for values that
exceed the MAX_TS_VAR (maximum timestamp variance) value.
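In other words, the check changes roughly as follows (a sketch;
variable and helper names approximate the actual code):
-------------------
static inline uint64_t uint64_diff(uint64_t ts1, uint64_t ts2)
{
	return (ts1 < ts2) ? (ts2 - ts1) : (ts1 - ts2);
}

/* before: any timestamp below the expected value counted as a jump;
 * after: only a deviation larger than MAX_TS_VAR counts */
if (uint64_diff(ts, next_expected_ts) > MAX_TS_VAR)
	handle_ts_jump(source, ts);
-------------------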
If obs_source::audio_ts is set to 0 (such as by discard_if_stopped in
obs-audio.c) but the push_back variable in the source_output_audio_data
function in obs-source.c is set to true (meaning it's within the
seamless audio smoothing threshold), the obs_source::audio_ts value
would never be reset, and thus all audio data from the source would be
perpetually ignored by the audio subsystem until some sort of
timestamp jump finally caused source_output_audio_place to be called,
resetting obs_source::audio_ts.
obs_source::audio_ts is only reset in source_output_audio_place, not
in source_output_audio_push_back, so the simplest solution is to just
call source_output_audio_place if obs_source::audio_ts is 0.
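That is, the dispatch becomes roughly (a sketch of the intent, not the
exact code):
-------------------
/* take the "place" path when audio_ts was zeroed (e.g. by
 * discard_if_stopped) so it gets re-initialized; push_back never
 * resets it */
if (source->audio_ts && push_back)
	source_output_audio_push_back(source, &in);
else
	source_output_audio_place(source, &in);
-------------------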
This code causes audio data in general to be reset (and subsequently
deleted). It should just be marked as pending and ignored until the
data is ready. The discard_if_stopped function will serve the same
purpose if the source's audio has actually stopped.
There's technically no need to clear the audio data here, nor is there
any need to try to trick the timestamp into a different position. It
can simply just reset the audio timing.
Prevents a possible case where audio data might be deleted when it's not
necessary to delete any.
This variable is used to detect whether audio has stopped -- if audio
stops, it detects that no new data is coming in, and resets the audio
position so that it eliminates the chance of causing the audio buffering
to go haywire if audio starts up again. However, this variable was
not being reset every time the value changed, which it should have
been.
Sometimes the A and B sources of a transition would have a large
difference in their timestamps, and the calculated start position of
the audio data for one of the sources could be beyond the tick size,
which could cause a crash.
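The fix amounts to a bounds check on the computed start position (a
sketch; AUDIO_OUTPUT_FRAMES is the 1024-frame tick size, other names
are assumptions):
-------------------
/* a large A/B timestamp difference could otherwise index past the
 * end of the mix buffer */
if (start_point >= AUDIO_OUTPUT_FRAMES)
	return false;
-------------------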
If the source's circular audio buffer had data remaining that was less
than the audio frame tick count (1024 frames), that audio data would
just be left on the source without being discarded. However, this
could cause audio buffering to increase unnecessarily under certain
circumstances (when the next audio timestamp is within the timestamp
jump window): data would be appended to that circular buffer despite
the audio having stopped that long ago, forcing audio buffering to
increase to compensate.
Instead, just discard pending audio if it hasn't been written to -- in
other words, if the audio has stopped and there's insufficient audio
left to continue processing.
With the new audio subsystem, audio buffering is minimal at all times.
However, when the audio buffering is too small or non-existent, the
audio encoders could start with a timestamp that was actually higher
than the first video frame timestamp. Video would have some inherent
buffering/delay, but audio could return and encode almost immediately.
This created a possible window of empty time between the first encoded
video packet and the first encoded audio packet, whereas audio
buffering would cause the first audio packet's timestamp to always be
well before the first video packet's timestamp. The code would then
incorrectly assume the two starting points were in sync.
So instead of assuming the audio data is always first, this patch
makes video wait until audio data comes in, and conversely buffers
audio data until video comes in, then tries to find a starting point
within that video data instead, ensuring a synced starting point
whether audio buffering is active or not.
When starting a multi-track output, attempt to pair the video encoder
with one of the audio encoders to ensure that the video and audio
encoders start as close together in time as possible. This ensures the
best possible audio/video syncing point when using multi-track audio
output.