(Note: Also modified the obs-ffmpeg plugin module)
Allows frame data to pass 8-bit grayscale images (Y800 color format).
Closes jp9000/obs-studio#515
Adds deinterlacing API functions. Both standard and 2x variants are
supported. Deinterlacing is set via obs_source_set_deinterlace_mode and
obs_source_set_deinterlace_field_order.
This was implemented in the core itself because deinterlacing should
happen before effect filters are processed, but after async filters are
processed. If this were added as a filter, there is the possibility
that a different filter could be processed before deinterlacing, which
could interfere with the result. It was also a bit easier to implement
this way because deinterlacing may need access to the previous async
frame.
Effects were split into separate files to reduce load time (especially
for the yadif shaders, which take a significant amount of time to
compile).
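As a rough usage sketch (the enum value names below reflect the deinterlacing API added here, to the best of my understanding):

    #include <obs.h>

    /* Enable 2x yadif deinterlacing on an async video source; the core
     * applies it after async filters but before effect filters. */
    static void enable_deinterlacing(obs_source_t *source)
    {
        obs_source_set_deinterlace_mode(source,
                OBS_DEINTERLACE_MODE_YADIF_2X);
        obs_source_set_deinterlace_field_order(source,
                OBS_DEINTERLACE_FIELD_ORDER_TOP);
    }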
Instead of just updating the async texture variables directly in the
source, allow the async texture variables to be passed via function
parameters, so that more than one frame can be passed to more than one
texture.
This code is primarily intended to be used to upload/convert the
"previous" async frame for the deinterlacer (if necessary).
Creates an effect and assigns it to the target variable only if the
variable's current value is null. This will be used for deinterlacing
effects to avoid compiling the shaders unless they're actually used.
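A minimal sketch of the pattern (the helper name is illustrative, not necessarily the actual internal function):

    #include <graphics/graphics.h>

    /* Compile/create the effect only the first time it's needed, so unused
     * deinterlacing shaders are never compiled. */
    static gs_effect_t *load_effect_once(gs_effect_t **effect,
            const char *file)
    {
        if (!*effect)
            *effect = gs_effect_create_from_file(file, NULL);
        return *effect;
    }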
If the circular audio buffer of the source had data remaining that was
less than the audio frame tick count (1024 frames), that audio data
would be left on the source without being discarded. However, this
could cause audio buffering to increase unnecessarily under certain
circumstances: when the next audio timestamp fell within the timestamp
jump window, new data would be appended to that circular buffer even
though the audio had stopped that long ago, forcing audio buffering to
increase to compensate.
Instead, just discard pending audio if it hasn't been written to; in
other words, if the audio has stopped and there's insufficient audio
left to continue processing.
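Roughly, the idea (member names here are illustrative stand-ins, not the actual obs_source fields):

    #include <util/circlebuf.h>
    #include <media-io/audio-io.h> /* AUDIO_OUTPUT_FRAMES, MAX_AUDIO_CHANNELS */

    struct example_source {
        struct circlebuf audio_buf[MAX_AUDIO_CHANNELS];
        size_t buffered_frames;
    };

    /* If the audio has stopped and fewer than one tick's worth of frames
     * (1024) remain buffered, discard them instead of letting later data
     * append to the stale remainder and inflate audio buffering. */
    static void discard_pending_audio(struct example_source *src,
            size_t channels)
    {
        size_t frames = src->buffered_frames;

        if (frames && frames < AUDIO_OUTPUT_FRAMES) {
            for (size_t ch = 0; ch < channels; ch++)
                circlebuf_pop_front(&src->audio_buf[ch], NULL,
                        frames * sizeof(float));
            src->buffered_frames = 0;
        }
    }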
With the new audio subsystem, audio buffering is minimal at all times.
However, when the audio buffering is too small or non-existent, it would
cause the audio encoders to start with a timestamp that was actually
higher than the first video frame timestamp. Video would have some
inherent buffering/delay, but then audio could return and encode almost
immediately. This created a possible window of empty time between the
first encoded video packet and the first encoded audio packet, whereas
audio buffering would cause the first audio packet's timestamp to always
be well before the first video packet's timestamp. It would then
incorrectly assume the two starting points were in sync.
So instead of assuming the audio data is always first, this patch makes
video wait until audio data comes in, and conversely buffers audio data
until video comes in, and tries to find a starting point within that
video data instead, ensuring a synced starting point whether audio
buffering is active or not.
The seamless audio looping code would erroneously trigger for things
that weren't loops, causing the audio data to continually push back and
ignore timestamps, thus going out of sync.
There does need to be loop handling code, but because other things may
need to trigger this code, it's best just to clear the audio data and
start from a fresh sync point. Unfortunately, in the case of loops,
this means the window in which the audio data loops and the video
frames loop needs to be muted.
This fixes an age-old issue where audio samples could be lost or audio
could temporarily go out of sync in the case of looping videos. When
audio/video data is looping, there's a window between when the audio
data resets its timestamp value and when the video data resets its
timestamp value. This method simply pushes back the audio data while in
that window and does not modify sync, and when it detects that it's out
of the loop window it simply forces a resync of the audio data in the
circular buffer.
This ensures that minimal audio data is lost in the loop process, and
minimizes the likelihood of any sort of sync issues associated with
looping.
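A very rough sketch of the decision being made (the real obs-source.c logic tracks more state; the names and structure here are hypothetical):

    #include <stdint.h>

    enum loop_action {
        LOOP_ACTION_NONE,      /* timestamps still increasing normally       */
        LOOP_ACTION_PUSH_BACK, /* in loop window: push audio back, keep sync */
        LOOP_ACTION_RESYNC,    /* out of window: resync circular buffer      */
    };

    static enum loop_action classify_audio_ts(uint64_t audio_ts,
            uint64_t last_audio_ts, uint64_t video_ts,
            uint64_t last_video_ts)
    {
        if (audio_ts >= last_audio_ts)
            return LOOP_ACTION_NONE;

        /* audio timestamp reset, but video has not looped yet */
        if (video_ts >= last_video_ts)
            return LOOP_ACTION_PUSH_BACK;

        /* both looped: force a fresh sync point */
        return LOOP_ACTION_RESYNC;
    }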
Instead of applying the resampler offset right away (to each audio
packet), apply the resampler offset when the timestamps are converted to
system timestamps. This fixes an issue where, if audio timestamps reset
to 0 (for whatever reason), the offset would cause the timestamp to go
negative.
(Note: This commit also modifies UI)
Instead of using signals, use designated callback lists for audio
capture and audio control helpers. Signals aren't suitable here due to
the fact that signals aren't meant for things that happen every frame or
things that happen every time audio/video is received. This also avoids
the allocations the calldata structure would otherwise incur every time
these functions are called.
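For example, the audio capture callback list can be used roughly like this (the exact callback signature may have changed since this commit; the muted flag shown here reflects the current API as I recall it):

    #include <obs.h>

    /* Called directly every time the source produces audio, with no
     * signal/calldata allocation involved. */
    static void on_source_audio(void *param, obs_source_t *source,
            const struct audio_data *audio, bool muted)
    {
        UNUSED_PARAMETER(param);
        UNUSED_PARAMETER(source);
        if (!muted)
            blog(LOG_DEBUG, "got %u audio frames", audio->frames);
    }

    void watch_audio(obs_source_t *source)
    {
        obs_source_add_audio_capture_callback(source, on_source_audio, NULL);
    }

    void unwatch_audio(obs_source_t *source)
    {
        obs_source_remove_audio_capture_callback(source, on_source_audio,
                NULL);
    }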
Transition sources are implemented by registering a source type as
OBS_SOURCE_TYPE_TRANSITION. They're automatically marked as video
composite sources, and video_render/audio_render callbacks must be set
when registering the source. get_width and get_height callbacks are
unused for these types of sources, as transitions automatically handle
width/height behind the scenes with the transition settings.
In the video_render callback, the helper function
obs_transition_video_render is used to assist in automatically
processing and rendering the video. A render callback is passed to the
function, which in turn is given the to/from textures that are
automatically rendered in the back-end.
Similarly, in the audio_render callback, the helper function
obs_transition_audio_render is used to assist in automatically
processing and rendering the audio. Two mix callbacks are used to
handle how the source/destination sources are mixed together. To ensure
the best possible quality, audio processing is per-sample.
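A condensed sketch of a fade-style transition using these helpers (the registration details and callback signatures are my best understanding of the API added here and may differ slightly):

    #include <obs-module.h>

    struct example_fade {
        obs_source_t *source;
    };

    static const char *fade_get_name(void *unused)
    {
        UNUSED_PARAMETER(unused);
        return "Example Fade";
    }

    static void *fade_create(obs_data_t *settings, obs_source_t *source)
    {
        struct example_fade *fade = bzalloc(sizeof(*fade));
        fade->source = source;
        UNUSED_PARAMETER(settings);
        return fade;
    }

    static void fade_destroy(void *data)
    {
        bfree(data);
    }

    /* Receives the "from" (a) and "to" (b) textures, already rendered by
     * the back-end, plus the transition position t in [0, 1]. */
    static void fade_callback(void *data, gs_texture_t *a, gs_texture_t *b,
            float t, uint32_t cx, uint32_t cy)
    {
        /* draw a and b blended by t here */
    }

    /* Per-sample mix volumes for the source (a) and destination (b) audio. */
    static float mix_a(void *data, float t) { return 1.0f - t; }
    static float mix_b(void *data, float t) { return t; }

    static void fade_video_render(void *data, gs_effect_t *effect)
    {
        struct example_fade *fade = data;
        obs_transition_video_render(fade->source, fade_callback);
    }

    static bool fade_audio_render(void *data, uint64_t *ts_out,
            struct obs_source_audio_mix *audio, uint32_t mixers,
            size_t channels, size_t sample_rate)
    {
        struct example_fade *fade = data;
        return obs_transition_audio_render(fade->source, ts_out, audio,
                mixers, channels, sample_rate, mix_a, mix_b);
    }

    struct obs_source_info example_fade_transition = {
        .id           = "example_fade_transition",
        .type         = OBS_SOURCE_TYPE_TRANSITION,
        .get_name     = fade_get_name,
        .create       = fade_create,
        .destroy      = fade_destroy,
        .video_render = fade_video_render,
        .audio_render = fade_audio_render,
        /* get_width/get_height are unused for transitions */
    };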
Transitions can be set to automatically resize, or they can be set to
have a fixed size. Sources within transitions can be made to scale to
the transition size (with or without aspect ratio), or to not scale
unless they're bigger than the transition. They can have a specific
alignment within the transition, or they just default to top-left.
These features are implemented for the purpose of extending transitions
to also act as "switch" sources later, where you can switch to/from two
different sources using the transition animation.
Planned (but not yet implemented and lower priority) features:
- "Switch" transitions which allow the ability to switch back and forth
between two sources with a transitioning animation without discarding
the references
- Easing options to allow the option to transition with a bezier or
custom curve
- Manual transitioning to allow the front-end/user to manually control
the transition offset
(Note: test and UI are also modified by this commit)
API Changed (removed "enum obs_source_type type" parameter):
-------------------------
obs_source_get_display_name
obs_source_create
obs_get_source_output_flags
obs_get_source_defaults
obs_get_source_properties
Removes the "type" parameter from these functions. The "type" parameter
really doesn't serve much of a purpose being a parameter in any of these
cases, the type is just to indicate what it's used for.
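Roughly, the before/after for obs_source_create looks like this (the settings/hotkey parameters are unchanged by this commit; "color_source" is just an example id):

    #include <obs.h>

    obs_source_t *make_color_source(obs_data_t *settings)
    {
        /* Before this commit, the type had to be passed explicitly:
         *   obs_source_create(OBS_SOURCE_TYPE_INPUT, "color_source",
         *                     "my color", settings, NULL);
         * Now the type is implied by the registered source id: */
        return obs_source_create("color_source", "my color", settings, NULL);
    }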
The new audio subsystem fixes two issues:
- The primary issue it fixes is the ability for parent sources to
intercept the audio of child sources, and do custom processing on
them. The main reason for this was the ability to do custom
cross-fading in transitions, but it's also useful for things such as
side-chain effects, applying audio effects to entire scenes, applying
scene-specific audio filters on sub-sources, and other such
possibilities.
- The secondary issue that needed fixing was audio buffering.
Previously, audio buffering was always a fixed buffer size, so it
would always have exactly a certain number of milliseconds of audio
buffering (and thus output delay). Instead, it now dynamically
increases audio buffering only as necessary, minimizing output delay,
and removing the need for users to have to worry about an audio
buffering setting.
The new design makes it so that audio from the leaves of the scene graph
flows to the root nodes, and can be intercepted by parent sources. Each
audio source handles its own buffering, and each audio tick a specific
number of audio frames are popped from the front of the circular buffer
on each audio source. Composite sources (such as scenes) can access the
audio for child sources and do custom processing or mixing on that
audio. Composite sources use the audio_render callback of sources to do
synchronous or deferred audio processing per audio tick. Things like
scenes now mix audio from their sub-sources.
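As an illustration of the flow, a composite source's audio_render callback might pull and sum a child's audio per tick along these lines (the struct layout and the wrapping source registration are illustrative; the helper functions are my understanding of the API):

    #include <obs-module.h>

    struct passthrough {
        obs_source_t *child;
    };

    /* Forward the child's audio for each enabled mixer; scenes do the same
     * thing, mixing all of their sub-sources together. */
    static bool passthrough_audio_render(void *data, uint64_t *ts_out,
            struct obs_source_audio_mix *audio_output, uint32_t mixers,
            size_t channels, size_t sample_rate)
    {
        struct passthrough *pt = data;
        struct obs_source_audio_mix child_audio;
        uint64_t ts;

        UNUSED_PARAMETER(sample_rate);

        if (!pt->child || obs_source_audio_pending(pt->child))
            return false;

        ts = obs_source_get_audio_timestamp(pt->child);
        if (!ts)
            return false;

        obs_source_get_audio_mix(pt->child, &child_audio);

        for (size_t mix = 0; mix < MAX_AUDIO_MIXES; mix++) {
            if (!((mixers >> mix) & 1))
                continue;
            for (size_t ch = 0; ch < channels; ch++) {
                float *out = audio_output->output[mix].data[ch];
                const float *in = child_audio.output[mix].data[ch];
                for (size_t i = 0; i < AUDIO_OUTPUT_FRAMES; i++)
                    out[i] += in[i];
            }
        }

        *ts_out = ts;
        return true;
    }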
(Note: This commit breaks libobs compilation. Skip if bisecting)
Adds a "composite" source type which is used for sources that composite
one or more sub-sources. The audio_render callback is called for
composite sources to allow those types of sources to do custom
processing of the audio of their sub-sources.
(Note: This commit breaks libobs compilation. Skip if bisecting)
Removes audio lines and stores the circular buffer for the audio on the
source itself.
(Note: This commit breaks libobs compilation. Skip if bisecting)
The mixers that a source was assigned to were originally stored in the
audio line. This will store it in the sources themselves instead.
Ensures that the packet dts_usec values which are generated for
syncing/interleaving use the proper offset relative to where they're
supposed to start from. The negative DTS of a first video packet could
potentially have been applied twice because of this.
This was originally used for calculating audio volume if transitions
were active, but transitions won't work that way, so tracking the active
transitions is no longer needed.
(Note: This commit breaks UI compilation. Skip if bisecting)
API Removed:
------------------------
obs_add_source
API Changed:
------------------------
obs_source_remove: Now just marks/signals a source for removal
The concept of "user sources" is flawed: it was something that the
front-end was forced to deal with if it wanted to automate source
saving/loading, and often it had to code around it. That's not how
saving/loading should work; a front-end should be allowed to manage
lists of sources in the way it explicitly chooses, and it should be able
to choose which sources it wants to save/load.
This prevents encoders (hardware encoders in particular) from being
continually active when all outputs disconnect from an encoder. This is
mostly just a temporary measure; the encoding interface may need a bit
of a redesign. It will also definitely need to be able to flush at some
point. Currently, when an output is stopped, the pending data is
discarded, which needs to be fixed.
Allows objects to be created regardless of whether the actual id exists
or not. This is a precaution that preserves objects/settings if the id
was removed for whatever reason (a plugin was removed, or a hardware
encoder disappeared). This was already added for sources, but really
needs to be added for other libobs objects as well: outputs, encoders,
services.
This feature allows a user to delay an output (as long as the output
itself supports it). Needless to say, this is intended for live streams,
where users may want to delay their streams to prevent stream sniping,
cheating, and other such things.
The design this time is a bit more elaborate, but still simple: the user
can now schedule stops/starts without having to wait for the stream
itself to stop before being able to take any action.
Optionally, they can also forcibly stop the stream (and its delay) in
case something happens that they might not want to be streamed.
Additionally, a new option was added to preserve stream cutoff point on
disconnections/reconnections, so that if you get disconnected while
streaming, when it reconnects, it will reconnect right at the point
where it left off. This will probably be quite useful for a number of
applications in addition to regular delay, such as setting the delay to
1 second and then using this feature to keep, for example, a critical
stream such as a tournament stream from getting any of its stream data
cut off. However, using this feature will of course cause
the stream data to buffer and increase delay (and memory usage) while
it's in the process of reconnecting.
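From the API side this boils down to something like the following (assuming the delay API as I understand it):

    #include <obs.h>

    /* Delay the stream output by 20 seconds and preserve the cutoff point
     * across disconnections/reconnections so no stream data is lost. */
    void enable_stream_delay(obs_output_t *output)
    {
        obs_output_set_delay(output, 20, OBS_OUTPUT_DELAY_PRESERVE);
    }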
API Changed:
---------------------------
From:
- bool obs_startup(const char *locale, profiler_name_store_t *store);
To:
- bool obs_startup(const char *locale, const char *module_config_path,
profiler_name_store_t *store);
Summary:
---------------------------
This allows plugin modules to store plugin-specific configuration data
(rather than only allowing objects to store configuration data). This
will be useful for things like caching data, for example looking up
ingest lists from a remote server and caching them (rather than storing
them locally), or caching font data (so a font cache doesn't have to be
rebuilt each time), among other things.
Also adds a module-specific directory for the UI
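For example (the config path here is purely illustrative):

    #include <obs.h>

    bool start_libobs(void)
    {
        return obs_startup("en-US", "/path/to/plugin_config", NULL);
    }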
Due to all the threads in libobs it wouldn't be safe to make that
parameter reconfigurable after libobs is initialized without adding
even more synchronization. On the other hand, adding a function to set
the name store before calling obs_startup would solve the problem of
passing a name store into libobs, but it can lead to more complicated
semantics for obs_get_profiler_name_store (e.g., should it always return
the current name store even if libobs isn't initialized until someone
calls set_name_store(NULL)? Should obs_shutdown call
set_name_store(NULL)?). Passing it as an obs_startup parameter avoids
these (and hopefully other) potential misunderstandings.
(Non-compiling commit: windowless-context branch)
API Changed:
---------------------
Removed functions:
- obs_add_draw_callback
- obs_remove_draw_callback
- obs_resize
- obs_preview_set_enabled
- obs_preview_enabled
Removed member variables from struct obs_video_info:
- window_width
- window_height
- window
Summary:
---------------------
Changes the core libobs API to not be dependent upon a main window/view.
If you wish to draw to a window/view, use an obs_display object to
handle it.
This allows the use of libobs without requiring a window to be present
on the system. It also prunes code that had to be needlessly duplicated
to handle the "main" window.
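A rough sketch of the obs_display usage that replaces the old main-window path (gs_init_data fields are platform-specific; the single-argument obs_display_create shown here assumes the API as of this change):

    #include <obs.h>

    /* Called by the display whenever it draws a frame. */
    static void draw_preview(void *param, uint32_t cx, uint32_t cy)
    {
        /* render the desired view/sources into this display here */
        UNUSED_PARAMETER(param);
        UNUSED_PARAMETER(cx);
        UNUSED_PARAMETER(cy);
    }

    static obs_display_t *create_preview(void *hwnd, uint32_t cx, uint32_t cy)
    {
        struct gs_init_data info = {0};
        info.cx = cx;
        info.cy = cy;
        info.format = GS_RGBA;
        info.zsformat = GS_ZS_NONE;
        info.window.hwnd = hwnd; /* Windows handle; other platforms differ */

        obs_display_t *display = obs_display_create(&info);
        if (display)
            obs_display_add_draw_callback(display, draw_preview, NULL);
        return display;
    }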
The "clamped" video time is the system time per video frame that is
closest to the current system time, but always divisible by the frame
interval. For example, if the last frame system timestamp was 1600 and
the new frame is 2500, but the frame interval is 800, then the
"clamped" video time is 2400.
This clamped value is useful to get the relative system time without any
jitter.
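In other words, something along these lines (a sketch of the idea, not the exact libobs code):

    #include <stdint.h>

    /* Snap the current system time to the nearest multiple of the frame
     * interval measured from the last frame's timestamp, e.g.
     * last = 1600, now = 2500, interval = 800  ->  2400. */
    static uint64_t clamped_video_time(uint64_t last_ts, uint64_t now,
            uint64_t interval)
    {
        uint64_t elapsed = now - last_ts;
        uint64_t ticks = (elapsed + interval / 2) / interval;
        return last_ts + ticks * interval;
    }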