Certain types of sources (display captures, game captures, audio
device captures, video device captures) should not be duplicated. This
capability flag hints that the source prefers references over full
duplication.
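A minimal sketch of how a capture-style source might set this hint when registering, assuming the flag is exposed as OBS_SOURCE_DO_NOT_DUPLICATE alongside the other output capability flags (most callbacks elided):

    #include <obs-module.h>

    static const char *my_capture_get_name(void *unused)
    {
        UNUSED_PARAMETER(unused);
        return "My Display Capture";
    }

    struct obs_source_info my_display_capture_info = {
        .id           = "my_display_capture",
        .type         = OBS_SOURCE_TYPE_INPUT,
        .output_flags = OBS_SOURCE_VIDEO | OBS_SOURCE_CUSTOM_DRAW |
                        OBS_SOURCE_DO_NOT_DUPLICATE,
        .get_name     = my_capture_get_name,
        /* create/destroy/video_render callbacks omitted in this sketch */
    };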
Adds the ability to query whether a source has any properties at all
(i.e. whether it's configurable). This is mostly only used for
transitions, with the intention of automatically creating transitions
which don't require configuration.
(Note: This commit also modifies UI)
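A small sketch of how a front-end might use this, assuming the query is exposed as obs_source_configurable():

    #include <obs.h>

    /* Only show a "Properties" button, or skip the configuration step
     * when auto-creating a transition, if the source actually has
     * properties. */
    static void update_properties_ui(obs_source_t *source)
    {
        bool configurable = obs_source_configurable(source);

        blog(LOG_INFO, "'%s' is %sconfigurable",
             obs_source_get_name(source),
             configurable ? "" : "not ");
    }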
Instead of using signals, use designated callback lists for audio
capture and audio control helpers. Signals aren't suitable here because
they aren't meant for things that happen every frame or every time
audio/video data is received. This also avoids allocating audio data
for the calldata structure every time these functions are called.
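A sketch of the callback-list approach, assuming the registration functions are obs_source_add_audio_capture_callback / obs_source_remove_audio_capture_callback (the muted flag shown here may not have been part of the original signature):

    #include <obs.h>

    static void on_source_audio(void *param, obs_source_t *source,
            const struct audio_data *audio, bool muted)
    {
        /* called every time the source receives audio; e.g. feed a
         * volume meter with audio->frames samples per channel */
        UNUSED_PARAMETER(param);
        UNUSED_PARAMETER(source);
        UNUSED_PARAMETER(audio);
        UNUSED_PARAMETER(muted);
    }

    static void watch_source_audio(obs_source_t *source)
    {
        obs_source_add_audio_capture_callback(source, on_source_audio, NULL);
    }

    static void unwatch_source_audio(obs_source_t *source)
    {
        obs_source_remove_audio_capture_callback(source, on_source_audio, NULL);
    }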
Transition sources are implemented by registering a source type as
OBS_SOURCE_TYPE_TRANSITION. They're automatically marked as video
composite sources, and video_render/audio_render callbacks must be set
when registering the source. get_width and get_height callbacks are
unused for these types of sources, as transitions automatically handle
width/height behind the scenes with the transition settings.
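A sketch of how such a transition might be registered; the fade_video_render / fade_audio_render callbacks referenced here are sketched after the next two paragraphs, and the remaining callbacks are elided:

    #include <obs-module.h>

    struct obs_source_info my_fade_transition = {
        .id           = "my_fade_transition",
        .type         = OBS_SOURCE_TYPE_TRANSITION,
        .video_render = fade_video_render,   /* see sketch below */
        .audio_render = fade_audio_render,   /* see sketch below */
        /* get_name, create, destroy, etc. omitted */
    };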
In the video_render callback, the helper function
obs_transition_video_render is used to assist in automatically
processing and rendering the video. A render callback is passed to the
function, which in turn receives the to/from textures that are
automatically rendered in the back-end.
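For example, a crossfade-style transition's video path might look roughly like this, assuming the callback receives the from/to textures along with the transition position t in the 0.0-1.0 range (the "Fade" effect and its parameters are assumptions of this sketch):

    #include <obs-module.h>

    struct fade_info {
        obs_source_t *source;
        gs_effect_t  *effect;
        gs_eparam_t  *a_param;    /* texture of the "from" source */
        gs_eparam_t  *b_param;    /* texture of the "to" source */
        gs_eparam_t  *t_param;    /* transition position, 0.0 - 1.0 */
    };

    static void fade_render_callback(void *data, gs_texture_t *a,
            gs_texture_t *b, float t, uint32_t cx, uint32_t cy)
    {
        struct fade_info *fade = data;

        gs_effect_set_texture(fade->a_param, a);
        gs_effect_set_texture(fade->b_param, b);
        gs_effect_set_float(fade->t_param, t);

        while (gs_effect_loop(fade->effect, "Fade"))
            gs_draw_sprite(NULL, 0, cx, cy);
    }

    static void fade_video_render(void *data, gs_effect_t *effect)
    {
        struct fade_info *fade = data;

        obs_transition_video_render(fade->source, fade_render_callback);
        UNUSED_PARAMETER(effect);
    }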
Similarly, in the audio_render callback, the helper function
obs_transition_audio_render is used to assist in automatically
processing and rendering the audio. Two mix callbacks are used to
handle how the source/destination sources are mixed together. To ensure
the best possible quality, audio processing is per-sample.
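The corresponding audio path for the same crossfade sketch could then delegate to the helper with two simple per-sample mix callbacks (reusing the fade_info struct above; the helper signature is assumed):

    static float mix_a(void *data, float t)
    {
        UNUSED_PARAMETER(data);
        return 1.0f - t;
    }

    static float mix_b(void *data, float t)
    {
        UNUSED_PARAMETER(data);
        return t;
    }

    static bool fade_audio_render(void *data, uint64_t *ts_out,
            struct obs_source_audio_mix *audio, uint32_t mixers,
            size_t channels, size_t sample_rate)
    {
        struct fade_info *fade = data;

        return obs_transition_audio_render(fade->source, ts_out, audio,
                mixers, channels, sample_rate, mix_a, mix_b);
    }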
Transitions can be set to automatically resize, or they can be set to
have a fixed size. Sources within transitions can be made to scale to
the transition size (with or without aspect ratio), or to not scale
unless they're bigger than the transition. They can have a specific
alignment within the transition, or they just default to top-left.
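A sketch of how a front-end might configure a fixed-size transition, assuming the setters shown here:

    static void setup_transition(obs_source_t *transition)
    {
        /* fixed 1920x1080 canvas instead of auto-resizing */
        obs_transition_set_size(transition, 1920, 1080);

        /* scale sub-sources to fit while preserving aspect ratio */
        obs_transition_set_scale_type(transition,
                OBS_TRANSITION_SCALE_ASPECT);

        /* center sub-sources within the transition */
        obs_transition_set_alignment(transition, OBS_ALIGN_CENTER);
    }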
These features are implemented for the purpose of extending transitions
to also act as "switch" sources later, where you can switch to/from two
different sources using the transition animation.
Planned (but not yet implemented and lower priority) features:
- "Switch" transitions which allow the ability to switch back and forth
between two sources with a transitioning animation without discarding
the references
- Easing options to allow the option to transition with a bezier or
custom curve
- Manual transitioning to allow the front-end/user to manually control
the transition offset
Prevents a deadlock between the scene mutex and the graphics mutex. In
libobs/obs-video.c, the graphics mutex could be locked first, then the
scene mutexes second, while in the UI thread, the scene mutexes could be
locked first, then when a scene item is being destroyed, a source could
be destroyed, and sometimes sources would lock the graphics mutex
second.
A possible additional solution is to defer source destroys to the video
thread.
(Note: test and UI are also modified by this commit)
API Changed (removed "enum obs_source_type type" parameter):
-------------------------
obs_source_get_display_name
obs_source_create
obs_get_source_output_flags
obs_get_source_defaults
obs_get_source_properties
Removes the "type" parameter from these functions. The "type" parameter
really doesn't serve much of a purpose being a parameter in any of these
cases, the type is just to indicate what it's used for.
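A rough before/after sketch of the obs_source_create change (the exact parameter order of the old signature is from memory and may differ slightly):

    static obs_source_t *make_color_source(obs_data_t *settings)
    {
        /* before: obs_source_create(OBS_SOURCE_TYPE_INPUT, "color_source",
         *         "my color", settings, NULL); */

        /* after: the registered id alone identifies the source type */
        return obs_source_create("color_source", "my color", settings, NULL);
    }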
This buffers scene item visibility actions so that when
obs_sceneitem_set_visible is called with true or false, the action is
mapped to the exact audio sample point at the time of the call.
Mapping to the exact sample point isn't necessary, but it's a nice
thing to have.
The new audio subsystem fixes two issues:
- The primary issue it fixes is the ability for parent sources to
intercept the audio of child sources, and do custom processing on
them. The main reason for this was the ability to do custom
cross-fading in transitions, but it's also useful for things such as
side-chain effects, applying audio effects to entire scenes, applying
scene-specific audio filters on sub-sources, and other such
possibilities.
- The secondary issue that needed fixing was audio buffering.
Previously, audio buffering was always a fixed buffer size, so it
would always have exactly a certain number of milliseconds of audio
buffering (and thus output delay). Instead, it now dynamically
increases audio buffering only as necessary, minimizing output delay,
and removing the need for users to have to worry about an audio
buffering setting.
The new design makes it so that audio from the leaves of the scene graph
flow to the root nodes, and can be intercepted by parent sources. Each
audio source handles its own buffering, and each audio tick a specific
number of audio frames are popped from the front of the circular buffer
on each audio source. Composite sources (such as scenes) can access the
audio for child sources and do custom processing or mixing on that
audio. Composite sources use the audio_render callback of sources to do
synchronous or deferred audio processing per audio tick. Things like
scenes now mix audio from their sub-sources.
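A simplified sketch of what a composite source's audio_render callback might look like, assuming the child-audio helpers obs_source_audio_pending, obs_source_get_audio_mix, and obs_source_get_audio_timestamp; real implementations such as scenes also handle timestamp alignment and per-item buffering:

    #include <obs-module.h>

    struct my_composite {
        obs_source_t *child;    /* single sub-source for simplicity */
    };

    static bool my_composite_audio_render(void *data, uint64_t *ts_out,
            struct obs_source_audio_mix *audio_output, uint32_t mixers,
            size_t channels, size_t sample_rate)
    {
        struct my_composite *comp = data;
        struct obs_source_audio_mix child_audio;

        if (!comp->child || obs_source_audio_pending(comp->child))
            return false;

        obs_source_get_audio_mix(comp->child, &child_audio);

        for (size_t mix = 0; mix < MAX_AUDIO_MIXES; mix++) {
            if ((mixers & (1 << mix)) == 0)
                continue;

            for (size_t ch = 0; ch < channels; ch++) {
                float *out = audio_output->output[mix].data[ch];
                float *in  = child_audio.output[mix].data[ch];

                /* custom processing/mixing would happen here */
                for (size_t i = 0; i < AUDIO_OUTPUT_FRAMES; i++)
                    out[i] += in[i];
            }
        }

        *ts_out = obs_source_get_audio_timestamp(comp->child);
        UNUSED_PARAMETER(sample_rate);
        return true;
    }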
(Note: This commit breaks libobs compilation. Skip if bisecting)
Adds a "composite" source type which is used for sources that composite
one or more sub-sources. The audio_render callback is called for
composite sources to allow those types of sources to do custom
processing of the audio of their sub-sources.
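A sketch of registering such a source, assuming the flag is exposed as OBS_SOURCE_COMPOSITE (the my_composite_audio_render callback is sketched under the previous commit above):

    struct obs_source_info my_composite_info = {
        .id           = "my_composite",
        .type         = OBS_SOURCE_TYPE_INPUT,
        .output_flags = OBS_SOURCE_COMPOSITE,
        .audio_render = my_composite_audio_render,
        /* get_name, create, destroy, video callbacks elided */
    };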
(Note: This commit breaks libobs compilation. Skip if bisecting)
This variable is somewhat redundant. Volume is already known/accessible
to front-ends.
(Note: This commit breaks libobs compilation. Skip if bisecting)
Removes audio lines and stores the circular buffer for the audio on the
source itself.
(Note: This commit breaks libobs compilation. Skip if bisecting)
The mixers that a source was assigned to were originally stored in the
audio line. This will store it in the sources themselves instead.
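A sketch of the resulting per-source API, assuming the accessors are obs_source_set_audio_mixers / obs_source_get_audio_mixers with one bit per mixer/track:

    static void route_to_tracks_1_and_2(obs_source_t *source)
    {
        uint32_t mixers = (1 << 0) | (1 << 1);   /* tracks 1 and 2 */

        obs_source_set_audio_mixers(source, mixers);
        /* stored directly on the source now, no audio line involved */
        blog(LOG_INFO, "mixers: 0x%x", obs_source_get_audio_mixers(source));
    }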
(Note: This commit breaks libobs compilation. Skip if bisecting)
Uses a callback and allows the caller to mix audio. Additionally,
allows the callback to return audio later, allowing it to buffer as much
as it needs.
Darkest dungeon uses an unusual technique for drawing its frames: a
fixed 1920x1080 frame buffer used in place of the backbuffer, which is
then stretched to fit the size of the screen (whether the screen is
bigger or smaller than the actual texture).
The custom frame would cause glReadBuffer to initially fail with an
error. When this happens, their custom frame buffer is in use, so all
that needs to be done is simply reset the capture and force the current
output size to 1920x1080 while that custom frame is in use.
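A simplified illustration of that detection pattern (not the actual graphics-hook code), where a failing glReadBuffer call is treated as the signal that the game's custom 1920x1080 framebuffer is bound:

    #include <stdint.h>
    #include <stdbool.h>
    #include <GL/gl.h>

    static bool using_custom_framebuffer(uint32_t *cx, uint32_t *cy)
    {
        glReadBuffer(GL_BACK);

        if (glGetError() != GL_NO_ERROR) {
            /* custom frame buffer in use: reset the capture and
             * force the output size to the game's fixed texture */
            *cx = 1920;
            *cy = 1080;
            return true;
        }

        return false;
    }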
They presumably did this in order to ensure the game looks the same at
any resolution. Instead of having to use power-of-two sprites and
mipmaps for every single game sprite and stretch/skew each of them
(which would risk the final output "not looking quite right" at
different resolutions), they simply use non-pow-2 sprites with no
mipmaps and render them all on to one texture of a fixed size and then
stretch that final output texture. That ensures that the actual
composite of the game still looks the same at any resolution, while
reducing texture memory by not requiring each sprite to use a
power-of-two texture and mipmaps.
Adds the option of making the media file restart when the source becomes
active (such as switching to a scene with it).
Due to lack of libff features to start/stop/pause/seek media files,
currently this just destroys the demuxer and recreates it. Ideally,
libff should have some functions to allow a more optimal means of doing
those things.
Refactors a bit of code related to starting up FFmpeg and makes it so the
initial view for the media source's properties displays the most
commonly desired settings.
Instead of the media source properties showing URL mode by default
along with a whole bunch of properties that are confusing to most
users, the properties now start in file mode, and the defaults have
been changed to be a bit more sensible for file input.
Also, as a temporary measure for fixing color format issues (some video
files would display their color information incorrectly), forced format
conversion is now enabled by default, and has been moved to advanced
settings. Ideally, the actual bug causing color format issues in either
media-io or libff should be fixed at some point.
When browsing for a file, it would also just use *.* for the file
filter, which is a pain to use. This has been changed to use a
reasonable file filter related to common video/audio files so you don't
have to wade through non-media files just to select a media file. A
filter to show all files is still available as well.
Prevents it from loading the entire image in the graphics thread, and
allows for animated gifs in the filter (not that anyone would ever do
that... right?)
Fixes what is arguably the most annoying feature of the mask/blend
filter, the fact that the image always stretches to the entire source.
It now centers and preserves aspect ratio by default, with an option to
make it stretch and discard aspect ratio to make it operate as it did
before.
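A sketch of the aspect-fit placement now used by default; the actual filter does the equivalent math when rendering, and the names here are just for illustration:

    #include <math.h>

    static void fit_and_center(float source_cx, float source_cy,
            float img_cx, float img_cy,
            float *draw_cx, float *draw_cy,
            float *offset_x, float *offset_y)
    {
        /* scale the image uniformly so it fits inside the source */
        float scale = fminf(source_cx / img_cx, source_cy / img_cy);

        *draw_cx = img_cx * scale;
        *draw_cy = img_cy * scale;

        /* then center it within the source bounds */
        *offset_x = (source_cx - *draw_cx) * 0.5f;
        *offset_y = (source_cy - *draw_cy) * 0.5f;
    }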
Images continually loading/unloading every time a transition occurs
adds a lot of unnecessary transition lag. The user can always change
this value manually and/or use scene collections, so changing the
default so that images are not unloaded/reloaded feels a bit safer.
This is probably not necessary but might fix an issue where errors pass
through to other parts of the program, possibly causing the crash on
exit related to the xcomposite capture.
OSX has an annoying feature called "BeamSync", which on later versions
of OSX is always on. For whatever reason, Apple devs decided to force
this feature to always be on, so applications must always render with
v-sync regardless of what they set their swap interval to.
This issue would cause syncing to the vertical refresh for each
additional active display, and wouldn't allow rendering above the
current refresh rate. This caused major rendering stalls and prevented
video frame timing from being accurate.
This fixes the issue by using an undocumented set of functions to
disable BeamSync. Note that because this is an undocumented method of
working around the issue, its existence cannot be guaranteed. If the
functions no longer exist for whatever reason, it will safely do
nothing.
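One way to get that "safely do nothing" behavior is to resolve the function at runtime rather than linking against it; the symbol name below is a hypothetical placeholder for illustration, not the actual undocumented call:

    #include <dlfcn.h>

    typedef int (*beam_sync_func_t)(int);

    static void try_disable_beam_sync(void)
    {
        /* hypothetical symbol name used purely for illustration */
        beam_sync_func_t disable_beam_sync = (beam_sync_func_t)dlsym(
                RTLD_DEFAULT, "CGSHypotheticalSetBeamSync");

        if (disable_beam_sync)
            disable_beam_sync(0);    /* 0 = off, in this sketch */
        /* else: function no longer exists; safely do nothing */
    }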