A linker error for XGetXCBConnection started occurring after the
frontend API was added; the fix is simply to make sure libobs is
properly linked against X11_XCB.
Adds deinterlacing API functions. Both standard and 2x variants are
supported. Deinterlacing is set via obs_source_set_deinterlace_mode and
obs_source_set_deinterlace_field_order.
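A minimal usage sketch (the specific OBS_DEINTERLACE_* enum value
names shown here are assumptions):

    /* Enable yadif 2x deinterlacing on an async video source and set
     * the field order (enum value names assumed). */
    obs_source_set_deinterlace_mode(source, OBS_DEINTERLACE_MODE_YADIF_2X);
    obs_source_set_deinterlace_field_order(source,
            OBS_DEINTERLACE_FIELD_ORDER_TOP);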
This was implemented into the core itself because deinterlacing should
happen before effect filters are processed, but after async filters are
processed. If this were added as a filter, a different filter could be
processed before deinterlacing, which could interfere with the result.
It was also a bit easier to implement this way, because deinterlacing
may need access to the previous async frame.
Effects were split into separate files to reduce load time (especially
for the yadif shaders, which take a significant amount of time to
compile).
Transition sources are implemented by registering a source type as
OBS_SOURCE_TYPE_TRANSITION. They're automatically marked as video
composite sources, and video_render/audio_render callbacks must be set
when registering the source. get_width and get_height callbacks are
unused for these types of sources, as transitions automatically handle
width/height behind the scenes with the transition settings.
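A registration sketch for a hypothetical "my_fade" transition (the
callback names are placeholders; the video_render/audio_render
callbacks are sketched further below):

    struct obs_source_info my_fade_transition = {
            .id           = "my_fade_transition",
            .type         = OBS_SOURCE_TYPE_TRANSITION,
            .get_name     = my_fade_get_name,
            .create       = my_fade_create,
            .destroy      = my_fade_destroy,
            .video_render = my_fade_video_render,
            .audio_render = my_fade_audio_render,
    };

    /* called from the module's obs_module_load() */
    obs_register_source(&my_fade_transition);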
In the video_render callback, the helper function
obs_transition_video_render is used to assist in automatically
processing and rendering the video. A render callback is passed to the
function, which in turn passes in the to/from textures that are
automatically rendered in the back-end.
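A sketch of that callback pair for the hypothetical "my_fade"
transition above (the private data struct holding the transition's own
source pointer is a placeholder):

    static void my_fade_transition_callback(void *data, gs_texture_t *a,
                    gs_texture_t *b, float t, uint32_t cx, uint32_t cy)
    {
            /* draw texture 'a' (the "from" source) and texture 'b'
             * (the "to" source) here, blended according to the
             * transition progress 't' (0.0 to 1.0) */
    }

    static void my_fade_video_render(void *data, gs_effect_t *effect)
    {
            struct my_fade *fade = data;  /* placeholder private data */

            obs_transition_video_render(fade->source,
                            my_fade_transition_callback);

            UNUSED_PARAMETER(effect);
    }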
Similarly, in the audio_render callback, the helper function
obs_transition_audio_render is used to assist in automatically
processing and rendering the audio. Two mix callbacks are used to
handle how the source/destination sources are mixed together. To ensure
the best possible quality, audio processing is per-sample.
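Continuing the same sketch, the two mix callbacks return the volume to
apply to the "from" and "to" sources at the current progress 't'; the
exact audio_render signature shown here is an assumption:

    static float mix_a(void *data, float t)
    {
            return 1.0f - t; /* fade out the source transitioned from */
    }

    static float mix_b(void *data, float t)
    {
            return t;        /* fade in the source transitioned to */
    }

    static bool my_fade_audio_render(void *data, uint64_t *ts_out,
                    struct obs_source_audio_mix *audio, uint32_t mixers,
                    size_t channels, size_t sample_rate)
    {
            struct my_fade *fade = data;

            return obs_transition_audio_render(fade->source, ts_out,
                            audio, mixers, channels, sample_rate,
                            mix_a, mix_b);
    }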
Transitions can be set to automatically resize, or they can be set to
have a fixed size. Sources within transitions can be made to scale to
the transition size (with or without aspect ratio), or to not scale
unless they're bigger than the transition. They can have a specific
alignment within the transition, or they just default to top-left.
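For example, a transition could be configured with a fixed size,
aspect-preserving scaling, and top-left alignment roughly like this
(the setter and enum names are assumptions):

    obs_transition_set_size(transition, 1920, 1080);
    obs_transition_set_scale_type(transition, OBS_TRANSITION_SCALE_ASPECT);
    obs_transition_set_alignment(transition, OBS_ALIGN_TOP | OBS_ALIGN_LEFT);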
These features are implemented for the purpose of extending transitions
to also act as "switch" sources later, where you can switch to/from two
different sources using the transition animation.
Planned (but not yet implemented and lower priority) features:
- "Switch" transitions which allow the ability to switch back and forth
between two sources with a transitioning animation without discarding
the references
- Easing options to allow transitioning with a bezier or custom curve
- Manual transitioning to allow the front-end/user to manually control
the transition offset
The new audio subsystem fixes two issues:
- The primary issue it fixes is the ability for parent sources to
intercept the audio of child sources, and do custom processing on
them. The main reason for this was the ability to do custom
cross-fading in transitions, but it's also useful for things such as
side-chain effects, applying audio effects to entire scenes, applying
scene-specific audio filters on sub-sources, and other such
possibilities.
- The secondary issue that needed fixing was audio buffering.
Previously, audio buffering was always a fixed buffer size, so it
would always have exactly a certain number of milliseconds of audio
buffering (and thus output delay). Instead, it now dynamically
increases audio buffering only as necessary, minimizing output delay,
and removing the need for users to have to worry about an audio
buffering setting.
The new design makes it so that audio from the leaves of the scene graph
flow to the root nodes, and can be intercepted by parent sources. Each
audio source handles its own buffering, and each audio tick a specific
number of audio frames are popped from the front of the circular buffer
on each audio source. Composite sources (such as scenes) can access the
audio for child sources and do custom processing or mixing on that
audio. Composite sources use the audio_render callback of sources to do
synchronous or deferred audio processing per audio tick. Things like
scenes now mix audio from their sub-sources.
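A rough sketch of how a composite source's audio_render callback might
pull and mix a single child's audio; the accessor names and struct
layout shown are assumptions based on this description:

    static bool my_composite_audio_render(void *data, uint64_t *ts_out,
                    struct obs_source_audio_mix *audio_output,
                    uint32_t mixers, size_t channels, size_t sample_rate)
    {
            struct my_composite *comp = data; /* placeholder data */
            struct obs_source_audio_mix child_audio;

            if (obs_source_audio_pending(comp->child))
                    return false;

            obs_source_get_audio_mix(comp->child, &child_audio);

            for (size_t mix = 0; mix < MAX_AUDIO_MIXES; mix++) {
                    if ((mixers & (1 << mix)) == 0)
                            continue;

                    for (size_t ch = 0; ch < channels; ch++) {
                            float *out = audio_output->output[mix].data[ch];
                            float *in  = child_audio.output[mix].data[ch];

                            for (size_t i = 0; i < AUDIO_OUTPUT_FRAMES; i++)
                                    out[i] += in[i];
                    }
            }

            *ts_out = obs_source_get_audio_timestamp(comp->child);

            UNUSED_PARAMETER(sample_rate);
            return true;
    }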
If the FFMPEG_AVCODEC_LIBRARIES variable does not exist, it will
generate a CMake error, so check that the variable exists before
executing this code.
These functions prevent the computer from going to sleep, hibernating,
or starting up a screen saver.
On Linux, it will also attempt to use D-Bus to prevent GNOME/KDE/etc.
sleep, but D-Bus is not required in order to compile the library.
Otherwise, it will simply call "xdg-screensaver reset" once every 30
seconds to reset the screensaver timer.
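A short usage sketch (the os_inhibit_* function names are assumptions,
since the commit does not name them):

    os_inhibit_t *inhibitor = os_inhibit_sleep_create("Streaming active");

    os_inhibit_sleep_set(inhibitor, true);   /* block sleep/screensaver */
    /* ... stream or record ... */
    os_inhibit_sleep_set(inhibitor, false);  /* allow sleep again */

    os_inhibit_sleep_destroy(inhibitor);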
This feature allows a user to delay an output (as long as the output
itself supports it). Needless to say, this is intended for live
streams, where users may want to delay their streams to prevent stream
sniping, cheating, and other such things.
The design this time is a bit more elaborate, but still simple: the
user can now schedule stops/starts without having to wait for the
stream itself to stop before being able to take any action.
Optionally, they can also forcibly stop the stream (and delay) in case
something happens that they might not want to be streamed.
Additionally, a new option was added to preserve the stream cutoff
point on disconnections/reconnections, so that if you get disconnected
while streaming, the stream will reconnect right at the point where it
left off. This will probably be quite useful for a number of
applications in addition to regular delay; for example, setting the
delay to 1 second and using this feature to keep a critical stream
(such as a tournament stream) from having any of its data cut off.
However, using this feature will of course cause the stream data to
buffer and increase delay (and memory usage) while it's in the process
of reconnecting.
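For example, assuming the delay setter and preserve flag are exposed as
obs_output_set_delay and OBS_OUTPUT_DELAY_PRESERVE:

    /* Delay the stream output by 20 seconds and keep the cutoff point
     * across disconnections/reconnections. */
    obs_output_set_delay(stream_output, 20, OBS_OUTPUT_DELAY_PRESERVE);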
Microsoft basically deprecated GetVersion/GetVersionEx, so now you
have to query the file version of kernel32.dll in order to get the
actual Windows version. Because the steps involved in getting the
Windows version are fairly complicated, this is an exported libobs
helper function that can be used to get the Windows version if needed.
(Microsoft does have its own set of helper functions for this, but the
API feels a bit awkward, and you can't actually get the real Windows
version with them.)
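A possible usage sketch; the helper name, header path, and struct
fields shown here are assumptions, since the commit does not name them:

    #include <util/windows/win-version.h>

    struct win_version_info ver;
    get_win_ver(&ver);

    if (ver.major == 6 && ver.minor >= 2) {
            /* Windows 8 or later */
    }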
A slightly refactored version of R1CH's crash handler; it adds crash
handling for Windows, providing stack traces of all threads and a list
of all loaded modules. It also shows the processor, Windows version,
and current libobs version.
This adds functions for piping a command line program's stdin or stdout.
Note however that this is unidirectional only.
This will be especially useful later on when implementing MP4 output,
because MP4 output has to be piped to prevent unexpected program
termination from corrupting the file.
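A reading-side sketch (the os_process_pipe_* names and the "r"/"w"
mode convention are assumptions, since the commit does not name them):

    #include <util/pipe.h>

    /* Spawn a program and read from its stdout; "r" reads the child's
     * stdout, "w" writes to its stdin -- one direction only. */
    os_process_pipe_t *pp = os_process_pipe_create("some_program --args", "r");
    if (pp) {
            uint8_t buf[4096];
            size_t len;

            while ((len = os_process_pipe_read(pp, buf, sizeof(buf))) > 0) {
                    /* process 'len' bytes of the program's output */
            }

            os_process_pipe_destroy(pp);
    }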
This adds a new library of audio control functions, mainly for use in
GUIs. For now it includes an implementation of a software fader that
can be attached to sources in order to easily control the volume.
The fader can translate between fader position, volume in dB, and
multiplier with a configurable mapping function.
Currently only a cubic mapping (mul = fader_pos ^ 3) is included, but
different mappings can easily be added.
Because libobs saves/restores the source volume from the multiplier,
the volume levels for existing sources will stay the same, and live
changing of the mapping will work without changing the source volume.
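A usage sketch, assuming the fader API is exposed roughly like this
(the function and enum names are assumptions):

    #include <obs.h>
    #include <obs-audio-controls.h>

    /* Create a cubic fader, attach it to a source, and drive it from a
     * GUI slider position in the range 0.0 - 1.0. */
    obs_fader_t *fader = obs_fader_create(OBS_FADER_CUBIC);
    obs_fader_attach_source(fader, source);

    obs_fader_set_deflection(fader, 0.5f); /* mul = 0.5^3 = 0.125 */

    /* ... */

    obs_fader_detach_source(fader);
    obs_fader_destroy(fader);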
Just use the platform-nix.c code for general functionality that macOS
supports, and put a define around everything else. Take that code out
of platform-cocoa.m.
Added os_opendir, os_readdir, and os_closedir to be able to query
available files within a directory.
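A directory-listing sketch (the os_dirent field names are assumptions):

    #include <stdio.h>
    #include <util/platform.h>

    os_dir_t *dir = os_opendir("/some/path");
    if (dir) {
            struct os_dirent *entry;

            while ((entry = os_readdir(dir)) != NULL) {
                    if (!entry->directory)
                            printf("file: %s\n", entry->d_name);
            }

            os_closedir(dir);
    }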