Decoupling the audio from the video causes the audio to be played right
when it's received rather than attempting to sync up to the video frames.
This is useful with certain async sources/devices when the audio/video
timestamps are not reliable.
Naturally, because it plays audio right when it's received, this should
only be used when the async source is operating in unbuffered mode;
otherwise the video frame timing will be out of sync by the amount of
buffering the video currently has.
When an async video frame comes in and sets the timing_adjust value
(used to sync audio to video based upon their timestamps), it would use
os_gettime_ns as a base. Instead, it should use OBS' current video
frame time so that the audio and video playback is as accurate as
possible relative to the exact timestamp of the video frame. (The
difference is almost insignificant, but it's nice to be as precise as
possible.)
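Illustratively, the change amounts to something like this (field names
taken from libobs internals and simplified; a sketch, not the exact
diff):

    /* before: base the adjustment on the wall clock */
    source->timing_adjust = os_gettime_ns() - frame->timestamp;

    /* after: base it on OBS' current video frame time so audio
     * lines up with the exact video frame timestamp */
    source->timing_adjust = obs->video.video_time - frame->timestamp;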
Fixes a potential issue where copying filters from one source to another
might add filters from the old source that are not compatible with the
new source.
(This commit also modifies the decklink, linux-v4l2, mac-avcapture,
obs-ffmpeg, and win-dshow modules)
Originally, async buffering for sources was supposed to be a
user-controllable flag. However, that turned out to be less than ideal
because sources (such as the win-dshow plugin) were programmed with
automatic control over their buffering (such as automatically detecting
USB 2.0 capture devices and then enabling in those cases).
The fact that it was a flag caused a design flaw where buffering
values would be overwritten when a source was loaded from save data.
Because of that, this flag is being deprecated and replaced with a
specific function to enable unbuffered mode instead.
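In other words, callers move from setting a flag to calling a dedicated
setter (a sketch; the flag and function names follow the libobs API
described here):

    /* deprecated: flag-based control, clobbered on load from
     * save data */
    obs_source_set_flags(source, OBS_SOURCE_FLAG_UNBUFFERED);

    /* replacement: explicit unbuffered-mode setter */
    obs_source_set_async_unbuffered(source, true);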
This reverts commit d85224bb9b01173b4ceed866482c435681f3f9b1.
Would cause other flags to stop saving. Buffering needs to be split off
from the source flags.
Eventually, most things should be replaced with Load where applicable
(though in some cases sub-pixel sampling is desired).
This commit also fixes a bug where NV12 async sources wouldn't render
correctly.
(Note: This commits also modifies the linux-pulseaudio, mac-capture, and
win-wasapi plugins)
Do not prevent the targeted output device from being monitored if the
selected monitor output device is a different one.
Closes jp9000/obs-studio#872
This change prevents source flags from being incorrectly overwritten and
set to 0. Eventually flags need to be separated from source settings
and this should be reverted, but for now this solves an issue where
buffering would be enabled on async video sources regardless of whether
the user had disabled it on the source.
Adds functions to turn on audio monitoring, allowing the user to hear
playback of an audio source over the user's speakers. Monitoring can be
turned off so the source only outputs to the stream, set to output only
to monitoring, or set to output to both.
On Windows, audio monitoring uses WASAPI. Windows is also capable of
syncing the audio to the video according to when the video frame itself
was played.
On Mac, it uses AudioQueue.
On Linux, it's not currently implemented and won't do anything (to be
implemented).
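Usage might look like this (a minimal sketch; the setter and enum names
are assumed from the monitoring API added here):

    /* hear the source locally and still send it to the output */
    obs_source_set_monitoring_type(source,
            OBS_MONITORING_TYPE_MONITOR_AND_OUTPUT);

    /* or: monitor only, without sending to the stream */
    obs_source_set_monitoring_type(source,
            OBS_MONITORING_TYPE_MONITOR_ONLY);

    /* or: disable monitoring entirely (stream/output only) */
    obs_source_set_monitoring_type(source, OBS_MONITORING_TYPE_NONE);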
Fixes a bug that would allow possible infinite recursion within a source
tree. To prevent this, inactive sources must be enumerated as well.
Commit 53955301a23 introduced an async source texture copy bug by
creating a new case in a switch without adding a break to the case above
it, causing both cases to execute by mistake.
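The pattern of the bug, in generic form (illustrative C with a
hypothetical helper, not the actual libobs code):

    switch (format) {
    case VIDEO_FORMAT_Y800:
            /* new case added without a break... */
    case VIDEO_FORMAT_I420:
            /* ...so Y800 fell through and ran this path too */
            copy_i420(tex, frame);
            break;
    }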
Because D3D11 specifically does not support an L8 texture format (you
have to use a shader swizzle), manually convert Y800 signals to RGBX
instead. This also fixes a bug where Y800 signals would render red.
Closes jp9000/obs-studio#718
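The manual conversion is conceptually a per-pixel luma replication (an
illustrative sketch; the function name is hypothetical, not the actual
libobs implementation):

    /* hypothetical helper: replicate each 8-bit luma sample into
     * R, G, B and set the unused X channel to 255 */
    static void y800_to_rgbx(uint8_t *out, const uint8_t *y,
                    uint32_t width, uint32_t height)
    {
            for (uint32_t i = 0; i < width * height; i++) {
                    out[i * 4 + 0] = y[i];
                    out[i * 4 + 1] = y[i];
                    out[i * 4 + 2] = y[i];
                    out[i * 4 + 3] = 255;
            }
    }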
If an async source is cropped on one side, then when the program is
restarted and the source is loaded from file, the async source will
start out with a width/height of zero. This will cause the async source
to not be drawn if cropping or scale filtering is added to the scene
item, because it has to be rendered to a texture first. However, the
source cannot reset its size until it's drawn, so it's left in a
perpetual state of having a 0x0 size.
This fixes that problem by ensuring that the async source size is always
reset even when not being rendered.
Closes jp9000/obs-studio#686
When a scene is duplicated, the filters on the scene were not copied to
the new scene. This caused a temporary copy of a scene to render
differently in the program than in the preview when using studio mode.
(Note: This commit also modifies coreaudio-encoder, win-capture, and
win-mf modules)
This reduces logging to the user's log file. Most of the things
specified are not useful for examining log files, and make reading log
files more painful.
The things that are useful to log should be up to the front-end to
implement. The core and core plugins should have minimal mandatory
logging.
The active_refs and show_refs variables would only increment/decrement
their children if their values were 1 and 0, which means that in the
case of scenes within scenes, sub-sources of scenes within scenes would
end up having the wrong ref values.
When using GPU conversion for 4:2:0 frames on async video sources, it
would create a texture bigger than necessary and try to copy too much
data from the frame, resulting in a crash.
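For reference, 4:2:0 chroma planes are a quarter of the luma size, so
the per-plane dimensions should be computed roughly as (illustrative):

    uint32_t luma_w   = width;
    uint32_t luma_h   = height;
    uint32_t chroma_w = (width + 1) / 2;
    uint32_t chroma_h = (height + 1) / 2;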
When a transition is a sub-source of another source, it would not call
the transition's active source enum function, meaning that any sources
the transition had would not increment their active/showing refs (before
this fix, it was only called when the transition was activated directly).
That would result in negative/invalid active/showing refs on its
sub-sources, causing them to become permanently active/inactive and/or
permanently showing/hidden.
(Note: this commit also modifies the obs-filters and test-input modules)
Changes the obs_source_process_filter_begin return type so that it
returns true/false to indicate that filter processing should or should
not continue (for example, if the filter is bypassed or if there's some
other sort of issue that causes the filtering to fail).
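A filter's video render callback would then use the return value like
this (a sketch of the expected pattern; the struct and field names are
hypothetical):

    static void my_filter_video_render(void *data, gs_effect_t *effect)
    {
            struct my_filter *mf = data;

            /* returns false if the filter is bypassed or processing
             * cannot continue, in which case rendering is skipped */
            if (!obs_source_process_filter_begin(mf->context, GS_RGBA,
                            OBS_ALLOW_DIRECT_RENDERING))
                    return;

            obs_source_process_filter_end(mf->context, mf->effect, 0, 0);

            UNUSED_PARAMETER(effect);
    }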
The source shouldn't be inserted into obs->data.first_audio_source until
it's fully initialized; otherwise other threads will access
source->control and dereference an uninitialized pointer.
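The fix is the standard initialize-then-publish ordering (a sketch with
illustrative names; the exact initialization differs in libobs):

    /* 1. fully initialize the source (source->control included) */
    init_source_members(source);  /* hypothetical helper */

    /* 2. only then link it into the list other threads traverse */
    pthread_mutex_lock(&obs->data.audio_sources_mutex);
    source->next_audio_source = obs->data.first_audio_source;
    obs->data.first_audio_source = source;
    pthread_mutex_unlock(&obs->data.audio_sources_mutex);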
This patch fixes a specific crash where if the user named a filter the
same name as an input source that already existed in the system, scene
item loading code could find the filter with the same name instead of
the source, and mistakenly use it as the scene item's source directly.
This would cause a crash when trying to render that filter as a regular
source.
Marking filters as private is a temporary and simple workaround rather
than a proper solution. Filters are currently not meant to be found via
the main
enumeration/search functions, which is a design flaw (lack of
consistency). In future major API revisions of libobs, filters should
be reworked to act as sources, with the sources they filter as
sub-sources ideally.
Additionally, the concept of "private context objects" and "primary
lists of context objects" in the back-end should probably also be
removed, allowing the front-end (or optional separate API layers) to
control all primary lists of obs context objects. These minor issues
that occur ultimately stem from API design flaws which need to be
corrected.
This crash happened when a filter was mistakenly used as a regular
source due to an unrelated bug in filter code and scene loading code.
The filter and the source it belongs to both had the same name, and the
source loading code found the filter and mistakenly used it as the
source instead of the actual source with the same name.
(Note: Also modified the obs-ffmpeg plugin module)
Allows frame data to pass 8-bit grayscale images (Y800 color format).
Closes jp9000/obs-studio#515
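Producing such a frame from a plugin might look like this (a minimal
sketch; the buffer and variable names are placeholders):

    struct obs_source_frame frame = {0};
    frame.format      = VIDEO_FORMAT_Y800;  /* 8-bit grayscale */
    frame.width       = width;
    frame.height      = height;
    frame.data[0]     = gray_pixels;        /* single luma plane */
    frame.linesize[0] = width;
    frame.timestamp   = timestamp;

    obs_source_output_video(source, &frame);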
Adds deinterlacing API functions. Both standard and 2x variants are
supported. Deinterlacing is set via obs_source_set_deinterlace_mode and
obs_source_set_deinterlace_field_order.
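For example (a sketch; the mode and field-order enum values are assumed
from the API added here):

    /* enable 2x yadif deinterlacing, top field first */
    obs_source_set_deinterlace_mode(source,
            OBS_DEINTERLACE_MODE_YADIF_2X);
    obs_source_set_deinterlace_field_order(source,
            OBS_DEINTERLACE_FIELD_ORDER_TOP);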
This was implemented into the core itself because deinterlacing should
happen before effect filters are processed, but after async filters are
processed. If this were added as a filter, there is the possibility
that a different filter is processed before deinterlacing, which could
mess with the result. It was also a bit easier to implement this way
due to the fact that deinterlacing may need to have access to the
previous async frame.
Effects were split into separate files to reduce load time (especially
for yadif shaders which take a significant amount of time to compile).
Instead of just updating the async texture variables directly in the
source, allow the async texture variables to be passed via function
parameters so that more than one frame can be uploaded to more than one
texture.
This code is primarily intended to be used to upload/convert the
"previous" async frame for the deinterlacer (if necessary).