Fixes a potential issue where copying filters from one source to another
might add filters from the old source that are not compatible with the
new source.
(This commit also modifies the decklink, linux-v4l2, mac-avcapture,
obs-ffmpeg, and win-dshow modules)
Originally, async buffering for sources was supposed to be a
user-controllable flag. However, that turned out to be less than ideal
because sources (such as the win-dshow plugin) were programmed with
automatic control over their buffering (for example, automatically
detecting USB 2.0 capture devices and enabling buffering in those
cases).
The fact that it was a flag caused a design flaw where buffering
values would be overwritten when a source was loaded from save data.
Because of that, this flag is being deprecated and replaced with a
specific function to enable unbuffered mode instead.
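As a minimal sketch of the intended usage change (assuming the new
setter is obs_source_set_async_unbuffered and the deprecated flag was
OBS_SOURCE_FLAG_UNBUFFERED; the helper function here is hypothetical):

    #include <obs.h>

    static void make_source_unbuffered(obs_source_t *source)
    {
        /* Old approach: buffering controlled through a source flag,
         * which could be clobbered when flags were restored from
         * save data:
         *
         *     obs_source_set_flags(source,
         *             obs_source_get_flags(source) |
         *             OBS_SOURCE_FLAG_UNBUFFERED);
         */

        /* New approach: a dedicated setter that is independent of
         * the saved flag state. */
        obs_source_set_async_unbuffered(source, true);
    }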
This reverts commit d85224bb9b01173b4ceed866482c435681f3f9b1.
The reverted change would cause other flags to stop saving. Buffering
needs to be split off from the source flags.
Make sure the target file exists before attempting to create a backup.
This allows the function to work correctly when the target does not
yet exist.
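A rough sketch of the guard being described, assuming the libobs
helpers os_file_exists() and os_rename() from util/platform.h; the
surrounding function and its name are hypothetical:

    #include <util/platform.h>

    /* Only create a backup when the target actually exists, so that
     * replacing a not-yet-existing target still succeeds. */
    static int replace_with_backup(const char *target, const char *from,
                                   const char *backup)
    {
        if (backup && os_file_exists(target)) {
            if (os_rename(target, backup) != 0)
                return -1;
        }

        /* The target either never existed or has been moved to the
         * backup path, so this rename behaves the same either way. */
        return os_rename(from, target);
    }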
Prevents lagged frames (frames that took too long to render) from
being counted as skipped frames (frames that are skipped because
encoding took too long to process).
Originally, obs_get_video_info would reconstruct the obs_video_info
structure that had been passed to obs_reset_video. This changes it to
simply store a copy of the obs_video_info when obs_reset_video is
called, and then copy that stored value to the parameter of
obs_get_video_info when it is called.
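Conceptually, the change amounts to something like the following
sketch (simplified; the wrapper names my_reset_video and
my_get_video_info are hypothetical, and the real change lives inside
libobs rather than in a wrapper):

    #include <obs.h>

    static struct obs_video_info stored_ovi;
    static bool video_valid = false;

    int my_reset_video(struct obs_video_info *ovi)
    {
        int ret = obs_reset_video(ovi);
        if (ret == OBS_VIDEO_SUCCESS) {
            stored_ovi = *ovi;   /* keep a copy of the settings */
            video_valid = true;
        }
        return ret;
    }

    bool my_get_video_info(struct obs_video_info *ovi)
    {
        if (!video_valid)
            return false;
        *ovi = stored_ovi;       /* copy to the caller's parameter */
        return true;
    }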
When frames were skipped, the skipped frame count would increment, but
the total frame count would not, causing the percentage calculation to
fail.
Additionally, skipped frame logging has been moved to
media-io/video-io.c instead of being handled by each output.
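For illustration only (the counter and function names here are
hypothetical), the percentage calculation needs both counters to
advance:

    #include <inttypes.h>
    #include <stdio.h>

    /* Every output frame increments total_frames; frames dropped
     * because encoding fell behind also increment skipped_frames.
     * If total_frames stopped advancing on a skip, the ratio below
     * would be wrong. */
    static uint64_t total_frames;
    static uint64_t skipped_frames;

    static void log_skipped_frames(void)
    {
        double percent = total_frames
            ? (double)skipped_frames / (double)total_frames * 100.0
            : 0.0;

        printf("skipped frames: %" PRIu64 "/%" PRIu64 " (%0.1f%%)\n",
               skipped_frames, total_frames, percent);
    }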
libobs' shader language is basically HLSL, and tex.Load uses an int3
for 2D textures, with the texture mipmap index as the last component.
This bug bypassed testing because the front-end automatically switches
to OpenGL if D3D11 initialization fails, and the shader works fine when
converted to GLSL because texelFetch only requires two coordinate
components. This also means there's a bug in the GLSL shader conversion
code, because it's essentially ignoring the third component when it
shouldn't be.
Eventually, most things should be replaced with Load where applicable
(though in some cases sub-pixel sampling is desired).
This commit also fixes a bug where NV12 async sources wouldn't render
correctly.
Allows safely/atomically replacing a file and creating a backup of the
original. This function is being added because Microsoft provides a
ReplaceFile function that does this in a single call.
A file rename will automatically replace the old file if one already
exists, so unlinking first is unnecessary and may prevent the move
operation from being atomic.
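A hedged usage sketch, assuming the new helper is
os_safe_replace(target_path, from_path, backup_path) in
util/platform.h and that it returns 0 on success (the file names below
are illustrative):

    #include <util/platform.h>

    /* Write new data to a temporary file, then atomically swap it in,
     * keeping the previous version as a backup. */
    static bool save_config_safely(void)
    {
        /* ... write the new contents to "config.json.tmp" ... */

        return os_safe_replace("config.json", "config.json.tmp",
                               "config.json.bak") == 0;
    }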
(Note: This commit also modifies the linux-pulseaudio, mac-capture, and
win-wasapi plugins)
Do not prevent the targeted output device from being monitored if the
selected monitor output device is a different one.
Closes jp9000/obs-studio#872
This change prevents source flags from being incorrectly overwritten and
set to 0. Eventually flags need to be separated from source settings
and this change should be reverted, but for now it solves an issue
where buffering would be enabled on async video sources regardless of
whether the user had disabled it on the source.
On Windows, when a source has only audio (no video) yet is marked as
capable of both video and audio, it would be programmed to expect a
video frame to synchronize with. This fixes that potential issue by
detecting whether any video is actually playing.
Adds functions to turn on audio monitoring, allowing the user to hear
playback of an audio source over their speakers. Monitoring can be
turned off so audio is only output to the stream, set to output only
to the monitoring device, or set to output to both.
On Windows, audio monitoring uses WASAPI. Windows is also capable of
syncing the audio to the video according to when the video frame
itself was played.
On Mac, it uses AudioQueue.
On Linux, it's not currently implemented and won't do anything (to be
implemented later).
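A minimal sketch of how a front end might use the new API, assuming
the setter is obs_source_set_monitoring_type with the monitoring types
named below:

    #include <obs.h>

    /* Let the user hear this source locally while it is still sent to
     * the stream/recording output. */
    static void enable_monitor_and_output(obs_source_t *source)
    {
        obs_source_set_monitoring_type(source,
                OBS_MONITORING_TYPE_MONITOR_AND_OUTPUT);

        /* Other options:
         *   OBS_MONITORING_TYPE_NONE         - stream output only
         *   OBS_MONITORING_TYPE_MONITOR_ONLY - local playback only */
    }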