If an audio source does not provide enough data at a steady pace, the
timestamp update does not happen, and buffering increases until it
maxes out. To counteract this, update the timestamp anyway.
Another issue for decoupled audio sources is that timing is not
adjusted for divergence from system time. Making this adjustment is
better for timing stability.
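A minimal sketch of the idea with hypothetical names (the real logic lives in libobs' async audio path): if no audio arrived during a tick, advance the expected timestamp instead of letting the buffer grow.

    /* hypothetical sketch; not the actual libobs implementation */
    #include <stdbool.h>
    #include <stdint.h>

    struct decoupled_audio {
        uint64_t next_audio_ts;   /* expected timestamp of the next segment */
        bool got_audio_this_tick; /* did the source deliver audio this tick? */
    };

    static void audio_tick(struct decoupled_audio *a, uint64_t tick_interval_ns)
    {
        if (!a->got_audio_this_tick) {
            /* force the timestamp update so buffering cannot keep
             * increasing while waiting for data that never came */
            a->next_audio_ts += tick_interval_ns;
        }
        a->got_audio_this_tick = false;
    }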
5+ hours of stable audio without any buffering on my GV-USB2, where it
used to add 21ms every 5 minutes or so.
Fixes https://obsproject.com/mantis/view.php?id=1269
Fix ternary test to use BGRX render targets for YUV to RGB
conversions. The previous behavior may have been fine though since
the shaders fill the alpha channel with 1.0 anyway.
Code submissions have continually suffered from formatting
inconsistencies that constantly have to be addressed. Using
clang-format simplifies this by making code formatting more consistent,
and allows automation of the code formatting so that maintainers can
focus more on the code itself instead of code formatting.
The cache coherency of rasterization for full-screen passes is better
using an oversized triangle that is clipped rather than two triangles.
Traversal order of rasterization is GPU-specific, but will almost
certainly be better using an undivided primitive.
A smaller benefit is that quads along the diagonal are not evaluated
multiple times, but that's minor in comparison.
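A sketch of the oversized triangle in clip space (illustrative only): three vertices whose clipped area covers the whole viewport, with UVs spanning 0..1 over the visible region (depending on the API's UV origin, the v coordinates may need flipping).

    struct vec2f { float x, y; };

    /* positions in clip space; the parts of the triangle outside
     * clip space are discarded by the rasterizer */
    static void fullscreen_triangle(struct vec2f pos[3], struct vec2f uv[3])
    {
        pos[0] = (struct vec2f){-1.0f, -1.0f}; uv[0] = (struct vec2f){0.0f, 0.0f};
        pos[1] = (struct vec2f){ 3.0f, -1.0f}; uv[1] = (struct vec2f){2.0f, 0.0f};
        pos[2] = (struct vec2f){-1.0f,  3.0f}; uv[2] = (struct vec2f){0.0f, 2.0f};
    }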
Redo format shaders to bypass the vertex buffer and input layout. Add a
global shader bool "obs_glsl_compile" to make API-specific decisions,
e.g. handling upside-down UVs. gs_ortho is not needed for format
conversion because the vertex shader no longer uses ViewProj.
This can be applied to more situations, but start small first.
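A hedged sketch of such a pass using the libobs graphics API: no vertex buffer or input layout is bound, and the vertex shader is assumed to derive position/UV from the vertex ID.

    #include <obs.h>

    static void draw_conversion_pass(gs_effect_t *effect, const char *tech_name)
    {
        gs_technique_t *tech = gs_effect_get_technique(effect, tech_name);

        gs_load_vertexbuffer(NULL); /* no vertex data or input layout */
        gs_load_indexbuffer(NULL);

        size_t passes = gs_technique_begin(tech);
        for (size_t i = 0; i < passes; i++) {
            gs_technique_begin_pass(tech, i);
            gs_draw(GS_TRIS, 0, 3); /* single oversized triangle */
            gs_technique_end_pass(tech);
        }
        gs_technique_end(tech);
    }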
Testbed full screen passes, Intel HD Graphics 530:
RGBA -> UYVX: 467 -> 439 us, ~6% savings
UYVX -> uv: 295 -> 239 us, ~19% savings
A previous refactoring to make DrawMatrix unnecessary has left behind
unreachable YUV conversions. Even if this code was somehow reachable,
DrawMatrix for YUV -> RGB doesn't exist anymore, so they would render
incorrectly anyway.
This commit adds functions to forcefully stop a transition and to
increment/decrement the showing counter for a source with the MAIN_VIEW
type.
These functions are needed for the transition previews to work as
intended.
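A hedged usage sketch for a transition preview; the function names below are assumptions based on this description, not confirmed API.

    #include <obs.h>

    static void stop_preview(obs_source_t *transition, obs_source_t *scene)
    {
        /* assumed names: force-stop the transition, then drop the
         * MAIN_VIEW showing reference taken for the preview */
        obs_transition_force_stop(transition);
        obs_source_dec_active(scene);
    }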
libobs: Add support for limited to full color range conversions when
using RGB or Y800 formats, and move RGB conversion for Y800 formats to
the GPU.
decklink: Stop hiding color space/range properties for RGB formats, and
remove "YUV" from "YUV Color Space" and "YUV Color Range".
win-dshow: Remove "YUV" from "YUV Color Space" and "YUV Color Range".
UI: Remove "YUV" from "YUV Color Space" and "YUV Color Range".
Fixes handling of the `obs_source_frame::full_range` member variable,
which many plugins leave set to false by default even when using RGB,
causing RGB to be marked as "partial range". This change is crucial for
when partial range RGB support is implemented.
Adds `obs_source_frame2` structure that replaces the `full_range` member
variable with a `range` variable, which uses the `video_range_type` enum
to allow handling default range values. This member variable treats
VIDEO_RANGE_DEFAULT as full range if the format is RGB, and partial
range if the format is YUV.
Also adds `obs_source_output_video2` and `obs_source_preload_video2`
functions which use the `obs_source_frame2` structure instead of the
`obs_source_frame` structure.
When using the original `obs_source_frame`, `obs_source_output_video`,
and `obs_source_preload_video` functions, RGB will always be full range
by default for backward compatibility purposes.
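A usage sketch of the new structure and function (pixel data and sizes are placeholders): outputting a BGRA frame with an explicit range instead of the old bool.

    #include <obs.h>
    #include <util/platform.h>
    #include <stdint.h>

    static void output_bgra_frame(obs_source_t *source, uint8_t *pixels,
                                  uint32_t width, uint32_t height)
    {
        struct obs_source_frame2 frame = {0};

        frame.data[0] = pixels;
        frame.linesize[0] = width * 4;
        frame.width = width;
        frame.height = height;
        frame.format = VIDEO_FORMAT_BGRA;
        frame.range = VIDEO_RANGE_DEFAULT; /* treated as full range for RGB */
        frame.timestamp = os_gettime_ns();

        obs_source_output_video2(source, &frame);
    }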
This reverts commit d91bd327d7a8bb4597562fc26da4edb7b56874ff, which
broke alpha with sources, scenes, and filters, causing them all to
become opaque unintentionally.
Currently several shaders need "DrawMatrix" techniques to support the
possibility that the input texture is a "YUV" format. Also, "DrawMatrix"
implies conversion in both directions even though it is written for RGB
to "YUV" only.
A cleaner solution is to handle "YUV" to RGB up-front as part of format
conversion, and ensure only RGB inputs reach the other shaders. This is
necessary to someday perform correct scale filtering without the cost of
redundant "YUV" conversions per texture tap.
A necessary prerequisite for this is to add conversion support for
VIDEO_FORMAT_I444, and that is now in place. There was already a hack in
place to cover VIDEO_FORMAT_Y800. All other "YUV" formats already have
conversion functions.
"DrawMatrix" has been removed from shaders that only supported "YUV" to
RGB conversions. It still exists in shaders that perform RGB to "YUV"
conversions, and the implementations have been sanitized accordingly.
Add D3D/GL debug markers to make RenderDoc captures easier to traverse.
Also add obs_source_get_name_no_null() to avoid boilerplate for safe
string formatting.
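A sketch of wrapping a render call in a marker so the pass shows up with a readable name in a RenderDoc capture (assuming the marker API takes a color and a name; the NULL fallback mirrors what obs_source_get_name_no_null() avoids).

    #include <obs.h>

    static void render_with_marker(obs_source_t *source)
    {
        const float color[4] = {0.0f, 0.5f, 1.0f, 1.0f};
        const char *name = obs_source_get_name(source);

        gs_debug_marker_begin(color, name ? name : "(null)");
        obs_source_video_render(source);
        gs_debug_marker_end();
    }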
Closes obsproject/obs-studio#1799
Currently SrcBlendAlpha and DestBlendAlpha are both ONE, so the source
and destination alpha can sum to 2.0. This is not a noticeable problem for
UNORM targets because the channels are clamped, but it will likely
become a problem if FLOAT targets are more widely used.
This change switches DestBlendAlpha to INVSRCALPHA, and starts
backgrounds as opaque black instead of transparent black. The blending
behavior of stacked transparents is preserved without overflowing the
alpha channel.
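A sketch of the resulting blend state using the libobs graphics API: color blends as before, while destination alpha now uses INVSRCALPHA so stacked transparents cannot push alpha past 1.0.

    #include <obs.h>

    static void set_alpha_safe_blend(void)
    {
        gs_blend_function_separate(GS_BLEND_SRCALPHA,    /* src color  */
                                   GS_BLEND_INVSRCALPHA, /* dest color */
                                   GS_BLEND_ONE,         /* src alpha  */
                                   GS_BLEND_INVSRCALPHA  /* dest alpha */);
    }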
If audio monitoring is enabled and set to output only, the
obs_source_show_preloaded_video function would still incorrectly set the
source's audio output timestamp to the current system time. If audio
monitoring is then turned off some time after the source has started,
this caused the first audio segment to use an incorrect starting point
from long ago.
Instead, the starting timestamp should be set to 0 if audio monitoring
is enabled with no output to stream, so that if/when audio monitoring is
disabled, it recalculates the starting timestamp of the first audio
packet on the spot again.
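An illustrative sketch of the corrected starting-timestamp choice (not the exact libobs code):

    #include <obs.h>
    #include <util/platform.h>

    static uint64_t preload_start_timestamp(obs_source_t *source)
    {
        enum obs_monitoring_type mt = obs_source_get_monitoring_type(source);

        /* 0 means "recalculate from the first real audio packet once
         * monitoring is disabled"; otherwise anchor to the system clock */
        return (mt == OBS_MONITORING_TYPE_MONITOR_ONLY) ? 0 : os_gettime_ns();
    }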
Closes obsproject/obs-studio#1522
Don't allow calling a source's custom get_width or get_height callbacks
unless the context is valid. Fixes a bug where a null pointer could be
passed to those functions.
The filters array should not be accessed without locking the filter
mutex. This locks the filter mutex, grabs a reference to the first
filter, unlocks the mutex, renders the filter, and then releases the
reference.
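A sketch of that pattern with a hypothetical source struct (obs_source_addref/obs_source_release and obs_source_video_render are real libobs calls; the fields are illustrative):

    #include <obs.h>
    #include <pthread.h>

    struct source_like {
        pthread_mutex_t filter_mutex;
        obs_source_t **filters; /* filters[0] is the first filter */
        size_t num_filters;
    };

    static void render_first_filter(struct source_like *s)
    {
        obs_source_t *filter = NULL;

        pthread_mutex_lock(&s->filter_mutex);
        if (s->num_filters) {
            filter = s->filters[0];
            obs_source_addref(filter); /* keep it alive outside the lock */
        }
        pthread_mutex_unlock(&s->filter_mutex);

        if (filter) {
            obs_source_video_render(filter);
            obs_source_release(filter);
        }
    }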
Closes obsproject/obs-studio#1215
Fixes a crash when a user tries to paste filters without a source
selected, or when pasting filters from a source that has been deleted.
This commit checks that both sources are still valid before pasting.
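A hedged sketch of the check (the weak-reference resolution and copy call are libobs API; the surrounding flow is illustrative):

    #include <obs.h>

    static void paste_filters(obs_weak_source_t *copied_from_weak,
                              obs_source_t *paste_to)
    {
        obs_source_t *copied_from =
            obs_weak_source_get_source(copied_from_weak);

        if (copied_from && paste_to)
            obs_source_copy_filters(paste_to, copied_from);

        obs_source_release(copied_from);
    }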
This addresses Mantis Issue 1220
(https://obsproject.com/mantis/view.php?id=1220).
If a filter's implementation (its plugin for example) no longer exists,
it would cause the source to stop rendering if that filter was present
on the source. Instead, just bypass the filter to ensure that the
source continues to render.
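A hedged sketch of the bypass; the predicate standing in for "implementation no longer exists" is a placeholder, while obs_filter_get_target() and obs_source_video_render() are real libobs calls.

    #include <obs.h>
    #include <stdbool.h>

    /* placeholder for libobs' internal "plugin is missing" check */
    static bool filter_implementation_missing(obs_source_t *filter)
    {
        (void)filter;
        return false;
    }

    static void render_filter_or_bypass(obs_source_t *filter)
    {
        obs_source_t *target = obs_filter_get_target(filter);

        if (filter_implementation_missing(filter) && target) {
            obs_source_video_render(target); /* pass the filter through */
            return;
        }
        obs_source_video_render(filter);
    }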
The memset in custom_audio_render() did not clear all audio buffers when
the number of output channels was less than 8. This caused incorrect
audio output on the mixes that were not cleared.
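A sketch of the fix's shape (constants mirror libobs values but are defined locally here for illustration): clear every mix and channel buffer, not just the active output channels.

    #include <string.h>

    #define AUDIO_OUTPUT_FRAMES 1024 /* same values as libobs */
    #define MAX_AUDIO_MIXES     6
    #define MAX_AUDIO_CHANNELS  8

    static void clear_all_mixes(
        float mixes[MAX_AUDIO_MIXES][MAX_AUDIO_CHANNELS][AUDIO_OUTPUT_FRAMES])
    {
        /* previously only the first N active channels were cleared */
        memset(mixes, 0,
               sizeof(float) * MAX_AUDIO_MIXES * MAX_AUDIO_CHANNELS *
                       AUDIO_OUTPUT_FRAMES);
    }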
Closes jp9000/obs-studio#1123
Because it would be troublesome to add the ability to remove source
types (for example, in case a script fails to reload), instead make it
so source types can be temporarily disabled while the program is running.
Decoupling the audio from the video causes the audio to be played right
when it's received rather than attempting to sync up to the video frames.
This is useful with certain async sources/devices when the audio/video
timestamps are not reliable.
Naturally because it plays audio right when it's received, this should
only be used when the async source is operating in unbuffered mode,
otherwise the video frame timing will be out of sync by the amount of
buffering the video currently has.
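A usage sketch, assuming the current libobs setters (decoupling is paired with unbuffered mode as explained above):

    #include <obs.h>
    #include <stdbool.h>

    static void use_decoupled_audio(obs_source_t *async_source)
    {
        obs_source_set_async_unbuffered(async_source, true);
        obs_source_set_async_decoupled(async_source, true);
    }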
When an async video frame comes in and it sets the timing_adjust value
(used to sync audio to video based upon their timestamps), it would use
os_gettime_ns as a base. Instead, it should use OBS' current video
frame time so that the audio and video playback is as accurate as
possible relative to the actual exact timestamp of the video frame.
(Results are almost insignificant, but it's nice to be as precise as
possible)
Fixes a potential issue where copying filters from one source to another
might add filters from the old source that are not compatible with the
new source.
(This commit also modifies the decklink, linux-v4l2, mac-avcapture,
obs-ffmpeg, and win-dshow modules)
Originally, async buffering for sources was supposed to be a
user-controllable flag. However, that turned out to be less than ideal
because sources (such as the win-dshow plugin) were programmed with
automatic control over their buffering (such as automatically detecting
USB 2.0 capture devices and enabling buffering in those cases).
The fact that it was a flag caused a design flaw where buffering values
would be overwritten when a source was loaded from save data.
Because of that, this flag is being deprecated and replaced with a
specific function to enable unbuffered mode instead.
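A usage sketch of the replacement: enable unbuffered mode through the dedicated setter rather than the deprecated OBS_SOURCE_FLAG_UNBUFFERED flag.

    #include <obs.h>
    #include <stdbool.h>

    static void enable_unbuffered(obs_source_t *async_source)
    {
        obs_source_set_async_unbuffered(async_source, true);
    }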