There are cases where alpha is multiplied unnecessarily. This change
attempts to use premultiplied alpha blending for composition.
To keep this change simple, the filter chain will continue to use
straight alpha. Otherwise, every source would need to be modified to
output premultiplied alpha, and every filter modified to accept
premultiplied input.
"DrawAlphaDivide" shader techniques have been added to convert from
premultiplied alpha to straight alpha for final output. "DrawMatrix"
techniques ignore alpha, so they do not appear to need changing.
One remaining issue is that the scale effects are set up here to use the
same shader logic for both scale filters (straight alpha, incorrectly)
and output composition (premultiplied alpha, correctly). Additional
shaders could be added for straight alpha, but the "real" fix may be to
eliminate the straight alpha path entirely at some point.
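A minimal sketch of the premultiplied-alpha composition blend state,
assuming the libobs gs_blend_function_separate() helper and the
GS_BLEND_* enums are available at the composition site:

    /* Premultiplied alpha: color has already been multiplied by alpha,
     * so the source factor is ONE rather than SRCALPHA. */
    gs_blend_function_separate(GS_BLEND_ONE,          /* src color  */
                               GS_BLEND_INVSRCALPHA,  /* dest color */
                               GS_BLEND_ONE,          /* src alpha  */
                               GS_BLEND_INVSRCALPHA); /* dest alpha */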
For graphics, SrcBlendAlpha and DestBlendAlpha were both ONE, and could
combine to form alpha values greater than one. This is not as noticeable
a problem for UNORM targets because the channels are clamped, but it
will likely become a problem in more situations if FLOAT targets are
used.
This change switches DestBlendAlpha to INVSRCALPHA. The blending
behavior of stacked transparents is preserved without overflowing the
alpha channel.
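For example, compositing two layers that each have 0.8 alpha: with
ONE/ONE the result is 0.8 + 0.8 = 1.6, which overflows, while with
ONE/INVSRCALPHA it is 0.8 + 0.8 * (1 - 0.8) = 0.96, which stays in
range.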
obs-transitions: Use premultiplied alpha blend, and simplify shaders
because both inputs and outputs use premultiplied alpha now.
Fixes https://obsproject.com/mantis/view.php?id=1108
This reverts commit d91bd327d7a8bb4597562fc26da4edb7b56874ff, which
broke alpha for sources, scenes, and filters, causing them all to become
unintentionally opaque.
Add D3D/GL debug markers to make RenderDoc captures easier to traverse.
Also add obs_source_get_name_no_null() to avoid boilerplate for safe
string formatting.
Closes obsproject/obs-studio#1799
Currently SrcBlendAlpha and DestBlendAlpha are both ONE, and can
combine to form a value of two. This is not a noticeable problem for
UNORM targets because the channels are clamped, but it will likely
become a problem if FLOAT targets are more widely used.
This change switches DestBlendAlpha to INVSRCALPHA, and starts
backgrounds as opaque black instead of transparent black. The blending
behavior of stacked transparents is preserved without overflowing the
alpha channel.
Fixes the remaining case where a frame from the previous
recording/stream could show up at the beginning of the next
recording/stream on the same running session when using the new version
of NVENC. Textures are being converted for both raw and texture-based
encoders, so the variable that determines whether a texture is ready
and has been converted should be cleared in both cases.
When all outputs stop and an output later starts back up again, the
last frame or two of data from the previous output session would end up
as the first frame or two of the next output. This was because certain
rendering variables were not being properly cleared when a new output
started back up.
Uses obs_source_get_ref on the sources enumerated in the tick_sources
function in obs-video.c to ensure a reference has been incremented
before calling that source's video_tick. Also replaces an
obs_source_addref with obs_source_get_ref in the push_audio_tree
function in obs-audio.c to ensure that it cannot increment a source
whose reference count has already dropped to zero.
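The difference matters because obs_source_get_ref() fails safely once
the reference count has already hit zero. A rough sketch of the intended
pattern (variable names are illustrative, not the actual enumeration
code):

    obs_source_t *source = obs_source_get_ref(candidate);
    if (!source)
        return; /* refcount already reached 0; source is being destroyed */

    obs_source_video_tick(source, seconds);
    obs_source_release(source);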
Adds the ability to encode by passing NV12 textures. This uses a
separate thread for texture-based encoders with a small queue of
textures. An output texture backed by a keyed-mutex shared texture is
locked between OBS and each encoder. A new encoder callback and
capability flag are used to encode with textures.
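As a rough illustration of the shape of such a callback (the name and
exact parameters below are assumptions for this sketch, not the
definitive API):

    /* Hypothetical texture-encode callback: receives a shared-texture
     * handle plus the keyed-mutex keys used to pass the texture between
     * OBS and the encoder. */
    static bool my_encode_texture(void *data, uint32_t handle, int64_t pts,
                                  uint64_t lock_key, uint64_t *next_key,
                                  struct encoder_packet *packet,
                                  bool *received_packet)
    {
        /* acquire the keyed mutex with lock_key, encode the texture,
         * release with *next_key, then fill *packet and *received_packet */
        return true;
    }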
Reduces GPU usage when encoding is not active. Does not perform color
conversion, frame staging, or frame downloading unless encoding is
explicitly active.
(Note: This commit also modifies UI and test)
This makes it so that main preview panes are rendered with the main
output texture rather than re-rendering the main view. The view will
render all objects again, whereas the output texture will be a single
texture render of the same exact thing.
Also fixes some abnormal artifacting when scaling the main preview pane.
This is to prevent confusion with video_thread in
libobs/media-io/video-io.c, which is used exclusively for video
encoding/output. Also prevents confusion in the profiler log data.
Allows getting the current active framerate that the core is rendering
with. This takes into account any rendering lag or stalls that may be
occurring.
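Assuming the accessor is exposed as obs_get_active_fps(), usage is
simply:

    double fps = obs_get_active_fps(); /* actual rendered FPS, including lag */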
Adds the ability to use scale filters such as point, bicubic, and
lanczos on specific scene items, disabled by default. When using one of
the latter two options, if the item's scale is under half of the
source's original size, the bilinear low-resolution downscale shader is
used instead.
This was originally used for calculating audio volume if transitions
were active, but transitions won't work that way so tracking the active
transitions is no longer needed.
Use explicit UTF-8 byte sequence for the "no-break space" character.
Prevents issues with certain editors, and fixes the following compiler
warning on Visual C++:
warning C4819: The file contains a character that cannot be represented
in the current code page (X). Save the file in Unicode format to prevent
data loss
(Non-compiling commit: windowless-context branch)
API Changed:
---------------------
Removed functions:
- obs_add_draw_callback
- obs_remove_draw_callback
- obs_resize
- obs_preview_set_enabled
- obs_preview_enabled
Removed member variables from struct obs_video_info:
- window_width
- window_height
- window
Summary:
---------------------
Changes the core libobs API to not be dependent upon a main window/view.
If you wish to draw to a window/view, use an obs_display object to
handle it.
This allows the use of libobs without requiring a window to be present
on the system. This also prunes code that had to be needlessly
duplicated to handle the "main" window.
A minor optimization: in copy_rgbx_frame (used when libobs is set to
output RGBA frames instead of YUV frames), if the line sizes for the
source and destination match, just use a single memcpy call for all of
the data instead of multiple memcpy calls.
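A minimal sketch of the idea (the signature and variable names here are
illustrative, not the exact libobs internals):

    static void copy_rgbx_frame(uint8_t *dst, const uint8_t *src,
                                uint32_t dst_linesize, uint32_t src_linesize,
                                uint32_t height)
    {
        if (src_linesize == dst_linesize) {
            /* identical strides: the data is contiguous, one copy suffices */
            memcpy(dst, src, (size_t)dst_linesize * height);
        } else {
            uint32_t line = src_linesize < dst_linesize ? src_linesize
                                                        : dst_linesize;
            for (uint32_t y = 0; y < height; y++)
                memcpy(dst + (size_t)y * dst_linesize,
                       src + (size_t)y * src_linesize, line);
        }
    }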
The "clamped" video time is the system time per video frame that is
closest to the current system time, but always divisible by the frame
interval. For example, if the last frame system timestamp was 1600 and
the new frame is 2500, but the frame interval is 800, then the
"clamped" video time is 2400.
This clamped value is useful to get the relative system time without any
jitter.
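In code form, the clamping is just integer arithmetic on the frame
interval (a sketch with illustrative names, matching the numbers above):

    /* last_clamped = 1600, interval = 800, now = 2500
     * frames  = (2500 - 1600) / 800 = 1   (integer division)
     * clamped = 1600 + 1 * 800     = 2400 */
    uint64_t frames  = (now - last_clamped) / interval;
    uint64_t clamped = last_clamped + frames * interval;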
The normal scaling methods cannot sample enough pixels to create an
accurate output image when the output size is under half the base size,
so use the bilinear low resolution scaling effect in that case instead
to ensure a more accurate low resolution image.
Direct3D textures are usually aligned to a specific pitch, so their
internal width is often not equal to the expected output width; this
means that if we want to use it for our texture output, we must
de-align the texture while copying the texture data.
However, I unintentionally messed up the calculation at some point for
RGBA textures: the copy size should have been multiplied by 4 (for
RGBA), but the code was still expecting single-channel data. So, if the
texture width was something like 1332, the source (DirectX) texture
line size would be somewhere at or above 5328 (because it's RGBA), the
destination would be 1332 (YUV luma plane), and 3996 (5328 - 1332)
bytes would unintentionally be treated as unused alignment data. This
fixes that miscalculation.
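The corrected copy amounts to scaling the copied width by the bytes per
pixel (a sketch with illustrative names):

    /* width = 1332 pixels, bpp = 4 (RGBA) -> copy 5328 bytes per row;
     * anything past that in the mapped row is pitch padding. */
    uint32_t row_bytes = width * bpp;

    for (uint32_t y = 0; y < height; y++)
        memcpy(dst + (size_t)y * row_bytes,
               mapped + (size_t)y * mapped_pitch, row_bytes);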
The return value of os_sleepto_ns is true if it waited to the specified
time, and false if the current time is past the specified time. So it
basically returns true if it successfully waited.
I just didn't check the return value properly here, so the frame count
ended up being set to 1 even when the sleep overshot, ultimately
causing sync issues.
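A sketch of the corrected check (variable names are illustrative):

    int count;
    if (os_sleepto_ns(target_time)) {
        count = 1; /* woke on time: exactly one new frame */
    } else {
        /* overslept: account for every frame interval that was missed */
        count = (int)((os_gettime_ns() - target_time) / interval + 1);
    }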
Previously, the design for the interaction between the encoder thread
and the graphics thread was that the encoder thread would signal to the
graphics thread when to start drawing each frame. The original idea
behind this was to prevent mutually cascading stalls of encoding or
graphics rendering (i.e., if rendering took too long, then encoding
would have to catch up, then rendering would have to catch up again, and
so on, cascading upon each other). The ultimate goal was to prevent
encoding from impacting graphics and vice versa.
However, eventually it was realized that there were some fundamental
flaws with this design.
1. Stray frame duplication. You could not guarantee that a frame would
render on time, so sometimes frames would unintentionally be lost if
there was any sort of minor hiccup or, presumably, if the thread took
too long to be scheduled.
2. Frame timing in the rendering thread was less accurate. The only
place where frame timing was accurate was in the encoder thread, and
the graphics thread was at the whim of thread scheduling. On higher
end computers it was typically fine, but it was just generally not
guaranteed that a frame would be rendered when it was supposed to be
rendered.
So the solution (originally proposed by r1ch and paibox) is to instead
keep the encoding and graphics threads separate as usual, but instead of
the encoder thread controlling the graphics thread, the graphics thread
now controls the encoder thread. The encoder thread keeps a limited
cache of frames, then the graphics thread copies frames in to the cache
and increments a semaphore to schedule the encoder thread to encode that
data.
In the cache, each frame has an encode counter. If the frame cache is
full (e.g., the encoder taking too long to return frames), it will not
cache a new frame, but instead will just increment the counter on the
last frame in the cache to schedule that frame to encode again, ensuring
that frames are on time and reducing CPU usage by lowering video
complexity. If the graphics thread takes too long to render a frame,
then it will add that frame with the count value set to the total
number of frames that were missed (actual, legitimately duplicated
frames).
Because the cache gives many frames of breathing room for the encoder to
encode frames, this design helps improve results especially when using
encoding presets that have higher complexity and CPU usage, minimizing
the risk of needlessly skipped or duplicated frames.
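Conceptually, the cache entry and scheduling look something like this (a
simplified sketch; the structure, field, and function names are
illustrative rather than the actual libobs internals):

    struct cached_frame {
        struct video_data frame;
        int count; /* how many times the encoder should emit this frame */
    };

    /* graphics thread: */
    if (cache_full(cache)) {
        cache_last(cache)->count++;      /* re-encode the newest cached frame */
    } else {
        copy_frame(cache_push(cache), &new_frame);
        cache_last(cache)->count = missed_frames + 1;
    }
    os_sem_post(encode_sem);             /* schedule the encoder thread */

    /* encoder thread: */
    os_sem_wait(encode_sem);
    encode_frame(cache_front(cache));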
I also managed to sneak in what should be a bit of an optimization to
reduce copying of frame data, though how much of an optimization it
ultimately ends up being is debatable.
So to sum it up, this commit increases accuracy of frame timing,
completely removes stray frame duplication, gives better results for
higher complexity encoding presets, and potentially optimizes the frame
pipeline a tiny bit.
This changes the way source volume handles transitioning between being
active and inactive states.
The previous way that transitioning handled volume was that it set the
presentation volume of the source and all of its sub-sources to 0.0 if
the source was inactive, and 1.0 if active. Transition sources would
then also set the presentation volume for sub-sources to whatever their
transitioning volume was. However, the problem with this was that the
design didn't take into account whether the source or its sub-sources
were active anywhere else, so it would break whenever that happened; I
didn't realize that when designing it.
So instead, this completely overhauls the design of handling
transitioning volume. Each frame, it'll go through all sources and
check whether they're active or inactive and set the base volume
accordingly. If transitions are currently active, it will actually walk
the active source tree and check whether the source is in a
transitioning state somewhere.
- If the source is a sub-source of a transition, and it's not active
outside of the transition, then the transition will control the
volume of the source.
- If the source is a sub-source of a transition, but it's also active
outside of the transition, it'll defer to whichever is louder.
This also adds a new callback to the obs_source_info structure for
transition sources, get_transition_volume, which is called to get the
transitioning volume of a sub-source.
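For a transition source, the callback might be implemented along these
lines (the exact signature and private data are assumed here purely for
illustration):

    struct fade_info {
        obs_source_t *from;
        obs_source_t *to;
        float t; /* transition progress, 0.0 -> 1.0 */
    };

    static float fade_get_transition_volume(void *data, obs_source_t *source)
    {
        struct fade_info *fade = data;

        /* how loudly this sub-source should currently be heard */
        return source == fade->from ? 1.0f - fade->t : fade->t;
    }

    struct obs_source_info fade_transition = {
        /* ... */
        .get_transition_volume = fade_get_transition_volume,
    };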
This adds bicubic and lanczos scaling capability to libobs to improve
scaling quality and sharpness when the output resolution has to be
scaled relative to the base resolution. Bilinear is also available,
although bilinear has rather poor quality and causes scaling to appear
blurry.
If the output resolution is close to the base resolution, then bilinear
is used instead as an optimization, as there's no need to use these
shaders if scaling is not in use.
The bicubic and lanczos effects are also exposed via exported functions
to allow plugin modules to use those shaders if desired.
The API change adds a variable 'scale_type' to the obs_video_info
structure that allows the user interface to choose what type of scaling
filter should be used.
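Assuming the new field is the enum-valued scale_type on obs_video_info,
a front end would pick the filter roughly like this:

    struct obs_video_info ovi = {0};
    /* ... fill in base/output resolution, fps, graphics module, etc. ... */
    ovi.scale_type = OBS_SCALE_LANCZOS; /* or OBS_SCALE_BICUBIC, OBS_SCALE_BILINEAR */

    obs_reset_video(&ovi);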
This was an important change because we were originally using a
hard-coded 709/partial range color matrix for the output, which was
causing problems for people wanting to use different formats or color
spaces. This will now automatically generate the color matrix depending
on the format, color space, and range, or use an identity matrix if the
video format is RGB instead of YUV.
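Roughly speaking, the matrix now comes from a helper along the lines of
video_format_get_parameters(); treat the exact call and the
format_is_rgb flag as assumptions of this sketch:

    float matrix[16], range_min[3], range_max[3];

    if (format_is_rgb) {
        /* RGB output: identity matrix, no range conversion needed */
    } else {
        video_format_get_parameters(VIDEO_CS_709, VIDEO_RANGE_PARTIAL,
                                    matrix, range_min, range_max);
    }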
On certain GPUs, if you don't flush and the window is minimized it can
endlessly accumulate memory due to what I'm assuming are driver design
flaws (though I can't know for sure). The flush seems to prevent this
from happening, at least from my tests. It would be nice if this
weren't necessary.
At the start of each render loop, it would get the timestamp, and then
it would then assign that timestamp to whatever frame was downloaded.
However, the frame that was downloaded usually occurred a number of
frames earlier, so the wrong timestamp value would be assigned to that
frame.
This fixes that issue by storing the timestamps in a circular buffer.
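A simplified sketch of the idea (names are illustrative): the timestamp
captured at render time is stored in the same slot as the frame being
staged, and only read back when that frame's download completes.

    #define NUM_TEXTURES 2 /* illustrative queue depth */

    uint64_t timestamps[NUM_TEXTURES];
    size_t   cur = 0;

    /* render loop: remember when this frame was rendered */
    timestamps[cur] = os_gettime_ns();

    /* later, once the download for the oldest slot has completed,
     * output that frame with the timestamp captured for it */
    size_t prev = (cur + 1) % NUM_TEXTURES;
    output_frame(downloaded_data[prev], timestamps[prev]);

    cur = (cur + 1) % NUM_TEXTURES;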
Typedef pointers are unsafe. If you do:
typedef struct bla *bla_t;
then you cannot use it as a constant, such as: const bla_t, because the
const will apply to the pointer itself rather than to the underlying
data. I admit this was a fundamental mistake that must be corrected.
All typedefs that were pointer types will now have their pointers
removed from the type itself, and the pointers will be used when they
are actually used as variables/parameters/returns instead.
This does not break ABI though, which is pretty nice.
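A short illustration of the problem and the fix:

    /* before: */
    typedef struct bla *bla_t;
    void process(const bla_t b);   /* const applies to the pointer itself,
                                      i.e. struct bla *const, not
                                      const struct bla * */

    /* after: */
    typedef struct bla bla_t;
    void process(const bla_t *b);  /* genuinely a pointer to const data */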