While discussing the Flatpak RFC [1], it was noticed that the
LUT filter couldn't open the file selection dialog. It was then
explained that the proper filter formats are composed of either
"User Label (file extensions)" or just "file extensions", while
the LUT filter was setting "(file extensions)" without the
actual user label.
While this works on a standard Qt file selection dialog, it
cannot be properly formatted as a set of D-Bus filters, thus
breaking the sandbox integration.
Add a simple user label to the LUT file filter.
[1] https://github.com/obsproject/rfcs/pull/21#issuecomment-619106757
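For illustration, a rough sketch of the "User Label (file extensions)"
form in a properties callback; the property name, description, and
extension list below are placeholders rather than the exact strings used
by the LUT filter:

    #include <obs-module.h>

    /* Sketch only: register a file path property whose filter string
     * carries a user label in front of the extension list, so it can be
     * translated into a D-Bus portal filter. */
    static obs_properties_t *lut_filter_properties(void *data)
    {
        UNUSED_PARAMETER(data);

        obs_properties_t *props = obs_properties_create();
        obs_properties_add_path(props, "image_path", "LUT file",
                                OBS_PATH_FILE,
                                "LUT Files (*.cube *.png)", NULL);
        return props;
    }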
As the value returned by os_gettime_ns() gets large, the current scaling
methods, which mostly just cast to uint64_t and multiply, may lead to
numerical overflow. Sweep the code and use
util_mul_div64() where applicable.
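A minimal sketch of the pattern being replaced, assuming
util_mul_div64(value, mul, div) computes value * mul / div without the
intermediate product overflowing; the nanosecond-to-sample conversion is
only an example:

    #include <stdint.h>

    /* Before: the intermediate product ts * sample_rate can exceed
     * UINT64_MAX once ts (nanoseconds) grows large enough. */
    static uint64_t ns_to_samples_naive(uint64_t ts, uint32_t sample_rate)
    {
        return ts * (uint64_t)sample_rate / 1000000000ULL;
    }

    /* After: split into quotient and remainder so no intermediate value
     * overflows; util_mul_div64(ts, sample_rate, 1000000000ULL) is
     * expected to behave like this. */
    static uint64_t ns_to_samples_safe(uint64_t ts, uint32_t sample_rate)
    {
        const uint64_t div = 1000000000ULL;
        const uint64_t rem = ts % div;

        return (ts / div) * sample_rate + (rem * sample_rate) / div;
    }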
Signed-off-by: Hans Petter Selasky <hps@selasky.org>
It's not clear what effect black_and_white.png is going for. Add
grayscale.png to make it clear that the other image shouldn't be used for
desaturation.
Giving the option to disable looping in the scroll filter makes it more
suitable for tasks like credits sequences, where you don't want the
texture to repeat and want the motion to be performed only once.
Add a separate shader for area upscaling to take advantage of bilinear
filtering. Iterating over texels is unnecessary in the upscale case
because a target pixel can only overlap 1 or 2 texels in X and Y
directions. When only overlapping one texel, adjust UVs to sample texel
center to avoid filtering.
Also add "base_dimension" uniform to avoid unnecessary division.
Intel HD Graphics 530, 644x478 -> 1323x1080: ~836 us -> ~232 us
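A rough per-axis C sketch of the idea (not the actual shader code):
compute the source-texel window covered by the target pixel, snap to the
texel center when only one texel is covered, and otherwise pick a
coordinate whose bilinear weights match the coverage split:

    #include <math.h>

    /* out_coord is the target pixel index; scale is
     * source_size / target_size (< 1 when upscaling). Returns a texture
     * coordinate, in texels, to feed to a bilinear sampler. */
    static float area_upscale_coord(float out_coord, float scale)
    {
        const float lo = out_coord * scale;          /* coverage start */
        const float hi = (out_coord + 1.0f) * scale; /* coverage end   */

        if (floorf(lo) == floorf(hi)) {
            /* Only one texel covered: sample its center so the bilinear
             * filter returns it unblended. */
            return floorf(lo) + 0.5f;
        }

        /* Two texels covered: choose a coordinate whose bilinear weight
         * on the right texel equals that texel's share of the coverage. */
        const float boundary = floorf(hi);
        const float w_right = (hi - boundary) / (hi - lo);

        return boundary - 0.5f + w_right;
    }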
Code submissions have continually suffered from formatting
inconsistencies that constantly have to be addressed. Using clang-format
simplifies this by making formatting more consistent and allows it to be
automated, so that maintainers can focus on the code itself instead of
on its formatting.
This reverts commit d91bd327d7a8bb4597562fc26da4edb7b56874ff, which
broke alpha with sources, scenes, and filters, causing them all to become
opaque unintentionally.
Currently SrcBlendAlpha and DestBlendAlpha are both ONE, so the two
terms can sum to an alpha of two. This is not a noticeable problem for
UNORM targets because the channels are clamped, but it will likely
become a problem if FLOAT targets are more widely used.
This change switches DestBlendAlpha to INVSRCALPHA, and starts
backgrounds as opaque black instead of transparent black. The blending
behavior of stacked transparents is preserved without overflowing the
alpha channel.
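Spelled out for just the alpha term (a back-of-the-envelope sketch, not
the actual graphics code):

    /* Old state: SrcBlendAlpha = ONE, DestBlendAlpha = ONE. Two opaque
     * layers give 1 + 1 = 2, which only looks fine because UNORM render
     * targets clamp the result. */
    static float blend_alpha_old(float src_a, float dst_a)
    {
        return src_a * 1.0f + dst_a * 1.0f;
    }

    /* New state: SrcBlendAlpha = ONE, DestBlendAlpha = INVSRCALPHA.
     * Stacked layers stay within [0, 1]: 1 + 1 * (1 - 1) = 1. */
    static float blend_alpha_new(float src_a, float dst_a)
    {
        return src_a * 1.0f + dst_a * (1.0f - src_a);
    }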
This new scale filter computes pixels by weighting the coverage area of
source pixels over the target pixel. This algorithm works well for both
upsampling and downsampling, but was mainly designed to upscale
high-quality low-resolution sources like RGB/HDMI retro consoles. I've
heard of people using odd workarounds like scaling up to very high
resolutions before scaling back down to preserve pixel sharpness. This
algorithm addresses that use case much more directly.
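For intuition, a one-dimensional CPU sketch of the coverage weighting
(the actual filter works in two dimensions on the GPU):

    #include <math.h>

    /* Each output pixel averages the source pixels it overlaps, weighted
     * by how much of each source pixel falls inside the output pixel's
     * footprint. */
    static float area_sample_1d(const float *src, int src_len, int dst_len,
                                int out_idx)
    {
        const float scale = (float)src_len / (float)dst_len;
        const float lo = out_idx * scale;       /* footprint start */
        const float hi = (out_idx + 1) * scale; /* footprint end   */

        float sum = 0.0f;
        float total = 0.0f;

        for (int i = (int)floorf(lo); i < (int)ceilf(hi) && i < src_len; i++) {
            /* Overlap of source texel [i, i + 1) with [lo, hi). */
            const float overlap =
                fminf(hi, (float)(i + 1)) - fmaxf(lo, (float)i);
            sum += src[i] * overlap;
            total += overlap;
        }

        return total > 0.0f ? sum / total : 0.0f;
    }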
The Area scale filter does a better job of preserving the thickness of
thin features than the Point filter.
The Area scale filter does not look at source pixels that lie outside
of the target pixel, leading to a much sharper image than Bilinear,
Bicubic, and Lanczos filters.
This filter should interpolate pixels in linear space, but OBS is not
equipped to do that at the moment.
libobs: Add GPU effect and wire up scene serialization.
obs-filters: Add Area as an option for scale_filter.
UI: Add Area as an option for both scene items and canvas downscaling.