This adds the DRM format as a parameter separate from the OBS texture
format that will be used. DMA-BUFs may come in a variety of formats that
we need to pass through correctly to get a usable texture, and those
formats may not correspond directly to OBS texture formats.
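As a rough sketch, the public entry point mentioned further down
(gs_texture_create_from_dmabuf) then carries both formats; the parameter
names here are illustrative and the fd/stride/offset tail is an
assumption, not the exact libobs signature:

    /* Sketch only: the DRM FourCC travels alongside the OBS color format. */
    gs_texture_t *gs_texture_create_from_dmabuf(
            unsigned int width, unsigned int height,
            uint32_t drm_format,               /* DRM FourCC describing the buffer */
            enum gs_color_format color_format, /* OBS format for the resulting texture */
            int dmabuf_fd, uint32_t stride, uint32_t offset);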
Implement device_texture_create_from_dmabuf for EGL/X11 and EGL/Wayland.
The code is shared between them, in a new gl-egl-common.c file.
For now, this is limited to a few common RGB(A) formats, which seems to
cover most use cases.
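For reference, the core of such an import on EGL looks roughly like the
following single-plane sketch built on the standard
EGL_EXT_image_dma_buf_import and GL_OES_EGL_image extensions; error
handling, multi-plane layouts, and modifiers are omitted, and this is
not the literal gl-egl-common.c code:

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GL/gl.h>
    #include <stdint.h>

    /* glEGLImageTargetTexture2DOES comes from GL_OES_EGL_image; declare the
     * pointer type by hand to keep the sketch header-agnostic */
    typedef void (*gl_image_target_texture_2d_t)(GLenum target, void *image);

    static GLuint import_dmabuf(EGLDisplay display, int fd, uint32_t width,
                                uint32_t height, uint32_t stride,
                                uint32_t offset, uint32_t drm_format)
    {
        /* both entry points are extensions, so fetch them at runtime */
        PFNEGLCREATEIMAGEKHRPROC create_image =
            (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
        gl_image_target_texture_2d_t image_target_texture =
            (gl_image_target_texture_2d_t)eglGetProcAddress(
                "glEGLImageTargetTexture2DOES");

        const EGLint attribs[] = {
            EGL_WIDTH, (EGLint)width,
            EGL_HEIGHT, (EGLint)height,
            EGL_LINUX_DRM_FOURCC_EXT, (EGLint)drm_format,
            EGL_DMA_BUF_PLANE0_FD_EXT, fd,
            EGL_DMA_BUF_PLANE0_OFFSET_EXT, (EGLint)offset,
            EGL_DMA_BUF_PLANE0_PITCH_EXT, (EGLint)stride,
            EGL_NONE,
        };

        /* wrap the dmabuf fd in an EGLImage; no pixel data is copied */
        EGLImageKHR image = create_image(display, EGL_NO_CONTEXT,
                                         EGL_LINUX_DMA_BUF_EXT, NULL, attribs);

        /* use the image as the storage of an ordinary GL texture */
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        image_target_texture(GL_TEXTURE_2D, (void *)image);
        return tex;
    }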
DMA-BUF is a widespread Linux buffer sharing mechanism. It is what's
commonly used for zero-copy screen sharing by Wayland compositors.
Add a new 'device_texture_create_from_dmabuf' vfunc to gs_exports,
and stub implementations to libobs-opengl. Add a new public method
gs_texture_create_from_dmabuf() that calls this vfunc.
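A minimal sketch of that plumbing, with stand-in types; the real
gs_exports table, the thread-local graphics context, and the full
parameter list in libobs are more involved than this:

    #include <stdint.h>
    #include <stddef.h>

    typedef struct gs_device gs_device_t;
    typedef struct gs_texture gs_texture_t;
    enum gs_color_format { GS_RGBA, GS_BGRX, GS_BGRA }; /* stand-in for graphics.h */

    struct gs_exports {
        /* ...other device_* entry points loaded from the backend... */
        gs_texture_t *(*device_texture_create_from_dmabuf)(
            gs_device_t *device, unsigned int width, unsigned int height,
            uint32_t drm_format, enum gs_color_format color_format,
            int dmabuf_fd, uint32_t stride, uint32_t offset);
    };

    typedef struct {
        gs_device_t *device;
        struct gs_exports exports;
    } graphics_t;

    extern graphics_t *thread_graphics; /* stand-in for libobs' per-thread context */

    /* Public wrapper: forwards to whichever backend vfunc was loaded; the
     * OpenGL stubs added here simply fail until a later commit implements them. */
    gs_texture_t *gs_texture_create_from_dmabuf(unsigned int width,
            unsigned int height, uint32_t drm_format,
            enum gs_color_format color_format, int dmabuf_fd,
            uint32_t stride, uint32_t offset)
    {
        graphics_t *graphics = thread_graphics;
        if (!graphics || !graphics->exports.device_texture_create_from_dmabuf)
            return NULL;

        return graphics->exports.device_texture_create_from_dmabuf(
            graphics->device, width, height, drm_format, color_format,
            dmabuf_fd, stride, offset);
    }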
Move the OBS_USE_EGL environment variable check to obs-app.cpp,
and set the OBS platform to be either OBS_NIX_PLATFORM_X11_GLX
or OBS_NIX_PLATFORM_X11_EGL.
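Roughly, the check now amounts to the following (a sketch in C even
though obs-app.cpp is C++; obs_set_nix_platform() is assumed here as the
setter for the platform value):

    #include <stdlib.h>

    enum obs_nix_platform_type {
        OBS_NIX_PLATFORM_X11_GLX,
        OBS_NIX_PLATFORM_X11_EGL,
    };

    extern void obs_set_nix_platform(enum obs_nix_platform_type platform); /* assumed setter */

    static void choose_nix_platform(void)
    {
        /* EGL is opt-in for now; GLX remains the default on X11 */
        if (getenv("OBS_USE_EGL"))
            obs_set_nix_platform(OBS_NIX_PLATFORM_X11_EGL);
        else
            obs_set_nix_platform(OBS_NIX_PLATFORM_X11_GLX);
    }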
Introduce the EGL/X11 winsys, and use it when the OBS_USE_EGL environment
variable is defined. This variable is only temporary; future commits will
add a proper concept of platform.
All the EGL/X11 code is authored by Ivan Avdeev <me@w23.ru>.
Move the GLX-related code to gl-x11-glx, and introduce gl-nix as
a winsys-agnostic abstraction layer. gl-nix serves as the runtime
selector of which winsys is going to be used. Only the X11/GLX winsys
is available now, but later commits will introduce the X11/EGL
winsys as well.
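The selection boils down to a winsys vtable picked once at startup; the
names below are illustrative rather than the actual gl-nix exports:

    #include <stdbool.h>

    struct gl_winsys_vtable {
        bool (*platform_init)(void);
        void (*platform_destroy)(void);
        void (*device_present)(void);
        /* ...the remaining device/swap-chain hooks... */
    };

    extern const struct gl_winsys_vtable glx_winsys; /* provided by gl-x11-glx */

    static const struct gl_winsys_vtable *winsys;

    static bool gl_nix_init_platform(void)
    {
        /* only X11/GLX exists at this point; a later commit switches on the
         * selected OBS platform to pick X11/EGL instead */
        winsys = &glx_winsys;
        return winsys->platform_init();
    }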
The gl-nix code was originally written by Jason Francis <cycl0ps@tuta.io>
This is in preparation for the future abstraction layer (gl-x11-*)
and also to match the actual name of the windowing system. When
running under X11, we can glue OpenGL through GLX or EGL, so the
new file name matches that now.
GS_RGBA, GS_BGRX, and GS_BGRA now use TYPELESS DXGI formats, so we can
alias them between UNORM and UNORM_SRGB as necessary. GS_RGBA_UNORM,
GS_BGRX_UNORM, and GS_BGRA_UNORM have been added to support straight
UNORM types, which Windows requires for sharing textures from D3D9 and
OpenGL. The D3D path aliases via views, and GL aliases via
GL_EXT_texture_sRGB_decode/GL_FRAMEBUFFER_SRGB.
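On the D3D side the aliasing amounts to creating the resource with a
TYPELESS format and choosing UNORM or UNORM_SRGB view formats over it;
the mapping below is a sketch of that relationship, not the exact
libobs-d3d11 code:

    #include <dxgiformat.h>

    struct srgb_alias {
        DXGI_FORMAT resource; /* typeless format the texture is created with */
        DXGI_FORMAT linear;   /* SRV/RTV format for raw UNORM access */
        DXGI_FORMAT srgb;     /* SRV/RTV format with sRGB decode/encode */
    };

    static const struct srgb_alias gs_rgba_alias = {
        DXGI_FORMAT_R8G8B8A8_TYPELESS,
        DXGI_FORMAT_R8G8B8A8_UNORM,
        DXGI_FORMAT_R8G8B8A8_UNORM_SRGB,
    };

    static const struct srgb_alias gs_bgra_alias = {
        DXGI_FORMAT_B8G8R8A8_TYPELESS,
        DXGI_FORMAT_B8G8R8A8_UNORM,
        DXGI_FORMAT_B8G8R8A8_UNORM_SRGB,
    };

    /* GS_BGRX maps the same way via DXGI_FORMAT_B8G8R8X8_*; the new *_UNORM
     * formats keep plain UNORM resources so D3D9/GL shared textures still work */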
A significant amount of code has changed in the D3D/GL backends, but the
concepts are simple. On the D3D side, we need separate SRVs and RTVs to
support nonlinear/linear reads and writes. On the GL side, we need to
set the proper GL parameters to emulate the same.
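On the GL side the equivalent toggles look roughly like this, a sketch
assuming a loader such as GLEW provides the GL_EXT_texture_sRGB_decode
enums:

    #include <GL/glew.h>
    #include <stdbool.h>

    /* choose whether sampling the texture decodes sRGB to linear or returns
     * the raw stored values (GL_EXT_texture_sRGB_decode) */
    static void set_texture_decode(GLuint tex, bool decode_srgb)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SRGB_DECODE_EXT,
                        decode_srgb ? GL_DECODE_EXT : GL_SKIP_DECODE_EXT);
    }

    /* choose whether writes to an sRGB-capable framebuffer are encoded from
     * linear back to sRGB */
    static void set_framebuffer_srgb(bool enable)
    {
        if (enable)
            glEnable(GL_FRAMEBUFFER_SRGB);
        else
            glDisable(GL_FRAMEBUFFER_SRGB);
    }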
Add gs_enable_framebuffer_srgb/gs_framebuffer_srgb_enabled to set/get
the framebuffer as SRGB or not.
Add gs_linear_srgb_active/gs_set_linear_srgb to instruct sources that
they should render as SRGB. Legacy sources can ignore this setting
without regression.
Update obs_source_draw to use linear SRGB as needed.
Update render_filter_tex to use linear SRGB as needed.
Add gs_effect_set_texture_srgb next to gs_effect_set_texture to set
texture with SRGB view instead.
Add SRGB helpers for vec4 struct.
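Taken together, a source's render callback might use the additions like
this; the "image" effect parameter is a placeholder and the function
names simply follow the list above:

    #include <obs-module.h>

    static void my_source_render(gs_texture_t *tex, gs_eparam_t *image)
    {
        /* the caller (e.g. obs_source_draw or a filter) decides whether this
         * source should render in linear sRGB */
        const bool linear = gs_linear_srgb_active();
        const bool previous = gs_framebuffer_srgb_enabled();

        gs_enable_framebuffer_srgb(linear);
        if (linear)
            gs_effect_set_texture_srgb(image, tex); /* sRGB view: decode on sample */
        else
            gs_effect_set_texture(image, tex);      /* plain UNORM view */

        /* ...draw the sprite here... */

        gs_enable_framebuffer_srgb(previous);
    }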
Create GDI-compatible textures without SRGB support, since GDI
compatibility doesn't seem to work with SRGB formats.
Attempt to fix threading issues that cause OBS to force-crash when
compiled with latest Xcode. There are two places where the new SDK
introduces a force-crash because operations are not happening on the
main thread: when we modify the context view to switch swap chains, and
when we resize a swap chain.
Instead of using just one context for all rendering, we create an
additional context for each swap chain, set each view once on
initialization, and switch contexts only in present to blit the final
framebuffer. This is an extra copy, but it's pretty hairy to optimize
away, and it's not worth potential regressions just to speed up Mac.
For resizing, we schedule the update code to run on the main thread from
the render thread. Ideally, we wouldn't have to round trip the logic
from main thread to graphics thread and back, but I don't think we want
to hack up the interface for this, especially since OpenGL will give way
to Metal soon enough.
We really shouldn't be resetting duplicator state as part of gs_flush.
gs_begin_scene is not ideal because it is called twice per frame, and
only after duplicators have been ticked. Even though it makes no
user-facing difference, it makes more logical sense to reset at the top
of the frame than the bottom.
There don't appear to be any GPUs that support 3.2, but not 3.3. GLSL
330 maps to HLSL Shader Model 4, so this will theoretically make shader
programs less likely to diverge, particularly for behavior around NaNs.
This change only wraps the functionality. I have rough code to exercise
the query functionality, but that part is not really clean enough to
submit.
When mixing sampling with raw loads in a shader, ending a shader with a
load would cause the default sampler to become unset for OpenGL. Instead,
initialize with no sampler, and only set if there is a sampler.
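A generic sketch of that pattern (illustrative names, not the actual
libobs-opengl code): keep the sampler unset for raw loads and only bind
one when the shader parameter carries it.

    #include <GL/glew.h>

    struct shader_tex_param {
        GLuint texture;
        GLuint sampler; /* 0 = no sampler, i.e. the parameter is a raw load */
    };

    static void bind_tex_param(GLuint unit, const struct shader_tex_param *p)
    {
        glActiveTexture(GL_TEXTURE0 + unit);
        glBindTexture(GL_TEXTURE_2D, p->texture);

        /* previously a default sampler was always reapplied here, which ended
         * up unset when the last parameter was a raw load; now it is only
         * touched when the parameter actually has a sampler */
        if (p->sampler)
            glBindSampler(unit, p->sampler);
    }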