This fixes a race condition where the audio/video backends/threads may
start using sources before their obs_source_info::create function has
been called.
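A minimal sketch of the ordering the fix enforces, assuming a
mutex-protected source list; the names here (add_to_source_list,
sources_mutex, source->data) are illustrative, not the actual libobs
internals:

    #include <pthread.h>

    /* illustrative names only -- not the actual libobs internals */
    static pthread_mutex_t sources_mutex = PTHREAD_MUTEX_INITIALIZER;

    static obs_source_t *create_source(const struct obs_source_info *info,
                                       obs_data_t *settings)
    {
            obs_source_t *source = bzalloc(sizeof(*source));

            /* run the registered create callback to completion first... */
            source->data = info->create(settings, source);

            /* ...and only then publish the source to the list that the
             * audio/video threads iterate */
            pthread_mutex_lock(&sources_mutex);
            add_to_source_list(source);
            pthread_mutex_unlock(&sources_mutex);

            return source;
    }
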
The hardware accelerated decoder context needs to be explicitly unrefed
when it's no longer in use, otherwise it and many resources associated
with it will leak.
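For reference, a sketch of the FFmpeg cleanup pattern involved;
hw_device_ctx stands in for wherever the decoder keeps its hardware
device reference:

    #include <libavcodec/avcodec.h>
    #include <libavutil/buffer.h>

    static void free_hw_decoder(AVCodecContext **decoder,
                                AVBufferRef **hw_device_ctx)
    {
            avcodec_free_context(decoder);

            /* the device context is reference counted; without this
             * unref, it and the GPU resources tied to it leak */
            av_buffer_unref(hw_device_ctx);
    }
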
When hardware-accelerated decoding is enabled, initialization can
sometimes fail, in which case FFmpeg falls back to software decoding on
its own. When this occurs, the decoded frame will not carry the
hardware pixel format; it will carry a standard software format
instead. So if the frame format does not match the expected hardware
format, assume software decoding. (This is also what FFmpeg's
hw-decode.c example does when the format does not match the expected
one.)
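A sketch of the resulting check, in the same spirit as hw-decode.c;
hw_pix_fmt is assumed to hold the pixel format the decoder was
configured for (e.g. AV_PIX_FMT_D3D11VA):

    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    static int retrieve_frame(AVFrame *frame, AVFrame *sw_frame,
                              enum AVPixelFormat hw_pix_fmt)
    {
            if (frame->format != hw_pix_fmt) {
                    /* the decoder silently fell back to software; the
                     * frame already holds CPU-accessible data */
                    return av_frame_ref(sw_frame, frame);
            }

            /* genuine hardware frame: copy the surface back to
             * system memory */
            return av_hwframe_transfer_data(sw_frame, frame, 0);
    }
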
When the streaming audio track was separated from the recording tracks
in advanced output mode in be8c06334, the code that creates the Opus
audio encoder when FTL is used was mistakenly removed. This restores
that code.
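A sketch of the restored branch, assuming the "ffmpeg_opus" and
"ffmpeg_aac" encoder ids registered by the obs-ffmpeg module; the
helper and its parameter are hypothetical, not the exact code:

    #include <string.h>
    #include <obs.h>

    /* hypothetical helper: pick Opus for FTL, AAC otherwise */
    static obs_encoder_t *create_stream_audio_encoder(const char *protocol)
    {
            if (strcmp(protocol, "FTL") == 0)
                    return obs_audio_encoder_create("ffmpeg_opus",
                                    "ftl_audio", NULL, 0, NULL);

            return obs_audio_encoder_create("ffmpeg_aac",
                            "stream_audio", NULL, 0, NULL);
    }
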
CEF outputs multiple audio streams at once, and OBS was only able to
handle one at a time. This fixes the problem by using an audio line for
each CEF audio stream and mixing them together itself.
Adds the "audio_line" internal source type, a bare source type whose
sole purpose is to output audio, along with the
obs_source_info::audio_mix callback, which allows those audio lines to
be mixed together; the result is then treated as normal audio for the
source. When multiple audio lines are needed from a single source,
audio line objects should be added as sub-sources and mixed together
with the audio_mix callback.
The difference between the new obs_source_info::audio_mix callback and
obs_source_info::audio_render is that obs_source_info::audio_mix (along
with the audio_line source) handles only one track, and it outputs the
audio to the source automatically via obs_source_output_audio() when
the call completes. This allows the mixed audio to be treated like a
normal source's audio, in that you can filter it, change its volume, or
monitor it.
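A sketch of how a source might use this; the exact audio_mix callback
signature, and the my_source/mix_line names, are assumptions:

    static bool my_audio_mix(void *data, uint64_t *ts_out,
                             struct audio_output_data *audio_output,
                             size_t channels, size_t sample_rate)
    {
            struct my_source *ctx = data;
            (void)sample_rate; /* unused in this sketch */

            /* sum every "audio_line" sub-source into the single output
             * track; libobs then pushes the result through
             * obs_source_output_audio() so filters, volume, and
             * monitoring all apply */
            for (size_t i = 0; i < ctx->num_lines; i++)
                    mix_line(audio_output, ctx->lines[i], channels);

            *ts_out = ctx->timestamp;
            return true;
    }

    struct obs_source_info my_browser_source = {
            .id           = "my_browser_source",
            .type         = OBS_SOURCE_TYPE_INPUT,
            .output_flags = OBS_SOURCE_AUDIO,
            .audio_mix    = my_audio_mix,
            /* create/destroy/get_name etc. omitted */
    };
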
This change was necessary because CEF (used with the browser source)
outputs multiple audio streams at once to a single browser source, so
it is the program's responsibility to mix those streams together
itself.
Dynamic bitrate works by estimating the current output bitrate and then
adjusting the encoder bitrate on the fly when congestion is detected,
as a replacement for dropping frames.
This may still need adjustment, as it is difficult to accurately
emulate real-world frame-drop scenarios. Frames are currently never
dropped, so very high congestion may cause additional stream delay for
viewers (because data will be buffered). In limited testing, however,
most congestion did not cause that, and the stream recovered fairly
quickly without adding significant delay.
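A rough sketch of the control loop described above; every name here is
hypothetical, and the real heuristics live in the output code:

    #include <stdbool.h>

    #define MIN_BITRATE_KBPS  200
    #define BITRATE_STEP_KBPS 100

    struct stream {
            long cur_bitrate;    /* current encoder bitrate (kbps) */
            long target_bitrate; /* bitrate configured by the user */
    };

    /* hypothetical helpers provided elsewhere */
    long estimate_output_bitrate(struct stream *s);
    bool congestion_detected(struct stream *s);
    void set_encoder_bitrate(struct stream *s, long kbps);

    static void update_dynamic_bitrate(struct stream *s)
    {
            long estimated_kbps = estimate_output_bitrate(s);

            if (congestion_detected(s)) {
                    /* back off instead of dropping frames, but keep a
                     * floor so the encoder stays usable */
                    long reduced = s->cur_bitrate / 2;
                    s->cur_bitrate = reduced > MIN_BITRATE_KBPS
                                             ? reduced : MIN_BITRATE_KBPS;
                    set_encoder_bitrate(s, s->cur_bitrate);
            } else if (s->cur_bitrate < s->target_bitrate &&
                       estimated_kbps >= s->cur_bitrate) {
                    /* congestion cleared: step back up gradually */
                    s->cur_bitrate += BITRATE_STEP_KBPS;
                    set_encoder_bitrate(s, s->cur_bitrate);
            }
    }
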
When running the bitrate limit test, it can be useful to change the
current maximum bitrate limit on the fly. This adds the ability to
press keys on Windows (numpad 0-6) to switch between bitrates: numpad 0
means no limit, 1 means 1000 kbps, 2 means 2000 kbps, and so on.
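A sketch of the debug hook using the Win32 GetAsyncKeyState API; the
surrounding function and variable names are illustrative:

    #include <windows.h>

    /* numpad 0 = no limit (0), numpad N = N * 1000 kbps */
    static void poll_test_bitrate_keys(long *max_bitrate_kbps)
    {
            for (int i = 0; i <= 6; i++) {
                    if (GetAsyncKeyState(VK_NUMPAD0 + i) & 0x8000) {
                            *max_bitrate_kbps = i * 1000;
                            break;
                    }
            }
    }
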
This change adds the ability to box-select by clicking and dragging
from an empty part of the preview. Shift + Drag adds any items in the
box to the selection, Alt + Drag removes items in the box from the
selection, and Ctrl + Drag inverts the selected state of items in the
box.
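The modifier behavior boils down to a small state function; a sketch
with illustrative names:

    #include <stdbool.h>

    static bool box_select_state(bool was_selected, bool in_box,
                                 bool shift, bool alt, bool ctrl)
    {
            if (!in_box) {
                    /* a plain drag replaces the selection; any modifier
                     * leaves items outside the box untouched */
                    return (shift || alt || ctrl) ? was_selected : false;
            }
            if (ctrl)
                    return !was_selected; /* invert */
            if (alt)
                    return false;         /* remove */
            return true;                  /* plain/shift: add */
    }
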
Fixes an issue where the browser source settings would continually
reset on versions prior to 24. Note that this is not actually 23.2.2;
the version number is being temporarily bumped in order to fix the
issue for the release candidate build.
NV12 GPU copies to staging textures for CPU read take a ridiculously
long time on my integrated Intel GPU. Using R8/R8G8 instead seems to be
a huge speed-up.
Intel HD Graphics 530, D3D11 query timings, SetStablePowerState
enabled:
NV12: ~3268 us (minimum of wildly varying timings)
R8/R8G8: ~781 us (most frequently occurring timing)
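A sketch (C with COBJMACROS) of the two staging textures, assuming a
full-resolution R8 texture for the luma plane and a half-resolution
R8G8 texture for the subsampled chroma plane:

    #define COBJMACROS
    #include <d3d11.h>

    static HRESULT create_nv12_staging(ID3D11Device *device,
                                       UINT width, UINT height,
                                       ID3D11Texture2D **luma,
                                       ID3D11Texture2D **chroma)
    {
            D3D11_TEXTURE2D_DESC desc = {
                    .Width = width,
                    .Height = height,
                    .MipLevels = 1,
                    .ArraySize = 1,
                    .Format = DXGI_FORMAT_R8_UNORM,
                    .SampleDesc.Count = 1,
                    .Usage = D3D11_USAGE_STAGING,
                    .CPUAccessFlags = D3D11_CPU_ACCESS_READ,
            };
            HRESULT hr;

            hr = ID3D11Device_CreateTexture2D(device, &desc, NULL, luma);
            if (FAILED(hr))
                    return hr;

            /* NV12 chroma is subsampled 2x in both dimensions */
            desc.Width = width / 2;
            desc.Height = height / 2;
            desc.Format = DXGI_FORMAT_R8G8_UNORM;
            return ID3D11Device_CreateTexture2D(device, &desc, NULL,
                                                chroma);
    }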