(Note: This commit breaks libobs compilation. Skip if bisecting)
Removes audio lines and stores the circular buffer for the audio on the
source itself.
(Note: This commit breaks libobs compilation. Skip if bisecting)
The mixers that a source was assigned to were originally stored in the
audio line. This will store them in the sources themselves instead.
(Note: This commit breaks libobs compilation. Skip if bisecting)
Uses a callback and allows the caller to mix audio. Additionally, the
callback may return audio later, allowing it to buffer as much audio as
it needs.
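Purely as an illustration of that design (the names and exact parameters
here are hypothetical, not the actual libobs callback type): the caller
provides a callback that mixes into a buffer, and the callback can
decline until it has buffered enough.

    #include <cstdint>

    // Hypothetical callback shape -- illustrative only, not the real
    // libobs signature.
    typedef bool (*audio_mix_callback_t)(void *param, uint64_t start_ts,
                                         uint32_t frames, float *mix_buffer);

    // Example callback: returning false means "nothing to deliver yet",
    // letting the source keep buffering and try again later.
    static bool my_mix(void *param, uint64_t start_ts, uint32_t frames,
                       float *mix_buffer)
    {
        bool *have_enough = static_cast<bool *>(param);
        (void)start_ts;
        if (!*have_enough)
            return false;

        for (uint32_t i = 0; i < frames; i++)
            mix_buffer[i] += 0.0f; // mix this source's samples in here
        return true;
    }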
Darkest Dungeon uses an unusual technique for drawing its frames: a
fixed 1920x1080 frame buffer used in place of the backbuffer, which is
then stretched to fit the size of the screen (whether the screen is
bigger or smaller than the actual texture).
The custom frame buffer would cause glReadBuffer to initially fail with
an error. When this happens, their custom frame buffer is in use, so all
that needs to be done is simply reset the capture and force the current
output size to 1920x1080 while that custom frame buffer is in use.
They presumably did this in order to ensure the game looks the same at
any resolution. Instead of having to use power-of-two sprites and
mipmaps for every single game sprite and stretch/skew each of them
(which would risk the final output "not looking quite right" at
different resolutions), they simply use non-pow-2 sprites with no
mipmaps and render them all onto one texture of a fixed size and then
stretch that final output texture. That ensures that the actual
composite of the game still looks the same at any resolution, while
reducing texture memory by not requiring each sprite to use a
power-of-two texture and mipmaps.
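A minimal sketch of the detection described above, using a hypothetical
reset helper (this is not the actual hook code): when the game's own
frame buffer is bound instead of the backbuffer, glReadBuffer(GL_BACK)
fails, which is the cue to reset and force 1920x1080.

    #include <GL/gl.h>

    // Hypothetical stand-in for the hook's actual capture-reset path.
    static void reset_capture_forced_size(int cx, int cy)
    {
        (void)cx;
        (void)cy;
    }

    static void check_for_custom_framebuffer(void)
    {
        // With a non-default frame buffer bound, GL_BACK is not a valid
        // read buffer, so this call generates an error.
        glReadBuffer(GL_BACK);
        if (glGetError() != GL_NO_ERROR)
            reset_capture_forced_size(1920, 1080);
    }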
Adds the option of making the media file restart when the source becomes
active (such as when switching to a scene that contains it).
Due to lack of libff features to start/stop/pause/seek media files,
currently this just destroys the demuxer and recreates it. Ideally,
libff should have some functions to allow a more optimal means of doing
those things.
Refactors a bit of code related to starting up FFmpeg and makes it so the
initial view for the media source's properties displays the most
commonly desired settings.
Instead of the media source properties showing URL mode by default
along with a whole bunch of properties that are confusing to most users,
it now starts in file mode and changes the defaults to be a bit more
sensible for file input.
Also, as a temporary measure for fixing color format issues (some video
files would display their color information incorrectly), forced format
conversion is now enabled by default, and has been moved to advanced
settings. Ideally, the actual bug causing color format issues in either
media-io or libff should be fixed at some point.
When browsing for a file, it would also just use *.* for the file
filter, which is a pain to use. This has been changed to use a
reasonable file filter related to common video/audio files so you don't
have to wade through non-media files just to select a media file. A
filter to show all files is still available as well.
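For instance (the caption and the exact extension list here are
illustrative, not the actual filter string):

    #include <QFileDialog>
    #include <QString>

    static QString BrowseForMediaFile(QWidget *parent, const QString &lastPath)
    {
        // Common media extensions first, with an "all files" fallback.
        return QFileDialog::getOpenFileName(parent, "Select a media file",
                lastPath,
                "Media Files (*.mp4 *.mkv *.flv *.mov *.mp3 *.wav *.ogg);;"
                "All Files (*.*)");
    }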
Prevents it from loading the entire image in the graphics thread, and
allows for animated gifs in the filter (not that anyone would ever do
that... right?)
Fixes what is arguably the most annoying feature of the mask/blend
filter, the fact that the image always stretches to the entire source.
It now centers and preserves aspect ratio by default, with an option to
make it stretch and discard aspect ratio to make it operate as it did
before.
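The default behavior is essentially standard letterbox placement; a
sketch of that math (names are illustrative, not the filter's actual
variables):

    #include <algorithm>

    struct Placement {
        float cx, cy; // drawn size of the image
        float x, y;   // offset that centers it within the source
    };

    static Placement CenterFit(float source_cx, float source_cy,
                               float image_cx, float image_cy)
    {
        // Scale so the image fits within the source while preserving its
        // aspect ratio, then center it.
        float scale = std::min(source_cx / image_cx, source_cy / image_cy);
        Placement p;
        p.cx = image_cx * scale;
        p.cy = image_cy * scale;
        p.x = (source_cx - p.cx) * 0.5f;
        p.y = (source_cy - p.cy) * 0.5f;
        return p;
    }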
Images continually loading/unloading every time a transition occurs
adds a lot of unnecessary transition lag. The user can always change
this value manually and/or use scene collections, so changing the
default so that images are not unloaded/reloaded feels a bit safer.
This is probably not necessary but might fix an issue where errors pass
through to other parts of the program, possibly causing the crash on
exit related to the xcomposite capture.
OSX has an annoying feature called "BeamSync", which on later versions
of OSX is always on. For whatever reason, Apple devs decided to force
this feature to always be on, so applications must always render with
v-sync regardless of what they set their swap interval to.
This issue would cause syncing to the vertical refresh for each
additional active display, and wouldn't allow rendering above the
current refresh rate. This caused major rendering stalls and prevented
video frame timing from being accurate.
This fixes the issue by using an undocumented set of functions to
disable BeamSync. Note that because this is an undocumented method of
working around the issue, its existence cannot be guaranteed. If the
functions no longer exist for whatever reason, it will safely do
nothing.
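The "safely do nothing" behavior implies resolving the private symbols at
runtime rather than linking against them. A sketch of that pattern with
an entirely made-up symbol name (the real undocumented functions are not
named here):

    #include <dlfcn.h>

    static void TryDisableBeamSync()
    {
        typedef int (*set_option_fn)(int value); // hypothetical signature

        void *cg = dlopen("/System/Library/Frameworks/CoreGraphics.framework/"
                          "CoreGraphics", RTLD_LAZY);
        if (!cg)
            return; // safely do nothing

        // Hypothetical symbol name; if it no longer exists, nothing happens.
        set_option_fn fn = (set_option_fn)dlsym(cg, "SomePrivateBeamSyncCall");
        if (fn)
            fn(0);

        dlclose(cg);
    }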
These really are advanced options that users shouldn't need to change
normally, so moving them to advanced makes sense, and keeps them away
from users who don't know what they're doing.
If the base resolution was set to an invalid resolution, it would cause
the output resolution to automatically change to an invalid resolution
as well, and the output resolution would then stay at that invalid
value.
This fixes it so it always tries to find the output resolution closest
to what was last actually set or actually edited.
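A sketch of that "closest" selection, assuming a list of valid candidate
output resolutions has already been generated from the base resolution
(names illustrative):

    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    struct Resolution {
        uint32_t cx, cy;
    };

    // Assumes 'candidates' is non-empty; returns the candidate closest to
    // the resolution the user last actually set or edited.
    static Resolution ClosestResolution(const std::vector<Resolution> &candidates,
                                        const Resolution &lastSet)
    {
        Resolution best = candidates.front();
        int64_t bestDiff = INT64_MAX;

        for (const Resolution &r : candidates) {
            int64_t diff = std::llabs((int64_t)r.cx - (int64_t)lastSet.cx) +
                           std::llabs((int64_t)r.cy - (int64_t)lastSet.cy);
            if (diff < bestDiff) {
                bestDiff = diff;
                best = r;
            }
        }
        return best;
    }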
When right-clicking items in the preview window to get context menus
for them, the wrong scene item would often be associated with the
context menu, because the code was using QListWidget::currentItem
instead of querying the actual selection list.
You must query the actual selection list via QListWidget::selectedItems
because QListWidget::currentItem does not work properly for
multi-selection list widgets.
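In Qt terms, something along these lines:

    #include <QListWidget>

    // currentItem() is unreliable with multi-selection; query the actual
    // selection list instead.
    static QListWidgetItem *GetTopSelectedItem(QListWidget *list)
    {
        QList<QListWidgetItem *> selected = list->selectedItems();
        return selected.isEmpty() ? nullptr : selected.first();
    }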
If a global audio device is disconnected or for whatever reason is no
longer available when audio settings is opened, it will erroneously
select index 0 of the combo box ("Disabled") by default because it can't
find it in the combo box. This fixes that issue by making it insert an
item into the combo box specifying that the previously available audio
device is no longer available, and properly preserving the user's
settings.
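Roughly like this on the UI side (the item text and helper name are
illustrative):

    #include <QComboBox>
    #include <QObject>

    // If the saved device id is missing (device unplugged), insert a
    // placeholder entry for it rather than silently selecting index 0
    // ("Disabled"), so the user's setting survives.
    static void SelectSavedDevice(QComboBox *combo, const QString &deviceId,
                                  const QString &deviceName)
    {
        int idx = combo->findData(deviceId);
        if (idx == -1) {
            combo->insertItem(0,
                    QObject::tr("%1 (no longer available)").arg(deviceName),
                    deviceId);
            idx = 0;
        }
        combo->setCurrentIndex(idx);
    }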
Some streamers would accidentally hit start/stop streaming, which on
certain services would send out mass emails to all their followers.
This just adds options to general settings to optionally enable dialogs
that confirm whether to actually start/stop streaming when the button is
clicked.
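For example, for the start case (wording illustrative):

    #include <QMessageBox>

    // Returns true only if the user confirms they really meant to click it.
    static bool ConfirmStartStreaming(QWidget *parent)
    {
        QMessageBox::StandardButton button = QMessageBox::question(parent,
                QObject::tr("Start streaming?"),
                QObject::tr("Are you sure you want to start streaming?"));
        return button == QMessageBox::Yes;
    }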
Because of the patch that removed the "user sources list" in libobs
(70fec7ae8), obs_enum_sources will now enumerate the global audio
sources, where it didn't before. In the advanced audio properties
dialog, before the patch I had to use code to manually include them, but
I neglected to remove that code when I made that patch, so it would
cause them to be added more than once. This just removes that code.
When you assign a menu to a normal QPushButton, it becomes a button
that only shows that menu; you lose the normal click behavior. This
class lets you click the button and have a menu at the same time.
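One possible way to get both behaviors, purely as a sketch (not
necessarily how the actual class is implemented): keep the menu out of
QPushButton::setMenu() so left-clicks still emit clicked(), and pop the
menu up on a separate trigger such as a right-click.

    #include <QMenu>
    #include <QMouseEvent>
    #include <QPushButton>

    class MenuButton : public QPushButton {
    public:
        using QPushButton::QPushButton;
        void setButtonMenu(QMenu *menu) { buttonMenu = menu; }

    protected:
        void mousePressEvent(QMouseEvent *event) override
        {
            // Right-click shows the menu; everything else behaves like a
            // normal push button, so clicked() still fires on left-click.
            if (event->button() == Qt::RightButton && buttonMenu) {
                buttonMenu->exec(event->globalPos());
                return;
            }
            QPushButton::mousePressEvent(event);
        }

    private:
        QMenu *buttonMenu = nullptr;
    };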
Just in case glXSwapIntervalEXT and glXSwapIntervalSGI aren't available
for whatever reason. This entire patch is most likely completely
redundant on modern mesa drivers.
The skipped frame count (dropped frames due to encoding being
overloaded) would erroneously include lagged frames (dropped frames due
to render stalls). This will make diagnosing actual issues a user might
be having a bit easier.
Problem:
When an output is started with encoders that have already been started
by another output, and it starts within the window between where
the first audio packets start coming in and where the first video packet
comes in, the output would get audio packets with timestamps potentially
much later than the first video frame when the first video frame finally
comes in. The audio/video encoders will almost always have a differing
delay.
Solution:
Detect that starting window, and if within that starting window, wait
for a new keyframe from video instead of trying to sync up video.
Additional Notes:
In these cases where an output starts with already-active encoders, this
patch also reduces the potential sync offset between the first video
frame and the first audio frame. Before, it would sync the first video
frame with the first audio frames right after that, but now it syncs
with the closest audio frame in the interleaved array, which halves the
potential sync difference between the first video frame and the first
audio frame of already-active encoders. (So now the potential sync
difference is roughly 11.6 milliseconds at 44.1kHz audio, where it was
roughly 23.2 milliseconds before.)
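(For reference, assuming the typical 1024-sample audio frame: 1024 /
44100 ≈ 23.2 ms per audio frame, and half of that is ≈ 11.6 ms.)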
Ensures that the packet dts_usec values which are generated for
syncing/interleaving use the proper offset relative to where they're
supposed to be starting from. The negative DTS of a first video packet
could potentially have been applied twice due to this.
Allows the ability to use fixed stack memory to construct a calldata_t
structure rather than having to allocate each time. This is fine to do
for certain signals where the calldata never goes above a specific size.
If by some chance the size is insufficient, it will output a log
message.
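A sketch of a call site, assuming the fixed-stack initializer is the
calldata_init_fixed function from the libobs calldata API and using a
made-up signal name:

    #include <callback/calldata.h>
    #include <callback/signal.h>

    static void emit_example_signal(signal_handler_t *handler, void *source)
    {
        uint8_t stack[128]; /* fixed stack memory, no heap allocation */
        calldata_t params;

        calldata_init_fixed(&params, stack, sizeof(stack));
        calldata_set_ptr(&params, "source", source);
        signal_handler_signal(handler, "example_signal", &params);
        /* nothing to free; the calldata lives entirely on the stack */
    }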
Originally this was programmed to call the recursive height/width
functions if the source type was an input, with the intention of not
calling them on filters. Instead of doing that, just program it to do
exactly that: only call the recursive height/width functions if the
source is not a filter.
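In other words, roughly this control flow (the names here are
illustrative stubs, not the exact libobs internals):

    #include <cstdint>

    enum source_type { SOURCE_INPUT, SOURCE_FILTER, SOURCE_TRANSITION };

    static uint32_t recursive_width(void *source) { (void)source; return 0; }
    static uint32_t direct_width(void *source) { (void)source; return 0; }

    static uint32_t source_width(void *source, source_type type)
    {
        // Recurse for everything that is not a filter, instead of
        // special-casing "is an input".
        return (type != SOURCE_FILTER) ? recursive_width(source)
                                       : direct_width(source);
    }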