Transition audio was programmed to stop if neither source had any
queued audio. Because of that, if a source's audio started after the
transition had begun, that source's audio was excluded from the
transition until the transition completed, because the transition's
audio had already been marked as stopped.
Instead, if there's no audio from the transition sources, the audio
should only be marked as stopped once video has stopped. This gives the
to/from sources an opportunity to start or restart their audio safely
during the transition.
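A minimal sketch of the revised condition, with made-up names (the
actual change lives in libobs' transition code, which is structured
differently):

    #include <stdbool.h>

    /* Illustrative only: mark transition audio as stopped only once
     * video has also stopped, so a source that starts its audio
     * mid-transition can still be mixed in. */
    struct transition_state {
            bool video_stopped;
            bool audio_stopped;
    };

    static void check_audio_stopped(struct transition_state *t,
                                    bool have_audio_a, bool have_audio_b)
    {
            if (!have_audio_a && !have_audio_b && t->video_stopped)
                    t->audio_stopped = true;
    }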
When using per-encoder rescaling, QSV would overwrite the encoder's
current scaled size in the get_video_info callback with the base video
width/height instead of using the encoder's current width/height.
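A hedged sketch of the corrected callback; the obs_qsv struct and its
field are assumptions, while obs_encoder_get_width/height are the
libobs accessors for the encoder's (possibly rescaled) output size:

    #include <obs-module.h>

    struct obs_qsv {
            obs_encoder_t *encoder;  /* assumed field layout */
    };

    static void obs_qsv_video_info(void *data, struct video_scale_info *info)
    {
            struct obs_qsv *obsqsv = data;

            /* Keep the encoder's own scaled size rather than
             * overwriting it with the base canvas width/height. */
            info->format = VIDEO_FORMAT_NV12;
            info->width  = obs_encoder_get_width(obsqsv->encoder);
            info->height = obs_encoder_get_height(obsqsv->encoder);
    }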
(Also modifies obs-ffmpeg to handle empty frames on EOF)
Previously, the demuxer could hit EOF before the decoder threads had
finished, resulting in truncated output. In the worst case, the demuxer
could finish reading a small file before ff_decoder_refresh even had a
chance to start the clocks, resulting in no output at all.
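For context, the usual FFmpeg drain pattern at EOF looks roughly like
the following (shown with the newer send/receive API purely as an
illustration; the code in question handles empty frames through its own
decode path):

    #include <libavcodec/avcodec.h>

    static void drain_decoder(AVCodecContext *dec, AVFrame *frame)
    {
            /* A NULL packet puts the decoder into draining mode... */
            avcodec_send_packet(dec, NULL);

            /* ...then keep pulling buffered frames until none remain. */
            while (avcodec_receive_frame(dec, frame) == 0) {
                    /* hand the frame to the normal output path */
            }
    }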
There was no error checking when sending headers/metadata, so if a
header/metadata send failed (meaning the socket had disconnected), the
output would continue to act as if it were still connected, and it
would block and lock up on the next send/recv call.
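A sketch of the shape of the fix (the helper names are illustrative
stand-ins for the output's header/metadata senders):

    #include <stdbool.h>

    struct rtmp_stream;                                 /* opaque here */
    bool send_meta_data(struct rtmp_stream *stream);    /* assumed */
    bool send_audio_header(struct rtmp_stream *stream); /* assumed */
    bool send_video_header(struct rtmp_stream *stream); /* assumed */

    static bool send_headers(struct rtmp_stream *stream)
    {
            /* Stop at the first failed send instead of assuming the
             * socket is still connected. */
            if (!send_meta_data(stream))
                    return false;
            if (!send_audio_header(stream))
                    return false;
            if (!send_video_header(stream))
                    return false;

            return true;
    }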
The signal name is "remove" for when a source is removed, not "removed".
This is proof that I should never have relied on strings for signals.
The original intention for string-based signals was to make them
programmable and scriptable, but honestly that use-case (that never
happened and will likely never happen) was foolish to program around.
These should have been fixed macros from the beginning.
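For reference, connecting to the correct per-source signal looks like
this (the callback name and body are placeholders):

    #include <obs.h>

    static void on_source_remove(void *data, calldata_t *cd)
    {
            obs_source_t *source = calldata_ptr(cd, "source");
            /* react to the source being removed */
            UNUSED_PARAMETER(data);
            UNUSED_PARAMETER(source);
    }

    static void connect_remove_signal(obs_source_t *source)
    {
            signal_handler_t *sh = obs_source_get_signal_handler(source);
            signal_handler_connect(sh, "remove", on_source_remove, NULL);
    }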
Encoders can sometimes have a tiny, insignificant amount of
unintentional lag (for example, VP9 sometimes lags on startup by a
frame or two), so don't warn the user if the skip count is below 10.
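A minimal sketch of the intended check (the threshold constant and the
function are made up for illustration):

    #include <util/base.h>

    #define SKIPPED_FRAME_WARNING_THRESHOLD 10

    static void log_skipped_frames(int skipped)
    {
            /* A handful of skips (e.g. VP9 startup lag) isn't worth a
             * warning. */
            if (skipped < SKIPPED_FRAME_WARNING_THRESHOLD)
                    return;

            blog(LOG_WARNING, "%d frames skipped due to encoder lag",
                 skipped);
    }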
In aa4e18740a219b I mistakenly thought I could add the variables back
in and that unused variables would automatically be culled, but that
wasn't the case -- the shader parser always checks whether parameters
were set, and if they weren't, it fails. This fixes an issue where the
shader would try to access parameters that are no longer needed and
fail because of that parameter check.
YUV-based shader support has been removed (no sources ever use YUV
shading), so there's no reason to keep the YUV processing code around.
The field orders of the retro 2x and linear 2x deinterlace shaders
were inverted. Note that yadif 2x does not behave the same way in this
regard; its field ordering is correct due to how it operates.
Warns users that two separate QSV encoders can't be active at the same
time.
This should be considered a temporary solution to two issues:
1.) Encoders need to be able to report these errors themselves.
2.) If the QSV encoder is ever changed to allow more than one active
encoder at a time, this check should be removed.
Currently, multiple QSV encoders cannot be active at the same time
(attempting it causes a crash). This is a temporary solution to prevent
crashes when more than one QSV encoder tries to start up at the same
time.
Additionally, in the future there should be a way for encoders to
communicate errors like this to the front-end.
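One way to express that guard, sketched with standard C11 atomics and
made-up names (the actual plugin may track this differently):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool qsv_active;

    /* Returns true if this instance may start; false if another QSV
     * encoder is already active. */
    static bool qsv_try_acquire(void)
    {
            bool expected = false;
            return atomic_compare_exchange_strong(&qsv_active, &expected,
                                                  true);
    }

    static void qsv_release(void)
    {
            atomic_store(&qsv_active, false);
    }

The encoder's create callback would fail when qsv_try_acquire() returns
false (with the front-end warning the user), and destroy would call
qsv_release().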
If the parent source of a scroll filter had a width or height of 0,
the scroll filter would divide by zero when computing the size_i
variable. The offset variable would then perpetually hold a non-finite
value, preventing the scroll filter from rendering properly afterward,
because that non-finite offset was uploaded to the shader.
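A hedged sketch of the guard (size_i mirrors the variable named above;
the accessors are standard libobs calls, but the function itself is
illustrative):

    #include <obs-module.h>
    #include <graphics/vec2.h>

    static void update_size_i(obs_source_t *filter, struct vec2 *size_i)
    {
            obs_source_t *target = obs_filter_get_target(filter);
            uint32_t cx = target ? obs_source_get_base_width(target) : 0;
            uint32_t cy = target ? obs_source_get_base_height(target) : 0;

            /* Don't divide by zero if the parent has no size yet; leave
             * size_i (and thus offset) untouched so it stays finite. */
            if (!cx || !cy)
                    return;

            vec2_set(size_i, 1.0f / (float)cx, 1.0f / (float)cy);
    }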
(Note: this commit also modifies the obs-filters and test-input modules)
Changes the obs_source_process_filter_begin return type so that it
returns true/false to indicate whether filter processing should
continue (for example, if the filter is bypassed or some other issue
causes the filtering to fail).
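An example of the new calling pattern in a filter's render callback
(the struct and its fields are placeholders):

    #include <obs-module.h>

    struct my_filter {
            obs_source_t *context;
            gs_effect_t *effect;
    };

    static void my_filter_render(void *data, gs_effect_t *effect)
    {
            struct my_filter *f = data;

            /* If processing can't begin (bypassed, or something
             * failed), skip rendering entirely. */
            if (!obs_source_process_filter_begin(f->context, GS_RGBA,
                            OBS_ALLOW_DIRECT_RENDERING))
                    return;

            obs_source_process_filter_end(f->context, f->effect, 0, 0);

            UNUSED_PARAMETER(effect);
    }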
On outputs that use already-active video/audio encoders, the audio
pruning that syncs audio packets up with video packets doesn't always
get called (for example, if the video pruning function was called).
Always prune excess starting audio packets.
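A rough illustration of the idea using hypothetical types (the real
code operates on the output's interleaved packet arrays):

    #include <stddef.h>
    #include <stdint.h>

    struct audio_pkt { int64_t dts_usec; };
    struct audio_queue {
            struct audio_pkt *packets;
            size_t count;
    };

    void audio_queue_pop_front(struct audio_queue *q);  /* hypothetical */

    static void prune_starting_audio(struct audio_queue *audio,
                                     int64_t first_video_dts_usec)
    {
            /* Drop leading audio packets older than the first video
             * packet so both streams start in sync. */
            while (audio->count &&
                   audio->packets[0].dts_usec < first_video_dts_usec)
                    audio_queue_pop_front(audio);
    }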
How to crash:
1. Use recent ffmpeg shared libraries.
2. Add an ffmpeg_source: a small static picture (e.g. a JPEG) with
looping enabled.
3. After a while of high CPU usage, it crashes. This seems to
reproduce more easily on faster computers.
Closes #533