- Changes the default base exponent value from 2.0 to 1.5
- Applies a random skew of ±0.05 to the exponent to lessen the
  "water hammer" effect caused by predictable backoff timing
- Fixes the logging associated with exponential backoff so that it
  logs the true reconnect delay value
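A minimal sketch of the resulting delay computation, assuming a simple
multiplicative backoff; the function and variable names are
illustrative, not the actual libobs code:

    #include <math.h>
    #include <stdlib.h>

    /* Illustrative: compute the reconnect delay for the Nth retry.
     * The base exponent is now 1.5 (previously 2.0), and a random
     * skew in [-0.05, +0.05] is applied so that many clients that
     * disconnected at the same moment do not all reconnect at
     * exactly the same time. */
    static int reconnect_delay_sec(int retries, int base_delay_sec)
    {
            double skew = ((double)rand() / RAND_MAX) * 0.1 - 0.05;
            double exponent = 1.5 + skew;

            return (int)((double)base_delay_sec *
                         pow(exponent, retries));
    }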
With this, you can now cast normal OBS objects (services, outputs,
sources, encoders) to an obs_object_t, and then use obs_object_*
functions to get references, release references, and do the same for
weak object references. This allows the frontend to use an object of
any of those types interchangeably in certain situations without
having to handle each specific type individually. This is useful
because the properties view in particular doesn't care what type of
object it uses; it just needs to be able to hold weak references to
abstract OBS objects.
(This commit also modifies UI)
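A hedged sketch of the pattern this enables; the
obs_object_*/obs_weak_object_* names below follow the existing
obs_source_*/obs_weak_source_* naming conventions and should be
verified against obs.h:

    #include <obs.h>

    /* Hold a weak reference to any OBS object without caring about
     * its concrete type (source, output, encoder, or service). */
    static obs_weak_object_t *hold_weak(obs_source_t *source)
    {
            obs_object_t *object = (obs_object_t *)source;
            return obs_object_get_weak_object(object);
    }

    static void use_weak(obs_weak_object_t *weak)
    {
            /* Promote the weak reference back to a strong one, if
             * the underlying object is still alive. */
            obs_object_t *object = obs_weak_object_get_object(weak);
            if (object) {
                    /* ... use the object ... */
                    obs_object_release(object);
            }
    }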
This makes it easier for encoder plugins to communicate to users
specifically why an encoder error occurred mid-stream.
This isn't particularly needed, as a service with multiple tracks
won't be using multiple tracks to begin with. This might change later,
but for now, just mark it deprecated.
Similar to how outputs can pass errors, add the same functionality for
encoders, so that if an output's encoder has an error, it is made
available to the output and eventually to the UI/user.
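A sketch of how an encoder plugin might surface such an error. The
struct, fields, and failure condition are hypothetical; the setter
name (obs_encoder_set_last_error) is assumed from this change and
should be checked against obs.h:

    #include <obs-module.h>

    struct my_encoder {
            obs_encoder_t *encoder; /* handle saved at create time */
    };

    /* In the plugin's encode callback: record a human-readable error
     * so the output, and eventually the UI, can show it to the
     * user. */
    static bool my_encoder_encode(void *data,
                                  struct encoder_frame *frame,
                                  struct encoder_packet *packet,
                                  bool *received_packet)
    {
            struct my_encoder *enc = data;

            if (/* hypothetical mid-stream failure check */ false) {
                    obs_encoder_set_last_error(enc->encoder,
                                               "Hardware session lost");
                    return false;
            }

            /* ... normal encoding path ... */
            return true;
    }

On the consuming side, the frontend can then read the propagated
message (e.g. via obs_output_get_last_error) once the output reports
the failure.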
As os_gettime_ns() gets large, the current scaling methods, mostly
plain multiply-then-divide on values cast to uint64_t, may lead to
numerical overflows. Sweep the code and use util_mul_div64() where
applicable.
Signed-off-by: Hans Petter Selasky <hps@selasky.org>
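For reference, util_mul_div64(num, mul, div) computes num * mul / div
using a wide intermediate so the product cannot wrap. A sketch of the
kind of call site this sweep converts (the helper is illustrative):

    #include <util/util_uint64.h>

    /* Convert a nanosecond timestamp to an audio sample count. The
     * naive form, ts * sample_rate / 1000000000ULL, wraps once
     * ts * sample_rate exceeds UINT64_MAX; at 48 kHz that is only
     * about 4.4 days of uptime. */
    static uint64_t ns_to_samples(uint64_t ts, uint32_t sample_rate)
    {
            return util_mul_div64(ts, sample_rate, 1000000000ULL);
    }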
When a pause/unpause occurs, a timestamp is set, and the actual
pause/unpause does not occur until the output/encoders reach the
specified timestamps. Do not allow pausing/unpausing unless that point
has been reached by all encoders of an encoded output, or by the
output itself when using a raw output.
This fixes a bug where pause data could get corrupted when
pausing/unpausing too quickly: the audio/video encoders aren't
necessarily synchronized, and although one encoder may have unpaused,
the other encoder(s) may not have yet. Checking all encoders first
before allowing a pause/unpause ensures that doesn't occur.
Audio latency can get very low, and if it's low enough, the audio
subsystem can pass the pause timestamp before it has had a chance to
actually pause on it. So instead, give the pause a little bit of extra
delay to ensure that doesn't occur.
This implements pausing of outputs. To accomplish this, raw
audio/video data is halted to the encoders or raw output. Pausing is
timed as precisely as possible according to the timing of the
obs_output_pause call, and audio data is spliced down to the exact
audio sample in accordance with that timing at the start/end marks.
Outputs that support this (outputs used for recording) can set the
OBS_OUTPUT_CAN_PAUSE capability flag.
If the audio subsystem was buffered to any extent, the audio of a raw
output would start off at a negative offset, requiring each raw output
to implement a "prepare_audio" function (as seen in the FFmpeg output)
in order to ensure proper synchronization with video. This did not
apply to encoded outputs because it was already being performed by the
obs-encoder code.
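A minimal frontend-side sketch of using this API; the capability check
and return-value handling are illustrative:

    #include <obs.h>

    /* Toggle pause on a recording output, but only if the output
     * advertises the OBS_OUTPUT_CAN_PAUSE capability flag. */
    static void toggle_pause(obs_output_t *output)
    {
            uint32_t flags = obs_output_get_flags(output);
            if ((flags & OBS_OUTPUT_CAN_PAUSE) == 0)
                    return;

            bool pause = !obs_output_paused(output);

            /* The request can be rejected, e.g. while a previous
             * pause/unpause point has not yet been reached. */
            if (!obs_output_pause(output, pause))
                    blog(LOG_WARNING, "pause request rejected");
    }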
Code submissions have continually suffered from formatting
inconsistencies that have to be addressed again and again. Using
clang-format simplifies this by making code formatting more
consistent, and allows the formatting to be automated so that
maintainers can focus on the code itself instead of on its formatting.
Normally, paired encoders are unpaired when they stop. However, if the
pairing occurs before the encoders actually start, and the encoders
never actually end up starting, they are never unpaired, and that
pairing stays with them until the next time an output is started up
again. That in turn can cause an output that uses one of the encoders
but not the other to malfunction: it will neither properly "start" nor
stop, because its data is queued continually in the interleaved packet
array.
For example, let's say there are two outputs, two video encoders, and
one audio encoder. This can be reproduced by using advanced output mode
and making the two outputs use separate video encoders while sharing
track 1's audio encoder. If you start up the stream output first and
it fails to fully connect for whatever reason (bad server, bad stream
key, etc.), and you then start up the recording output, the recording
output will appear to be running, but will not stop when you hit "stop
recording".
It will stay perpetually on "stopping recording" and will get stuck that
way. This is because when the streaming output started, the streaming
output would initially pair video encoder A with audio encoder A before
the encoders actually fully started up (as the encoders do not fully
start up until a connection is successfully made), and when the
recording output starts up after that disconnection, audio encoder A
will wait for video encoder A rather than video encoder B because that
pairing was never actually cleared.
So, instead of pairing encoders when the output starts, wait until the
encoders themselves are being started and then pair the encoders at that
point in time. This ensures that the encoders start up and will clear
their pairing when no longer in use.
Adds a display_duration parameter declaring the minimum duration
during which a caption text will not be overwritten by a new one. To
keep the functions backwards-compatible,
obs_output_output_caption_text2 was added, while
obs_output_output_caption_text1 keeps its 2-second default.
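Usage sketch of the two variants (caption text and duration are
illustrative):

    #include <obs.h>

    static void send_captions(obs_output_t *output)
    {
            /* Original call: keeps the 2-second default duration. */
            obs_output_output_caption_text1(output, "Hello, world");

            /* New call: keep this caption up for at least 5 s. */
            obs_output_output_caption_text2(output, "Hello, world",
                                            5.0);
    }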
The track 1 offset is reset, but not the offsets of the other tracks.
This caused sync issues between tracks (between track 1 and the
others).
(bug found by EposVox)
Reduces GPU usage when encoding is not active. Does not perform color
conversion, frame staging, or frame downloading unless encoding is
explicitly active.
During packet interleaving (for outputs), ensure that if two packets
coincide with the same timestamp, the video packet always comes first
instead of the audio packet.
This fix is required to make FLV demux properly with certain demuxers;
some FLV demuxers expect the video packet to come before the audio
packet when two packets coincide with the same timestamp.
(This commit also modifies the obs-outputs module)
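A hedged sketch of the tie-breaking rule (not the actual libobs
comparator):

    #include <stdbool.h>
    #include <stdint.h>

    enum packet_type { PACKET_VIDEO, PACKET_AUDIO };

    struct packet {
            int64_t dts_usec;
            enum packet_type type;
    };

    /* True if 'a' should be muxed before 'b': earlier timestamp
     * first; on an exact tie, video before audio, which some FLV
     * demuxers require in order to demux correctly. */
    static bool comes_first(const struct packet *a,
                            const struct packet *b)
    {
            if (a->dts_usec != b->dts_usec)
                    return a->dts_usec < b->dts_usec;
            return a->type == PACKET_VIDEO && b->type == PACKET_AUDIO;
    }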
The first video packet video offset (the value used to set the starting
point of video data) would be set to the DTS value of the first video
packet. However, when b-frames are used, the first DTS value will be
negative. This was originally done because FLV muxing requires that the
first packet's DTS start from 0. Unfortunately, this would also
effectively cause the first packet's PTS/DTS value to be shifted forward
by the negative amount, which would cause video sync to be off by a
video frame or two.
This fixes it to start at the PTS value instead and preserve any
negative offsets. Additionally, the FLV muxing code has been fixed to
ensure that it adjusts the starting video DTS to 0, and now correctly
adjusts the first audio packet's timestamp according to that DTS as well
(which it didn't do before).
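Illustrative arithmetic, with hypothetical names: with B-frames, the
first video packet may have PTS 0 and DTS -2 (in frame units).
Rebasing on the first DTS shifted every timestamp forward by 2;
rebasing on the first PTS preserves the negative DTS, and the FLV
muxer then shifts its own output so the first written DTS is 0,
adjusting the first audio timestamp by the same amount:

    #include <stdint.h>

    struct packet_ts {
            int64_t pts, dts;
    };

    /* Rebase timestamps on the first video packet's PTS (not its
     * DTS), preserving the negative DTS offset from B-frames. */
    static void apply_video_offset(struct packet_ts *pkt,
                                   int64_t first_pts)
    {
            pkt->pts -= first_pts;
            pkt->dts -= first_pts;
    }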
Instead of logging the relative encoded count (which is susceptible to
integer overflow), log the output frame count. If there's an issue
with encoding, it'll show up when all encoding stops regardless.
This reverts commit 4e3e67bb8c.
The way this is handled will erroneously report 0 frames encoded when
frames have actually been properly encoded, which is best avoided.
Additionally, an overflow would be generated for drawn frames where
none occurred before. The encoded value should probably not even be
present in the log for the output, due to the way it's handled.
I believe the issue with the next-to-impossible frame counts to be an
integer underflow: to legitimately reach those values, you'd have to
have recorded for at least 345 days at 144 FPS. This commit fixes them
by computing the value in a signed integer first, then deciding
whether the result should be used or replaced with 0.
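For the arithmetic: reaching such a count legitimately requires on the
order of 2^32 frames, and 2^32 / 144 fps / 86400 s/day ≈ 345 days. A
sketch of the underflow-safe form (names are illustrative):

    #include <stdint.h>

    /* Compute the difference as a signed integer first; if the
     * counters raced and the result would wrap below zero, report 0
     * instead of a bogus value near 2^32. */
    static uint32_t safe_frame_delta(uint32_t total, uint32_t encoded)
    {
            int64_t delta = (int64_t)total - (int64_t)encoded;
            return delta < 0 ? 0 : (uint32_t)delta;
    }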
When an output fails to connect and it's already been prematurely
stopped, the event to mark the output as stopped would not be signaled,
causing obs_output_destroy to lock up indefinitely while waiting for the
event to be signaled.
Rather than have the back-end try to determine whether the output can or
cannot stop, allow the stop callback to continue in the plugin either
way and let the plugin itself make that determination.
This fixes a bug where the back-end wouldn't have data active while
connecting, therefore the stop callback wouldn't be called, and once
connected, the output wouldn't know that it was supposed to stop. In
other words, trying to call obs_output_stop on an output that was in a
connecting state would do nothing, and the output would never stop.
When frames are skipped the skipped frame count would increment, but the
total frame count would not increment, causing the percentage
calculation to fail.
Additionally, the skipped frames log reporting has been moved to
media-io/video-io.c instead of each output.
Captions do something unusual with encoder packets: they reallocate
them due to appending extra H.264 data. Because of the way allocations
are handled with core encoder packets (they now store a reference in
their data), create a new encoder packet and release the old one
rather than modifying the encoder data directly.