Similar to how outputs can pass errors, add the same functionality for
encoders so that if an output encoder has an error, it is made available
to the output and eventually the UI / user.
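A rough sketch of how an encoder implementation might surface such an
error (the obs_encoder_set_last_error() accessor name is an assumption
here; the callback signature matches obs_encoder_info::encode):

    #include <obs-module.h>

    struct my_encoder {
        obs_encoder_t *encoder; /* handle given to the create callback */
    };

    static bool my_encoder_encode(void *data, struct encoder_frame *frame,
                                  struct encoder_packet *packet,
                                  bool *received_packet)
    {
        struct my_encoder *enc = data;
        *received_packet = false;

        if (!frame || !packet) {
            /* assumed accessor: store a message that the output (and
             * eventually the UI) can retrieve instead of failing silently */
            obs_encoder_set_last_error(enc->encoder,
                                       "Encoder received invalid data");
            return false;
        }

        /* ...actual encoding would happen here... */
        return true;
    }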
As os_gettime_ns() gets large, the current scaling methods, which mostly
just cast to uint64_t and multiply, may lead to numerical overflows.
Sweep the code and use
util_mul_div64() where applicable.
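A minimal sketch of the kind of change this sweep makes (assuming
util_mul_div64() is provided by util/util_uint64.h):

    #include <stdint.h>
    #include <util/util_uint64.h> /* util_mul_div64() */

    /* Convert a nanosecond timestamp to a 90 kHz timebase.  The plain
     * multiply wraps once the timestamp exceeds roughly 2^47 ns, while
     * util_mul_div64() keeps the intermediate result wide enough. */
    static uint64_t ns_to_90khz(uint64_t ns)
    {
        /* overflow-prone: return ns * 90000 / 1000000000ULL; */
        return util_mul_div64(ns, 90000, 1000000000ULL);
    }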
Signed-off-by: Hans Petter Selasky <hps@selasky.org>
This reverts commit ff22c20019.
This caused a bug in FTL output, which started hitching after this
commit. Presumably due to opus; it's likely you're not supposed to do
this with all audio encoders.
Returns whether rescaling is enabled for an encoder. This will be used
with texture-based encoders to determine whether to fall back to
RAM-based encoding instead.
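A sketch of how a caller might use the new query (the accessor name
obs_encoder_scaling_enabled() is assumed here):

    #include <obs.h>

    /* Decide whether the texture-based (GPU) path can be used; when
     * rescaling is enabled on the encoder, fall back to RAM-based
     * encoding instead. */
    static bool use_texture_encoding(obs_encoder_t *encoder)
    {
        if (obs_encoder_scaling_enabled(encoder))
            return false; /* rescaling active: use the RAM-based path */

        return true;
    }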
When an unpause occurs, it takes an audio segment and splits it at the
exact point corresponding to the pause timestamp, and then it's supposed
to only send the ending part of the split. However, the audio pointers
were not being incremented, so it was mistakenly sending the front of
the audio segment instead of the back.
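A simplified sketch of the intended split behavior (the types and names
here are illustrative, not the actual libobs structures):

    #include <stddef.h>
    #include <stdint.h>

    #define NUM_PLANES 8 /* illustrative; libobs uses MAX_AV_PLANES */

    struct audio_segment {
        float *data[NUM_PLANES]; /* planar audio */
        uint32_t frames;
    };

    /* Keep only the audio after 'split_frame'.  Forgetting to advance the
     * plane pointers is the bug being fixed: the frame count shrinks, but
     * the data sent is still the front of the segment. */
    static void keep_back_of_split(struct audio_segment *seg,
                                   uint32_t split_frame)
    {
        for (size_t i = 0; i < NUM_PLANES; i++) {
            if (seg->data[i])
                seg->data[i] += split_frame;
        }
        seg->frames -= split_frame;
    }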
When pause has been activated, the video_pause_check() function is used
when receiving raw frames in order to filter out frames that are in the
pause window, that way they aren't sent to the encoder or output.
However, when pause was enabled, it was unintentionally filtering out
some frames before the specified starting timestamp as well, causing
extra video data to get cut out prematurely. This fixes that issue.
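Conceptually, the corrected filter should only reject frames whose
timestamps fall inside the pause window; a simplified sketch (the
parameter semantics are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    /* A frame is dropped only if it lies within [pause_start, pause_end);
     * frames before the pause start must still pass through, which is
     * what the broken check got wrong. */
    static bool frame_within_pause(uint64_t ts, uint64_t pause_start,
                                   uint64_t pause_end /* 0 = still paused */)
    {
        if (ts < pause_start)
            return false;

        return pause_end == 0 || ts < pause_end;
    }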
Unlike get_properties, there is no reason not to call get_defaults if it
is given in addition to get_defaults2. Additionally, this fixes a bug in
'init_encoder', which would only ever call get_defaults, resulting in
broken encoders for those that used get_defaults2.
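The shape of the fix, sketched (the field names match obs_encoder_info;
the get_defaults2 parameter order shown here is illustrative):

    #include <obs.h>

    /* Apply whichever default callbacks the encoder registers, rather
     * than only ever calling get_defaults. */
    static void apply_encoder_defaults(const struct obs_encoder_info *info,
                                       void *type_data,
                                       obs_data_t *settings)
    {
        if (info->get_defaults)
            info->get_defaults(settings);
        if (info->get_defaults2)
            info->get_defaults2(settings, type_data);
    }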
This implements pausing of outputs. To accomplish this, raw audio/video
data is halted to the encoders or raw output. Pausing is as precisely
timed as possible according to the timing of the obs_output_pause call,
and audio data will be spliced down to the exact audio sample in
accordance with that timing at the start/end marks.
Outputs that support this (outputs used for recording) can set the
OBS_OUTPUT_CAN_PAUSE capability flag.
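A front-end sketch of driving the feature (assuming the pause entry
point is obs_output_pause(output, pause); the capability check uses the
new flag):

    #include <obs.h>

    /* Toggle pause on a recording output only if it advertises the
     * OBS_OUTPUT_CAN_PAUSE capability. */
    static bool toggle_pause(obs_output_t *output, bool pause)
    {
        if ((obs_output_get_flags(output) & OBS_OUTPUT_CAN_PAUSE) == 0)
            return false; /* this output cannot be paused */

        return obs_output_pause(output, pause);
    }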
Code submissions have continually suffered from formatting
inconsistencies that constantly have to be addressed. Using
clang-format simplifies this by making code formatting more consistent,
and allows automation of the code formatting so that maintainers can
focus more on the code itself instead of code formatting.
(This commit also modifies the UI, obs-ffmpeg, and obs-output modules)
Fixes a long-time regression where the program would lock up if an
encode call fails. Shuts down all outputs associated with the failing
encoder and displays an error message to the user.
Ideally, a more detailed error about the nature of the failure would be
displayed to the user, though the primary problem is that encoder errors
are typically not something the user would be able to understand. The
current message is fairly generic; improvement is welcome.
Another suggestion is to try to have the encoder restart seamlessly,
though it would take a significant amount of work to be able to make it
do something like that properly, and it sort of assumes that encoder
failures are sporadic, which may not necessarily be the case with some
hardware encoders on some systems. It may be better just to use another
encoder in that case. For now, seamless restart is ruled out.
If the remove_connection call of obs_encoder_stop_internal took too
long, obs_encoder_destroy could get called before that function
completed, causing a race condition.
Allows encoding by passing NV12 textures. This uses a separate thread
for texture-based encoders with a small queue of textures. An output
texture, shared via a keyed mutex, is locked between OBS and each
encoder. A new encoder callback and capability flag are used to encode
with textures.
This splits the "do_encode" function into "do_encode" and
"send_off_encoder_packet", the latter of which allows texture-based
encoders to manage their own encoding and simply send a finished packet
off to the outputs.
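A rough sketch of the resulting control flow (the encoder context type
below is a stand-in for libobs' internal structure, not the real one):

    #include <obs.h>

    struct encoder_ctx {
        struct obs_encoder_info info; /* registered callbacks */
        void *plugin_data;            /* plugin instance data */
    };

    static void send_off_encoder_packet(struct encoder_ctx *enc, bool success,
                                        bool received,
                                        struct encoder_packet *pkt)
    {
        /* interleave and dispatch pkt to each connected output */
        (void)enc; (void)success; (void)received; (void)pkt;
    }

    /* Raw-frame path: run the plugin's encode callback, then hand the
     * result off.  Texture-based encoders skip this and call
     * send_off_encoder_packet() directly with packets they produce. */
    static void do_encode(struct encoder_ctx *enc, struct encoder_frame *frame)
    {
        struct encoder_packet pkt = {0};
        bool received = false;
        bool success = enc->info.encode(enc->plugin_data, frame, &pkt,
                                        &received);

        send_off_encoder_packet(enc, success, received, &pkt);
    }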
Allows the ability for one encoder to defer to another in case of
failure or unsupported feature. Okay, fine, it's mostly a hack so the
new NVENC encoder can fall back to the FFmpeg encoder if NV12 textures
aren't in use, that way it does not have to implement raw fallback
support itself. The settings and properties are pretty much the same,
so there's no reason not to utilize it in order to save time that could
otherwise be spent more productively.
Reduces GPU usage when encoding is not active. Does not perform color
conversion, frame staging, or frame downloading unless encoding is
explicitly active.
On audio encoder startup, audio encoders paired with a video encoder
would unintentionally discard a single audio data segment, causing it to
be 1024 audio frames out of sync.
(Note: This commit also modifies coreaudio-encoder, win-capture, and
win-mf modules)
This reduces logging to the user's log file. Most of the messages in
question are not useful when examining log files, and only make them
more painful to read.
The things that are useful to log should be up to the front-end to
implement. The core and core plugins should have minimal mandatory
logging.
With the new audio subsystem, audio buffering is minimal at all times.
However, when the audio buffering is too small or non-existent, it would
cause the audio encoders to start with a timestamp that was actually
higher than the first video frame timestamp. Video would have some
inherent buffering/delay, but then audio could return and encode almost
immediately. This created a possible window of empty time between the
first encoded video packet and the first encoded audio packet, whereas
audio buffering would cause the first audio packet's timestamp to always
be well before the first video packet's timestamp. It would then
incorrectly assume the two starting points were in sync.
So instead of assuming the audio data is always first, this patch makes
video wait for audio data to come in, and conversely buffers audio data
until video comes in, and tries to find a starting point within that
video data instead, ensuring a synced starting point whether audio
buffering is active or not.
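A conceptual sketch of the new starting-point logic (names and structure
here are illustrative, not the actual implementation):

    #include <stdbool.h>
    #include <stdint.h>

    struct start_sync {
        bool have_first_video_ts;
        uint64_t first_video_ts;
    };

    /* Audio segments are held back until the first video timestamp is
     * known; the stream then starts from the first buffered audio at or
     * after that timestamp, so both streams share a starting point
     * whether or not any audio buffering exists. */
    static bool audio_ready_to_start(const struct start_sync *sync,
                                     uint64_t audio_segment_ts)
    {
        if (!sync->have_first_video_ts)
            return false; /* keep buffering audio until video arrives */

        return audio_segment_ts >= sync->first_video_ts;
    }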
Ensures that the packet dts_usec values which are generated for
syncing/interleaving use the proper offset relative to where they're
supposed to be starting from. The negative DTS of a first video packet
could potentially have been applied twice due to this.
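As an illustrative formula (not the exact libobs code), the interleaving
DTS is taken relative to the stream's own starting DTS, so a negative
first video DTS is only compensated for once:

    #include <stdint.h>

    static int64_t packet_dts_usec(int64_t dts, int64_t start_dts,
                                   int64_t timebase_num, int64_t timebase_den)
    {
        return (dts - start_dts) * 1000000LL * timebase_num / timebase_den;
    }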
API changed from:
obs_source_info::get_name(void)
obs_output_info::get_name(void)
obs_encoder_info::get_name(void)
obs_service_info::get_name(void)
API changed to:
obs_source_info::get_name(void *type_data)
obs_output_info::get_name(void *type_data)
obs_encoder_info::get_name(void *type_data)
obs_service_info::get_name(void *type_data)
This allows the type data to be used when getting the name of the
object (useful for plugin wrappers primarily).
NOTE: Though a parameter was added, this is backward-compatible with
older plugins due to calling convention. The new parameter will simply
be ignored by older plugins, and the stack (if used) will be cleaned up
by the caller.
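Example plugin callback under the new signature (a source is shown;
outputs, encoders, and services follow the same pattern):

    #include <obs-module.h>

    static const char *my_source_get_name(void *type_data)
    {
        UNUSED_PARAMETER(type_data);
        return "My Source";
    }

    static struct obs_source_info my_source = {
        .id = "my_source",
        .type = OBS_SOURCE_TYPE_INPUT,
        .output_flags = OBS_SOURCE_VIDEO,
        .get_name = my_source_get_name,
        /* create/destroy/video_render omitted from this sketch */
    };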
This prevents encoders (hardware encoders in particular) from being
continually active when all outputs disconnect from an encoder. This is
mostly just a temporary measure; the encoding interface may need a bit
of a redesign. It will also definitely need to be able to flush at
some point. Currently when an output is stopped, the pending data is
discarded, which needs to be fixed.
Allows objects to be created regardless of whether the actual id exists
or not. This is a precaution that preserves objects/settings if the id
was removed for whatever reason (a plugin was removed, or a hardware
encoder disappeared). This was already added for sources,
but really needs to be added for other libobs objects as well: outputs,
encoders, services.
In case the encoder has to use a different sample rate (because the
requested sample rate is unsupported), we need an API function for the
encoder to get the sample rate it is actually running at.
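A sketch of how an audio encoder plugin might use it (the accessor name
obs_encoder_get_sample_rate() is assumed here):

    #include <obs-module.h>

    /* Inside an audio encoder's create callback: configure the codec for
     * the rate the encoder is actually running at, which may differ from
     * the requested rate when that rate is unsupported. */
    static void *my_audio_encoder_create(obs_data_t *settings,
                                         obs_encoder_t *encoder)
    {
        uint32_t rate = obs_encoder_get_sample_rate(encoder);

        /* ...initialize the codec using 'rate'... */
        (void)rate;
        UNUSED_PARAMETER(settings);
        return NULL; /* sketch only */
    }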