When an encoder has been removed (such as CoreAudio) and the currently
configured audio bitrates are no longer available from the remaining
audio encoders, it would cause GetAACEncoderForBitrate to return false
with no encoder available.
To fix the issue, just choose the closest available bitrate to the
current bitrate instead (rounded up).
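A minimal sketch of that fallback, assuming the supported bitrates are
available as a simple list (the function name is illustrative):
------------------------
#include <vector>
#include <algorithm>

static int FindClosestAACBitrate(std::vector<int> supported, int desired)
{
	std::sort(supported.begin(), supported.end());

	/* prefer the smallest supported bitrate that is >= the configured
	 * bitrate, i.e. round up */
	auto it = std::lower_bound(supported.begin(), supported.end(),
			desired);
	if (it != supported.end())
		return *it;

	/* otherwise fall back to the highest bitrate that is available */
	return supported.empty() ? 0 : supported.back();
}
------------------------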
After some more testing, utvideo not only gives better encoding
performance, but also better compression and better decoding
performance. It's pretty much superior to huffyuv all around.
Certain high-profile individuals were complaining that it was
difficult to configure recording settings for quality in OBS, so I
decided to add a very easy-to-use auto-configuration for high quality
encoding -- including lossless encoding. This feature will
automatically configure ideal recording settings based upon a specified
quality level.
Recording quality presets added to simple output:
- Same as stream: Copies the encoded streaming data with no extra usage
hit.
- High quality: uses a higher CRF value (starting at 23) if using x264.
- Indistinguishable quality: uses a low CRF value (starting at 16) if
using x264.
- Lossless: spawns an FFmpeg output that uses huffyuv encoding. If a
user tries to select lossless, they will be warned both via a dialog
prompt and a warning message in the settings window to ensure they
understand that it requires tremendous amounts of free disk space. It
will always use the AVI file format.
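For illustration, spawning the lossless output might look roughly like
this; the setting keys passed to the FFmpeg output are assumptions
here, not taken from the actual code:
------------------------
#include <obs.h>

/* sketch: configure an FFmpeg output for lossless AVI recording */
static obs_output_t *CreateLosslessOutput(const char *path)
{
	obs_data_t *settings = obs_data_create();
	obs_data_set_string(settings, "url", path);
	obs_data_set_string(settings, "format_name", "avi");
	obs_data_set_string(settings, "video_encoder", "huffyuv");
	obs_data_set_string(settings, "audio_encoder", "pcm_s16le");

	obs_output_t *output = obs_output_create("ffmpeg_output",
			"simple_ffmpeg_output", settings, nullptr);
	obs_data_release(settings);
	return output;
}
------------------------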
Extra Notes:
- When High/Indistinguishable quality is set, it will allow you to
select the recording encoder. Currently, it just allows you to select
x264 (at either veryfast or ultrafast). Later on, it'll be useful to
be able to set up pre-configured presets for hardware encoders once
more are implemented and tested.
- I decided to allow the use of x264 at either the veryfast or
ultrafast preset. The reasoning is two-fold:
1.) ultrafast is perfectly viable even for near-indistinguishable
quality as long as it has the appropriate CRF value. It's useful if you
want to record but need to reduce the impact of encoding on the CPU.
The CRF calculation will automatically compensate for the preset at the
cost of a larger file size.
2.) It was suggested to just always use ultrafast, but ultrafast
requires 2-4x as much disk space for the same CRF (most likely due to
x264 compensating for the preset). Providing veryfast is important if
you really want to reduce file size and/or reduce blocking at lower
quality levels.
- When a recording preset is used, a secondary audio encoder is also
spawned at a 192kbps bitrate to ensure high quality audio. I chose 192
because that's the limit of the Media Foundation AAC encoder on
Windows, which I want to make sure is used if available due to its
high performance.
- The CRF calculation is based upon resolution, quality, and whether
it's set to ultrafast. First, quality sets the base CRF: 23 for
"good" quality, 16 for "very high" quality. If set to ultrafast,
it'll subtract 2 points from the CRF value to help compensate. Lower
resolutions will also lower the CRF value to help preserve detail at
the smaller pixel count.
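A rough sketch of the calculation described in the last note above; the
base values come from the notes, but the resolution threshold below is
a placeholder, not the exact formula used:
------------------------
/* sketch of the CRF selection logic described above */
static int CalcRecordingCRF(bool veryHighQuality, bool ultrafast,
		int outputWidth, int outputHeight)
{
	/* base CRF: 23 for "good" quality, 16 for "very high" quality */
	int crf = veryHighQuality ? 16 : 23;

	/* ultrafast is compensated for by lowering CRF a couple of points */
	if (ultrafast)
		crf -= 2;

	/* smaller output resolutions get a lower CRF so that fine detail
	 * survives the reduced pixel count (placeholder threshold) */
	if (outputWidth * outputHeight <= 1280 * 720)
		crf -= 2;

	return crf;
}
------------------------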
CBR is now always on by default for streaming, so there's no reason to
have a setting for this in particular. Still available in advanced
output settings of course, but simple output mode really should be kept
as simple as possible.
This is mostly just to remove the unnecessary clutter from the output
sections. The reconnect settings are generally rarely modified by users
as it is.
When stream delay is active, the "Start/Stop Streaming" button is
changed into a menu button, which allows the user to either stop the
stream (which causes it to count down) or forcibly stop the stream
(which immediately stops the stream and cuts off all delayed data).
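An illustrative Qt sketch of the menu-button idea; the action labels
and the slots they would eventually connect to are assumptions:
------------------------
#include <QPushButton>
#include <QMenu>

static void SetStreamButtonMenu(QPushButton *streamButton)
{
	QMenu *menu = new QMenu(streamButton);

	/* the normal stop counts down through the delay before stopping */
	menu->addAction(QObject::tr("Stop Streaming"));

	/* the forced stop ends the stream immediately and discards any
	 * delayed data that hasn't been sent yet */
	menu->addAction(QObject::tr("Force Stop Streaming"));

	/* clicking the button's arrow now presents the two choices;
	 * connecting the actions to the stop handlers is omitted here */
	streamButton->setMenu(menu);
}
------------------------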
If the user decides they want to start the stream again while it's in
the process of counting down, they can safely do so without having to
wait for it to stop, and the stream will be scheduled to start up again
with the same delay after the stop completes.
On the status bar, it will now show whether delay is active, and its
duration. If the stream is in the process of stopping/starting, it will
count down to the stop/start.
If the option to preserve stream cutoff point on unexpected
disconnections/reconnections is enabled, it will update the current
delay duration accordingly.
Due to certain design changes for delay, it's better to simply determine
whether outputs are active via booleans rather than an activeRefs
variable, which could get decremented more than once if, say, the signal
for stopping the stream gets called more than once for whatever reason
(which may happen in the case of delay due to the way delay works).
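A minimal sketch of the difference, with made-up member names: a stop
signal that fires twice just sets a flag to false twice, whereas a
reference count would be decremented twice and underflow:
------------------------
struct OutputState {
	bool streamingActive = false;
	bool recordingActive = false;

	/* safe to call any number of times for the same stop event */
	void StreamStopped() { streamingActive = false; }
	void RecordingStopped() { recordingActive = false; }

	bool Active() const
	{
		return streamingActive || recordingActive;
	}
};
------------------------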
This changes the way the advanced output section's FFmpeg output
settings work by allowing the user to select whether they want to output
to a file or output to a URL, and makes it so file names are
automatically generated like other recording outputs.
If they choose to output to a file, it'll only require an output
directory, similar to how other recording outputs work. They can
select a directory to output to rather than being required to type in a
full path and filename; the filename is automatically generated. The
extension is also automatically retrieved from libff based on the
selected format.
Otherwise if they have Output to URL selected, it'll show a simple edit
box where they can type in the target URL.
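For the file case, a rough sketch of how the output path could be put
together once the extension has been looked up from libff; the helper
name and the timestamp format are illustrative:
------------------------
#include <string>
#include <ctime>

static std::string GenerateOutputPath(const std::string &dir,
		const std::string &extension)
{
	char name[64];
	std::time_t now = std::time(nullptr);

	/* generate a file name from the current date and time, e.g.
	 * "2015-05-01 12-30-45" */
	std::strftime(name, sizeof(name), "%Y-%m-%d %H-%M-%S",
			std::localtime(&now));

	return dir + "/" + name + "." + extension;
}
------------------------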
Adds settings profiles to the basic user interface. For each profile,
a subdirectory is created in [config_dir]/obs-studio/basic/profiles
which will contain the settings data for that profile.
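For illustration, a hypothetical layout with two profiles might look
like this (the per-profile file names are assumptions):
------------------------
[config_dir]/obs-studio/basic/profiles/
    Untitled/
        basic.ini
    Gaming/
        basic.ini
------------------------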
All audio encoders are currently having the service-specific settings
applied to them, so this change makes it check which track the stream
is set to and apply the settings only to that specific encoder.
API changed from:
------------------------
EXPORT void obs_service_apply_encoder_settings(obs_service_t *service,
		obs_encoder_t *video_encoder,
		obs_encoder_t *audio_encoder);

void obs_service_info::apply_encoder_settings(void *data,
		obs_encoder_t *video_encoder,
		obs_encoder_t *audio_encoder);
To:
------------------------
EXPORT void obs_service_apply_encoder_settings(obs_service_t *service,
		obs_data_t *video_encoder_settings,
		obs_data_t *audio_encoder_settings);

void obs_service_info::apply_encoder_settings(void *data,
		obs_data_t *video_encoder_settings,
		obs_data_t *audio_encoder_settings);
These changes make it so that instead of an encoder potentially being
updated more than once with different settings, these functions are
called with the specific settings being used, and those settings are
updated according to what's required by the service.
This fixes that design flaw and ensures that there's no case where
obs_encoder_update is called with settings that are missing the
service-specific settings.
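As an illustration of how a service callback might now work with the
settings objects directly; the specific setting keys and bitrate caps
below are assumptions rather than values from a real service:
------------------------
#include <obs-module.h>

static void my_service_apply_encoder_settings(void *data,
		obs_data_t *video_encoder_settings,
		obs_data_t *audio_encoder_settings)
{
	UNUSED_PARAMETER(data);

	/* clamp the video bitrate and force CBR (hypothetical limits) */
	if (video_encoder_settings) {
		obs_data_set_string(video_encoder_settings, "rate_control",
				"CBR");
		if (obs_data_get_int(video_encoder_settings, "bitrate") > 3500)
			obs_data_set_int(video_encoder_settings, "bitrate",
					3500);
	}

	/* clamp the audio bitrate (hypothetical limit) */
	if (audio_encoder_settings &&
	    obs_data_get_int(audio_encoder_settings, "bitrate") > 160)
		obs_data_set_int(audio_encoder_settings, "bitrate", 160);
}
------------------------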
Adds an "Enforce streaming service encoder settings" checkbox to
advanced output. Disabling this checkbox allows the user to
optionally disable the enforcement of streaming service encoder
settings. I had a user complain that they didn't want to always have
the service's preferred encoder settings forced on them.
Ensures that the current service's encoder settings are applied to the
encoders used with the simple output. This is always on for simple
output so users don't have to mess with it themselves.
The "rescale" option for streaming in the advanced output settings was
not properly checking the parameter output of sscanf. sscanf returns
the number of values that were found, not the number of string matches.
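For reference, a sketch of a correct check for a "WIDTHxHEIGHT" string
such as "1280x720":
------------------------
#include <cstdio>

static bool parse_resolution(const char *str, unsigned int *cx,
		unsigned int *cy)
{
	/* both width and height must have been assigned, so the return
	 * value of sscanf has to be exactly 2 */
	return std::sscanf(str, "%ux%u", cx, cy) == 2;
}
------------------------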
Adds an 'advanced' mode to the output settings to allow more powerful
and complex streaming and recording options:
- Optionally use a different encoder for recording than for streaming to
allow the recording to use a different encoder or encoder settings if
desired (though at the cost of increased CPU usage depending on the
encoders being used)
- Use encoders other than x264
- Rescale the recording or streaming encoders in case the user wishes to
stream and record at different resolutions
- Select the specific mixer to use for recording and for streaming,
allowing the stream and recording to use separate mixers (for example,
to allow a user to stream the game/mic audio but only record the game
audio)
- Use FFmpeg output for the recording button instead of only recording
h264/aac to FLV, allowing the user to output to various different
types of file formats or remote URLs, as well as allowing the user to
select and use different encoders and encoder settings that are
available in the FFmpeg library
- Optionally allow the use of multiple audio tracks in a single output
if the file formats or stream services support it
To accommodate multiple types of outputs, there has to be some level of
abstraction. The BasicOutputHandler structure gives us a way to switch
between different output configurations.
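A sketch of the kind of interface this abstraction implies; the member
and factory names are illustrative rather than copied from the actual
code:
------------------------
#include <obs.h>

struct BasicOutputHandler {
	virtual ~BasicOutputHandler() {}

	virtual bool StartStreaming(obs_service_t *service) = 0;
	virtual bool StartRecording() = 0;
	virtual void StopStreaming() = 0;
	virtual void StopRecording() = 0;

	virtual bool StreamingActive() const = 0;
	virtual bool RecordingActive() const = 0;
};

/* one factory per output configuration, chosen from the output mode */
BasicOutputHandler *CreateSimpleOutputHandler();
BasicOutputHandler *CreateAdvancedOutputHandler();
------------------------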