This happens because the enum had the incorrect name, and Microsoft's
compiler automatically treats all enums as plain integers in C, so it
accepts enum values regardless of whether they actually exist.
Microsoft makes terrible compilers and whoever decided this was a good
idea should be fired.
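For illustration, a minimal sketch of the compiler behavior in question
(the enum names below are made up, not the ones from the actual bug):
because C treats enum values as plain integers, passing a value from the
wrong enum compiles without any diagnostic in MSVC's C mode, which is how
a wrongly-named enum value can slip through unnoticed.

    enum encoder_preset { PRESET_DEFAULT, PRESET_HQ };
    enum encoder_profile { PROFILE_BASELINE, PROFILE_MAIN, PROFILE_HIGH };

    static void apply_preset(enum encoder_preset preset)
    {
        (void)preset; /* placeholder */
    }

    int main(void)
    {
        apply_preset(PROFILE_HIGH); /* wrong enum, silently accepted as an int */
        return 0;
    }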
The "default" preset is not an actual default, but something Nvidia
decided to just call that way. It yields the worst quality per bitrate
out of all the presets, for no actual benefits. The actual FFmpeg
default is the hq one, which yields the best quality, especially when
twopass mode is enabled.
I can't think of a way to keep the "default" preset without it being
confusing, and since it provides no known benefit, it might as well just
be removed entirely.
(Jim edit: Also made it so that if the settings have the preset set to
"default", it is automatically treated as "hq".)
Closes jp9000/obs-studio#865
Due to crashes caused by hardware acceleration in the media source on
macOS, hardware acceleration in the media source is now disabled by
default for the time being.
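A sketch of what such a platform-dependent default might look like using
the libobs defaults callback; the "hw_decode" setting key is an
assumption for illustration:

    #include <obs.h>

    static void media_source_defaults(obs_data_t *settings)
    {
    #ifdef __APPLE__
        /* hardware decoding off by default on macOS due to the crashes */
        obs_data_set_default_bool(settings, "hw_decode", false);
    #else
        obs_data_set_default_bool(settings, "hw_decode", true);
    #endif
    }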
The encoder name was changed from "nvenc_h264" to "h264_nvenc", and using
the former will produce a warning in the log file, so try the latter
first, then fall back to the former.
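A sketch of that lookup order using FFmpeg's
avcodec_find_encoder_by_name(); the wrapper function name is made up for
illustration:

    #include <libavcodec/avcodec.h>

    static const AVCodec *find_nvenc(void)
    {
        /* prefer the current name, fall back to the old one for older
         * FFmpeg builds */
        const AVCodec *codec = avcodec_find_encoder_by_name("h264_nvenc");
        if (!codec)
            codec = avcodec_find_encoder_by_name("nvenc_h264");
        return codec;
    }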
This reverts commit bd70e73c25b0ec18373c9d2270ebe386c304076c.
It turns out that commit was the result of a miscommunication -- the
commit it was fixing actually worked fine, and this fix was unnecessary.
(Note: This commit also modifies obs-ffmpeg and obs-outputs)
API Changed:
obs_output_info::void (*stop)(void *data);
To:
obs_output_info::void (*stop)(void *data, uint64_t ts);
This fixes the long-standing design flaw where obs_output_stop and the
output 'stop' callback would simply shut the output down without
considering when obs_output_stop was called, discarding any buffered data
and cutting the output off at an unexpected point.
The 'stop' callback of obs_output_info now takes a timestamp, with the
expectation that the output will use that timestamp to stop sending data
in accordance with that timing. obs_output_stop now records the timestamp
at the time the function is called and passes it to the 'stop' callback.
If needed, obs_output_force_stop will still stop the output immediately,
without buffering.
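A sketch of how an output implementation might honor the new timestamp.
The struct, field, and callback names below are assumptions for
illustration; only the new 'stop' signature comes from the API change
itself:

    #include <stdint.h>
    #include <obs.h>
    #include <util/threading.h>

    struct my_output {
        obs_output_t *output;
        volatile bool stopping;
        uint64_t stop_ts;
    };

    static void my_output_stop(void *data, uint64_t ts)
    {
        struct my_output *out = data;

        /* record the requested stop time instead of tearing down right
         * away; buffered packets timed before 'ts' are still sent */
        out->stop_ts = ts;
        os_atomic_set_bool(&out->stopping, true);
    }

    static void my_output_data(void *data, struct encoder_packet *packet)
    {
        struct my_output *out = data;

        /* once stopping, end data capture as soon as a packet falls past
         * the requested stop time (sys_dts_usec is in microseconds, the
         * stop timestamp in nanoseconds) */
        if (os_atomic_load_bool(&out->stopping) &&
            (uint64_t)packet->sys_dts_usec * 1000 >= out->stop_ts) {
            obs_output_end_data_capture(out->output);
            return;
        }

        /* ...otherwise forward the packet to the service as usual... */
    }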
For some reason, in the FFmpeg output this AVCodecContext variable is
being set to 1 by FFmpeg itself somewhere, causing a massive slowdown
when encoding with FFmpeg directly. It should be set to 0 to tell FFmpeg
to use as many threads as necessary.
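For reference, the field described above is presumably
AVCodecContext::thread_count (an assumption based on the wording); a
sketch of explicitly resetting it before opening the codec:

    #include <libavcodec/avcodec.h>

    static int open_codec(AVCodecContext *context, const AVCodec *codec)
    {
        /* 0 lets FFmpeg pick the thread count automatically; 1 forces
         * single-threaded encoding, which is the slowdown described above */
        context->thread_count = 0;

        return avcodec_open2(context, codec, NULL);
    }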
On Windows, if you saved to a file name or directory containing
characters outside the current Windows character set, the file saving
process could fail. This fixes it so that on Windows the program uses
wmain and converts the Unicode command line to a UTF-8 command line,
which works with FFmpeg.
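A minimal sketch of that approach, assuming a hypothetical utf8_main()
entry point; os_wcs_to_utf8_ptr() and bfree() are the libobs helpers for
wide-to-UTF-8 conversion, and their availability here is also an
assumption:

    #ifdef _WIN32
    #include <stdlib.h>
    #include <util/bmem.h>
    #include <util/platform.h>

    extern int utf8_main(int argc, char *argv[]); /* hypothetical */

    int wmain(int argc, wchar_t *argv_w[])
    {
        char **argv = malloc(sizeof(char *) * argc);
        int ret;

        /* convert each wide (UTF-16) argument to UTF-8 so file names
         * outside the current code page survive the trip to FFmpeg */
        for (int i = 0; i < argc; i++)
            os_wcs_to_utf8_ptr(argv_w[i], 0, &argv[i]);

        ret = utf8_main(argc, argv);

        for (int i = 0; i < argc; i++)
            bfree(argv[i]);
        free(argv);
        return ret;
    }
    #endif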
Instead of using an option that turns CBR on/off, this adds rate control
methods: VBR, CBR, CQP, and Lossless.
This moves lossless from being a preset to being a rate control method.
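A hedged sketch of how such a rate control selection might be applied to
an FFmpeg encoder context; the settings keys ("rate_control", "bitrate",
"cqp"), the lossless mapping, and the branching are all assumptions for
illustration, not the actual implementation:

    #include <string.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>
    #include <obs.h>

    static void apply_rate_control(AVCodecContext *context, obs_data_t *settings)
    {
        const char *rc = obs_data_get_string(settings, "rate_control");
        int64_t bitrate = obs_data_get_int(settings, "bitrate");
        int64_t cqp = obs_data_get_int(settings, "cqp");

        if (strcmp(rc, "CBR") == 0) {
            context->bit_rate = bitrate * 1000;
            context->rc_max_rate = bitrate * 1000;
            context->rc_min_rate = bitrate * 1000;
        } else if (strcmp(rc, "VBR") == 0) {
            context->bit_rate = bitrate * 1000;
        } else if (strcmp(rc, "CQP") == 0) {
            /* constant QP through the encoder's private "qp" option */
            av_opt_set_int(context->priv_data, "qp", cqp, 0);
        } else if (strcmp(rc, "Lossless") == 0) {
            /* presumably mapped to the encoder's lossless preset rather
             * than a bitrate target */
            av_opt_set(context->priv_data, "preset", "losslesshp", 0);
        }
    }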
(Also modifies obs-ffmpeg to handle empty frames on EOF)
Previously the demuxer could hit EOF before the decoder threads were
finished, resulting in truncated output. In the worst-case scenario the
demuxer could finish reading small files before ff_decoder_refresh even
had a chance to start the clocks, resulting in no output at all.
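For reference, a generic FFmpeg draining pattern that addresses this kind
of truncation (illustrative only, using the newer send/receive API rather
than the project's actual code): after the demuxer reaches EOF, a NULL
packet puts the decoder into draining mode so the frames it still holds
are emitted instead of being lost.

    #include <libavcodec/avcodec.h>

    static void drain_decoder(AVCodecContext *dec, AVFrame *frame)
    {
        avcodec_send_packet(dec, NULL);

        while (avcodec_receive_frame(dec, frame) == 0) {
            /* hand the flushed frame to the caller/renderer here */
            av_frame_unref(frame);
        }
    }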