The built-in format selector failed in certain cases, such as UtVideo now
using a differently packed RGB format. FFmpeg has built-in format
selection functionality that does pick the correct format
(avcodec_find_best_pix_fmt_of_list), so we can simply use that instead.
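A minimal sketch of that approach, assuming the encoder advertises its
supported formats in codec->pix_fmts (variable names are illustrative):

    /* Pick the closest supported pixel format instead of hand-rolled
     * matching; codec->pix_fmts is the AV_PIX_FMT_NONE-terminated
     * list advertised by the encoder. */
    enum AVPixelFormat closest = avcodec_find_best_pix_fmt_of_list(
            codec->pix_fmts,   /* formats the encoder accepts */
            source_format,     /* format of the incoming frames */
            0,                 /* has_alpha */
            NULL);             /* loss information not needed here */
    context->pix_fmt = closest;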
Certain ffmpeg parameters such as "bufsize" or "maxrate" have to be
applied to the context rather than to "priv_data". To make sure options
are still passed through to the encoder's private settings, set
AV_OPT_SEARCH_CHILDREN.
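A hedged sketch of the difference (the option names come from the text
above; everything else is illustrative):

    /* "maxrate"/"bufsize" are options on the AVCodecContext itself,
     * not on priv_data, so apply them to the context. Passing
     * AV_OPT_SEARCH_CHILDREN lets the same call still reach
     * encoder-private options when an option lives there instead. */
    av_opt_set(context, "maxrate", "2500k", AV_OPT_SEARCH_CHILDREN);
    av_opt_set(context, "bufsize", "2500k", AV_OPT_SEARCH_CHILDREN);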
Code submissions have continually suffered from formatting
inconsistencies that have to be addressed by hand. Using clang-format
simplifies this by keeping the formatting consistent and allowing it to
be automated, so that maintainers can focus on the code itself rather
than on formatting.
After you call av_frame_alloc(), ffmpeg expects you to fill in certain
fields on the frame, depending on whether it's an audio or video frame.
obs-ffmpeg did this in the two places where it allocates video frames,
but not where it allocates audio frames. On my system, using trunk
ffmpeg and the Opus codec, this causes OBS to crash while calling
avcodec_send_frame, ultimately because av_frame_copy fails due to
'dst->format < 0' (as 'format' stays at the default of -1), causing a
null pointer to be added to a buffer queue, which later gets
dereferenced.
Oddly, the fields in question can just be copied directly from
corresponding fields in the AVCodecContext, but I don't see any ffmpeg
API to automatically copy all relevant fields, and all the examples I've
seen do it by hand. So this patch does the same.
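For reference, a sketch of the by-hand copying described above (the set
of fields is what av_frame_get_buffer() needs for audio; the context
variable names are illustrative):

    AVFrame *frame = av_frame_alloc();
    if (!frame)
        return NULL;

    /* Audio frames need these filled in by hand; otherwise 'format'
     * stays at the default of -1 and av_frame_copy() fails later. */
    frame->format         = context->sample_fmt;
    frame->channel_layout = context->channel_layout;
    frame->sample_rate    = context->sample_rate;
    frame->nb_samples     = context->frame_size;

    if (av_frame_get_buffer(frame, 0) < 0) {
        av_frame_free(&frame);
        return NULL;
    }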
(also obs, deps/media-playback, libobs/audio-monitoring, decklink,
linux-alsa, linux-pulseaudio, mac-capture, obs-ffmpeg, win-dshow,
win-wasapi)
Default channel layout for 4 channels is 4.0 in FFmpeg.
Replacing quad with 4.0 will improve compatibility, since FFmpeg has
better support for its default channel layouts.
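A small sketch of the mapping in question (assuming the older int64_t
channel layout API):

    /* FFmpeg's default layout for 4 channels is 4.0 (FL+FR+FC+BC),
     * i.e. AV_CH_LAYOUT_4POINT0 rather than AV_CH_LAYOUT_QUAD. */
    context->channel_layout = av_get_default_channel_layout(4);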
Fixes ticket 1070.
See also
https://obsproject.com/forum/threads/ffmpeg-recording.77378/#post-330473
(related bugs).
The ffmpeg constant AVFMT_RAWPICTURE was deprecated in October 2015
and marked for removal at the avformat major bump to version 58
(ffmpeg commit 34ed5c2, Oct 12, 2015).
The bump occurred with commit 69b5ce6 (Oct 21, 2017).
The constant was subsequently removed (commit 693a11b, Oct 26, 2017).
It was removed from obs-studio with commit d670d7b (from me).
But the code block that was guarded by this constant was not
removed, causing issues with ffmpeg output.
This commit fixes the issue for old ffmpeg builds as well as new ones.
The constant is reintegrated for avformat major version < 58 and removed
for version >= 58 (along with its accompanying code).
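Roughly, the version gate looks like this (a sketch, not the exact
patch; variable names are illustrative):

    #if LIBAVFORMAT_VERSION_MAJOR < 58
        /* old raw-picture path, only valid before the major bump
         * that removed AVFMT_RAWPICTURE */
        if (output->oformat->flags & AVFMT_RAWPICTURE) {
            /* write the raw picture directly */
        } else
    #endif
        {
            /* normal encode + av_interleaved_write_frame() path */
        }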
Thanks to J Lowe for help in solving the bug.
(tested on win 10, macos 10.13, ubuntu 17.10 with ffmpeg head & ffmpeg
3.4.1)
(This commit also modifies the following modules: UI,
deps/media-playback, coreaudio-encoder, decklink, linux-alsa,
linux-pulseaudio, mac-capture, obs-ffmpeg, obs-filters, obs-libfdk,
obs-outputs, win-dshow, and win-wasapi)
Adds surround sound audio support to the core, core plugins, and user
interface.
Compatible streaming services: Twitch, FB 360 live
Compatible protocols: rtmp / mpeg-ts tcp udp
Compatible file formats: mkv mp4 ts (others untested)
Compatible codecs: ffmpeg aac, fdk_aac, CoreAudio aac,
opus, vorbis, pcm (others untested).
Tested streaming servers: wowza, nginx
HLS, mpeg-dash : surround passthrough
Html5 players tested with live surround:
videojs, mediaelement, viblast (hls+dash), hls.js
Decklink: on win32, swap channel order for 5.1 and 7.1
(due to different channel mappings in wav, mpeg, and ffmpeg)
Audio filters: surround working.
Monitoring: surround working (win macOs linux (pulse-audio)).
VST: stereo plugins generally keep only the first two channels;
surround plugins should work (e.g. mcfx does).
OS: win, macOs, linux (alsa, pulse-audio).
Misc: larger audio bitrates unlocked to accommodate more channels
NB: mf-aac only supports mono and stereo + 5.1 on win 10
(not implemented due to lack of usefulness)
Closes jp9000/obs-studio#968
(This commit also modifies the deps/media-playback, obs-ffmpeg, and
win-dshow modules)
More fixes due to ffmpeg renaming some constants and deprecating
AVFMT_RAWPICTURE and AV_PIX_FMT_VDA_VLD.
The latter is replaced by AV_PIX_FMT_VIDEOTOOLBOX per ffmpeg dev advice.
Closes jp9000/obs-studio#1061
This reverts commit bd70e73c25.
Turns out the commit was due to a miscommunication -- the commit it was
fixing actually worked fine, and this fix was unnecessary.
(Note: This commit also modifies obs-ffmpeg and obs-outputs)
API Changed:
obs_output_info::void (*stop)(void *data);
To:
obs_output_info::void (*stop)(void *data, uint64_t ts);
This fixes the long-standing design flaw where obs_output_stop and the
output 'stop' callback would just shut down the output without
considering the timing of when obs_output_stop was called, discarding
any possible buffering and causing the output to get cut off at an
unexpected time.
The 'stop' callback of obs_output_info now takes a timestamp with the
expectation that the output will use that timestamp to stop output data
in accordance with that timing. obs_output_stop now records the
timestamp at the time the function is called and calls the 'stop'
callback with that timestamp. If needed, obs_output_force_stop will
still stop the output immediately without buffering.
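A hedged sketch of what an output implementing the new callback might
look like (names here are illustrative, not from this commit):

    /* The stop callback now receives the timestamp recorded by
     * obs_output_stop(); the output should keep sending buffered
     * data up to that point before shutting down. */
    static void my_output_stop(void *data, uint64_t ts)
    {
        struct my_output *output = data;

        output->stop_ts = ts;    /* stop once packets reach ts */
        output->stopping = true; /* flush instead of cutting off */
    }

    struct obs_output_info my_output_info = {
        .id   = "my_output",
        .stop = my_output_stop,
        /* ... */
    };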
For some reason in the FFmpeg output, this AVCodecContext variable is
being set to 1 by FFmpeg itself somewhere, and it's causing a massive
slowdown when encoding with FFmpeg directly. This should be set to 0 to
specify to use as many threads as necessary.
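Assuming the variable in question is AVCodecContext::thread_count (the
text above does not name it), the fix amounts to:

    /* Sketch, assuming the field is thread_count: 0 tells FFmpeg to
     * pick the number of threads automatically, whereas 1 forces
     * single-threaded encoding and causes the slowdown. */
    context->thread_count = 0;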
This also adds the ability to detect whether it stopped due to lack of
space or not -- particularly useful for the FFmpeg output due to
lossless file format support.
For the FFmpeg output, the encoder ids are sort of superfluous. They
really should be optional. If they're not set, it should use the
encoder name string instead to determine the ids automatically.
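One way to derive the ids from the encoder name, sketched (variable
names are illustrative):

    /* When no explicit encoder id is set, look the encoder up by its
     * name and take the id from there. */
    const AVCodec *codec = avcodec_find_encoder_by_name(encoder_name);
    enum AVCodecID id = codec ? codec->id : AV_CODEC_ID_NONE;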
API changed from:
obs_source_info::get_name(void)
obs_output_info::get_name(void)
obs_encoder_info::get_name(void)
obs_service_info::get_name(void)
API changed to:
obs_source_info::get_name(void *type_data)
obs_output_info::get_name(void *type_data)
obs_encoder_info::get_name(void *type_data)
obs_service_info::get_name(void *type_data)
This allows the type data to be used when getting the name of the
object (useful for plugin wrappers primarily).
NOTE: Though a parameter was added, this is backward-compatible with
older plugins due to calling convention. The new parameter will simply
be ignored by older plugins, and the stack (if used) will be cleaned up
by the caller.
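An illustrative example of the new signature (not taken from this
commit):

    static const char *my_source_get_name(void *type_data)
    {
        /* type_data is whatever was registered alongside the info
         * struct; plugin wrappers can use it to look up the name */
        UNUSED_PARAMETER(type_data);
        return "My Source";
    }

    struct obs_source_info my_source = {
        .id       = "my_source",
        .get_name = my_source_get_name,
        /* ... */
    };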
This is used by some muxers that set AVFMT_NOFILE and doesn't seem to
hurt muxers that don't set it; notably, this makes the hls muxer output
its m3u8 playlist with the proper filename in the proper directory.
This particularly affected audio encoding: audio encoding previously
would count samples and use that count to create an encoding timestamp,
but because a standard integer was used (32-bit by default on x86), it
would max out at about 0x7FFFFFFF samples, which is about 12 hours of
samples at a 48000 sample rate. After that, it would overflow into
negative territory. By changing it to int64_t, audio at 48000 samples
per second would only be able to overflow after about 6.09 million
years. In other words, this should fix the issue for good.
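To illustrate the arithmetic (a sketch, not the exact code):

    /* A 32-bit counter wraps at 0x7FFFFFFF samples, which at 48000
     * samples per second is 0x7FFFFFFF / 48000 ~= 44739 seconds, or
     * roughly 12.4 hours. With int64_t the wrap point moves out to
     * roughly 6.09 million years. */
    int64_t total_samples = 0;   /* was: int total_samples; */

    /* pts expressed in samples, as is typical for audio encoders */
    frame->pts = total_samples;
    total_samples += frame->nb_samples;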
In the settings, if you select the default container, the format
becomes null. If it is null, the audio/video codec ids should not be
set on the output format, as they would both be AV_CODEC_ID_NONE,
causing a context with no streams specified to be created (an error).
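A hedged sketch of the resulting guard (variable names are
illustrative):

    /* Only override the muxer's codec ids when a concrete container
     * format was selected; with the default container, format_name
     * is null and both ids would be AV_CODEC_ID_NONE, yielding a
     * context with no streams. */
    if (format_name) {
        output_format->video_codec = video_codec_id;
        output_format->audio_codec = audio_codec_id;
    }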
Check the actual name of the codec before applying an x264-specific
preset so we don't encounter an "Invalid argument" error when using
other h264 encoders in FFmpeg (such as NVEnc).
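Roughly (a sketch; the exact string handling may differ):

    /* Only apply x264-specific presets when the selected encoder
     * really is libx264; other h264 encoders (e.g. NVEnc) reject
     * them with "Invalid argument". */
    if (strcmp(codec->name, "libx264") == 0)
        av_opt_set(context->priv_data, "preset", preset, 0);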
Closes jp9000/obs-studio#412
Core API functions changed:
-----------------------------
EXPORT bool obs_reset_audio(struct audio_output_info *aoi);
EXPORT bool obs_get_audio_info(struct audio_output_info *aoi);
To:
-----------------------------
EXPORT bool obs_reset_audio(const struct obs_audio_info *oai);
EXPORT bool obs_get_audio_info(struct obs_audio_info *oai);
Core structure added:
-----------------------------
struct obs_audio_info {
        uint32_t samples_per_sec;
        enum speaker_layout speakers;
        uint64_t buffer_ms;
};
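Typical usage with the new structure (a minimal sketch; the values are
illustrative):

    struct obs_audio_info oai = {
        .samples_per_sec = 44100,
        .speakers        = SPEAKERS_STEREO,
        .buffer_ms       = 1000,
    };

    if (!obs_reset_audio(&oai)) {
        /* handle failure */
    }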
Non-interleaved (planar) floating point output is standard with audio
filtering, so to prevent audio filters from having to worry about
different audio format implementations and for the sake of consistency
between user interfaces, make it so that audio is always set to
non-interleaved floating point output.
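For illustration, a filter seeing planar float data gets one buffer per
channel (a sketch, not code from this commit):

    static struct obs_audio_data *my_filter_audio(void *data,
            struct obs_audio_data *audio)
    {
        UNUSED_PARAMETER(data);

        /* With planar float output, data[c] is a separate float
         * buffer for channel c. */
        for (size_t c = 0; c < MAX_AV_PLANES && audio->data[c]; c++) {
            float *samples = (float *)audio->data[c];
            for (uint32_t i = 0; i < audio->frames; i++)
                samples[i] *= 0.5f; /* e.g. attenuate by ~6 dB */
        }

        return audio;
    }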
This makes FFmpeg usable as an output, and removes or changes most of
the code that was originally intended for testing purposes.
Changes the settings for the FFmpeg output to the following:
* url: Sets the output URL or file path
* video_bitrate: Sets the video bitrate
* audio_bitrate: Sets the audio bitrate
* video_encoder: Sets the video encoder (by name, blank for default)
* audio_encoder: Sets the audio encoder (by name, blank for default)
* video_settings: Sets custom video encoder FFmpeg settings
* audio_settings: Sets custom audio encoder FFmpeg settings
* scale_width: Image scale width (0 if none)
* scale_height: Image scale height (0 if none)
scale_width and scale_height are provided because the output may
internally convert formats, and it can be a bit more optimal to use
that scaler instead of the pre-output scaler when a format conversion
has to happen internally anyway (though you can do it either way you
wish).
Video format handling has also changed; it will now attempt to use the
closest format to the current format if available for a given video
codec.
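For example, configuring the output from code might look like this (a
sketch; the values, the "ffmpeg_output" id, and the obs_output_create
signature are assumptions that may differ by version):

    obs_data_t *settings = obs_data_create();
    obs_data_set_string(settings, "url", "output.mkv");
    obs_data_set_int(settings, "video_bitrate", 2500);
    obs_data_set_int(settings, "audio_bitrate", 160);
    obs_data_set_string(settings, "video_encoder", "libx264");
    obs_data_set_string(settings, "audio_encoder", "aac");
    obs_data_set_string(settings, "video_settings", "preset=veryfast");
    obs_data_set_string(settings, "audio_settings", "");
    obs_data_set_int(settings, "scale_width", 0);  /* no rescaling */
    obs_data_set_int(settings, "scale_height", 0);

    obs_output_t *output = obs_output_create("ffmpeg_output",
            "ffmpeg output", settings, NULL);
    obs_data_release(settings);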