(This commit also modifies the obs-ffmpeg module)
The default channel layouts from the AAC spec are implemented in the
FFmpeg native AAC encoder as follows:
AV_CH_LAYOUT_MONO,
AV_CH_LAYOUT_STEREO,
AV_CH_LAYOUT_SURROUND,
AV_CH_LAYOUT_4POINT0,
AV_CH_LAYOUT_5POINT0_BACK,
AV_CH_LAYOUT_5POINT1_BACK,
AV_CH_LAYOUT_7POINT1,
The correspondence of speaker layouts to FFmpeg's AV_CH_LAYOUT values is
changed to reflect the table above.
Although the FFmpeg native AAC encoder can now encode all the layouts
listed in avutil channel_layout.h (on master), there might be issues
with older FFmpeg binaries.
Note that the 2.1 speaker layout will be encoded as AV_CH_LAYOUT_SURROUND
(FL FR FC) because 2.1 is not listed as a default layout for three
channels.
This just means some optimizations for the LFE channel will not be used
by the encoder, which will treat it as an SCE (single channel element).
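A minimal sketch of the resulting correspondence, assuming the libobs
speaker_layout enum names; the actual mapping lives in the FFmpeg-facing
plugin code and may differ in detail:

#include <stdint.h>
#include <obs.h>
#include <libavutil/channel_layout.h>

/* Illustrative only: map OBS speaker layouts to the FFmpeg default
 * channel layouts listed above. */
static uint64_t obs_to_av_channel_layout(enum speaker_layout layout)
{
        switch (layout) {
        case SPEAKERS_MONO:    return AV_CH_LAYOUT_MONO;
        case SPEAKERS_STEREO:  return AV_CH_LAYOUT_STEREO;
        case SPEAKERS_2POINT1: return AV_CH_LAYOUT_SURROUND; /* 2.1 -> FL FR FC */
        case SPEAKERS_4POINT0: return AV_CH_LAYOUT_4POINT0;
        case SPEAKERS_5POINT1: return AV_CH_LAYOUT_5POINT1_BACK;
        case SPEAKERS_7POINT1: return AV_CH_LAYOUT_7POINT1;
        default:               return AV_CH_LAYOUT_STEREO; /* fallback */
        }
}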
Closes jp9000/obs-studio#1182
When using more than two channels, the channel map of pulse-audio is incorrect.
Add an API for getting a speaker map based on OBS speaker layout. Then use the
speaker map when connecting to a pulse-audio device, for both source and
monitor output.
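A minimal sketch of what such a speaker map can look like on the
pulse-audio side, assuming a 5.1 layout (function name and channel
ordering are illustrative):

#include <pulse/channelmap.h>

/* Illustrative only: fill a pulse-audio channel map for a 5.1 source or
 * monitor connection. */
static void fill_channel_map_5point1(pa_channel_map *map)
{
        pa_channel_map_init(map);
        map->channels = 6;
        map->map[0] = PA_CHANNEL_POSITION_FRONT_LEFT;
        map->map[1] = PA_CHANNEL_POSITION_FRONT_RIGHT;
        map->map[2] = PA_CHANNEL_POSITION_FRONT_CENTER;
        map->map[3] = PA_CHANNEL_POSITION_LFE;
        map->map[4] = PA_CHANNEL_POSITION_REAR_LEFT;
        map->map[5] = PA_CHANNEL_POSITION_REAR_RIGHT;
}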
The memset in custom_audio_render() did not clear all audio buffers when
the number of output channels was less than 8. This caused wrong audio
output on mixes that did not get cleared.
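A sketch of the intended clearing behavior, assuming the libobs
MAX_AUDIO_MIXES / MAX_AUDIO_CHANNELS / AUDIO_OUTPUT_FRAMES constants
(the buffer layout shown is illustrative):

#include <stddef.h>
#include <string.h>
#include <obs.h>

/* Illustrative only: clear every channel of every mix, not just the
 * channels of the current output speaker layout. */
static void clear_all_mix_buffers(
        float *buffers[MAX_AUDIO_MIXES][MAX_AUDIO_CHANNELS])
{
        for (size_t mix = 0; mix < MAX_AUDIO_MIXES; mix++) {
                for (size_t ch = 0; ch < MAX_AUDIO_CHANNELS; ch++) {
                        if (buffers[mix][ch])
                                memset(buffers[mix][ch], 0,
                                       AUDIO_OUTPUT_FRAMES * sizeof(float));
                }
        }
}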
Closes jp9000/obs-studio#1123
(also obs, deps/media-playback, libobs/audio-monitoring, decklink,
linux-alsa, linux-pulseaudio, mac-capture, obs-ffmpeg, win-dshow,
win-wasapi)
The default channel layout for 4 channels is 4.0 in FFmpeg.
Replacing quad with 4.0 improves compatibility, since FFmpeg has better
support for its default channel layouts.
(also modifies obs-ffmpeg, audio-monitoring, win-wasapi, decklink,
obs-outputs)
Removes speaker layouts which are not exposed in the UI. The speaker
layouts selectable by users in the UI are the most common ones; it is
not necessary to keep the other layouts. (This basically removes
5POINT1_SURROUND, 7POINT1_SURROUND, and SURROUND = 3.0.)
AVCodecParameters wasn't introduced until avcodec version 57.48.101
(FFmpeg version 3.1), so this makes sure the older avcodec_copy_context
is still used if the detected FFmpeg version is earlier.
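A sketch of the version guard this implies (the stream and context
variables are illustrative):

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

/* Illustrative only: prefer AVCodecParameters on FFmpeg 3.1+ and fall
 * back to the older avcodec_copy_context on earlier versions. */
static int copy_codec_settings(AVStream *stream, AVCodecContext *context)
{
#if LIBAVCODEC_VERSION_INT >= AV_VERSION_INT(57, 48, 101)
        return avcodec_parameters_from_context(stream->codecpar, context);
#else
        return avcodec_copy_context(stream->codec, context);
#endif
}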
(This commit also modifies the following modules: UI,
deps/media-playback, coreaudio-encoder, decklink, linux-alsa,
linux-pulseaudio, mac-capture, obs-ffmpeg, obs-filters, obs-libfdk,
obs-outputs, win-dshow, and win-wasapi)
Adds surround sound audio support to the core, core plugins, and user
interface.
Compatible streaming services: Twitch, FB 360 live
Compatible protocols: rtmp / mpeg-ts tcp udp
Compatible file formats: mkv mp4 ts (others untested)
Compatible codecs: ffmpeg aac, fdk_aac, CoreAudio aac,
opus, vorbis, pcm (others untested).
Tested streaming servers: wowza, nginx
HLS, mpeg-dash: surround passthrough
HTML5 players tested with live surround:
videojs, mediaelement, viblast (hls+dash), hls.js
Decklink: on win32, swap channel order for 5.1 and 7.1
(due to different channel mapping on wav, mpeg, ffmpeg)
Audio filters: surround working.
Monitoring: surround working (win, macOS, linux (pulse-audio)).
VST: stereo plugins generally keep only the first two channels;
surround plugins should work (e.g. mcfx does).
OS: win, macOS, linux (alsa, pulse-audio).
Misc: larger audio bitrates unlocked to accommodate more channels
NB: mf-aac only supports mono and stereo + 5.1 on win 10
(not implemented due to lack of usefulness)
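For reference, a hedged sketch of how a front-end can request a surround
pipeline through the existing obs_reset_audio call (sample rate
illustrative):

#include <stdbool.h>
#include <obs.h>

/* Illustrative only: request a 5.1 audio pipeline from libobs. */
static bool request_5point1_audio(void)
{
        struct obs_audio_info oai = {
                .samples_per_sec = 48000,
                .speakers = SPEAKERS_5POINT1,
        };

        return obs_reset_audio(&oai);
}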
Closes jp9000/obs-studio#968
(This commit also modifies the deps/media-playback, obs-ffmpeg, and
win-dshow modules)
More fixes due to FFmpeg renaming some constants and deprecating
AVFMT_RAWPICTURE and AV_PIX_FMT_VDA_VLD.
The latter is replaced by AV_PIX_FMT_VIDEOTOOLBOX per FFmpeg dev advice.
Closes jp9000/obs-studio#1061
This function had a number of bugs and just wasn't working properly at
all. This function is currently not used in public builds because the
GPU is used for color conversion instead (which is probably why it had
not really been tested), but a need came up where CPU conversion was
useful for the sake of testing something else in the back-end, and it
needed to be fixed before CPU conversion could be used.
Prevents lagged frames (frames that took too long to render) from being
counted as skipped frames (frames that are skipped due to encoding
taking too long to process).
When frames are skipped the skipped frame count would increment, but the
total frame count would not increment, causing the percentage
calculation to fail.
Additionally, the skipped frames log reporting has been moved to
media-io/video-io.c instead of each output.
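A small sketch of the accounting this implies (struct and field names
are illustrative):

#include <stdint.h>

/* Illustrative only: both counters advance when a frame is skipped, so
 * the percentage below never uses a stale total. */
struct frame_stats {
        uint64_t skipped_frames;
        uint64_t total_frames;
};

static double skipped_percentage(const struct frame_stats *stats)
{
        if (!stats->total_frames)
                return 0.0;

        return (double)stats->skipped_frames * 100.0 /
               (double)stats->total_frames;
}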
(Note: Also modified the obs-ffmpeg plugin module)
Allows frame data to pass 8-bit grayscale images (Y800 color format).
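A minimal sketch of what handling Y800 data amounts to, assuming a
single 8-bit luma plane (function and parameter names illustrative):

#include <stdint.h>
#include <string.h>

/* Illustrative only: a Y800 frame is one plane of width x height bytes,
 * copied row by row to honor the source and destination linesizes. */
static void copy_y800_plane(uint8_t *dst, uint32_t dst_linesize,
                            const uint8_t *src, uint32_t src_linesize,
                            uint32_t width, uint32_t height)
{
        for (uint32_t y = 0; y < height; y++)
                memcpy(dst + y * dst_linesize, src + y * src_linesize,
                       width);
}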
Closes jp9000/obs-studio#515
(Note: This commit breaks libobs compilation. Skip if bisecting)
This variable is somewhat redundant. Volume is already known/accessible
to front-ends.
(Note: This commit breaks libobs compilation. Skip if bisecting)
Uses a callback and allows the caller to mix audio. Additionally, the
callback can return the audio later, allowing it to buffer as much as
it needs.
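A hypothetical shape of such a callback, purely to illustrate the
description (names and parameters are invented, not the actual libobs
signature):

#include <stdbool.h>
#include <stdint.h>

struct audio_output_data; /* per-mix planar audio buffers (opaque here) */

/* Illustrative only: the caller mixes audio for the requested timestamp
 * range into 'mixes', or returns false to deliver the audio later after
 * buffering as much as it needs. */
typedef bool (*audio_mix_callback_t)(void *param, uint64_t start_ts,
                                     uint64_t end_ts, uint64_t *new_ts,
                                     uint32_t active_mixers,
                                     struct audio_output_data *mixes);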
The skipped frame count (dropped frames due to encoding being
overloaded) would erroneously include lagged frames (dropped frames due
to render stalls). This will make diagnosing actual issues a user might
be having a bit easier.
With certain devices (AVerMedia C985 and LGP), audio timestamps are
bad, and a 50ms threshold of audio data "smoothing" (making consecutive
audio packets seamless with one another) isn't enough to handle bad
consecutive timestamp values. After testing, 70ms sufficiently solves
the issue.
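A sketch of the smoothing test this describes, with the threshold
expressed in nanoseconds (macro and function names illustrative):

#include <stdbool.h>
#include <stdint.h>

#define SMOOTHING_THRESHOLD 70000000ULL /* 70 ms in nanoseconds */

/* Illustrative only: timestamps within the threshold are treated as
 * continuous and stitched to the previous packet. */
static inline bool ts_is_continuous(uint64_t expected_ts,
                                    uint64_t actual_ts)
{
        uint64_t diff = (expected_ts > actual_ts)
                                ? expected_ts - actual_ts
                                : actual_ts - expected_ts;

        return diff < SMOOTHING_THRESHOLD;
}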
This improves logging for when audio data insertion is way out of bounds
or is getting cut off in the front due to a bad negative sync offset.
Instead of emitting a log message every time this happens with each
piece of data, it now only states when the out-of-bounds or cutoff
condition has started and stopped.
This fixes a case where an insertion of audio data would pass
valid_timestamp_range, yet the insert position would come out negative
and wrap around as an unsigned integer.
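A sketch of the guard this fix implies (names illustrative):

#include <stdint.h>

/* Illustrative only: if the data starts before the buffer, clamp the
 * insert position to zero instead of letting the subtraction wrap
 * around as an unsigned value. */
static inline uint64_t safe_insert_offset(uint64_t buffer_start_ts,
                                          uint64_t data_ts)
{
        if (data_ts < buffer_start_ts)
                return 0;

        return data_ts - buffer_start_ts;
}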
The minimum and maximum color range values were not being set by the
video_format_get_parameters function when full range was in use; I
assume it's because the expected min/max values for full range are
{0.0, 0.0, 0.0} and {1.0, 1.0, 1.0}, so I'm just making it so that it
sets those values when using full range.
Way to replicate the issue (Windows):
1.) Create a win-dshow device capture source
2.) Select a device and set it to a YUV color format to enable YUV color
conversion
3.) Select "Full Range" in the color range property.
4.) Restart OBS; the device will then start up but display green because
the min/max range values were never set and are left at their default
value of 0.
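A small sketch of the fix described above (the surrounding
video_format_get_parameters logic is omitted; array handling
illustrative):

#include <string.h>

/* Illustrative only: full range gets explicit {0,0,0} / {1,1,1} bounds
 * instead of being left zeroed. */
static void set_full_range_bounds(float min_range[3], float max_range[3])
{
        static const float full_min[3] = {0.0f, 0.0f, 0.0f};
        static const float full_max[3] = {1.0f, 1.0f, 1.0f};

        memcpy(min_range, full_min, sizeof(full_min));
        memcpy(max_range, full_max, sizeof(full_max));
}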
API changed:
--------------------------
void obs_output_set_audio_encoder(
obs_output_t *output,
obs_encoder_t *encoder);
obs_encoder_t *obs_output_get_audio_encoder(
const obs_output_t *output);
obs_encoder_t *obs_audio_encoder_create(
const char *id,
const char *name,
obs_data_t *settings);
Changed to:
--------------------------
/* 'idx' specifies the track index of the output */
void obs_output_set_audio_encoder(
obs_output_t *output,
obs_encoder_t *encoder,
size_t idx);
/* 'idx' specifies the track index of the output */
obs_encoder_t *obs_output_get_audio_encoder(
const obs_output_t *output,
size_t idx);
/* 'mixer_idx' specifies the mixer index to capture audio from */
obs_encoder_t *obs_audio_encoder_create(
const char *id,
const char *name,
obs_data_t *settings,
size_t mixer_idx);
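A hedged usage sketch using the new track-indexed signatures exactly as
listed above (encoder id, names, and null settings are illustrative;
later libobs versions may take extra parameters):

#include <stddef.h>
#include <obs.h>

/* Illustrative only: two AAC encoders capture audio from mixers 0 and 1
 * and are attached to tracks 0 and 1 of the same output. */
static void setup_two_audio_tracks(obs_output_t *output)
{
        obs_encoder_t *track0 = obs_audio_encoder_create(
                "ffmpeg_aac", "track 0 audio", NULL, 0);
        obs_encoder_t *track1 = obs_audio_encoder_create(
                "ffmpeg_aac", "track 1 audio", NULL, 1);

        obs_output_set_audio_encoder(output, track0, 0);
        obs_output_set_audio_encoder(output, track1, 1);
}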
Overview
--------------------------
This feature allows multiple audio mixers to be used at a time. This
capability was added with surprisingly little extra overhead. Audio
will not be mixed unless it's assigned to a specific mixer, and mixers
will not mix unless they have an active mix connection.
Mostly this will be useful for being able to separate out specific audio
for recording versus streaming, but will also be useful for certain
streaming services that support multiple audio streams via RTMP.
I didn't want to use a variable number of mixers due to the desire to
reduce heap allocations, so currently I set the limit to 4 simultaneous
mixers; this number can be increased later if needed, but honestly I
feel like it's just the right number to use.
Sources:
Sources can now specify which audio mixers their audio is mixed to; this
can be a single mixer or multiple mixers at a time. The
obs_source_set_audio_mixers function sets the audio mixers (as a
bitmask) which an audio source applies to. For example, 0xF would mean
that the source applies to all four mixers.
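For instance, a sketch matching the 0xF example (the source pointer is
illustrative):

#include <obs.h>

/* Illustrative only: route this source's audio to all four mixers. */
static void route_to_all_mixers(obs_source_t *source)
{
        obs_source_set_audio_mixers(source, 0xF);
}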
Audio Encoders:
Audio encoders now must specify which specific audio mixer they use when
they encode audio data.
Outputs:
Outputs that use encoders can now support multiple audio tracks at once
if they have the OBS_OUTPUT_MULTI_TRACK capability flag set. This is
mostly only useful for certain types of RTMP transmissions, though it
may also be useful later on for file formats that support multiple
audio tracks.
Previously, the design for the interaction between the encoder thread
and the graphics thread was that the encoder thread would signal to the
graphics thread when to start drawing each frame. The original idea
behind this was to prevent mutually cascading stalls of encoding or
graphics rendering (i.e., if rendering took too long, then encoding
would have to catch up, then rendering would have to catch up again, and
so on, cascading upon each other). The ultimate goal was to prevent
encoding from impacting graphics and vice versa.
However, eventually it was realized that there were some fundamental
flaws with this design.
1. Stray frame duplication. You could not guarantee that a frame would
render on time, so sometimes frames would unintentionally be lost if
there was any sort of minor hiccup or, I'm guessing, if the thread
took too long to be scheduled.
2. Frame timing in the rendering thread was less accurate. The only
place where frame timing was accurate was in the encoder thread, and
the graphics thread was at the whim of thread scheduling. On higher
end computers it was typically fine, but it was just generally not
guaranteed that a frame would be rendered when it was supposed to be
rendered.
So the solution (originally proposed by r1ch and paibox) is to instead
keep the encoding and graphics threads separate as usual, but instead of
the encoder thread controlling the graphics thread, the graphics thread
now controls the encoder thread. The encoder thread keeps a limited
cache of frames, then the graphics thread copies frames into the cache
and increments a semaphore to schedule the encoder thread to encode that
data.
In the cache, each frame has an encode counter. If the frame cache is
full (e.g., the encoder is taking too long to return frames), it will
not cache a new frame, but will instead just increment the counter on
the last frame in the cache to schedule that frame to encode again,
ensuring that frames are on time and reducing CPU usage by lowering
video complexity. If the graphics thread takes too long to render a
frame, then it will add that frame with the count value set to the
total number of frames that were missed (actual legitimately duplicated
frames).
Because the cache gives many frames of breathing room for the encoder to
encode frames, this design helps improve results especially when using
encoding presets that have higher complexity and CPU usage, minimizing
the risk of needlessly skipped or duplicated frames.
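A minimal sketch of the cache-with-counter idea described above
(structure and names are illustrative, not the actual libobs code):

#include <stddef.h>
#include <stdint.h>

#define FRAME_CACHE_SIZE 4

struct cached_frame {
        uint64_t timestamp;
        int count; /* how many encodes this frame represents */
        /* frame planes omitted */
};

struct frame_cache {
        struct cached_frame frames[FRAME_CACHE_SIZE];
        size_t first; /* index of the oldest cached frame */
        size_t num;   /* number of frames currently cached */
};

/* Graphics thread: push a frame, or fold it into the newest entry if
 * the encoder has fallen behind and the cache is full; 'duplicates' is
 * the number of frames missed because rendering itself took too long. */
static void cache_push(struct frame_cache *cache, uint64_t timestamp,
                       int duplicates)
{
        if (cache->num == FRAME_CACHE_SIZE) {
                size_t last = (cache->first + cache->num - 1) %
                              FRAME_CACHE_SIZE;
                cache->frames[last].count += 1 + duplicates;
                return; /* the encoder re-encodes the last frame instead */
        }

        size_t idx = (cache->first + cache->num++) % FRAME_CACHE_SIZE;
        cache->frames[idx].timestamp = timestamp;
        cache->frames[idx].count = 1 + duplicates;
        /* ...then post a semaphore so the encoder thread wakes up */
}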
I also managed to sneak in what should be a bit of an optimization to
reduce copying of frame data, though how much of an optimization it
ultimately ends up being is debatable.
So to sum it up, this commit increases accuracy of frame timing,
completely removes stray frame duplication, gives better results for
higher complexity encoding presets, and potentially optimizes the frame
pipeline a tiny bit.
In video-io.c, video frames could be skipped, but the skipped frame's
timestamp would then repeat for the next frame, giving the next frame a
non-monotonic timestamp, and then jump. This could mess up syncing
slightly when the frame is finally given to an output.
70 milliseconds is a bit too high for the default audio timestamp
smoothing threshold. The full range of error thus becomes 140
milliseconds, which is a bit more than necessary to worry about. For
the time being, I feel it may be worth it to try 50 milliseconds.