Currently, ifdefs are used to determine whether monitoring is supported.
This is difficult to maintain and prevents plugins from knowing whether
monitoring is supported by OBS. This adds a runtime function to fix
that issue.
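On the plugin side, such a check would look roughly like the sketch
below. The export name obs_audio_monitoring_available() is assumed
here; consult obs.h for the actual function.

    #include <obs-module.h>

    /* Sketch: gate monitoring-related properties on a runtime check
     * instead of compile-time ifdefs.  Assumes a libobs export named
     * obs_audio_monitoring_available(); the exact name may differ. */
    static void add_monitoring_properties(obs_properties_t *props)
    {
            if (!obs_audio_monitoring_available())
                    return; /* no monitoring backend on this platform */

            obs_properties_add_bool(props, "monitor_audio",
                                    "Monitor this source's audio");
    }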
As os_gettime_ns() grows large, the current scaling methods, mostly
plain 64-bit multiplication after casting to uint64_t, may lead to
numerical overflow because the intermediate product can exceed 64 bits
before the divide. Sweep the code and use util_mul_div64() where
applicable.
Signed-off-by: Hans Petter Selasky <hps@selasky.org>
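For example, converting the current time into a sample position can
overflow when the multiplication is done first in plain 64-bit
arithmetic. A minimal sketch, assuming util_mul_div64() is declared in
util/util_uint64.h:

    #include <util/platform.h>    /* os_gettime_ns() */
    #include <util/util_uint64.h> /* util_mul_div64(), assumed header */

    /* Naive version:
     *     uint64_t samples = ts * (uint64_t)rate / 1000000000ULL;
     * overflows once ts * rate no longer fits in 64 bits.  The helper
     * keeps the intermediate product wide enough: */
    static uint64_t ns_to_samples(uint64_t rate)
    {
            uint64_t ts = os_gettime_ns();
            return util_mul_div64(ts, rate, 1000000000ULL);
    }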
This reverts commit 22aa66a6eb.
Apparently, starting audio on the fly like this can introduce latency
into the audio playback, so for now revert it. It was a bit of a
precautionary thing rather than an actual fix anyway, so it probably
wasn't all that necessary to begin with.
This prevents audio monitoring from initializing unless audio is
actually played back through the source, so that many browser sources
no longer initialize audio monitoring all at once needlessly when their
audio is not being rerouted to OBS.
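Conceptually, the monitor is now created on the first audio delivery
instead of at source creation time. An illustrative sketch; the names
below are hypothetical, not the actual libobs internals:

    #include <obs.h>

    /* Hypothetical sketch: defer creating the monitoring backend until
     * the source actually delivers audio, instead of creating it when
     * the source is created. */
    struct audio_monitor; /* opaque monitoring backend */

    struct source_audio_state {
            struct audio_monitor *monitor; /* NULL until audio arrives */
            enum obs_monitoring_type type;
    };

    /* hypothetical constructor, stands in for the real internals */
    extern struct audio_monitor *
    audio_monitor_create_for(struct source_audio_state *s);

    static void on_source_audio(struct source_audio_state *s,
                                const struct obs_source_audio *audio)
    {
            if (s->type != OBS_MONITORING_TYPE_NONE && !s->monitor)
                    s->monitor = audio_monitor_create_for(s);

            /* ... feed 'audio' to the monitor and the mix as usual ... */
            (void)audio;
    }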
Code submissions have continually suffered from formatting
inconsistencies that constantly have to be addressed. Using
clang-format simplifies this by making code formatting more consistent,
and allows automation of the code formatting so that maintainers can
focus more on the code itself instead of code formatting.
(also obs, deps/media-playback, libobs/audio-monitoring, decklink,
linux-alsa, linux-pulseaudio, mac-capture, obs-ffmpeg, win-dshow,
win-wasapi)
The default channel layout for 4 channels in FFmpeg is 4.0.
Replacing quad with 4.0 will improve compatibility, since FFmpeg has
better support for its default channel layouts.
(also modifies obs-ffmpeg, audio-monitoring, win-wasapi, decklink,
obs-outputs)
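In FFmpeg terms this means mapping 4-channel audio to
AV_CH_LAYOUT_4POINT0 rather than AV_CH_LAYOUT_QUAD. A sketch of such a
mapping; the helper name is illustrative, and the layouts chosen for
the other cases may differ from what obs-ffmpeg actually uses:

    #include <stdint.h>
    #include <libavutil/channel_layout.h>
    #include <media-io/audio-io.h>

    static uint64_t speakers_to_ff_layout(enum speaker_layout layout)
    {
            switch (layout) {
            case SPEAKERS_MONO:    return AV_CH_LAYOUT_MONO;
            case SPEAKERS_STEREO:  return AV_CH_LAYOUT_STEREO;
            case SPEAKERS_4POINT0: return AV_CH_LAYOUT_4POINT0; /* not QUAD */
            case SPEAKERS_5POINT1: return AV_CH_LAYOUT_5POINT1;
            case SPEAKERS_7POINT1: return AV_CH_LAYOUT_7POINT1;
            default:               return AV_CH_LAYOUT_STEREO;
            }
    }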
Removes speaker layouts which are not exposed in the UI. The speaker
layouts selectable by users in the UI are the most common ones; it is
not necessary to keep the other layouts. (This removes
5POINT1_SURROUND, 7POINT1_SURROUND, and SURROUND (3.0).)
(This commit also modifies the following modules: UI,
deps/media-playback, coreaudio-encoder, decklink, linux-alsa,
linux-pulseaudio, mac-capture, obs-ffmpeg, obs-filters, obs-libfdk,
obs-outputs, win-dshow, and win-wasapi)
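The reduced enum in media-io/audio-io.h then roughly mirrors the UI
choices; a sketch, assuming the 4-channel layout is named
SPEAKERS_4POINT0 per the quad -> 4.0 change above (check the header for
the authoritative definition):

    enum speaker_layout {
            SPEAKERS_UNKNOWN,
            SPEAKERS_MONO,    /* 1 channel  */
            SPEAKERS_STEREO,  /* 2 channels */
            SPEAKERS_2POINT1, /* 2.1 */
            SPEAKERS_4POINT0, /* 4.0 */
            SPEAKERS_4POINT1, /* 4.1 */
            SPEAKERS_5POINT1, /* 5.1 */
            SPEAKERS_7POINT1, /* 7.1 */
    };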
Adds surround sound audio support to the core, core plugins, and user
interface.
Compatible streaming services: Twitch, FB 360 live
Compatible protocols: rtmp, mpeg-ts (tcp, udp)
Compatible file formats: mkv mp4 ts (others untested)
Compatible codecs: ffmpeg aac, fdk_aac, CoreAudio aac,
opus, vorbis, pcm (others untested).
Tested streaming servers: wowza, nginx
HLS, mpeg-dash: surround passthrough
HTML5 players tested with live surround:
videojs, mediaelement, viblast (hls+dash), hls.js
Decklink: on win32, swap channel order for 5.1 and 7.1
(due to different channel mappings in wav, mpeg, ffmpeg)
Audio filters: surround working.
Monitoring: surround working (win, macOS, linux (pulse-audio)).
VST: stereo plugins generally keep only the first two channels;
surround plugins should work (e.g. mcfx does).
OS: win, macOS, linux (alsa, pulse-audio).
Misc: larger audio bitrates unlocked to accommodate more channels
NB: mf-aac only supports mono and stereo + 5.1 on win 10
(not implemented due to lack of usefulness)
Closes jp9000/obs-studio#968
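With more than two channels possible, code that previously assumed
stereo now has to ask the audio subsystem how many channels are active.
A minimal sketch using audio_output_get_channels(); the bitrate numbers
are purely illustrative, not what the UI uses:

    #include <obs.h>

    static int suggested_audio_bitrate(void)
    {
            size_t channels = audio_output_get_channels(obs_get_audio());

            if (channels <= 2)
                    return 160; /* mono/stereo */
            else if (channels <= 6)
                    return 320; /* up to 5.1 */
            else
                    return 448; /* 7.1 */
    }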
Decoupling the audio from the video causes the audio to be played right
when it's received rather than attempting to sync up with the video
frames.
This is useful with certain async sources/devices when the audio/video
timestamps are not reliable.
Naturally, because it plays audio right when it's received, this should
only be used when the async source is operating in unbuffered mode;
otherwise the video frame timing will be out of sync by the amount of
buffering the video currently has.
(Note: This commit also modifies the linux-pulseaudio, mac-capture, and
win-wasapi plugins)
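For an async source with unreliable timestamps, the intended usage is
to enable unbuffered mode together with decoupling. A sketch, assuming
the setters obs_source_set_async_unbuffered() and
obs_source_set_async_decoupled() exposed in obs.h:

    #include <obs.h>

    /* Play audio as soon as it arrives and disable video buffering so
     * the two stay roughly together despite bad device timestamps. */
    static void configure_unreliable_device(obs_source_t *source)
    {
            obs_source_set_async_unbuffered(source, true);
            obs_source_set_async_decoupled(source, true);
    }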
Do not prevent the targeted output device from being monitored if the
selected monitor output device is a different one.
Closes jp9000/obs-studio#872
On Windows, when a source has only audio (no video) yet is marked as
capable of both video and audio, it would be programmed to expect a
video frame to synchronize with. This fixes that potential issue by
detecting whether any video is actually playing.
Adds functions to turn on audio monitoring, allowing the user to hear
playback of an audio source over the user's speakers. A source can be
set to turn off monitoring and only output to stream, to output only to
monitoring, or to do both.
On Windows, audio monitoring uses WASAPI. Windows is also capable of
syncing the audio to the video according to when the video frame itself
was played.
On macOS, it uses AudioQueue.
On Linux, it's not currently implemented and won't do anything (to be
implemented).
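From the API side, monitoring is selected per source with a monitoring
type. A short sketch of enabling monitor-and-output (enum and setter as
declared in obs.h):

    #include <obs.h>

    /* Hear the source locally while still sending it to the mix;
     * OBS_MONITORING_TYPE_MONITOR_ONLY would instead mute it in the
     * stream/record output. */
    static void monitor_and_output(obs_source_t *source)
    {
            obs_source_set_monitoring_type(
                    source, OBS_MONITORING_TYPE_MONITOR_AND_OUTPUT);
    }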