Currently, ifdefs are used to determine whether monitoring is
supported. This is difficult to maintain and prevents plugins from
knowing whether monitoring is supported by OBS. This adds a runtime
function to fix that issue.
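A minimal sketch of what such a runtime check could look like; the
function name and the platform conditions here are assumptions for
illustration, not the actual libobs code:

    #include <stdbool.h>

    /* Hypothetical runtime check exported by libobs so plugins can ask
     * whether a monitoring backend was compiled in. */
    bool obs_audio_monitoring_available(void)
    {
    #if defined(_WIN32) || defined(__APPLE__) || defined(HAVE_PULSEAUDIO)
            return true;  /* WASAPI, CoreAudio, or PulseAudio backend */
    #else
            return false; /* no monitoring backend on this platform */
    #endif
    }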
Previously we would wait for PulseAudio to attempt to read from the
monitor source, and for OBS to have buffered at least 5ms of audio
data, before we tried to fill the buffer. In some cases this resulted
in consistently triggering underruns in PulseAudio.
Instead, we now try to fill the buffer immediately as OBS outputs audio
data, for as long as the PA buffer is not full. We also stop trying to
grow the buffer to prevent underruns once we reach 1s of latency.
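A minimal sketch of the eager-fill strategy, assuming a hypothetical
struct monitor and ring-buffer helpers pending_bytes()/pop_pending()
that track the audio OBS has output so far:

    #include <pulse/pulseaudio.h>

    struct monitor {
            pa_threaded_mainloop *mainloop;
            pa_stream *stream;
            /* ... ring buffer of audio output by OBS ... */
    };

    size_t pending_bytes(struct monitor *m);    /* hypothetical */
    size_t pop_pending(struct monitor *m, void *dst, size_t n);

    static void try_fill(struct monitor *m)
    {
            pa_threaded_mainloop_lock(m->mainloop);

            /* Write for as long as PA can accept data and we have data
             * queued, instead of waiting for PA to read from us first. */
            size_t writable = pa_stream_writable_size(m->stream);
            while (writable > 0 && pending_bytes(m) > 0) {
                    void *dst;
                    size_t n = writable;

                    if (pa_stream_begin_write(m->stream, &dst, &n) != 0)
                            break;

                    n = pop_pending(m, dst, n);
                    pa_stream_write(m->stream, dst, n, NULL, 0,
                                    PA_SEEK_RELATIVE);
                    writable = pa_stream_writable_size(m->stream);
            }

            pa_threaded_mainloop_unlock(m->mainloop);
    }

Separately, the target buffer length is no longer grown to dodge
underruns once it corresponds to 1s of latency.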
When signed 32-bit audio arrived at pulseaudio-output and the volume
was lowered, the audio data was broken. In the function `process_volume`,
the type of the data is switched on `bytes_per_channel`. However, a
signed 32-bit integer and a float have the same size, so the signed
32-bit integer was processed as float.
This commit changes these items:
- Use `format` instead of `bytes_per_channel` so that all the sample
  types can be differentiated.
- Change `short` to `int16_t` and rename the existing function
  `process_short` to `process_s16` to clarify that the function
  processes signed 16-bit samples.
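A sketch of the resulting dispatch, with the surrounding plumbing
reduced to illustrative names:

    #include <pulse/sample.h>
    #include <stdint.h>
    #include <stddef.h>

    static void process_s16(int16_t *p, size_t n, float vol)
    {
            for (size_t i = 0; i < n; i++)
                    p[i] = (int16_t)((float)p[i] * vol);
    }

    static void process_s32(int32_t *p, size_t n, float vol)
    {
            for (size_t i = 0; i < n; i++)
                    p[i] = (int32_t)((double)p[i] * vol);
    }

    static void process_float(float *p, size_t n, float vol)
    {
            for (size_t i = 0; i < n; i++)
                    p[i] *= vol;
    }

    static void process_volume(pa_sample_format_t format, void *data,
                               size_t samples, float vol)
    {
            switch (format) {
            case PA_SAMPLE_S16LE:
                    process_s16(data, samples, vol);
                    break;
            case PA_SAMPLE_S32LE:
                    /* same byte size as float, so bytes_per_channel
                     * could not distinguish this case */
                    process_s32(data, samples, vol);
                    break;
            case PA_SAMPLE_FLOAT32LE:
                    process_float(data, samples, vol);
                    break;
            default:
                    break;
            }
    }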
PulseAudio code needs to be called with the PA lock held. This chiefly
fixes multiple races during stream shutdown:
* If the functions are called without the lock held, deferred event
handling races end up with PA infinite looping on the mainloop, or
asserting, or other badness, as the reentrant calls cause data
structure corruption on the PA side.
* If we don't reset our callbacks, PA might call us even after we
request stream disconnection (since the stream actually getting fully
shut down is asynchronous), and then we dereference NULL pointers from
our userdata etc. PA keeps its own data structures alive as long as
necessary via reference counting, but not ours.
The lock around pa_stream_begin_write doesn't result from any issues I
experienced, but it looks correct; PA doesn't say anywhere that that
function is thread-safe.
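A sketch of the shutdown sequence this implies, with illustrative
struct and function names:

    #include <pulse/pulseaudio.h>

    struct monitor {
            pa_threaded_mainloop *mainloop;
            pa_stream *stream;
    };

    static void stream_stop(struct monitor *m)
    {
            pa_threaded_mainloop_lock(m->mainloop);

            /* Reset callbacks first so PA cannot call back into us
             * while the (asynchronous) disconnect is still in flight. */
            pa_stream_set_write_callback(m->stream, NULL, NULL);
            pa_stream_set_state_callback(m->stream, NULL, NULL);
            pa_stream_set_underflow_callback(m->stream, NULL, NULL);

            pa_stream_disconnect(m->stream);
            pa_stream_unref(m->stream);
            m->stream = NULL;

            pa_threaded_mainloop_unlock(m->mainloop);
    }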
As os_gettime_ns() gets large, the current scaling methods, mostly
plain multiplication after casting to uint64_t, may lead to numerical
overflows. Sweep the code and use util_mul_div64() where applicable.
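For example (a hedged sketch; the header location is assumed):

    #include <stdint.h>
    #include "util/util_uint64.h" /* util_mul_div64() */

    /* Convert a nanosecond timestamp to a sample count.  The naive
     *     ts_ns * sample_rate / 1000000000ULL
     * overflows the 64-bit intermediate once ts_ns * sample_rate
     * exceeds UINT64_MAX; util_mul_div64() computes num * mul / div
     * without that intermediate overflow. */
    static uint64_t ns_to_samples(uint64_t ts_ns, uint32_t sample_rate)
    {
            return util_mul_div64(ts_ns, sample_rate, 1000000000ULL);
    }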
Signed-off-by: Hans Petter Selasky <hps@selasky.org>
This reverts commit 22aa66a6eb.
Apparently, starting audio on the fly like this can introduce latency
into the audio playback, so for now revert it. It was a bit of a
precautionary thing rather than an actual fix anyway, so it probably
wasn't all that necessary to begin with.
This prevents audio monitoring from initializing unless audio is
actually played back through the source, which keeps many browser
sources from needlessly initializing audio monitoring all at once when
their audio is not being rerouted to OBS.
Code submissions have continually suffered from formatting
inconsistencies that constantly have to be addressed. Using
clang-format simplifies this by making code formatting more consistent,
and allows automation of the code formatting so that maintainers can
focus more on the code itself instead of code formatting.
As of commit 2eb5a22, CoreAudio devices that use one device handle for
both input and output can no longer be used for audio monitoring. This
commit fixes that.
Tested with the built-in output, a Magewell XI100DUSB-HDMI which is
input only, and a MOTU UltraLite audio interface, which shows as
output/input capable.
When building on macOS with a case-sensitive filesystem, the project
build fails because AudioToolBox/AudioQueue.h is not found. The correct
path is AudioToolbox/AudioQueue.h. Since macOS ships with a
case-insensitive filesystem by default, this is a rare occurrence,
affecting only developers who have changed it to be case-sensitive.
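The one-character fix, for clarity:

    /* wrong, fails on case-sensitive filesystems:
     *     #include <AudioToolBox/AudioQueue.h>
     * correct, matches the framework's actual capitalization: */
    #include <AudioToolbox/AudioQueue.h>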
This little change skips returning true immediately when a call to
CoreAudio fails. Instead, it mirrors the behavior of the next three
calls below this change, checking for unfreed pointers before also
returning true.
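A hedged sketch of that control flow; the function and variable names
are illustrative, not the actual code:

    #include <CoreFoundation/CoreFoundation.h>
    #include <stdbool.h>

    static bool handle_device(CFStringRef cf_name, CFStringRef cf_uid,
                              OSStatus stat)
    {
            if (stat != noErr)
                    goto cleanup; /* don't bail out before freeing */

            /* ... normal processing of cf_name / cf_uid ... */

    cleanup:
            if (cf_name)
                    CFRelease(cf_name);
            if (cf_uid)
                    CFRelease(cf_uid);
            return true;
    }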
When using more than two channels, the channel map given to PulseAudio
is incorrect. Add an API for getting a speaker map based on the OBS
speaker layout, then use that speaker map when connecting to a
PulseAudio device, for both the source and the monitor output.
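A sketch of such a mapping; the function name is an assumption for
illustration:

    #include <pulse/channelmap.h>
    #include "media-io/audio-io.h" /* enum speaker_layout */

    static void pulseaudio_speaker_map(enum speaker_layout layout,
                                       pa_channel_map *map)
    {
            pa_channel_map_init(map);

            switch (layout) {
            case SPEAKERS_STEREO:
                    pa_channel_map_init_stereo(map);
                    break;
            case SPEAKERS_5POINT1:
                    map->channels = 6;
                    map->map[0] = PA_CHANNEL_POSITION_FRONT_LEFT;
                    map->map[1] = PA_CHANNEL_POSITION_FRONT_RIGHT;
                    map->map[2] = PA_CHANNEL_POSITION_FRONT_CENTER;
                    map->map[3] = PA_CHANNEL_POSITION_LFE;
                    map->map[4] = PA_CHANNEL_POSITION_REAR_LEFT;
                    map->map[5] = PA_CHANNEL_POSITION_REAR_RIGHT;
                    break;
            /* ... remaining layouts ... */
            default:
                    pa_channel_map_init_stereo(map);
                    break;
            }
    }

The resulting map is then passed to pa_stream_new() alongside the
sample spec when the stream is created.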
This pull request changes the fallback sample format for PulseAudio
from PA_SAMPLE_S16LE to PA_SAMPLE_FLOAT32LE.
The PulseAudio plugin can handle the following sample formats:
* PA_SAMPLE_U8
* PA_SAMPLE_S16LE
* PA_SAMPLE_S32LE
* PA_SAMPLE_FLOAT32LE
When an audio device advertises itself with a different format, the
PulseAudio plugin will ask PulseAudio to convert it to the fallback
sample format.
The fallback PA_SAMPLE_S16LE is not ideal when your audio interface
advertises PA_SAMPLE_S24LE, since the conversion loses precision. With
PA_SAMPLE_FLOAT32LE there is no precision loss, and it also matches
OBS's internal format.
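A sketch of the negotiation; the helper name is illustrative:

    #include <pulse/sample.h>

    static pa_sample_format_t choose_format(pa_sample_format_t advertised)
    {
            switch (advertised) {
            case PA_SAMPLE_U8:
            case PA_SAMPLE_S16LE:
            case PA_SAMPLE_S32LE:
            case PA_SAMPLE_FLOAT32LE:
                    return advertised; /* handled natively */
            default:
                    /* e.g. PA_SAMPLE_S24LE: let PA convert to float,
                     * which loses no precision and matches OBS's
                     * internal format */
                    return PA_SAMPLE_FLOAT32LE;
            }
    }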
(also obs, deps/media-playback, libobs/audio-monitoring, decklink,
linux-alsa, linux-pulseaudio, mac-capture, obs-ffmpeg, win-dshow,
win-wasapi)
The default channel layout for 4 channels is 4.0 in FFmpeg. Replacing
quad with 4.0 will improve compatibility, since FFmpeg has better
support for its default channel layouts.
(also modifies obs-ffmpeg, audio-monitoring, win-wasapi, decklink,
obs-outputs)
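This can be verified against FFmpeg directly (a small illustrative
check):

    #include <assert.h>
    #include <libavutil/channel_layout.h>

    static void check_default_4ch(void)
    {
            /* FFmpeg's default layout for 4 channels is 4.0
             * (FL+FR+FC+BC), not quad (FL+FR+BL+BR). */
            assert(av_get_default_channel_layout(4) ==
                   AV_CH_LAYOUT_4POINT0);
            assert(av_get_default_channel_layout(4) !=
                   AV_CH_LAYOUT_QUAD);
    }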
Removes speaker layouts which are not exposed in the UI. The speaker
layouts selectable by users in the UI are the most common ones; it is
not necessary to keep the other layouts. (This basically removes
5POINT1_SURROUND, 7POINT1_SURROUND, and SURROUND = 3.0.)
(This commit also modifies the following modules: UI,
deps/media-playback, coreaudio-encoder, decklink, linux-alsa,
linux-pulseaudio, mac-capture, obs-ffmpeg, obs-filters, obs-libfdk,
obs-outputs, win-dshow, and win-wasapi)
Adds surround sound audio support to the core, core plugins, and user
interface.
Compatible streaming services: Twitch, FB 360 live
Compatible protocols: rtmp / mpeg-ts tcp udp
Compatible file formats: mkv mp4 ts (others untested)
Compatible codecs: ffmpeg aac, fdk_aac, CoreAudio aac,
opus, vorbis, pcm (others untested).
Tested streaming servers: wowza, nginx
HLS, mpeg-dash : surround passthrough
Html5 players tested with live surround:
videojs, mediaelement, viblast (hls+dash), hls.js
Decklink: on win32, swap channel order for 5.1 and 7.1
(due to different channel mappings in wav, mpeg, ffmpeg)
Audio filters: surround working.
Monitoring: surround working (win, macOS, linux (pulse-audio)).
VST: stereo plugins in general keep only the first two channels;
surround plugins should work (e.g. mcfx does).
OS: win, macOS, linux (alsa, pulse-audio).
Misc: larger audio bitrates unlocked to accommodate more channels
NB: mf-aac only supports mono and stereo + 5.1 on win 10
(not implemented due to lack of usefulness)
Closes jp9000/obs-studio#968
Decoupling the audio from the video causes the audio to be played right
when it's received rather than attempting to sync up to the video
frames. This is useful with certain async sources/devices whose
audio/video timestamps are not reliable.
Naturally because it plays audio right when it's received, this should
only be used when the async source is operating in unbuffered mode,
otherwise the video frame timing will be out of sync by the amount of
buffering the video currently has.
(Note: This commit also modifies the linux-pulseaudio, mac-capture, and
win-wasapi plugins)
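A hedged sketch of how a caller might enable this, assuming setters
named as below:

    #include <obs.h>

    static void configure_async_source(obs_source_t *source)
    {
            /* Decoupled audio only makes sense in unbuffered mode;
             * buffered video would otherwise lag behind the
             * immediately-played audio. */
            obs_source_set_async_unbuffered(source, true);
            obs_source_set_async_decoupled(source, true);
    }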
Do not prevent the targeted output device from being monitored if the
selected monitor output device is a different one.
Closes jp9000/obs-studio#872
On Windows, when a source has only audio (no video) yet is marked as
capable of both video and audio, it would be programmed to expect a
video frame to synchronize with. This fixes that potential issue by
detecting whether any video is actually playing.
Adds functions to turn on audio monitoring, allowing the user to hear
playback of an audio source over the user's speakers. It can be set to
turn off monitoring and only output to the stream, to output only to
monitoring, or to do both.
On Windows, audio monitoring uses WASAPI. Windows is also capable of
syncing the audio to the video according to when the video frame itself
was played.
On macOS, it uses AudioQueue.
On Linux, it's not currently implemented and won't do anything (to be
implemented).
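A sketch of how the three modes could be selected through the API,
assuming the enum and setter are named as below:

    #include <obs.h>

    static void set_monitoring(obs_source_t *source, bool monitor,
                               bool output)
    {
            enum obs_monitoring_type type;

            if (monitor && output)
                    type = OBS_MONITORING_TYPE_MONITOR_AND_OUTPUT;
            else if (monitor)
                    type = OBS_MONITORING_TYPE_MONITOR_ONLY;
            else
                    type = OBS_MONITORING_TYPE_NONE;

            obs_source_set_monitoring_type(source, type);
    }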