We expect a variable with double precision. Since the value of these
doubles isn't read with a particular precision, this can often lead to
unpredictable results where the value set isn't the one intended, due to
the precision lost in the float->double conversion.
When drawing the cursor to the window capture area, use the actual
resource width and height instead of the system metric values for icons.
Fixes an issue where, under rare circumstances, certain cursors would
not draw at the correct size.
Closes obsproject/obs-studio#1284
Adds the ability to (optionally) create/update a DirectShow device
source synchronously rather than always updating the device
asynchronously. This is useful when creating/destroying scenes/sources
very quickly, and helps minimize the risk of creating new DirectShow
sources that use the same device yet may not activate because an
existing source already exists. To use, set "synchronous_activate" to
true in the source's settings when updating or creating it. Note that
this setting is erased after it's used and is not saved to user
settings, so it must be set each time in order to take effect.
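As a minimal sketch (assuming an existing dshow source pointer; the
other calls are from the libobs settings API):

    /* Sketch: enable synchronous activation for an existing DirectShow
     * source; assumes "source" was created from the "dshow_input" type. */
    obs_data_t *settings = obs_source_get_settings(source);
    obs_data_set_bool(settings, "synchronous_activate", true);
    obs_source_update(source, settings); /* the flag is erased after this */
    obs_data_release(settings);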
Closes obsproject/obs-studio#1228
After you call av_frame_alloc(), ffmpeg expects you to fill in certain
fields on the frame, depending on whether it's an audio or video frame.
obs-ffmpeg did this in the two places where it allocates video frames,
but not where it allocates audio frames. On my system, using trunk
ffmpeg and the Opus codec, this causes OBS to crash while calling
avcodec_send_frame, ultimately because av_frame_copy fails due to
'dst->format < 0' (as 'format' stays at the default of -1), causing a
null pointer to be added to a buffer queue, which later gets
dereferenced.
Oddly, the fields in question can just be copied directly from
corresponding fields in the AVCodecContext, but I don't see any ffmpeg
API to automatically copy all relevant fields, and all the examples I've
seen do it by hand. So this patch does the same.
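A minimal sketch of that by-hand copying, assuming an already-configured
AVCodecContext named "context":

    /* Fill the audio frame fields FFmpeg expects before avcodec_send_frame();
     * otherwise 'format' stays at -1 and av_frame_copy fails. */
    AVFrame *frame = av_frame_alloc();
    frame->format         = context->sample_fmt;
    frame->channels       = context->channels;
    frame->channel_layout = context->channel_layout;
    frame->sample_rate    = context->sample_rate;
    frame->nb_samples     = context->frame_size;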
If the user selected the "default" device for audio capture on macOS, it
could no longer be switched to another device because the
"default_device" variable remained set.
In the vt_h264_video_info function, the format of the video to be encoded is
always set to VIDEO_FORMAT_NV12, despite the presence of code to set the
video format to VIDEO_FORMAT_I420 or VIDEO_FORMAT_I444. This commit fixes
that function to respect the user's video format choice.
In addition, whilst testing this fix initially, I also discovered that the
4:4:4 colour format is not supported by the VideoToolbox H264 encoder.
Looking at the VideoToolbox code in ffmpeg as a reference, the ffmpeg code
errors out if a color format other than NV12 or I420 is set. Therefore, this
commit also logs a warning about I444 not being supported, and uses the NV12
default.
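A rough sketch of that fallback, using libobs types (the exact warning text
and surrounding code are assumptions):

    if (info->format == VIDEO_FORMAT_I444) {
        blog(LOG_WARNING, "I444 is not supported by VideoToolbox; "
                          "falling back to NV12");
        info->format = VIDEO_FORMAT_NV12;
    }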
The fullrange variable is used to set appropriate video format information in
vt_h264_video_info. Set fullrange first so the video format data is correct in
all cases.
The cutoff hack was added many, many years ago as recommended by
Konverter. Since then, there has been much work on the AAC encoder, so
this hack should no longer be necessary.
The Windows Media Foundation H264 encoders have been deprecated for over
a year, and Microsoft's Media Foundation AAC encoder has had a continued
issue with occasional random audio glitches. The FFmpeg AAC encoder has
had recent development and is more than sufficient to handle the task of
encoding in terms of both quality and performance, so it's better to
just use the FFmpeg encoder from here on out.
As this plugin is no longer needed, it'll still be compiled and included
for the next year or two, but as a blank plugin that does nothing. The
reason it's still being included as a blank no-operation plugin is to
overwrite older versions of the plugin. That way, if a user installs a
newer OBS version over an older one, it won't load the older win-mf
plugin where the encoders were still enabled.
This also fixes some rarely reported media foundation crashes that can
happen on startup.
Fixes an issue where align_pos could be smaller than
sizeof(struct shmem_data), potentially overwriting memory of the header.
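One way such a guard can look (a sketch only; the actual alignment value
and variable names in the hook are assumptions):

    /* Keep the aligned data offset from landing inside the shared-memory
     * header; 32-byte alignment is assumed here for illustration. */
    uintptr_t align_pos = (pos + 31) & ~(uintptr_t)31;
    if (align_pos < sizeof(struct shmem_data))
        align_pos = (sizeof(struct shmem_data) + 31) & ~(uintptr_t)31;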
References jp9000/obs-studio#1202
(This commit also modifies the obs-ffmpeg module)
The default channel layouts from the AAC spec are implemented in the
FFmpeg native AAC encoder as follows:
AV_CH_LAYOUT_MONO,
AV_CH_LAYOUT_STEREO,
AV_CH_LAYOUT_SURROUND,
AV_CH_LAYOUT_4POINT0,
AV_CH_LAYOUT_5POINT0_BACK,
AV_CH_LAYOUT_5POINT1_BACK,
AV_CH_LAYOUT_7POINT1,
The correspondence of OBS speaker layouts to FFmpeg AV_CH_LAYOUT values
is changed to reflect the table above.
Although the FFmpeg native AAC encoder can now (on master) encode all
the layouts listed in avutil's channel_layout.h, there might be issues
with older FFmpeg binaries.
Note that the 2.1 speaker layout will be encoded as AV_CH_LAYOUT_SURROUND
(FL FR FC) because 2.1 is not listed as the default layout for three
channels.
This just means some optimizations for the LFE channel will not be used
by the encoder, which will treat it as an SCE (single channel element).
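A hedged sketch of the correspondence implied by the table above (helper
name hypothetical; the 4.1 and 5.1 rows follow the five- and six-channel
defaults from the table, and the actual code may differ):

    static uint64_t obs_to_av_channel_layout(enum speaker_layout layout)
    {
        switch (layout) {
        case SPEAKERS_MONO:    return AV_CH_LAYOUT_MONO;
        case SPEAKERS_STEREO:  return AV_CH_LAYOUT_STEREO;
        case SPEAKERS_2POINT1: return AV_CH_LAYOUT_SURROUND;     /* no 2.1 default */
        case SPEAKERS_4POINT0: return AV_CH_LAYOUT_4POINT0;
        case SPEAKERS_4POINT1: return AV_CH_LAYOUT_5POINT0_BACK; /* 5-ch default */
        case SPEAKERS_5POINT1: return AV_CH_LAYOUT_5POINT1_BACK; /* 6-ch default */
        case SPEAKERS_7POINT1: return AV_CH_LAYOUT_7POINT1;
        default:               return AV_CH_LAYOUT_STEREO;
        }
    }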
Closes jp9000/obs-studio#1182
Adjusts the enc-amf submodule remote to the jp9000 fork, which tests AMF
support in a separate process before attempting to initialize AMF
in-process. Reduces the possibility of driver crashes caused by AMF
initialization.
This code is being reverted/removed because it is a risk to startup
stability. The check will be added again in the future, but it needs to
be executed from a secondary piped process instead of directly in the
main process, to reduce the risk of driver/hardware issues impacting
program startup. The same needs to be done for the AMF plugin as well.
When using more than two channels, the PulseAudio channel map is incorrect.
Add an API for getting a speaker map based on the OBS speaker layout, then
use that speaker map when connecting to a PulseAudio device, for both the
source and monitor output.
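A minimal sketch of such a mapping, using PulseAudio's channel-map types
(the helper name and the subset of layouts shown are illustrative):

    #include <pulse/channelmap.h>

    static void obs_speakers_to_pa_map(enum speaker_layout layout,
                                       pa_channel_map *map)
    {
        pa_channel_map_init(map);

        switch (layout) {
        case SPEAKERS_STEREO:
            map->channels = 2;
            map->map[0] = PA_CHANNEL_POSITION_FRONT_LEFT;
            map->map[1] = PA_CHANNEL_POSITION_FRONT_RIGHT;
            break;
        case SPEAKERS_5POINT1:
            map->channels = 6;
            map->map[0] = PA_CHANNEL_POSITION_FRONT_LEFT;
            map->map[1] = PA_CHANNEL_POSITION_FRONT_RIGHT;
            map->map[2] = PA_CHANNEL_POSITION_FRONT_CENTER;
            map->map[3] = PA_CHANNEL_POSITION_LFE;
            map->map[4] = PA_CHANNEL_POSITION_REAR_LEFT;
            map->map[5] = PA_CHANNEL_POSITION_REAR_RIGHT;
            break;
        default: /* remaining layouts handled the same way */
            pa_channel_map_init_stereo(map);
            break;
        }
    }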
This reverts commit 94b5982216.
Reverting this commit because it had some negative side effects, such as
adding 500 milliseconds to the startup time. NVENC detection should
really be done through its proper API, and not via creating an encoder
on startup.
There were cases where the channel format could be set to 7, which used
to be a valid format but now no longer is. If that format is set, just
use SPEAKERS_7POINT1 instead.
Makes it a bit clearer that this option shouldn't be used unless you're
on SLI/Crossfire.
In the future, the program should detect laptops and warn users about
how to set up their adapter for efficient capture.
Closes jp9000/obs-studio#1138
This pull request changes the fallback sample format for PulseAudio
from PA_SAMPLE_S16LE to PA_SAMPLE_FLOAT32LE.
The PulseAudio plugin can handle the following sample formats:
* PA_SAMPLE_U8
* PA_SAMPLE_S16LE
* PA_SAMPLE_S32LE
* PA_SAMPLE_FLOAT32LE
When an audio device advertises itself with another format, the PulseAudio
plugin will ask PulseAudio to convert to the fallback sample format.
The fallback PA_SAMPLE_S16LE is not ideal when your audio interface
advertises itself as PA_SAMPLE_S24LE, since the conversion will lose
precision. With PA_SAMPLE_FLOAT32LE there is no precision loss, and it also
matches OBS's internal format.
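Roughly, the fallback spec then looks like this (rate and channel count
are placeholders):

    #include <pulse/sample.h>

    pa_sample_spec spec;
    spec.format   = PA_SAMPLE_FLOAT32LE; /* previously PA_SAMPLE_S16LE */
    spec.rate     = 48000;               /* placeholder sample rate */
    spec.channels = 2;                   /* placeholder channel count */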
Some audio devices, such as Soundflower, do not have a fixed number of
channels. This was previously fixed by defaulting the speaker layout to
stereo. With surround sound support, the default has been changed to the
output speaker layout as set in Settings > Audio.
Closes jp9000/obs-studio#1110
The list of channel layouts available for decklink input is missing the
2.1 and 4.1 layouts; this commit adds them. This aligns the decklink
input with the speaker layouts available at the outputs. Having
different layouts for input and output invokes the FFmpeg resampler,
which remixes the channels in a non-trivial way except when downmixing
to stereo. This patch avoids such an uncontrolled remix of channels with
decklink input.
The CoreAudio AAC encoder has bitrate maps specific to speaker layouts.
Previously, the bitrate map maxed out at 320 kbps and was the map for
stereo. The bitrate map is now tailored to the speaker layout, which in
practice unlocks higher bitrates (for instance, up to 960 kbps for 7.1).
Additionally, the commit fixes a bug with 2.1 where the channels were
not ordered correctly.
(also obs, deps/media-playback, libobs/audio-monitoring, decklink,
linux-alsa, linux-pulseaudio, mac-capture, obs-ffmpeg, win-dshow,
win-wasapi)
The default channel layout for 4 channels is 4.0 in FFmpeg.
Replacing quad with 4.0 improves compatibility, since FFmpeg has better
support for its default channel layouts.
(also modifies obs-ffmpeg, audio-monitoring, win-wasapi, decklink,
obs-outputs)
Removes speaker layouts which are not exposed in the UI. The speaker
layouts selectable by users in the UI are the most common ones; it is
not necessary to keep the other layouts. (This basically removes
5POINT1_SURROUND, 7POINT1_SURROUND, and SURROUND (3.0).)
Noise Suppression: Clamp sample values before converting to integer.
This fixes an issue where samples exceeding full scale would overflow,
resulting in heavy distortion.
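A sketch of the clamp before the float-to-int16 conversion (buffer names
and frame count are hypothetical):

    /* Clamp each float sample to [-1, 1] before scaling to 16-bit so that
     * over-full-scale input saturates instead of wrapping around. */
    for (size_t i = 0; i < frames; i++) {
        float s = in[i];
        if (s > 1.0f)
            s = 1.0f;
        else if (s < -1.0f)
            s = -1.0f;
        out[i] = (int16_t)(s * 32767.0f);
    }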
Closes jp9000/obs-studio#1113
Fixes ticket 1070.
See also
https://obsproject.com/forum/threads/ffmpeg-recording.77378/#post-330473
(related bugs).
The FFmpeg constant AVFMT_RAWPICTURE was deprecated in October 2015 and
marked for removal at the avformat major bump to version 58 (FFmpeg
commit 34ed5c2, Oct 12, 2015). The bump occurred with commit 69b5ce6
(Oct 21, 2017). The constant was subsequently removed (commit 693a11b,
Oct 26, 2017). It was removed from obs-studio with commit d670d7b (from
me), but the code block which was executed with this constant was not
removed, causing issues with FFmpeg output.
The commit fixes the issue for old FFmpeg builds as well as new ones.
The constant is reintegrated for avformat major version < 58 and removed
for version >= 58 (along with its accompanying code).
Thanks to J Lowe for help in solving the bug.
(tested on win 10, macos 10.13, ubuntu 17.10 with ffmpeg head & ffmpeg
3.4.1)
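The version gate is roughly of this shape (structure member names are
assumptions; only the guard itself is the point):

    #include <libavformat/version.h>

    #if LIBAVFORMAT_VERSION_MAJOR < 58
        /* Old libavformat: keep the AVFMT_RAWPICTURE special case; this
         * whole block drops out entirely for version >= 58. */
        if (output->oformat->flags & AVFMT_RAWPICTURE) {
            /* ...legacy raw-picture write path... */
        }
    #endif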
If the target process re-creates its D3D context, the game capture tick
can trigger before the capture is set up, in which case OBS gets a
CAPTURE_RETRY message. However, with the memory capture method it
continues trying to copy from the shared memory pointer, which is no
longer valid, resulting in a crash. The fix uses the old texture until
the next tick, at which point the new capture should be ready for use.
Code Changes:
- Removed flag BUILD_AMF_ENCODER.
- AMF SDK is now a submodule of the plugin, no longer requiring extra steps.
Plugin Changes:
- 'Bitrate.Target' has been renamed to 'bitrate' internally, improving support for "New Networking Code" and "Replay Buffer" which incorrectly rely on this value instead of taking the average bitrate of packets sent over the last second.
- Drivers with a runtime older than 1.4.6.0 are blacklisted now.
- All hidden and experimental options have been removed.
Windows SDK 10.0.16299.0 defines these structures as part of winternl.h
but using different types and names. Unfortunately there's no macro to
detect the SDK version, so to avoid conflicting with newer / older SDKs
the OBS structs have been renamed.
When this was being fixed up, the incorrect function name was used --
however it still compiled because the author was using the newer FFmpeg
version at the time.
The common multi-channel setups are 5.1 and 7.1 with rear speakers.
Thus only setups that include SPEAKER_SIDE_LEFT and SPEAKER_SIDE_RIGHT
need to be marked as uncommon (or "side" use), while internally they
remain true side setups (with side speakers).
This "side" layout is named "surround" by Microsoft. To avoid confusing
users and translators, it is wise to put a "Side" mark next to the
format name.
Certain NVIDIA GPUs don't support NVENC, but ship with the NVENC
library, causing OBS to mistakenly think that NVENC is available when it
actually isn't.
Closes jp9000/obs-studio#1087
Implement automatic audio device selection for devices that use two
separate DirectShow filters for audio and video instead of having a
single filter with audio and video output pins.
Please note that this fix is currently only active for Elgato USB and
PCIe devices (e.g. Cam Link, HD60 S, HD60 Pro, 4K60 Pro) to avoid
unintentionally changing the behavior for any other devices (e.g.
webcams).
(Jim edit: This fixes an issue with newer Elgato devices where the
devices would not automatically have their audio coupled with the video;
users would have to manually select the audio device in order to get
audio functioning.)
Closes jp9000/obs-studio#1081
(This commit also modifies the following modules: UI,
deps/media-playback, coreaudio-encoder, decklink, linux-alsa,
linux-pulseaudio, mac-capture, obs-ffmpeg, obs-filters, obs-libfdk,
obs-outputs, win-dshow, and win-wasapi)
Adds surround sound audio support to the core, core plugins, and user
interface.
Compatible streaming services: Twitch, FB 360 live
Compatible protocols: rtmp / mpeg-ts tcp udp
Compatible file formats: mkv mp4 ts (others untested)
Compatible codecs: ffmpeg aac, fdk_aac, CoreAudio aac,
opus, vorbis, pcm (others untested).
Tested streaming servers: wowza, nginx
HLS, mpeg-dash : surround passthrough
Html5 players tested with live surround:
videojs, mediaelement, viblast (hls+dash), hls.js
Decklink: on win32, swap channels order for 5.1 7.1
(due to different channel mapping on wav, mpeg, ffmpeg)
Audio filters: surround working.
Monitoring: surround working (win, macOS, linux (pulse-audio)).
VST: stereo plugins generally keep only the first two channels;
surround plugins should work (e.g. mcfx does).
OS: win, macOS, linux (alsa, pulse-audio).
Misc: larger audio bitrates unlocked to accommodate more channels
NB: mf-aac only supports mono and stereo + 5.1 on win 10
(not implemented due to lack of usefulness)
Closes jp9000/obs-studio#968
When device timing is used, it shouldn't be modifying the timestamp.
Fixes an issue where certain devices with large audio segments would
seem a bit out of sync.
(This commit also modifies the deps/media-playback, obs-ffmpeg, and
win-dshow modules)
More fixes due to FFmpeg renaming some constants and deprecating
AVFMT_RAWPICTURE and AV_PIX_FMT_VDA_VLD.
The latter is replaced by AV_PIX_FMT_VIDEOTOOLBOX per FFmpeg dev advice.
Closes jp9000/obs-studio#1061
The new code in 3032535f56 would signal that the output has stopped to
the back-end and front-end, but the event used in the outputs themselves
to shut down the send thread would still be signaled, causing the next
connection to immediately stop as soon as it had started. This fixes it
so that the event does not get signaled unless the thread is active.
Use unbuffered async mode by default, and when in unbuffered mode,
decouple audio/video so that audio plays as soon as it's received.
This is a workaround for decklink device drivers having unreliable
video/audio timestamps (audio/video sync drifting over time). From
testing, it seems that the handling of video and audio is completely
separate in the driver, along with the timestamp calculations. For
example, when the thread of the decklink audio callback is stalled, the
timestamps of the audio alone go out of sync, which indicates timestamps
are calculated more or less on the spot, independent of what video is
doing (which is how we replicated the issue fixed by b63e4b055e).
Because the decklink drivers treat audio and video as essentially
decoupled, we must also treat them as decoupled; this is what was
causing video/audio to drift out of sync over time.
(This commit also modifies UI)
Instead of pinging Twitch every time the program starts up, OBS now only
pings for new servers when the ingests are actually being used, and when
the UI uses the auto-configuration dialog.
If ingests have not been cached when using the "Auto" server, it will
wait for 3 seconds max to query the Twitch ingest API. If it takes
longer than 3 seconds or fails, it will defer to SF. If ingests were
already cached, then it will use the existing cache immediately.
Video playback doesn't work if the default format is MJPEG and there are
other formats to use; this is because the useDefaultConfig variable is
still set to true, which overrides the format value that would normally
tell it to convert to RGB.
"Life is Feudal: Your Own" will use Direct3D to render the game, then
OpenGL to render its in-game menus, which causes a conflict with itself.
This specifically blacklists the game from capturing OpenGL to prevent
that from happening.
It would appear that the timestamp values returned by devices are not
perfectly accurate, and in some cases may be calculated on the spot --
to combat that, it's best to subtract the audio segment's duration from
the current audio segment's timestamp to ensure the timing is as
accurate as possible.
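In other words, something along these lines (variable names are
illustrative):

    /* Anchor the timestamp at the start of the segment by subtracting the
     * segment's duration from the device-reported timestamp. */
    uint64_t duration_ns = (uint64_t)frames * 1000000000ULL / sample_rate;
    uint64_t timestamp   = device_timestamp - duration_ns;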
The end of an FLV tag contains the size of the tag, but the code was
erroneously including the trailing size value itself in that size, which
it's not supposed to do.
(This commit also modifies the obs-outputs module)
The first video packet's offset (the value used to set the starting
point of video data) would be set to the DTS value of the first video
packet. However, when b-frames are used, the first DTS value will be
negative. This was originally done because FLV muxing requires that the
first packet's DTS start from 0. Unfortunately, this would also
effectively cause the first packet's PTS/DTS value to be shifted forward
by the negative amount, which would cause video sync to be off by a
video frame or two.
This fixes it to start at the PTS value instead and preserve any
negative offsets. Additionally, the FLV muxing code has been fixed to
ensure that it adjusts the starting video DTS to 0, and now correctly
adjusts the first audio packet's timestamp according to that DTS as well
(which it didn't do before).
The "Custom" device entry was added inside the loop that runs for each
device; this causes "Custom" to be added once for each device detected,
or not at all if no device was detected. Moving this outside the loop
(as would appear to have been the original intent, judging by the
indentation) fixes this by adding the entry only after all devices have
been processed.
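Schematically, the fix looks like this (the device list structure is a
placeholder; the property calls are from the libobs properties API):

    /* Add each detected device, then append the "Custom" entry exactly
     * once, after the loop. */
    for (size_t i = 0; i < devices.num; i++)
        obs_property_list_add_string(list, devices.array[i].name,
                                     devices.array[i].id);

    obs_property_list_add_string(list, obs_module_text("Custom"), "custom");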
(This commit also modifies deps/media-playback)
Before, the media-playback library would detect whether something was
seekable by checking the filename for "://", which is not ideal because
there are other cases where targets may not be seekable. So instead, an
explicit "seekable" property (off by default) is now in place in the
media source when not in "local file" mode. Seeking will only be
enabled if local file mode is on, or if "seekable" is explicitly checked
by the user.
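The resulting gate is roughly as follows ("is_local_file" is assumed as
the local-file setting name; "seekable" is the new property):

    /* Seeking is allowed only for local files, or when the user explicitly
     * enables the "seekable" setting for non-local targets. */
    bool is_local_file = obs_data_get_bool(settings, "is_local_file");
    bool seekable      = obs_data_get_bool(settings, "seekable");
    bool allow_seeking = is_local_file || seekable;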
Closes jp9000/obs-studio#1022
Adds an option in properties that lets you choose how audio is mixed
during a transition:
- Fade Out/Fade In (existing behavior, default)
- Crossfade
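A sketch of what such a property could look like (identifiers are
hypothetical, not the actual ones used):

    obs_property_t *p = obs_properties_add_list(props, "audio_fade_style",
                                                obs_module_text("AudioFadeStyle"),
                                                OBS_COMBO_TYPE_LIST,
                                                OBS_COMBO_FORMAT_INT);
    obs_property_list_add_int(p, obs_module_text("FadeOutFadeIn"), 0);
    obs_property_list_add_int(p, obs_module_text("Crossfade"), 1);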
Closes jp9000/obs-studio#1028