Fixes ticket 1070.
See also
https://obsproject.com/forum/threads/ffmpeg-recording.77378/#post-330473
(related bugs).
The ffmpeg constant AVFMT_RAWPICTURE was deprecated in October 2015
and marked for removal at the avformat major bump to version 58
(ffmpeg commit 34ed5c2, Oct 12, 2015).
The bump occurred with commit 69b5ce6 (Oct 21, 2017).
The constant was subsequently removed (commit 693a11b, Oct 26, 2017).
It was removed from obs-studio with commit d670d7b (from me).
However, the code block that was executed under this constant was not
removed, causing issues with ffmpeg output.
The commit fixes the issue for old ffmpeg builds as well as new ones.
The constant is reintegrated for avformat major version < 58 and removed
for version >= 58 (along with its accompanying code).
Thanks to J Lowe for help in solving the bug.
(Tested on Windows 10, macOS 10.13, and Ubuntu 17.10 with ffmpeg HEAD
and ffmpeg 3.4.1.)
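A minimal sketch of the resulting guard, assuming the usual libavformat
version macro (the surrounding function is illustrative, not the actual
obs-ffmpeg code):

    #include <stdbool.h>
    #include <libavformat/avformat.h>

    /* The raw-picture path only exists on libavformat < 58, where
     * AVFMT_RAWPICTURE is still defined; newer versions always encode. */
    static bool use_raw_picture_path(const AVFormatContext *ctx)
    {
    #if LIBAVFORMAT_VERSION_MAJOR < 58
            return (ctx->oformat->flags & AVFMT_RAWPICTURE) != 0;
    #else
            (void)ctx;
            return false;
    #endif
    }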
If the target process re-creates its D3D context, the game capture tick
can trigger before the capture is set up, in which case OBS gets a
CAPTURE_RETRY message. However, with the memory capture method, it
continues trying to copy from the shared memory pointer, which is no
longer valid, resulting in a crash. The fix uses the old texture until
the next tick, at which point the new capture should be ready for use.
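A rough sketch of that behavior, assuming a shared-memory capture
struct along these lines (all names here are illustrative, not the
plugin's actual code):

    #include <stdbool.h>

    struct shmem_capture {
            bool valid;    /* false after the game re-creates its D3D context */
            void *texture; /* holds the last successfully copied frame */
    };

    static void capture_tick(struct shmem_capture *gc)
    {
            if (!gc->valid) {
                    /* CAPTURE_RETRY case: the shared memory pointer may be
                     * stale, so keep rendering the old texture and retry on
                     * the next tick instead of copying from invalid memory */
                    return;
            }

            /* normal path: copy the new frame from shared memory into
             * the texture here */
    }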
Code Changes:
- Removed flag BUILD_AMF_ENCODER.
- AMF SDK is now a submodule of the plugin, no longer requiring extra steps.
Plugin Changes:
- 'Bitrate.Target' has been renamed to 'bitrate' internally, improving support for the "New Networking Code" and "Replay Buffer", which incorrectly rely on this value instead of taking the average bitrate of packets sent over the last second (see the sketch after this list).
- Drivers with a runtime older than 1.4.6.0 are now blacklisted.
- All hidden and experimental options have been removed.
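As a minimal sketch of why the key matters, assuming the plugin reads
its target bitrate from the standard libobs settings key (the helper
below is illustrative):

    #include <obs-module.h>

    /* The features mentioned above read the encoder's "bitrate" setting
     * (in kbps), so the plugin now stores its target bitrate under that
     * key instead of 'Bitrate.Target'. */
    static void apply_target_bitrate(obs_data_t *settings)
    {
            long long kbps = obs_data_get_int(settings, "bitrate");
            /* hand kbps to the AMF rate-control configuration here */
            (void)kbps;
    }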
Windows SDK 10.0.16299.0 defines these structures as part of winternl.h
but with different types and names. Unfortunately, there's no macro to
detect the SDK version, so to avoid conflicting with newer or older
SDKs, the OBS structs have been renamed.
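Purely as an illustration of the renaming approach (the real structure
definitions live in win-capture and are not reproduced here; the fields
below are placeholders):

    #include <windows.h>

    /* Prefix the locally declared NT structures with OBS_ so they cannot
     * collide with the differently-typed versions that some Windows SDKs
     * ship in winternl.h. */
    typedef struct {
            ULONG  Length;
            HANDLE RootDirectory;
            /* ...remaining fields as required by the NT call being made... */
    } OBS_OBJECT_ATTRIBUTES;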
When this was being fixed up, the incorrect function name was used --
however it still compiled because the author was using the newer FFmpeg
version at the time.
The common multi-channel setups are 5.1 and 7.1 with rear speakers.
Thus only the setups that include SPEAKER_SIDE_LEFT and
SPEAKER_SIDE_RIGHT need to be marked as uncommon (the "side" variants),
while internally they remain true side setups (with side speakers).
Microsoft names this "side" layout "surround". To avoid confusing users
and translators, the "Side" mark is added next to the format name.
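In terms of WAVEFORMATEXTENSIBLE channel-mask bits, the distinction
looks roughly like this (the macro names below are illustrative;
Windows' ksmedia.h calls the side variant
KSAUDIO_SPEAKER_5POINT1_SURROUND):

    #include <windows.h>
    #include <mmreg.h>

    /* Common 5.1: front pair, center, LFE, and rear speakers. */
    #define LAYOUT_5POINT1                                      \
            (SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT |         \
             SPEAKER_FRONT_CENTER | SPEAKER_LOW_FREQUENCY |     \
             SPEAKER_BACK_LEFT | SPEAKER_BACK_RIGHT)

    /* 5.1 "Side": the same, but with side speakers instead of rear
     * ones; this is the layout Microsoft refers to as "surround". */
    #define LAYOUT_5POINT1_SIDE                                 \
            (SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT |         \
             SPEAKER_FRONT_CENTER | SPEAKER_LOW_FREQUENCY |     \
             SPEAKER_SIDE_LEFT | SPEAKER_SIDE_RIGHT)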
Certain NVIDIA GPUs don't support NVENC, but ship with the NVENC
library, causing OBS to mistakenly think that NVENC is available when it
actually isn't.
Closes jp9000/obs-studio#1087
Implement automatic audio device selection for devices that use two
separate DirectShow filters for audio and video instead of having a
single filter with audio and video output pins.
Please note that this fix is currently only active for Elgato USB and
PCIe devices (e.g. Cam Link, HD60 S, HD60 Pro, 4K60 Pro) to avoid
unintentionally changing the behavior for any other devices (e.g.
webcams).
(Jim edit: This fixes an issue with newer Elgato devices where the
devices would not automatically have their audio coupled with the video;
users would have to manually select the audio device in order to get
audio functioning.)
Closes jp9000/obs-studio#1081
(This commit also modifies the following modules: UI,
deps/media-playback, coreaudio-encoder, decklink, linux-alsa,
linux-pulseaudio, mac-capture, obs-ffmpeg, obs-filters, obs-libfdk,
obs-outputs, win-dshow, and win-wasapi)
Adds surround sound audio support to the core, core plugins, and user
interface.
Compatible streaming services: Twitch, FB 360 live
Compatible protocols: rtmp, mpeg-ts over tcp/udp
Compatible file formats: mkv, mp4, ts (others untested)
Compatible codecs: ffmpeg aac, fdk_aac, CoreAudio aac,
opus, vorbis, pcm (others untested).
Tested streaming servers: wowza, nginx
HLS, mpeg-dash: surround passthrough
HTML5 players tested with live surround:
videojs, mediaelement, viblast (hls+dash), hls.js
Decklink: on win32, swap channel order for 5.1 and 7.1
(due to different channel mappings in wav, mpeg, ffmpeg)
Audio filters: surround working.
Monitoring: surround working (win, macOS, linux (pulseaudio)).
VST: stereo plugins generally keep only the first two channels;
surround plugins should work (e.g. mcfx does).
OS: win, macOS, linux (alsa, pulseaudio).
Misc: larger audio bitrates unlocked to accommodate more channels.
NB: mf-aac only supports mono and stereo, plus 5.1 on win 10
(surround not implemented due to lack of usefulness).
Closes jp9000/obs-studio#968
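As one concrete example of what the ffmpeg-based outputs need once more
than stereo is allowed, a sketch of a speaker-layout-to-channel-layout
mapping (treat the exact mapping, e.g. 5.1 vs. 5.1(back), as an
assumption rather than what obs-ffmpeg actually ships):

    #include <stdint.h>
    #include <obs.h>
    #include <libavutil/channel_layout.h>

    /* Map libobs speaker layouts onto FFmpeg channel layouts. */
    static uint64_t to_av_channel_layout(enum speaker_layout layout)
    {
            switch (layout) {
            case SPEAKERS_MONO:    return AV_CH_LAYOUT_MONO;
            case SPEAKERS_STEREO:  return AV_CH_LAYOUT_STEREO;
            case SPEAKERS_2POINT1: return AV_CH_LAYOUT_2POINT1;
            case SPEAKERS_4POINT1: return AV_CH_LAYOUT_4POINT1;
            case SPEAKERS_5POINT1: return AV_CH_LAYOUT_5POINT1;
            case SPEAKERS_7POINT1: return AV_CH_LAYOUT_7POINT1;
            default:               return AV_CH_LAYOUT_STEREO;
            }
    }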
When device timing is used, it shouldn't be modifying the timestamp.
Fixes an issue where certain devices with large audio segments would
seem a bit out of sync.
(This commit also modifies the deps/media-playback, obs-ffmpeg, and
win-dshow modules)
More fixes due to ffmpeg renaming some constants and deprecating
AVFMT_RAWPICTURE and AV_PIX_FMT_VDA_VLD.
The latter was replaced by AV_PIX_FMT_VIDEOTOOLBOX per ffmpeg dev advice.
Closes jp9000/obs-studio#1061
The new code in 3032535f56 would signal to the back-end and front-end
that the output has stopped, but the event used in the outputs
themselves to shut down the send thread would still be signaled,
causing the next connection to stop as soon as it had started. This
fixes it so that the event does not get signaled unless the thread is
active.
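A sketch of that guard, assuming the output tracks whether its send
thread is running with an atomic flag (names are illustrative, not the
actual obs-outputs code):

    #include <stdbool.h>
    #include <util/threading.h> /* libobs os_event_* / os_atomic_* helpers */

    struct output_ctx {
            os_event_t *stop_event; /* wakes the send thread so it can exit */
            volatile bool active;   /* true only while the send thread runs */
    };

    static void signal_stop(struct output_ctx *out)
    {
            /* Only signal the event while the thread is actually active;
             * otherwise the signal stays pending and stops the next
             * connection the moment it starts. */
            if (os_atomic_load_bool(&out->active))
                    os_event_signal(out->stop_event);
    }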
Use unbuffered async mode by default, and when in unbuffered mode,
decouple audio/video so that audio plays as soon as it's received.
This is a workaround for decklink device drivers having unreliable
video/audio timestamps (audio/video sync drifting over time). From
testing, it seems that the handling of video and audio is completely
separate in the driver, along with the timestamp calculations. For
example, when the thread of the decklink audio callback is stalled, the
timestamps of the audio alone would go out of sync, which indicates
timestamps are calculated more or less on the spot, independently of
what video is doing (which is how we replicated the issue fixed by
b63e4b055e68a). Because decklink drivers treat audio and video as
essentially decoupled, we must also treat them as decoupled; this
mismatch is what was causing video/audio to drift out of sync over
time.
(This commit also modifies UI)
Instead of pinging Twitch every time the program starts up, only ping
for new servers when the ingests are actually being used and when the
UI uses the auto-configuration dialog.
If ingests have not been cached when using the "Auto" server, it will
wait for 3 seconds max to query the Twitch ingest API. If it takes
longer than 3 seconds or fails, it will defer to SF. If ingests were
already cached, then it will use the existing cache immediately.
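A sketch of that policy, assuming the ingest query runs asynchronously
and signals an event when it finishes (all names below are
illustrative):

    #include <util/threading.h> /* libobs os_event_* helpers */

    struct twitch_ingests {
            os_event_t *done;     /* signaled when the ingest query finishes */
            const char *best_url; /* NULL until a query has succeeded */
    };

    static const char *auto_server_url(struct twitch_ingests *in,
                                       const char *fallback)
    {
            if (in->best_url)     /* already cached: use it immediately */
                    return in->best_url;

            /* wait for the in-flight query, but no longer than 3 seconds */
            if (os_event_timedwait(in->done, 3000) == 0 && in->best_url)
                    return in->best_url;

            return fallback;      /* query timed out or failed: defer */
    }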
Video playback doesn't work if the default format is MJPEG and there are
other formats to use; this is because the useDefaultConfig variable is
still set to true, which overrides the format value that would normally
tell it to convert to RGB.
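A minimal illustration of the resulting check (names follow the
description above, not the actual win-dshow source):

    #include <stdbool.h>

    enum video_format { FORMAT_ANY, FORMAT_MJPEG /* ... */ };

    static bool should_use_default_config(enum video_format default_format,
                                          bool has_other_formats)
    {
            /* If the device defaults to MJPEG but offers other formats, do
             * not treat the default configuration as authoritative, so the
             * requested format (and the RGB conversion) takes effect. */
            if (default_format == FORMAT_MJPEG && has_other_formats)
                    return false;
            return true;
    }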