The biHeight field can be negative (per the Win32 docs, that marks a
top-down frame), leading to crashes on some cards like the
VisionRGB-E1S. Adding flip support is fairly straightforward.
There also appears to be a hack that automatically flips RGB formats,
but I wish to remove it because it seems to fight with this change. We
already have a separate vertical flip checkbox to deal with
non-compliant behavior.
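A minimal sketch of the biHeight handling (the BITMAPINFOHEADER field
comes from the Win32 API; the surrounding names are hypothetical):

    #include <windows.h>
    #include <cstdlib>

    struct FrameLayout {
        long height;
        bool flip;
    };

    // Per the Win32 BITMAPINFOHEADER docs, a negative biHeight marks a
    // top-down frame: the real height is the absolute value, and the
    // image needs a vertical flip relative to the bottom-up case.
    static FrameLayout InterpretHeight(const BITMAPINFOHEADER &bih)
    {
        FrameLayout layout;
        layout.flip = bih.biHeight < 0;           // top-down DIB
        layout.height = std::labs(bih.biHeight);  // always positive
        return layout;
    }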
Full color range seems to be active when decoding video with FFmpeg,
even when partial range is explicitly selected. This change keeps the
range synchronized.
Due to the recent change to use FFmpeg to decode MJPEG, MJPEG was
getting included in the delayed device check. This excludes it again,
since MJPEG can be decoded in real time.
IsEncoded is meant to indicate delayed devices, such as older Elgato
or Hauppauge devices: devices that use H264 and have 800+ milliseconds
of latency. This changes the function name to better reflect that.
If a device produces video and audio timestamps at different rates,
this divergence can cause massive buffering on the audio side, leading
to a capped audio buffer and total sound loss. This change allows a
hardcoded list of devices to use the existing decoupling logic (see
the sketch after this entry). For now, only "GV-USB2" has been added.
When combined with another fix, this yielded 5+ hours of stable audio
without any buffering on my GV-USB2, where it used to drop sound
completely after an hour or so.
Partially fixes https://obsproject.com/mantis/view.php?id=1269
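A minimal sketch of the hardcoded-list idea (the list contents come
from the text above; the function name is hypothetical):

    #include <cstring>

    // Devices whose audio timestamps should be decoupled from video.
    static const char *DECOUPLED_DEVICES[] = {"GV-USB2"};

    static bool ShouldDecoupleAudio(const char *device_name)
    {
        for (const char *name : DECOUPLED_DEVICES)
            if (std::strstr(device_name, name))
                return true;
        return false;
    }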
Code submissions have continually suffered from formatting
inconsistencies that constantly have to be addressed. Using
clang-format simplifies this by making code formatting more consistent,
and allows automation of the code formatting so that maintainers can
focus more on the code itself instead of code formatting.
Adds the ability to override and use partial-range RGB with the
DirectShow and Decklink device sources where partial-range RGB is
implemented. Fixes certain cases where devices could capture RGB in
limited range via HDMI (per the HDMI specs).
MatcherClosestFrameRateSelector updates best_match as a side effect of
visiting every VideoInfo instance, but CapsMatch uses std::any_of,
which stops iterating on the first match. This means that the highest
FPS is not selected.
This change switches to a for loop that doesn't exit early.
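A reduced sketch of the failure mode, with ints standing in for
VideoInfo and a simplified selector:

    #include <algorithm>
    #include <vector>

    struct ClosestSelector {
        int best_match = 0;

        bool operator()(int fps)
        {
            if (fps > best_match)
                best_match = fps; // side effect: track the best seen
            return true;          // reports a match, so any_of stops
        }
    };

    static void SelectBest(const std::vector<int> &caps,
                           ClosestSelector &sel)
    {
        // Buggy: any_of short-circuits after the first true result,
        // so only caps[0] is ever visited.
        std::any_of(caps.begin(), caps.end(),
                    [&](int fps) { return sel(fps); });

        // Fixed: a plain loop that visits every candidate.
        for (int fps : caps)
            sel(fps);
    }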
Adds the ability to optionally create/update a DirectShow device
source synchronously rather than always updating the device
asynchronously. This is useful when creating/destroying scenes/sources
very quickly, and helps minimize the risk of creating new DirectShow
sources that use the same device yet fail to activate because an
existing source may already exist. To use it, set
"synchronous_activate" to true in the source's settings when updating
or creating it. Note that this setting is erased after it's used and
is not saved to user settings, so it must be set each time.
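From the caller's side, usage would look roughly like this (the
obs_data/obs_source calls are the standard libobs API; the wrapper
function is hypothetical):

    #include <obs.h>

    static void UpdateDeviceSynchronously(obs_source_t *source,
                                          obs_data_t *settings)
    {
        // One-shot flag: it is erased after use, so it has to be set
        // on every update that should activate synchronously.
        obs_data_set_bool(settings, "synchronous_activate", true);
        obs_source_update(source, settings);
    }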
Closes obsproject/obs-studio#1228
(also obs, deps/media-playback, libobs/audio-monitoring, decklink,
linux-alsa, linux-pulseaudio, mac-capture, obs-ffmpeg, win-dshow,
win-wasapi)
The default channel layout for 4 channels is 4.0 in FFmpeg.
Replacing quad with 4.0 will improve compatibility, since FFmpeg has
better support for its default channel layouts.
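To illustrate the claim with FFmpeg's pre-5.1 channel-layout API
(assuming av_get_default_channel_layout() behaves as described above):

    extern "C" {
    #include <libavutil/channel_layout.h>
    }
    #include <cstdint>

    static bool DefaultFourChannelIsFourPointZero()
    {
        // FFmpeg's default layout for 4 channels is 4.0, which is a
        // different bitmask than quad.
        uint64_t def = (uint64_t)av_get_default_channel_layout(4);
        return def == AV_CH_LAYOUT_4POINT0 && def != AV_CH_LAYOUT_QUAD;
    }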
Implement automatic audio device selection for devices that use two
separate DirectShow filters for audio and video instead of having a
single filter with audio and video output pins.
Please note that this fix is currently only active for Elgato USB and
PCIe devices (e.g. Cam Link, HD60 S, HD60 Pro, 4K60 Pro) to avoid
unintentionally changing the behavior for any other devices (e.g.
webcams).
(Jim edit: This fixes an issue with newer Elgato devices where the
devices would not automatically have their audio coupled with the video;
users would have to manually select the audio device in order to get
audio functioning.)
Closes jp9000/obs-studio#1081
(This commit also modifies the following modules: UI,
deps/media-playback, coreaudio-encoder, decklink, linux-alsa,
linux-pulseaudio, mac-capture, obs-ffmpeg, obs-filters, obs-libfdk,
obs-outputs, win-dshow, and win-wasapi)
Adds surround sound audio support to the core, core plugins, and user
interface.
Compatible streaming services: Twitch, FB 360 live
Compatible protocols: rtmp, mpeg-ts (tcp, udp)
Compatible file formats: mkv, mp4, ts (others untested)
Compatible codecs: ffmpeg aac, fdk_aac, CoreAudio aac,
opus, vorbis, pcm (others untested)
Tested streaming servers: wowza, nginx
HLS, mpeg-dash: surround passthrough
HTML5 players tested with live surround:
videojs, mediaelement, viblast (hls+dash), hls.js
Decklink: on win32, swap channels order for 5.1 7.1
(due to different channel mapping on wav, mpeg, ffmpeg)
Audio filters: surround working.
Monitoring: surround working (win, macOS, linux (pulse-audio)).
VST: stereo plugins generally keep only the first two channels;
surround plugins should work (e.g. mcfx does).
OS: win, macOS, linux (alsa, pulse-audio).
Misc: larger audio bitrates unlocked to accommodate more channels
NB: mf-aac only supports mono and stereo + 5.1 on win 10
(not implemented due to lack of usefulness)
Closes jp9000/obs-studio#968
Video playback doesn't work if the default format is MJPEG and there are
other formats to use; this is because the useDefaultConfig variable is
still set to true, which overrides the format value that would normally
tell it to convert to RGB.
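A sketch of the fix implied above (useDefaultConfig is the variable
named in the text; the format enum and fix logic are hypothetical):

    enum class VideoFormat { Any, MJPEG, XRGB };

    static void FixupConfig(bool &useDefaultConfig, VideoFormat &format,
                            VideoFormat defaultFormat,
                            bool hasOtherFormats)
    {
        // If the device defaults to MJPEG but offers other formats,
        // don't let the default config override the explicit format
        // value that triggers conversion to RGB.
        if (defaultFormat == VideoFormat::MJPEG && hasOtherFormats) {
            useDefaultConfig = false;
            format = VideoFormat::XRGB;
        }
    }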
(This commit also modifies the decklink, linux-v4l2, mac-avcapture,
obs-ffmpeg, and win-dshow modules)
Originally, async buffering for sources was supposed to be a
user-controllable flag. However, that turned out to be less than ideal
because sources (such as the win-dshow plugin) were programmed with
automatic control over their buffering (such as automatically
detecting USB 2.0 capture devices and then enabling buffering in those
cases).
The fact that it was a flag caused a design flaw where buffering
values would be overwritten when a source is loaded from save data.
Because of that, this flag is being deprecated and replaced with a
specific function to enable unbuffered mode instead.
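In libobs terms, a source now opts in explicitly
(obs_source_set_async_unbuffered() being the libobs call for this):

    #include <obs.h>

    static void EnableUnbuffered(obs_source_t *source)
    {
        // Unlike the old flag, this is not persisted with save data,
        // so loading a scene can no longer clobber the value.
        obs_source_set_async_unbuffered(source, true);
    }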
When the Windows video device source is set to only activate when
showing, it would still activate on first startup of the program, even
if it was in another scene and not showing anywhere to the user. This
fixes that issue.
The LGP issue is caused by the device drivers returning two or more
packets in a single segment of audio data. This fixes it by detecting
that and decoding subsequent packets.
When the FFmpeg audio decoder returns, it reports how many bytes of
data were decoded. To have it decode multiple packets in a single
segment, subtract the return value from the expected size; if the
result is still larger than zero, there are more packets in the
segment to decode. Otherwise, stop.
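A sketch of that loop using the avcodec_decode_audio4() API from the
FFmpeg versions of that era (the output callback is a stand-in):

    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    static void OutputAudio(AVFrame *frame)
    {
        (void)frame; // stand-in: hand the frame to the audio pipeline
    }

    static void DecodeSegment(AVCodecContext *ctx, AVFrame *frame,
                              AVPacket pkt)
    {
        // Keep decoding while bytes remain in the segment; each call
        // consumes one packet's worth of data and reports its length.
        while (pkt.size > 0) {
            int got_frame = 0;
            int len = avcodec_decode_audio4(ctx, frame, &got_frame,
                                            &pkt);
            if (len < 0)
                break; // decode error; abandon the segment

            if (got_frame)
                OutputAudio(frame);

            pkt.data += len; // advance past the decoded packet
            pkt.size -= len; // anything left is another packet
        }
    }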
LGP devices induce anger in any sane developer because they're prone
to bad audio timestamps when their decoded data is used directly. For
that reason, add a hack that smooths the timestamps
within a large threshold to prevent audio skipping.
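A minimal sketch of such smoothing (the threshold and names are
illustrative, not the plugin's actual constants):

    #include <cstdint>
    #include <cstdlib>

    static const int64_t SMOOTH_THRESHOLD = 70000000; // 70 ms in ns

    static int64_t SmoothTimestamp(int64_t ts, int64_t &last_ts,
                                   int64_t sample_duration)
    {
        // If the device timestamp lands within the threshold of where
        // the audio should continue, ignore it and keep the timeline
        // contiguous.
        int64_t expected = last_ts + sample_duration;
        if (std::llabs(ts - expected) < SMOOTH_THRESHOLD)
            ts = expected;
        last_ts = ts;
        return ts;
    }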
Useful for two purposes:
1.) When many devices are hooked up to the system and used in separate
scenes, but only one device active at once is desired
2.) Allows users who are dependent on outputting audio to desktop to
disable that audio (via disabling that device) when the device isn't
being displayed
Certain types of sources (display captures, game captures, audio
device captures, video device captures) should not be duplicated. This
capability flag hints that the source prefers references over full
duplication.
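In libobs, this hint is an output flag on the source registration
(OBS_SOURCE_DO_NOT_DUPLICATE; the source id here is hypothetical):

    #include <obs-module.h>

    static struct obs_source_info my_capture_info = {};

    static void RegisterCapture()
    {
        my_capture_info.id = "my_capture"; // hypothetical id
        my_capture_info.type = OBS_SOURCE_TYPE_INPUT;
        my_capture_info.output_flags = OBS_SOURCE_ASYNC_VIDEO |
                                       OBS_SOURCE_DO_NOT_DUPLICATE;
        obs_register_source(&my_capture_info);
    }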
API changed from:
obs_source_info::get_name(void)
obs_output_info::get_name(void)
obs_encoder_info::get_name(void)
obs_service_info::get_name(void)
API changed to:
obs_source_info::get_name(void *type_data)
obs_output_info::get_name(void *type_data)
obs_encoder_info::get_name(void *type_data)
obs_service_info::get_name(void *type_data)
This allows the type data to be used when getting the name of the
object (useful for plugin wrappers primarily).
NOTE: Though a parameter was added, this is backward-compatible with
older plugins due to calling convention. The new parameter will simply
be ignored by older plugins, and the stack (if used) will be cleaned up
by the caller.
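From a plugin wrapper's perspective, the new parameter might be used
like this (struct and names hypothetical):

    #include <obs-module.h>

    struct WrapperTypeData {
        const char *display_name;
    };

    static WrapperTypeData wrapper_data = {"Wrapped Source"};

    static const char *GetWrapperName(void *type_data)
    {
        // Previously this callback took no arguments, so wrappers had
        // to rely on globals to know which wrapped type to name.
        return static_cast<WrapperTypeData *>(type_data)->display_name;
    }

    static void RegisterWrapped()
    {
        struct obs_source_info info = {};
        info.id = "wrapped_source"; // hypothetical id
        info.type_data = &wrapper_data;
        info.get_name = GetWrapperName;
        obs_register_source(&info);
    }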
This allows the ability to output the audio of the device as desktop
audio (via the WaveOut or DirectSound audio renderers) instead of
capturing the audio only.
In the future, we'll implement audio monitoring which will make this
feature obsolete, but for the time being I decided to add this option as
a temporary measure to allow users to play the audio from their devices
via the DirectShow output.
I'm adding this option because there are legitimate cases where a
device may flip the output unexpectedly (such as the Datapath
VisionDVI-DL running in RGB video format), and because a user may want
to be able to view the source in a projector or source properties
without the image being inverted.
My original line of thinking was that they can just use a transform to
flip the image, but I felt this problem impacts rendering everywhere,
such as in the projector and in the source properties, so having it as
an option in the source itself feels like the best way to ensure that a
user can get it to render everywhere properly.
If the settings are reset to defaults or if the settings are just bad,
the video would get stuck on the last frame that was displayed, which
feels a bit awkward. Best to make it stop video output entirely rather
than get stuck on the last video frame.
Martell changed this function without realizing that this was calling a
function below it, not recursively calling itself. He got the warning
because there was no forward declaration of the function being called;
I think he's used to C, where only one
function definition can exist with the same name. In this case, it was
another function with the same name but with different parameters,
something that's permitted in C++. I wish I had realized this sooner.
This fixes the crashes people have been having with devices.
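A reconstruction of the pattern (names hypothetical): inside a class,
a member can call an overload defined later without any forward
declaration, which makes the call easy to misread as recursion.

    class Capture {
    public:
        bool Connect() { return Connect(0); } // calls the overload below
    private:
        bool Connect(int attempt) // same name, different parameters
        {
            return attempt < 3;
        }
    };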