The scaler assumed the placeholder was the same size as the camera, which
caused crashes if the user replaced the placeholder with a lower
resolution image (or if the camera was running at above 1080p). This
adds a separate scaler for the placeholder and uses the resolution of
the virtual camera instead of defaulting to 1080p.
Per MSDN: Do not call this function from a DLL that is linked to the static C
run-time library (CRT). The static CRT requires DLL_THREAD_ATTACH and
DLL_THREAD_DETACH notifications to function properly.
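That warning comes from the DisableThreadLibraryCalls documentation; a
minimal sketch of a DllMain with the call removed (the comment marks the
assumption):

    #include <windows.h>

    BOOL WINAPI DllMain(HINSTANCE inst, DWORD reason, LPVOID)
    {
        if (reason == DLL_PROCESS_ATTACH) {
            /* DisableThreadLibraryCalls(inst) was removed here: with a
             * statically linked CRT, suppressing DLL_THREAD_ATTACH and
             * DLL_THREAD_DETACH breaks per-thread CRT init/cleanup. */
        }
        return TRUE;
    }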
EVGA/Magewell devices seem to use the default system drivers rather than
custom drivers, which causes their audio to become decoupled and treated
as completely separate devices rather than as an audio pin on the video
device. Basically, those devices would have no audio by default, forcing
the user to manually select the audio device, which is a bad user
experience.
We already had a workaround for this with Elgato devices, so expand that
code into a whitelist of devices and include EVGA/Magewell devices.
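A rough sketch of what such a whitelist check could look like (the name
substrings and the helper are illustrative assumptions, not the actual
matching code):

    #include <array>
    #include <string>

    /* Devices whose audio shows up as a separate DirectShow device
     * rather than as an audio pin on the video device; matched by a
     * substring of the device name. */
    static const std::array<const wchar_t *, 3> decoupled_audio_devices = {
        L"Elgato",
        L"EVGA",
        L"Magewell",
    };

    static bool HasDecoupledAudio(const std::wstring &deviceName)
    {
        for (const wchar_t *match : decoupled_audio_devices)
            if (deviceName.find(match) != std::wstring::npos)
                return true;
        return false;
    }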
This adds two batch scripts to install and uninstall the virtual cam
devices for installations where the installer could not be used. Most
commonly, this applies to portable installations or to users who prefer
the .zip file.
The virtual camera adds the ability to use the output of OBS itself as a
camera that can be selected within other Windows applications. This is
very loosely based upon the catxfish virtual camera plugin design.
There is a shared memory queue, but instead of holding 10-20 frames, the
queue now holds only 3 frames to minimize latency and reduce memory
usage. The third frame mostly ensures that writing never occurs on the
same frame being read; the delay is only ever one frame.
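A minimal sketch of how such a three-slot queue header might be laid out
(field names are assumptions; the real header carries more metadata):

    #include <atomic>
    #include <cstdint>

    /* Lives at the start of the shared memory region. The writer
     * publishes into the slot after the last one it wrote, and the
     * reader always consumes the most recently published slot; with
     * three slots the writer never touches the frame currently being
     * read, and the reader lags the writer by exactly one frame. */
    struct queue_header {
        std::atomic<uint32_t> write_idx; /* last slot written (0-2) */
        uint32_t cx, cy;                 /* frame dimensions */
        uint64_t interval;               /* frame interval, 100ns units */
    };

    /* ...followed by three NV12 frames of cx * cy * 3 / 2 bytes each */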
The frames of the shared memory queue are NV12 instead of YUYV, which
reduces the memory used and the data copied, and eliminates an
unnecessary conversion from NV12. However, some programs (such as
Chrome, which captures via WebRTC) do not support NV12, so an I420
conversion is provided, which is far less expensive than YUYV; the CPU
cost of NV12 -> I420 is negligible in comparison.
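Since NV12 and I420 share the same luma plane and differ only in chroma
layout (interleaved vs. planar), the conversion is a simple
de-interleave; a sketch under that assumption:

    #include <cstdint>
    #include <cstring>

    /* NV12: Y plane followed by an interleaved UV plane.
     * I420: Y plane followed by separate U and V planes. */
    static void nv12_to_i420(const uint8_t *nv12, uint8_t *i420,
                             uint32_t cx, uint32_t cy)
    {
        const uint32_t y_size = cx * cy;
        const uint32_t chroma_pixels = y_size / 4;

        memcpy(i420, nv12, y_size); /* luma plane is identical */

        const uint8_t *uv = nv12 + y_size;
        uint8_t *u = i420 + y_size;
        uint8_t *v = u + chroma_pixels;

        for (uint32_t i = 0; i < chroma_pixels; i++) {
            u[i] = uv[i * 2];
            v[i] = uv[i * 2 + 1];
        }
    }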
The virtual camera filter itself is based upon the output filter within
the libdshowcapture library, which was originally implemented for other
purposes. This is preferable to the Microsoft example code because, one,
it's far less convoluted; two, it allows us to customize the filter to
our needs more easily; and three, it has much better RAII. The Microsoft
CBaseFilter/etc code comprises about 30 source files, whereas the output
filter comprises the two or three required source files we already had,
so it's a huge win for compile time.
Scaling is avoided whenever possible to minimize CPU usage. When the
virtual camera is activated in OBS, the width, height, and frame
interval are saved, so that whenever the filter is activated it
remembers the last OBS resolution/interval the virtual camera was
activated with, even if OBS is not active. If for some reason the filter
activates before OBS starts up, and OBS then starts up with a different
resolution, the filter uses simple point scaling in the interim and
remembers the new scaling in the future. The scaler could use some
optimization. FFmpeg was not used because its DLLs would have to be
provided for both architectures, which would add about 30 megabytes in
total and make writing the plugin much more painful. Thus a simple point
scaling algorithm is used, and scaling is avoided whenever possible.
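For reference, a minimal sketch of nearest-neighbor (point) scaling of
one 8-bit plane (the names and fixed-point stepping are illustrative,
not the plugin's actual scaler):

    #include <cstdint>

    /* Nearest-neighbor scale of one 8-bit plane; 16.16 fixed-point
     * stepping avoids a per-pixel division. */
    static void point_scale_plane(const uint8_t *src, uint32_t src_cx,
                                  uint32_t src_cy, uint8_t *dst,
                                  uint32_t dst_cx, uint32_t dst_cy)
    {
        const uint64_t step_x = ((uint64_t)src_cx << 16) / dst_cx;
        const uint64_t step_y = ((uint64_t)src_cy << 16) / dst_cy;

        for (uint32_t y = 0; y < dst_cy; y++) {
            const uint8_t *row =
                src + (size_t)((y * step_y) >> 16) * src_cx;
            uint64_t sx = 0;
            for (uint32_t x = 0; x < dst_cx; x++, sx += step_x)
                dst[(size_t)y * dst_cx + x] = row[sx >> 16];
        }
    }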
(If another willing participant wants to have a go at improving the
scaling, go for it. Otherwise, scaling is avoided whenever possible
anyway, so it's not a huge deal.)
The biHeight field can be negative, which led to crashes on some cards
like the VisionRGB-E1S. Adding flip support is fairly straightforward.
There also appears to be a hack that automatically flips RGB formats,
but I wish to remove it because it seems to fight with this change. We
already have a separate vertical flip checkbox to deal with
non-compliant behavior.
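A sketch of the idea (names illustrative): use the magnitude of biHeight
as the real height and derive a flip flag from its sign instead of using
the raw value as a size.

    #include <windows.h>
    #include <cstdlib>

    static void GetVideoDimensions(const BITMAPINFOHEADER &bih,
                                   long &cx, long &cy, bool &flip)
    {
        cx = bih.biWidth;
        cy = labs(bih.biHeight); /* never use the signed value as a size */
        flip = bih.biHeight < 0; /* negative height = top-down rows */
    }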
This prevents VideoFormat::Any from unintentionally selecting H264 when
MJPEG is the only other format available. It fixes a bug where certain
devices (the Logitech C920 with the latest drivers) expose only H264 and
MJPEG, and VideoFormat::Any would select H264 over MJPEG because it's
the first format value and had the same priority as MJPEG. Now MJPEG is
prioritized over H264 instead.
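A sketch of the priority adjustment (the numeric values are illustrative,
and only the relevant VideoFormat members are mirrored): giving MJPEG a
strictly higher priority than H264 means VideoFormat::Any can never
resolve to H264 when both are present.

    /* mirrors the relevant members of libdshowcapture's VideoFormat */
    enum class VideoFormat { Any, MJPEG, H264 };

    /* Higher value wins when resolving VideoFormat::Any; H264 is now
     * chosen only when it is the sole format the device offers. */
    static int FormatPriority(VideoFormat format)
    {
        switch (format) {
        case VideoFormat::MJPEG: return 5;
        case VideoFormat::H264:  return 4;
        default:                 return 9; /* raw formats preferred */
        }
    }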
Full color range seems to be active when decoding video with FFmpeg even
when partial range is explicitly selected. This should keep the range
synchronized.
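One way to keep them in sync, sketched under the assumption that
libavcodec does the decoding: override the decoded frame's reported
range with the user's selection before conversion.

    extern "C" {
    #include <libavutil/frame.h>
    }

    /* Force the frame to report the range the user selected, so
     * downstream conversion matches the selection. */
    static void enforce_range(AVFrame *frame, bool full_range)
    {
        frame->color_range = full_range ? AVCOL_RANGE_JPEG
                                        : AVCOL_RANGE_MPEG;
    }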
Due to the recent change to using FFmpeg to decode MJPEG, MJPEG was
getting included in the delayed device check. This fixes that so it no
longer is; MJPEG can be decoded in real time.
IsEncoded is meant to indicate delayed devices, such as older Elgato
devices or Hauppauge devices, which use H264 and have 800+ milliseconds
of latency. This changes the function name to better indicate that.