Adds the following changes:
- Prioritize YUV formats over non-YUV formats, both for performance
and to avoid inserting intermediary conversion filters (see the sketch
below)
- Directly connect filters when possible to avoid intermediary filters
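A minimal sketch of the prioritization idea, assuming a hypothetical
format enum and `is_yuv()` helper (not the actual code): sort the
candidate device formats so YUV formats come first, which keeps a
color-conversion filter out of the graph.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical format tags, for illustration only */
enum video_format { FMT_RGB24, FMT_ARGB, FMT_YUY2, FMT_UYVY, FMT_NV12 };

static bool is_yuv(enum video_format f)
{
    return f == FMT_YUY2 || f == FMT_UYVY || f == FMT_NV12;
}

/* qsort comparator: YUV formats sort before non-YUV formats */
static int compare_formats(const void *a, const void *b)
{
    enum video_format fa = *(const enum video_format *)a;
    enum video_format fb = *(const enum video_format *)b;
    return (int)is_yuv(fb) - (int)is_yuv(fa);
}
```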
When frames were dropped, I-frames would be dropped along with them,
which can interfere with the keyframe calculation of certain services
that depend on I-frames in their output protocol (such as HLS).
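A minimal sketch of the corrected drop logic, with a simplified
stand-in for the real packet struct:

```c
#include <stdbool.h>

/* Simplified stand-in for the encoder packet struct */
struct packet {
    bool keyframe; /* true for I-frames */
    /* ... payload, timestamps ... */
};

/* Never drop I-frames: services such as HLS rely on seeing every
 * keyframe to segment their output correctly. */
static bool should_drop(const struct packet *pkt, bool congested)
{
    return congested && !pkt->keyframe;
}
```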
The kCGDisplayStreamShowCursor option used with the options dictionary
does not work if you assign @true or @false to it. After some testing,
it needs to point to the id cast of either kCFBooleanTrue or
kCFBooleanFalse in order to work properly.
If neither of those values is used, the display stream seems to fall
back to its internal default, which is visible on 10.8 and 10.9 but
invisible on 10.10+; this would explain why people on 10.10 couldn't
get the cursor to capture.
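A sketch of the working setup via the plain CoreFoundation C API (the
plugin itself builds the dictionary in Objective-C with an `(id)`
cast):

```c
#include <stdbool.h>
#include <CoreGraphics/CoreGraphics.h>

/* The value stored for kCGDisplayStreamShowCursor must be
 * kCFBooleanTrue/kCFBooleanFalse themselves, not a boxed boolean. */
static CFDictionaryRef stream_options(bool show_cursor)
{
    const void *keys[] = {kCGDisplayStreamShowCursor};
    const void *values[] = {show_cursor ? kCFBooleanTrue
                                        : kCFBooleanFalse};
    return CFDictionaryCreate(kCFAllocatorDefault, keys, values, 1,
                              &kCFTypeDictionaryKeyCallBacks,
                              &kCFTypeDictionaryValueCallBacks);
}
```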
Some formats (like WMV) would send out audio packets that had the
channel count set but did not specify a channel layout. The solution
is to no longer rely on the channel layout to get the channels and
instead read the channel count directly off the FFmpeg audio frame.
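A minimal sketch against the FFmpeg API of that era (newer FFmpeg
versions expose the count as `frame->ch_layout.nb_channels` instead):

```c
#include <libavutil/frame.h>

/* Read the channel count straight off the decoded frame rather than
 * deriving it from frame->channel_layout, which WMV may leave at 0. */
static int frame_channels(const AVFrame *frame)
{
    return frame->channels;
}
```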
If capture starts too quickly, the file mapping will return error
code 2 (ERROR_FILE_NOT_FOUND), and capture would then be reset and
tried again. Sometimes this would result in long intervals where it
wouldn't capture. This fixes the issue by simply making game capture
retry if the file mapping returns error code 2.
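A sketch of the retry path (names illustrative; the real game-capture
code is more involved):

```c
#include <windows.h>
#include <stdbool.h>

/* If the hook's shared memory doesn't exist yet (ERROR_FILE_NOT_FOUND),
 * signal a retry instead of resetting the entire capture. */
static HANDLE open_hook_map(const char *name, bool *retry)
{
    HANDLE map = OpenFileMappingA(FILE_MAP_ALL_ACCESS, FALSE, name);
    if (!map && GetLastError() == ERROR_FILE_NOT_FOUND)
        *retry = true;
    return map;
}
```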
This allows applying a mask based upon the chroma value (U/V) of a
specific color in YUV color space. Commonly used to mask out
greenscreens and bluescreens in live video.
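For reference, a sketch of the underlying per-pixel math (the actual
filter runs as a GPU shader; the coefficients below are BT.601, and
the hard threshold stands in for the filter's smoother similarity
controls):

```c
#include <math.h>

struct uv {
    float u, v;
};

/* BT.601 RGB -> U/V, inputs in [0, 1] */
static struct uv rgb_to_uv(float r, float g, float b)
{
    struct uv c;
    c.u = -0.14713f * r - 0.28886f * g + 0.436f * b;
    c.v = 0.615f * r - 0.51499f * g - 0.10001f * b;
    return c;
}

/* 0 (masked out) when the pixel's chroma lies within 'similarity' of
 * the key color's chroma, 1 (opaque) otherwise */
static float chroma_alpha(struct uv px, struct uv key, float similarity)
{
    float du = px.u - key.u;
    float dv = px.v - key.v;
    return sqrtf(du * du + dv * dv) < similarity ? 0.0f : 1.0f;
}
```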
Allows any source to be cropped, though note that this filter renders
the source to a texture first. For optimal results, cropping should
probably be done within capture sources that can crop the image as
they actually render, where available.
This filter uses a texture to modify the video output of a source
(per-pixel math sketched below). It adds the ability to:
- apply a color-based mask (dark = transparent, light = opaque)
- apply a mask based upon the alpha of an image
- blend an image via subtraction, addition, or multiplication
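A rough per-pixel sketch of those modes (illustrative only; the real
filter is implemented as a GPU effect):

```c
enum mask_blend_mode {
    MASK_COLOR, /* dark = transparent, light = opaque */
    MASK_ALPHA, /* use the image's alpha channel */
    BLEND_SUB,
    BLEND_ADD,
    BLEND_MUL,
};

struct px {
    float v, a; /* one color channel and alpha, both in [0, 1] */
};

static float clamp01(float x)
{
    return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x);
}

/* Applies the selected mode to a source pixel given a texture pixel */
static struct px apply(enum mask_blend_mode mode, struct px src,
                       struct px tex)
{
    switch (mode) {
    case MASK_COLOR: src.a *= tex.v; break;
    case MASK_ALPHA: src.a *= tex.a; break;
    case BLEND_SUB:  src.v = clamp01(src.v - tex.v); break;
    case BLEND_ADD:  src.v = clamp01(src.v + tex.v); break;
    case BLEND_MUL:  src.v = src.v * tex.v; break;
    }
    return src;
}
```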
Adds a filter for delaying asynchronous video/audio data, for example
from webcams, video devices, or media playback sources. Mainly intended
to allow users to sync up their webcams to other devices.
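A minimal sketch of the delaying idea, with a simplified stand-in for
the libobs frame types: incoming frames are buffered and only released
once they have aged by the configured delay.

```c
#include <stdbool.h>
#include <stdint.h>

struct frame {
    uint64_t ts; /* capture timestamp in nanoseconds */
    /* ... frame data ... */
};

/* The frame at the head of the queue is released only once it is at
 * least delay_ns old, shifting the source's output back in time. */
static bool frame_ready(const struct frame *f, uint64_t now_ns,
                        uint64_t delay_ns)
{
    return now_ns - f->ts >= delay_ns;
}
```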
If the settings are reset to defaults or are otherwise invalid, the
video would get stuck on the last frame that was displayed, which
feels a bit awkward. It's better to stop video output entirely than
to stay stuck on the last video frame.
Certain devices (particularly some mixers and Soundflower's 64-channel
device) report an arbitrary number of channels that can't really be
mapped to a specific speaker layout supported by libobs.
To fix this issue, if the channel count is above 8, force the data to
stereo so that playback can still occur rather than simply failing.
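The check itself is simple (helper name illustrative):

```c
/* libobs speaker layouts top out at 8 channels (7.1), so anything
 * beyond that is forced down to stereo instead of failing. */
static int usable_channels(int device_channels)
{
    return device_channels > 8 ? 2 : device_channels;
}
```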
This fixes an issue primarily with filter rendering: when capturing
windows and displays, their alpha channel is almost always 0, causing
the image to be unintentionally invisible. The original fix for many
of these sources was just to turn off blending, which is fine if
you're not rendering any filters, but filters render to render targets
first, and that lack of alpha ends up carrying over into the final
image.
This doesn't apply to the mac captures because macOS actually seems to
set the alpha channel to 1.
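For illustration, the blending-off workaround looks roughly like this
in the libobs graphics API, along with why it breaks down once a
filter is involved (a sketch, not the exact code):

```c
#include <obs-module.h>

static void draw_capture(gs_texture_t *tex)
{
    /* Fine when drawing straight to the screen: with blending off,
     * the texture's 0 alpha is simply ignored. */
    gs_enable_blending(false);
    obs_source_draw(tex, 0, 0, 0, 0, false);
    gs_enable_blending(true);

    /* With a filter active, however, this draw lands in a render
     * target first; the 0 alpha is written into that target and
     * still multiplies out the final image. */
}
```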
The code specific to Windows:
- helps convert `BSTR` instances to `std::string`s (along the lines of
the sketch below)
- provides a Windows COM-specific implementation of
`CreateDeckLinkDiscoveryInstance`
- aliases CFUUIDGetUUIDBytes, CFUUIDBytes, and IUnknownUUID (the Linux
SDK does this, but for some reason the Windows SDK does not)
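A sketch of such a conversion in plain C (the actual helper returns a
`std::string`; the name here is illustrative):

```c
#include <windows.h>
#include <stdlib.h>

/* Converts a BSTR (UTF-16) to a heap-allocated UTF-8 string;
 * the caller frees the result. */
static char *bstr_to_utf8(BSTR str)
{
    int len = WideCharToMultiByte(CP_UTF8, 0, str, -1, NULL, 0,
                                  NULL, NULL);
    char *out = len > 0 ? malloc((size_t)len) : NULL;
    if (out)
        WideCharToMultiByte(CP_UTF8, 0, str, -1, out, len, NULL, NULL);
    return out;
}
```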
Some changes were made to the stock DeckLink SDK:
- removed all references to legacy API headers in DeckLinkAPI.idl
- removed all instances of `importlib("stdole2.tlb");`