Add the include directories found by cmake to the jack plugin.
This allows the plugin to compile when the jack headers are found
in a directory that is not normally in the compiler's search path
(e.g. /usr/local/include).
Add the include directories found by cmake to the v4l2 plugin.
This allows the plugin to compile when the v4l2 headers are found
in a directory that is not normally in the compiler's search path
(e.g. /usr/local/include).
Add the include directories found by cmake to the pulseaudio plugin.
This allows the plugin to compile when the pulseaudio headers are
found in a directory that is not normally in the compiler's search
path (e.g. /usr/local/include).
Fix build errors for older versions of the api where
VIDIOC_ENUM_DV_TIMINGS was defined but V4L2_IN_CAP_DV_TIMINGS was not.
I was under the impression that they were added at the same time, but
apparently I was wrong there.
Thanks to kmoore@FreeBSD.org for spotting this on FreeBSD.
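A minimal sketch of the guard this implies; the feature macro name is
hypothetical:

#include <linux/videodev2.h>

/* Older headers can define VIDIOC_ENUM_DV_TIMINGS without
 * V4L2_IN_CAP_DV_TIMINGS, so gate the feature on both symbols.
 * The macro name below is hypothetical. */
#if defined(VIDIOC_ENUM_DV_TIMINGS) && defined(V4L2_IN_CAP_DV_TIMINGS)
#define HAVE_DV_TIMINGS 1
#else
#define HAVE_DV_TIMINGS 0
#endif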
In the settings, if the default container is selected, the format
becomes null. If it is null, the audio/video codec ids should not be
set on the output format, as they will both be AV_CODEC_ID_NONE,
causing a context with no streams specified to be created (error).
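A sketch of the resulting guard, with hypothetical names for the
helper and its parameters (in FFmpeg of that era the AVOutputFormat
fields were writable):

#include <libavformat/avformat.h>

/* Hypothetical helper: only override the muxer's default codec ids
 * when a container was explicitly selected; otherwise both ids would
 * be AV_CODEC_ID_NONE and no streams would be created. */
static void apply_codec_ids(AVOutputFormat *oformat, const char *format_name,
			    enum AVCodecID vcodec, enum AVCodecID acodec)
{
	if (!format_name)
		return;

	oformat->video_codec = vcodec;
	oformat->audio_codec = acodec;
}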
Add compatibility with older versions of the api by not failing to
build when VIDIOC_ENUM_DV_TIMINGS is missing. Older versions of the
api had a different system to get dv-timing presets, which was
replaced by the current enumeration system with Linux 3.4.
This allows the plugin to be built against older versions of the api
by disabling the enumeration support, thus reducing the functionality
for some devices.
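A sketch of how the enumeration path can be compiled out; the helper
name is hypothetical:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

#ifdef VIDIOC_ENUM_DV_TIMINGS
/* Enumeration-based dv-timing discovery (Linux 3.4+); compiled out
 * entirely when building against older headers. */
static int enum_dv_timing(int fd, struct v4l2_enum_dv_timings *timings,
			  unsigned int index)
{
	memset(timings, 0, sizeof(*timings));
	timings->index = index;
	return ioctl(fd, VIDIOC_ENUM_DV_TIMINGS, timings);
}
#endif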
Improve compatibility with older versions of the api by not requiring
V4L2_CAP_DEVICE_CAPS. If we don't have this, we fall back to using the
capabilities member for the whole device instead of the device_caps
member for the currently selected subdevice, just as we would if the
device did not support it.
The new device_caps field was introduced with Linux 3.3.
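A minimal sketch of the fallback, assuming caps was filled in by
VIDIOC_QUERYCAP; the helper name is hypothetical:

#include <stdint.h>
#include <linux/videodev2.h>

/* device_caps (Linux 3.3+) describes the opened subdevice; older
 * kernels only fill in the whole-device capabilities member. */
static uint32_t effective_caps(const struct v4l2_capability *caps)
{
	if (caps->capabilities & V4L2_CAP_DEVICE_CAPS)
		return caps->device_caps;

	return caps->capabilities;
}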
Add BGRX and BGRA as supported video formats, since obs can handle them
directly. I unfortunately missed those when I initially wrote this
mapping, since my webcam does not offer those formats.
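A sketch of the added mapping; the exact V4L2 pixelformat constants
used by the plugin and the helper name are assumptions here:

#include <stdint.h>
#include <linux/videodev2.h>
#include <media-io/video-io.h> /* obs video_format enum */

/* Map 32-bit BGR layouts straight to the obs formats, avoiding any
 * conversion step. */
static enum video_format bgr_format(uint32_t pixfmt)
{
	switch (pixfmt) {
	case V4L2_PIX_FMT_XBGR32:
		return VIDEO_FORMAT_BGRX;
	case V4L2_PIX_FMT_ABGR32:
		return VIDEO_FORMAT_BGRA;
	default:
		return VIDEO_FORMAT_NONE;
	}
}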
API Changed (in struct obs_encoder_info):
----------------------------------------
bool (*get_audio_info)(void *data, struct audio_convert_info *info);
bool (*get_video_info)(void *data, struct video_scale_info *info);
To:
----------------------------------------
void (*get_audio_info)(void *data, struct audio_convert_info *info);
void (*get_video_info)(void *data, struct video_scale_info *info);
The encoder video/audio information callbacks no longer need to
manually query the libobs video/audio information; that information
is now passed in via the parameter, which the callbacks can modify.
The refactor that reduces boilerplate in the encoder video/audio
information callbacks also removes the need for their return values, so
change the return types to void.
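For illustration, a callback under the new signature might look like
this; the function name and the format choice are hypothetical:

#include <obs-module.h>

/* The libobs defaults arrive pre-filled in *info; the callback just
 * tweaks what it needs and returns nothing. */
static void my_encoder_video_info(void *data, struct video_scale_info *info)
{
	UNUSED_PARAMETER(data);
	info->format = VIDEO_FORMAT_NV12; /* e.g. force NV12 input */
}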
I'm adding this option because there are legitimate
cases where a device may flip the output unexpectedly (such as the
Datapath VisionDVI-DL running in RGB video format), and that a user may
want to be able to view the source in a projector or source properties
without the image being inverted.
My original line of thinking was that users could just use a transform to
flip the image, but I felt this problem impacts rendering everywhere,
such as in the projector and in the source properties, so having it as
an option in the source itself feels like the best way to ensure that a
user can get it to render everywhere properly.
Check the actual name of the codec before applying an x264-specific
preset so we don't encounter an "Invalid argument" error when using
other h264 encoders in FFmpeg (such as NVEnc).
Closes jp9000/obs-studio#412
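A sketch of the check with assumed variable names, using FFmpeg's
public API:

#include <string.h>
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>

/* Only libx264 understands x264 preset names; setting one on another
 * h264 encoder (e.g. NVEnc) fails with "Invalid argument". */
static void apply_x264_preset(AVCodecContext *ctx, const AVCodec *codec,
			      const char *preset)
{
	if (strcmp(codec->name, "libx264") == 0)
		av_opt_set(ctx->priv_data, "preset", preset, 0);
}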
Changes:
- Prevent concurrent calls to EnumDevices (resolves a crash with
  some device filters, such as the XCAPTURE-1, when multiple dshow
  sources are active; see the sketch below)
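A minimal sketch of the serialization, assuming a process-wide lock
around the enumeration call (names hypothetical):

#include <windows.h>

static CRITICAL_SECTION enum_cs; /* InitializeCriticalSection at init */

static void enum_devices_serialized(void)
{
	EnterCriticalSection(&enum_cs);
	/* ... enumerate DirectShow device filters here ... */
	LeaveCriticalSection(&enum_cs);
}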
Adds the following changes:
- Prioritize YUV formats over non-YUV formats for performance and to
  prevent intermediary filters (see the sketch after this list)
- Directly connect filters when possible to avoid intermediary filters
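A sketch of the prioritization idea with hypothetical types; the real
code operates on DirectShow media types:

#include <stdbool.h>

struct format_candidate {
	bool is_yuv;
	int device_priority; /* device-reported preference */
};

/* Sort YUV formats ahead of non-YUV ones so no intermediary
 * conversion filter gets inserted into the graph. */
static int compare_candidates(const struct format_candidate *a,
			      const struct format_candidate *b)
{
	if (a->is_yuv != b->is_yuv)
		return a->is_yuv ? -1 : 1;

	return a->device_priority - b->device_priority;
}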
When frames were dropped, I-frames were dropped as well, which can
mess with the keyframe calculation of certain services that depend on
I-frames in their output protocol (such as HLS).
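A sketch of the rule with hypothetical types: under congestion, drop
only non-I-frames so keyframe spacing survives:

#include <stdbool.h>

struct enc_packet {
	bool keyframe;
	/* ... */
};

static bool should_drop(const struct enc_packet *pkt, bool congested)
{
	return congested && !pkt->keyframe;
}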
The kCGDisplayStreamShowCursor option used with the dictionary does not
work if you assign @true or @false to it. After some testing, it needs
to point to the id cast of either kCFBooleanTrue or kCFBooleanFalse in
order for it to work properly.
If it doesn't use either of those values, the display stream seems to
use its internal default, which is visible on 10.8 and 10.9 and
invisible on 10.10+; this would explain why people on 10.10 couldn't
get the cursor to capture.
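A sketch of the equivalent setup in plain CoreFoundation (the real
code is Objective-C, but the key point is passing
kCFBooleanTrue/kCFBooleanFalse rather than a boolean literal):

#include <stdbool.h>
#include <CoreFoundation/CoreFoundation.h>
#include <CoreGraphics/CoreGraphics.h>

static CFDictionaryRef stream_options(bool show_cursor)
{
	const void *keys[] = {kCGDisplayStreamShowCursor};
	const void *values[] = {show_cursor ? kCFBooleanTrue
					    : kCFBooleanFalse};

	return CFDictionaryCreate(kCFAllocatorDefault, keys, values, 1,
				  &kCFTypeDictionaryKeyCallBacks,
				  &kCFTypeDictionaryValueCallBacks);
}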
Some formats (like WMV) would send out audio packets that had
channels set but did not specify a channel layout. The solution is to
stop relying on the channel layout to get the channel count and
instead read it directly from the FFmpeg audio frame.
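A sketch of the fix; frame->channels was the field FFmpeg offered at
the time (newer FFmpeg exposes this as frame->ch_layout.nb_channels):

#include <libavutil/frame.h>

/* Take the channel count straight from the decoded frame instead of
 * deriving it from a possibly-unset channel_layout. */
static int frame_channel_count(const AVFrame *frame)
{
	return frame->channels;
}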
If capture starts too quickly, the file mapping will return error 2
(ERROR_FILE_NOT_FOUND), and it would then reset the capture and try
again.
Sometimes this would result in long intervals where it wouldn't capture.
This fixes the issue by simply making game capture retry if file mapping
returns error number 2.
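A sketch of the retry check with assumed names:

#include <stdbool.h>
#include <windows.h>

/* OpenFileMapping failing with ERROR_FILE_NOT_FOUND (2) means the
 * hook has not created the shared memory yet, so schedule a retry
 * instead of resetting the capture. */
static HANDLE open_hook_mapping(const wchar_t *name, bool *retry)
{
	HANDLE map = OpenFileMappingW(FILE_MAP_ALL_ACCESS, FALSE, name);

	if (!map && GetLastError() == ERROR_FILE_NOT_FOUND)
		*retry = true;

	return map;
}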