Add the relevant header file needed on FreeBSD and use yet another
ifdef to call pthread_set_name_np, as the function name differs from
the ones used on other platforms.
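
A minimal sketch of the resulting guards, assuming a hypothetical
set_thread_name helper (glibc additionally needs _GNU_SOURCE for its
variant):

#include <pthread.h>
#if defined(__FreeBSD__)
#include <pthread_np.h>
#endif

/* hypothetical helper: set the name of the calling thread */
static void set_thread_name(const char *name)
{
#if defined(__APPLE__)
	pthread_setname_np(name);
#elif defined(__FreeBSD__)
	pthread_set_name_np(pthread_self(), name);
#else
	pthread_setname_np(pthread_self(), name);
#endif
}
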
Add a definition on FreeBSD to enable getline in stdio, as it is not
(yet?) available by default. According to the manpage, getline was a
GNU extension but was standardized in POSIX.1-2008.
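
Assuming the feature-test macro in question is FreeBSD's
_WITH_GETLINE, the guard would look roughly like:

#ifdef __FreeBSD__
#define _WITH_GETLINE /* expose getline() in stdio.h */
#endif

#include <stdio.h>
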
Always use -fPIC when not on WIN32 or APPLE, and not just with gcc.
This allows building obs with clang on Linux and FreeBSD without
explicitly passing -fPIC as a compiler flag to cmake.
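
In CMake terms the change amounts to something like this sketch (the
real CMakeLists structure may differ):

if(NOT WIN32 AND NOT APPLE)
	set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fPIC")
	set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fPIC")
endif()
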
Fix build errors for older versions of the api where
VIDIOC_ENUM_DV_TIMINGS was defined but V4L2_IN_CAP_DV_TIMINGS was not.
I was under the impression that they were added at the same time, but
apparently I was wrong there.
Thanks to kmoore@FreeBSD.org for spotting this on FreeBSD.
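
A sketch of the resulting guard, which only compiles the dv-timing
code when both macros are actually available:

#if defined(VIDIOC_ENUM_DV_TIMINGS) && defined(V4L2_IN_CAP_DV_TIMINGS)
/* dv-timing enumeration is available and the input capability flag
 * can be checked, so the feature can be compiled in */
#endif
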
In the settings, if you select the default container, the format
becomes null. If it is null, the audio/video codec ids should not be
set on the output format, as they will both be AV_CODEC_ID_NONE,
causing a context with no streams specified to be created (an error).
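
A sketch of the fix, with hypothetical names for the config fields:

/* only force codec ids when a specific container was selected;
 * otherwise keep FFmpeg's defaults for the guessed format */
if (config.format_name != NULL) {
	output->oformat->video_codec = config.video_codec_id;
	output->oformat->audio_codec = config.audio_codec_id;
}
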
Add compatibility with older versions of the api by not failing to
build when VIDIOC_ENUM_DV_TIMINGS is missing. In older versions of the
api there was a different system to get dv-timing presets, which was
replaced by the current enumeration system with Linux 3.4.
This allows the plugin to be built against older versions of the api
by disabling the enumeration support, thus reducing the functionality
for some devices.
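
The guarded enumeration might look roughly like this sketch
(property-list handling omitted, helper name hypothetical):

#ifdef VIDIOC_ENUM_DV_TIMINGS
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* enumerate the dv timings supported by the device */
static void enum_dv_timings(int fd)
{
	struct v4l2_enum_dv_timings timings = {0};

	while (ioctl(fd, VIDIOC_ENUM_DV_TIMINGS, &timings) == 0) {
		/* ... add timings.timings to the property list ... */
		timings.index++;
	}
}
#endif
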
Improve compatibility with older versions of the api by not requiring
V4L2_CAP_DEVICE_CAPS. If we don't have this, we fall back to using the
capabilities member for the whole device instead of the device_caps
member for the currently selected subdevice, just as we would if the
device did not support this.
The new device_caps field was introduced with Linux 3.3.
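
A sketch of the fallback, using the real v4l2_capability members:

#include <stdint.h>
#include <linux/videodev2.h>

/* use the per-subdevice caps when available, otherwise fall back to
 * the capabilities of the device as a whole */
static uint32_t get_caps(const struct v4l2_capability *cap)
{
#ifdef V4L2_CAP_DEVICE_CAPS
	if (cap->capabilities & V4L2_CAP_DEVICE_CAPS)
		return cap->device_caps;
#endif
	return cap->capabilities;
}
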
Add BGRX and BGRA as supported video formats, since obs can handle
them directly. I unfortunately missed those when I initially wrote
this mapping, due to my webcam not offering those formats.
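
The extended mapping could look roughly like this (helper name
hypothetical; the exact fourcc/format pairings depend on the byte
order libobs expects):

/* map a V4L2 pixel format to the corresponding libobs video format */
static enum video_format v4l2_to_obs_format(uint32_t fourcc)
{
	switch (fourcc) {
	case V4L2_PIX_FMT_YUYV:   return VIDEO_FORMAT_YUY2;
	case V4L2_PIX_FMT_BGR32:  return VIDEO_FORMAT_BGRX;
	case V4L2_PIX_FMT_ABGR32: return VIDEO_FORMAT_BGRA;
	default:                  return VIDEO_FORMAT_NONE;
	}
}
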
Non-NV12 video formats are primarily intended for recording. For
streaming, if the libobs color format is not set to NV12, the video
frames will most likely have to be converted to NV12, which costs
extra CPU. Because of that, it's important to warn the user of the
potential extra CPU usage that may be required when streaming.
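
The check behind such a warning can be as simple as this sketch (the
UI call is hypothetical):

struct obs_video_info ovi;

/* warn if frames would need CPU-side conversion to NV12 */
if (obs_get_video_info(&ovi) && ovi.output_format != VIDEO_FORMAT_NV12)
	show_nv12_warning();
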
API Changed (in struct obs_encoder_info):
----------------------------------------
bool (*get_audio_info)(void *data, struct audio_convert_info *info);
bool (*get_video_info)(void *data, struct video_scale_info *info);
To:
----------------------------------------
void (*get_audio_info)(void *data, struct audio_convert_info *info);
void (*get_video_info)(void *data, struct video_scale_info *info);
The encoder video/audio information callbacks no longer need to
manually query the libobs video/audio information; that information is
now passed via the parameter, which the callbacks can modify.
The refactor that reduces boilerplate in the encoder video/audio
information callbacks also removes the need for their return values, so
change the return types to void.
I realized that the get_video_info and get_audio_info encoder callbacks
always have to manually query the libobs audio/video information.
This fixes that problem by passing the libobs video/audio information in
the structures passed to those callbacks so they don't have to query it
each time, reducing needless boilerplate code for encoders.
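
Under the new signatures a callback reduces to something like this
sketch (function name hypothetical):

/* the info struct arrives pre-filled with the current libobs video
 * settings; the callback only overrides what it needs */
static void my_encoder_video_info(void *data, struct video_scale_info *info)
{
	(void)data;
	info->format = VIDEO_FORMAT_NV12; /* request NV12 input */
}
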
Adds the ability to hint to encoders which format should be used.
This is particularly useful if libobs is currently operating in planar
4:4:4, but you want to force an encoder used for streaming to convert to
NV12 to prevent streaming issues.
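
Assuming the hint is exposed as obs_encoder_set_preferred_video_format,
forcing NV12 for a streaming encoder would look like:

/* hint that the streaming encoder should be fed NV12 frames */
obs_encoder_set_preferred_video_format(stream_encoder, VIDEO_FORMAT_NV12);
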
This allows using NV12, I420, or RGB output video formats. This
option sets the format in which obs itself outputs frames.
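
Selecting the output format programmatically might look like this
sketch, starting from the current settings:

struct obs_video_info ovi;

/* fetch the current video settings and switch the output to I420 */
if (obs_get_video_info(&ovi)) {
	ovi.output_format = VIDEO_FORMAT_I420;
	obs_reset_video(&ovi);
}
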
It's important to note that this is only ideal for specific FFmpeg
encoders that support the desired video format; for example, if you use
RGB and use the huffyuv encoder, huffyuv will now properly output in RGB
instead of YUV NV12/I420.
I420 is useful for eliminating the NV12->I420 conversion for the
AVerMedia encoders, as AVerMedia encoders only support I420 input.
A second, even more important note about RGB: if the encoder does not
support the format you are using, it will be converted on the CPU to a
format the encoder supports as it's encoded. For example, setting the
obs output format to RGB and then using x264 is futile and will use
needlessly more CPU than if you had simply left obs set to NV12, which
is the most common and ideal format for x264.
In the future, native output of other YUV formats might be implemented
(such as YUV 4:2:2).
Fixes a crash that could happen if any of the mutexes are used in the
create callback, or before the obs_source_init function is called.
I'm not sure how this function order slipped by, because it seems
fairly obvious that these mutexes should be created before the create
callback.
I had this crash happen to me when creating a WASAPI output source:
the create callback of the WASAPI source creates a thread which
outputs audio, and that thread managed to call obs_source_output_audio
before the obs_source_init function was called, which in turn caused
it to try to use a null mutex.
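
The corrected ordering, sketched with hypothetical mutex names:

/* create the mutexes inside obs_source_init before the source's
 * create callback can run and use them from another thread */
if (pthread_mutex_init(&source->filter_mutex, NULL) != 0)
	return false;
if (pthread_mutex_init(&source->audio_mutex, NULL) != 0)
	return false;

/* ...only after this is info->create(settings, source) called */
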
I'm putting this option in because there are legitimate cases where a
device may flip the output unexpectedly (such as the Datapath
VisionDVI-DL running in RGB video format), and a user may want to be
able to view the source in a projector or in the source properties
without the image being inverted.
My original line of thinking was that they could just use a transform
to flip the image, but I felt this problem impacts rendering
everywhere, such as in the projector and in the source properties, so
having it as an option in the source itself feels like the best way to
ensure that a user can get it to render properly everywhere.