If the sample format used by PulseAudio cannot be converted into an
OBS audio format, it is handled as AUDIO_FORMAT_UNKNOWN, which does not
result in a proper audio recording. So instead we request a format that
OBS supports from PulseAudio and let it do the format conversion.
The format PA_SAMPLE_S24_32LE is a 24 bit audio format stored in 32 bit
integers, not a 32 bit audio format, so it should not be mapped to
AUDIO_FORMAT_32BIT.
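A minimal sketch of such a mapping; the helper name
pulse_to_obs_audio_format and the exact set of handled formats are
illustrative assumptions, not the plugin's actual code:

    #include <pulse/sample.h>
    #include <obs.h>

    /* Map a PulseAudio sample format to an OBS audio format.  Anything
     * OBS cannot represent falls through to AUDIO_FORMAT_UNKNOWN, which
     * is why the plugin requests a supported format up front instead. */
    static enum audio_format pulse_to_obs_audio_format(pa_sample_format_t format)
    {
        switch (format) {
        case PA_SAMPLE_U8:
            return AUDIO_FORMAT_U8BIT;
        case PA_SAMPLE_S16LE:
            return AUDIO_FORMAT_16BIT;
        case PA_SAMPLE_S32LE:
            return AUDIO_FORMAT_32BIT;
        case PA_SAMPLE_FLOAT32LE:
            return AUDIO_FORMAT_FLOAT;
        default:
            /* PA_SAMPLE_S24_32LE is deliberately not mapped to
             * AUDIO_FORMAT_32BIT: it carries 24 bit samples inside
             * 32 bit integers. */
            return AUDIO_FORMAT_UNKNOWN;
        }
    }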
Before, it was generating timestamps from the system time for each new
segment of audio data, and it was also subtracting the pulse latency
from that timestamp, an adjustment that seems to have been meant for
use with the pulse audio time rather than with system time.
Now, it just uses system time for timestamps. Still might not be
totally perfect, but seems to be much better than it was.
This also removes the latency calculation. Latency is no longer used
because we're not using pulseaudio timing.
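A sketch of the simplified timestamping, using the libobs helper
os_gettime_ns(); the surrounding function and its parameters are
illustrative, not the plugin's actual code:

    #include <util/platform.h>
    #include <obs.h>

    /* Called for each new segment of audio data read from pulse.  The
     * timestamp is simply the current system time; no pulse latency is
     * subtracted anymore. */
    static void push_audio_segment(obs_source_t *source,
                                   struct obs_source_audio *audio)
    {
        audio->timestamp = os_gettime_ns();
        obs_source_output_audio(source, audio);
    }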
Allows adding Syphon servers as sources, and provides game-capture if
used with SyphonInject (specifically, the scripting additions have to be
installed for SyphonInject to work from within OBS).
Typedef pointers are unsafe. If you do:
typedef struct bla *bla_t;
then you cannot use it with const, such as: const bla_t, because the
const applies to the pointer itself rather than to the underlying data.
I admit this was a fundamental mistake that must be corrected.
All typedefs that were pointer types now have the pointer removed from
the type itself, and the pointer is written out wherever the type is
actually used as a variable/parameter/return value instead.
This does not break ABI though, which is pretty nice.
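To illustrate the pitfall with a small, self-contained example (the
names are made up for illustration):

    /* Old style: the pointer is hidden inside the typedef. */
    typedef struct bla *bla_ptr_t;

    /* New style: the typedef names the struct; the pointer is written
     * at the use site. */
    typedef struct bla bla_t;

    /* "const bla_ptr_t p" means "struct bla *const p" -- the pointer is
     * const, but the pointed-to data stays mutable. */
    void old_style(const bla_ptr_t p);

    /* "const bla_t *p" really is a pointer to const data. */
    void new_style(const bla_t *p);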
This adds a check whether the video format delivered by the device is
compatible with OBS. An incompatible format can occur either if the
"Leave unchanged" option is selected for the video format, or if the
driver simply overrides the requested video format.
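A sketch of such a check; v4l2_to_obs_video_format is assumed here to be
a helper that maps V4L2 pixel formats to OBS video formats, and the
surrounding function is illustrative:

    #include <linux/videodev2.h>
    #include <util/base.h>
    #include <obs.h>

    /* Verify that the format the driver actually gave us can be handled
     * by obs; abort the capture setup otherwise. */
    static bool v4l2_format_supported(const struct v4l2_format *fmt)
    {
        enum video_format obs_format =
            v4l2_to_obs_video_format(fmt->fmt.pix.pixelformat);

        if (obs_format == VIDEO_FORMAT_NONE) {
            blog(LOG_ERROR, "unsupported pixel format");
            return false;
        }
        return true;
    }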
Due to the refactoring of the update function, the separation between
data members that may only be accessed from inside the capture thread
and those accessed from outside it is no longer needed.
The old implementation of this function assumed that there would be some
settings that could be changed on the fly without restarting the
capture. That was actually never used for any setting.
Since the helper function also needs to pack/unpack the resolution, the
pack/unpack functions were moved to the helper library and prefixed with
v4l2_ in order to avoid possible collisions.
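The packing itself just folds two integers into one 64-bit value so a
resolution fits into a single property list item; a sketch with the
prefixed names (the exact signatures in the helper library may differ):

    #include <stdint.h>

    /* Pack two integers (e.g. width and height) into one value so that
     * a resolution can be stored as a single int property. */
    static inline int64_t v4l2_pack_tuple(int32_t a, int32_t b)
    {
        return ((int64_t)a << 32) | (int64_t)(uint32_t)b;
    }

    static inline void v4l2_unpack_tuple(int32_t *a, int32_t *b,
                                         int64_t packed)
    {
        *a = (int32_t)(packed >> 32);
        *b = (int32_t)(packed & 0xffffffff);
    }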
This was added at a time when the source properties dialog did not pop
up automatically on source creation. Now, when the properties are
displayed, the first device in the device list is selected by default
if none was already specified by the source settings.
This will make the code cleaner and also save one redundant round of
device enumeration.
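A sketch of that default selection, using the libobs properties API; the
property name "device_id" and the wrapper function are illustrative
assumptions:

    #include <obs-module.h>

    /* If the settings do not name a device yet, fall back to the first
     * entry of the already-populated device list property. */
    static void select_default_device(obs_properties_t *props,
                                      obs_data_t *settings)
    {
        const char *device = obs_data_get_string(settings, "device_id");
        obs_property_t *list = obs_properties_get(props, "device_id");

        if ((!device || !*device) &&
            obs_property_list_item_count(list) > 0)
            obs_data_set_string(settings, "device_id",
                                obs_property_list_item_string(list, 0));
    }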
The capabilities flags that were used previously describe all the
capabilities the physical device offers. This would cause devices that
are accessible through multiple device nodes to show up once for each
node, even though only one of the nodes might actually offer the needed
video capture capability.
If the device has multiple nodes, the V4L2_CAP_DEVICE_CAPS flag may be
set, in which case the device_caps field is filled with only those
capabilities that apply to the specific node that was opened.
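A sketch of the per-node check with the standard V4L2 VIDIOC_QUERYCAP
ioctl; the wrapper function itself is illustrative:

    #include <linux/videodev2.h>
    #include <sys/ioctl.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Return true if the opened node itself offers video capture.
     * Prefer device_caps (per node) over capabilities (whole device)
     * when the driver sets V4L2_CAP_DEVICE_CAPS. */
    static bool v4l2_node_supports_capture(int fd)
    {
        struct v4l2_capability cap;
        uint32_t caps;

        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0)
            return false;

        caps = (cap.capabilities & V4L2_CAP_DEVICE_CAPS)
                   ? cap.device_caps
                   : cap.capabilities;

        return (caps & V4L2_CAP_VIDEO_CAPTURE) != 0;
    }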
This prevents certain issues I've encountered with devices that expect
to shut down in the same thread they started up in, as well as a number
of other issues, such as with the configuration dialogs.
The configuration dialogs require that a message loop be present, which
was not the case previously because everything ran in the video thread,
which has no Windows-specific code.
Configuration/crossbar/etc dialogs will now execute correctly.
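The dedicated thread needs the standard Win32 message pump so those
dialogs can run; a minimal sketch (the thread body around it is
illustrative):

    #include <windows.h>

    /* Pump window messages on the DirectShow thread so that modal
     * configuration/crossbar dialogs opened from it can be serviced. */
    static void run_message_loop(void)
    {
        MSG msg;

        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }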
This adds support for dynamic format changes. Format, resolution, and
sample rate can all now be changed by the current DirectShow device on
the fly.
On an asynchronous video source, the source resolution is automatically
handled by the core, and set to the resolution of the last video data
that was sent. There is no need to manually specify a resolution.
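This works because the frame dimensions travel with each frame submitted
to the core via obs_source_output_video(); a sketch, where the wrapper
function and the fixed pixel format are illustrative:

    #include <stdint.h>
    #include <obs.h>

    /* The width/height of each submitted frame is what the core uses as
     * the source resolution, so on-the-fly changes need no extra setup. */
    static void output_frame(obs_source_t *source, uint8_t *data,
                             uint32_t linesize, uint32_t width,
                             uint32_t height, uint64_t timestamp)
    {
        struct obs_source_frame frame = {0};

        frame.format = VIDEO_FORMAT_YUY2;
        frame.width = width;
        frame.height = height;
        frame.timestamp = timestamp;
        frame.data[0] = data;
        frame.linesize[0] = linesize;

        obs_source_output_video(source, &frame);
    }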
When setting up the capture, the plugin will now query pulse for the
default format of the specific source instead of the server.
This is useful if a source has different settings than the server
defaults, e.g. when the source is an output with 5.1 surround sound and
the microphone input is mono, while the server defaults to stereo sound.
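A sketch of that per-source query with the PulseAudio introspection API;
waiting on the operation via the threaded mainloop and error handling
are omitted, and the wrapper functions are illustrative:

    #include <pulse/pulseaudio.h>

    /* The callback receives the sample spec of the specific source
     * rather than the server-wide defaults. */
    static void source_info_cb(pa_context *c, const pa_source_info *info,
                               int eol, void *userdata)
    {
        pa_sample_spec *spec = userdata;

        if (eol != 0 || !info)
            return;

        *spec = info->sample_spec; /* format, rate, channel count */
    }

    /* Instead of pa_context_get_server_info(), ask about the source. */
    static void query_source_format(pa_context *c, const char *source_name,
                                    pa_sample_spec *out_spec)
    {
        pa_operation *op = pa_context_get_source_info_by_name(
            c, source_name, source_info_cb, out_spec);
        if (op)
            pa_operation_unref(op);
    }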