This adds a helper function to disable/enable all properties. It is
used in the device-selected callback to enable or disable the properties
depending on whether the selected device is available.
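A minimal sketch of such a helper against the libobs properties API (the
function name is illustrative):

    /* Walk every property in the list and enable or disable it. */
    static void props_set_enabled(obs_properties_t *props, bool enabled)
    {
            if (!props)
                    return;

            for (obs_property_t *prop = obs_properties_first(props);
                 prop; obs_property_next(&prop))
                    obs_property_set_enabled(prop, enabled);
    }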
This adds a check to the device enumeration for whether the currently
selected device is present. If it is not, the device is still added to
the list, but its entry is disabled.
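A rough sketch of that case, assuming the list property and the saved
device name/id are already at hand (variable names are illustrative):

    /* The saved device was not found during enumeration: append it
     * anyway so the selection is preserved, but disable the entry. */
    size_t idx = obs_property_list_add_string(prop, device_name, device_id);
    obs_property_list_item_disable(prop, idx, true);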
This moves the calls to the property modified functions so the old
handler can close the device. Otherwise this would cause the device
to be opened multiple times.
This replaces the variable in the source struct that handles the
timestamp offset with a local one in the capture thread.
This change is mostly to make the code more readable.
This moves the enabling/resetting of the file descriptors inside the
capture loop so it is done before each select call. Why this even worked
before is unclear, but doing it the *right* way seems to reduce latency.
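The reworked pattern looks roughly like this; since select() modifies the
fd_set in place, it has to be rebuilt before every call (names are
illustrative):

    while (os_event_try(data->event) == EAGAIN) {
            fd_set fds;
            struct timeval tv = { .tv_sec = 1 };

            FD_ZERO(&fds);
            FD_SET(data->dev, &fds);

            if (select(data->dev + 1, &fds, NULL, NULL, &tv) <= 0)
                    continue;

            /* ... dequeue and output the frame ... */
    }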
Transparency is now disabled by default, so that alpha values from
injected back buffers don't propagate to OBS (e.g. Minecraft doesn't
render properly in OBS unless "Allow Transparency" is disabled).
When a new device starts up, rebase timestamps so that the first frame
starts from 0. This prevents the internal source timestamp handling from
trying to buffer new frames against the new device's timestamp base when
the device changes.
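A minimal sketch of the rebasing, assuming a per-capture base timestamp
that is cleared whenever the device is (re)started (names are
illustrative):

    if (!base_ts)
            base_ts = frame_ts;

    out_frame.timestamp = frame_ts - base_ts;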
Due to potential driver issues with certain devices, the timestamps are
not always reliable. This option allows using the time at which the
frame was received as the timestamp instead.
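Roughly, the option boils down to choosing the timestamp source per frame
(names are illustrative; os_gettime_ns() is the libobs system clock
helper):

    if (data->use_sys_timestamp)
            out_frame.timestamp = os_gettime_ns();
    else
            out_frame.timestamp =
                    (uint64_t)buf.timestamp.tv_sec * 1000000000ULL +
                    (uint64_t)buf.timestamp.tv_usec * 1000ULL;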
This reverts commit c3f4b0f018.
The obs_source_frame should not need to take flags to do this. This
shouldn't be a setting associated with the frame, but rather a setting
associated with the source itself. This was the wrong approach to
solving this particular problem.
This reverts commit cd306d975a.
This removes the 'unbuffered' property for the time being. There should
be a better way of handling this, such as using system timestamps.
Also, the obs_source_frame::flags member needs to be removed and
replaced with something a bit more ideal.
This allows the user to select whether to use unbuffered video or not.
Unbuffered video causes frames to play back as soon as they're
received, rather than being buffered and played back
according to the timestamp value of each frame.
Add 'flags' member variable to obs_source_frame structure.
The OBS_VIDEO_UNBUFFERED flag causes the video to play back as soon as
it's received (in the next frame playback), causing it to disregard the
timestamp value for the sake of video playback (however, note that the
video timestamp is still used for audio synchronization if audio is
present on the source as well).
This is partly a convenience feature, and partly a necessity for certain
plugins (such as the linux v4l plugin) where timestamp information for
the video frames can sometimes be unreliable.
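A minimal sketch of how a source could set the flag added here (the
member was later removed again by the reverts above):

    struct obs_source_frame frame = {0};
    /* ... fill data, linesize, width, height, format ... */
    frame.timestamp = ts;
    frame.flags = OBS_VIDEO_UNBUFFERED; /* present as soon as received */
    obs_source_output_video(source, &frame);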
This adds a check to change the capture settings to use 2 channels when
a channel number is encountered that would otherwise be interpreted as
SPEAKERS_UNKNOWN.
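A rough sketch of the fallback, assuming a helper that maps channel
counts to speaker layouts (names are illustrative):

    enum speaker_layout layout = channels_to_layout(channels);
    if (layout == SPEAKERS_UNKNOWN) {
            channels = 2;
            layout = SPEAKERS_STEREO;
    }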
Because other capture methods may end up needing to share this code,
separate the window-finding source code into window-helpers.c and
window-helpers.h.
This includes a function to fill out a property list with windows, a
function to find a window based upon priority/title/class/exe, and a
function to decode the window title/class/exe strings from a window
setting string.
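Illustrative prototypes for the shared helpers (names and signatures are
approximate):

    extern void fill_window_list(obs_property_t *p,
                    enum window_search_mode mode);
    extern HWND find_window(enum window_search_mode mode,
                    enum window_priority priority,
                    const char *class, const char *title, const char *exe);
    extern void build_window_strings(const char *str,
                    char **class, char **title, char **exe);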
This adds code to set up the udev monitoring library and use the events
to detect connects/disconnects of devices.
When the currently used device is disconnected the plugin will stop
recording and clean up, so that the device node is freed up.
On reconnection of the device the plugin will use the event to
automatically start the capture again.
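The monitoring setup roughly follows the standard libudev pattern:

    struct udev *udev = udev_new();
    struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");

    udev_monitor_filter_add_match_subsystem_devtype(mon, "video4linux",
                    NULL);
    udev_monitor_enable_receiving(mon);
    int fd = udev_monitor_get_fd(mon);

    /* once fd becomes readable: */
    struct udev_device *dev = udev_monitor_receive_device(mon);
    const char *action = udev_device_get_action(dev); /* "add"/"remove" */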
If the sample format used by PulseAudio cannot be converted into an
OBS audio format, it will be handled as AUDIO_FORMAT_UNKNOWN, which will
not result in a proper audio recording. So instead we request a format
that OBS supports from PulseAudio and let it do the format conversion.
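A minimal sketch of requesting an OBS-supported format up front (the
specific values are illustrative):

    pa_sample_spec spec = {
            .format   = PA_SAMPLE_FLOAT32LE, /* maps to AUDIO_FORMAT_FLOAT */
            .rate     = 44100,
            .channels = 2,
    };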
The format PA_SAMPLE_S24_32LE is a 24-bit audio format stored in 32-bit
integers, not a 32-bit audio format, so it should not be mapped to
AUDIO_FORMAT_32BIT.
Before it was giving timestamps based upon system time for each new
segment of audio data. Also, it was subtracting pulse latency from the
audio timestamp, which seems like it was really meant for use with the
pulse audio time rather than system time.
Now, it just uses system time for timestamps. Still might not be
totally perfect, but seems to be much better than it was.
This also removes the latency calculation. Latency is no longer used
because we're not using pulseaudio timing.
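A minimal sketch of the new behaviour, using the libobs system clock
helper (obs_source_audio fields elided):

    out.timestamp = os_gettime_ns();
    obs_source_output_audio(source, &out);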
Allows adding Syphon servers as sources, and provides game-capture if
used with SyphonInject (specifically, the scripting additions have to be
installed for SyphonInject to work from within OBS).
Typedef pointers are unsafe. If you do:
typedef struct bla *bla_t;
then you cannot use it to declare constant data, such as: const bla_t,
because the const applies to the pointer itself rather than to the
underlying data. I admit this was a fundamental mistake that must
be corrected.
All typedefs that were pointer types will now have their pointers
removed from the type itself, and the pointers will be used when they
are actually used as variables/parameters/returns instead.
This does not break ABI though, which is pretty nice.
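A short illustration of the difference (type names are illustrative):

    typedef struct bla *bla_t;   /* old: the pointer is part of the typedef  */
    typedef struct bla bla_t2;   /* new: the typedef names the struct itself */

    void f(const bla_t  p);  /* const binds to the pointer: struct bla *const */
    void g(const bla_t2 *p); /* const binds to the data: const struct bla *   */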
This adds a check of whether the video format delivered by the device is
compatible with OBS. This could happen either if the "Leave unchanged" option is
selected for the video format, or if the driver simply overwrites the
requested video format.
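A rough sketch of the check, assuming a helper that translates the V4L2
pixel format reported by the driver (names are illustrative):

    enum video_format format = v4l2_to_obs_format(fmt.fmt.pix.pixelformat);
    if (format == VIDEO_FORMAT_NONE) {
            blog(LOG_ERROR, "Selected video format not supported");
            goto fail;
    }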
Due to the refactoring of the update function, the separation of data
members into those accessed only from inside or outside the capture
thread is no longer needed.
The old implementation of this function assumed that there would be some
settings that could be changed on the fly without restarting the
capture. That was actually never used for any setting.