Certain plugins/modules that are loaded on Windows may depend on
libraries located in the same directory as the module in question.
This makes it so that LoadLibrary will also search the module's own
directory for that module's dependent libraries.
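A minimal sketch of how that can be done with the Win32 loader, assuming
the plugin is loaded from an absolute path (the helper name here is
illustrative, not the actual function in the codebase):

    #include <windows.h>

    /* Hypothetical helper: load a plugin so that its dependent DLLs are
     * resolved from the plugin's own directory.
     * LOAD_WITH_ALTERED_SEARCH_PATH only alters the search order when an
     * absolute path is supplied. */
    static HMODULE load_module_with_local_deps(const wchar_t *abs_path)
    {
        return LoadLibraryExW(abs_path, NULL, LOAD_WITH_ALTERED_SEARCH_PATH);
    }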
When globbing for modules, the intent was to ignore the . and ..
directories that might also be included in the glob search. The code
was mistakenly doing a string compare on the 'path' variable, which
contains the full path, rather than on the 'file' variable, which
contains just the found file name, so the comparison always failed.
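A sketch of the corrected check, with illustrative names (the real code
compares whatever field holds the bare file name):

    #include <string.h>
    #include <stdbool.h>

    /* Illustrative helper: true for the "." and ".." entries that a glob
     * over a directory can also return.  The fix is to compare the bare
     * file name, not the full path, against these strings. */
    static bool is_dot_entry(const char *file)
    {
        return strcmp(file, ".") == 0 || strcmp(file, "..") == 0;
    }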
In case the encoder has to use a different sample rate (due to the
requested sample rate being unsupported), we need an API function that
lets the encoder query the sample rate it is actually running at.
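Something along these lines; the exact name and signature in libobs may
differ, so treat this usage sketch as illustrative:

    #include <obs.h>

    /* Illustrative usage: the encoder asks the core what rate its audio
     * line is actually running at instead of assuming the rate it
     * originally requested. */
    static void configure_encoder_audio(obs_encoder_t *encoder)
    {
        uint32_t rate = obs_encoder_get_sample_rate(encoder);
        /* set up the codec context / resampler for 'rate' here */
        (void)rate;
    }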
When an audio filter was applied to a video source that also had
accompanying audio, it would cause the video from the source to stop
rendering.
The original code was there to prevent audio-only sources from
rendering video, but I neglected to make sure that it would not apply
to filters, and thus when an audio filter was on a source with video,
the code would kill the video.
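A rough sketch of the intended guard, using the public output-flag names
but an otherwise hypothetical filter check:

    #include <obs.h>
    #include <stdbool.h>

    /* Only suppress video rendering for audio-only *sources*; a filter
     * attached to a source that has video must never trip this check.
     * The is_filter parameter is a hypothetical stand-in for however the
     * code tells filters apart from regular sources. */
    static bool should_skip_video(uint32_t output_flags, bool is_filter)
    {
        bool audio_only = (output_flags & OBS_SOURCE_AUDIO) &&
                          !(output_flags & OBS_SOURCE_VIDEO);
        return audio_only && !is_filter;
    }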
The size parameter is the size of each element, not the size of the
data. The size parameter should be 1, and the element count should be
the number of bytes.
I'm making this change because the fread/fwrite calls would fail when
the parameters were swapped.
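For reference, fread/fwrite take (ptr, element_size, element_count,
stream) and return the number of elements transferred, so the two
orderings behave very differently when checked against a byte count:

    #include <stdio.h>

    /* With fwrite(data, len, 1, file) the call returns 0 or 1, so a check
     * such as "wrote != len" reports failure even when everything was
     * written.  With (1, len) the return value is the number of bytes
     * actually written. */
    static size_t write_all(const void *data, size_t len, FILE *file)
    {
        return fwrite(data, 1, len, file);
    }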
In the (unlikely) event of multiple concurrent calls to
input_method_changed, it was possible for the log messages to appear
out of order with respect to which layout would actually be active
after the last log message.
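One way to guarantee that ordering, sketched with assumed names and a
plain pthread mutex (the actual code may use a different locking
primitive):

    #include <pthread.h>
    #include <stdio.h>

    struct input_ctx { int active_layout; };   /* illustrative type */

    static pthread_mutex_t layout_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void input_method_changed(struct input_ctx *ctx, int new_layout)
    {
        pthread_mutex_lock(&layout_mutex);
        ctx->active_layout = new_layout;
        /* log while holding the lock so the last message always matches
         * the layout that is actually active */
        printf("Using keyboard layout %d\n", new_layout);
        pthread_mutex_unlock(&layout_mutex);
    }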
Because async timestamps themselves can be susceptible to minor jitter
from certain types of inputs, increase the allowable jitter
compensation value to ensure that the rendered frame timing from async
video sources stays as close as possible to the compositor.
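A sketch of what jitter compensation means here; the threshold value and
names are assumptions, not the actual libobs constants:

    #include <stdint.h>

    #define MAX_TS_JITTER 2000000ULL   /* assumed window, in nanoseconds */

    /* If the new async timestamp lands within the jitter window of where
     * the next frame "should" be (last timestamp plus one frame interval),
     * snap it to that expected time so rendering stays locked to the
     * compositor despite small timestamp wobble. */
    static uint64_t compensate_jitter(uint64_t last_ts, uint64_t new_ts,
                                      uint64_t frame_interval)
    {
        uint64_t expected = last_ts + frame_interval;
        uint64_t diff = new_ts > expected ? new_ts - expected
                                          : expected - new_ts;
        return diff < MAX_TS_JITTER ? expected : new_ts;
    }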
When the framerate of the source is the same as the framerate of the
compositor, this (combined with the fact that clamped video timing is
now used with async video frames) helps ensure that buffered async
video sources will sync up their rendering to the compositor as
accurately as possible despite jitter from the source's timestamps.
If there is no jitter in the source's timestamps then it'll always sync
up perfectly with the compositor, thanks to clamped video timing.
When playing back buffered async frames, this reduces the probability
that new frames will be missed/skipped due to jitter in the system
timestamps.
If a buffered async source is playing at the same framerate as the
compositor and there is no jitter in the async source's timestamps, then
the async source will play back perfectly in sync with the compositor
thanks to this change, ensuring that there are no skipped or missed
frames in video playback.
The "clamped" video time is the system time per video frame that is
closest to the current system time, but always divisible by the frame
interval. For example, if the last frame system timestamp was 1600 and
the new frame is 2500, but the frame interval is 800, then the
"clamped" video time is 2400.
This clamped value is useful to get the relative system time without any
jitter.
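A worked version of that example, under the assumption that the clamp is
computed relative to the last frame's system timestamp (names here are
illustrative):

    #include <stdint.h>

    /* Closest multiple of the frame interval (measured from the last
     * frame's system timestamp) to the new frame's system time.
     * With last_ts = 1600, new_ts = 2500 and interval = 800 this
     * returns 1600 + 1 * 800 = 2400. */
    static uint64_t clamp_video_time(uint64_t last_ts, uint64_t new_ts,
                                     uint64_t interval)
    {
        uint64_t intervals = (new_ts - last_ts + interval / 2) / interval;
        return last_ts + intervals * interval;
    }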
When buffering is enabled for an async video source, sometimes minor
drift in timestamps or unexpected delays to frames can cause frames to
slowly buffer more and more in memory, in some cases eventually causing
the system to run out of memory.
The circumstances in which this can happen seem to depend on both the
computer and the devices in use. So far, the only known circumstances
in which this happens are with heavily buffered devices, such as
Hauppauge, where decoding can sometimes take too long and cause
continual frame playback delay, and thus continual buffering until
memory runs out. I've never been able to replicate it on any of my
machines, however, even after hours of testing.
This patch is a precautionary measure that puts a hard limit on the
number of async frames that can be queued at any given time, to prevent
any case where memory might continually build up for whatever reason.
If it goes over the limit, it clears the cache to reset the buffering.
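A minimal sketch of that safety valve; the cap and the types are
assumptions rather than the actual libobs internals:

    #include <stddef.h>

    #define MAX_ASYNC_FRAMES 30            /* assumed hard cap */

    struct frame_cache {
        size_t count;                      /* frames currently queued */
        /* frame storage omitted */
    };

    static void clear_cache(struct frame_cache *cache)
    {
        /* free every queued frame so buffering restarts from scratch */
        cache->count = 0;
    }

    static void enforce_cache_limit(struct frame_cache *cache)
    {
        if (cache->count > MAX_ASYNC_FRAMES)
            clear_cache(cache);
    }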
I had a user with this problem test this patch with success and positive
feedback, and the intervals between buffering resets were long enough
that it wasn't even noticeable while streaming/recording.
Ideally when decoding frames (such as from those devices), frame
dropping should be used to ensure playback doesn't incur extra delay,
although this sort of hard limit on the frame cache should still be
implemented regardless just as a safety precaution. For DirectShow
encoded devices I should just switch to faruton's libff for decoding and
enable the frame dropping options. That would probably explain why no
one has ever reported this for the media source, and why it has shown up
pretty much only with DirectShow device usage.
Found via UBSan, actual errors (addresses not pruned for illustrative purposes):
"runtime error: store to misaligned address 0x7f9a9178e84c for type
'size_t' (aka 'unsigned long'), which requires 8 byte alignment"
"runtime error: load of misaligned address 0x7f9a9140f2cf for type
'size_t' (aka 'unsigned long'), which requires 8 byte alignment"
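A common way to resolve reports like these (sketched here, not
necessarily the exact fix that was applied) is to route unaligned
loads and stores through memcpy instead of casting the pointer:

    #include <stdint.h>
    #include <string.h>

    /* Copying through memcpy lets the compiler emit safe unaligned
     * accesses instead of dereferencing a misaligned size_t pointer. */
    static size_t load_size_unaligned(const uint8_t *p)
    {
        size_t val;
        memcpy(&val, p, sizeof(val));
        return val;
    }

    static void store_size_unaligned(uint8_t *p, size_t val)
    {
        memcpy(p, &val, sizeof(val));
    }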
The screen index returned from XDefaultScreen is 0-based, and we were
decrementing it before checking whether it had reached 0 rather than
after, so the default_screen function would always end up returning
either the wrong screen or no screen at all.
When xcb_query_pointer and xcb_query_pointer_reply were called with no
valid screen, they would fail with an error, making it so that the
mouse buttons could not be properly captured as hotkeys.
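A sketch of the corrected iteration in default_screen, assuming it walks
the xcb root screens (the surrounding code may differ):

    #include <xcb/xcb.h>
    #include <stddef.h>

    /* The screen number is 0-based, so test for zero *before*
     * decrementing; decrementing first skips screen 0 and returns the
     * wrong screen (or none at all). */
    static xcb_screen_t *default_screen(xcb_connection_t *conn, int screen_num)
    {
        xcb_screen_iterator_t iter =
            xcb_setup_roots_iterator(xcb_get_setup(conn));

        for (; iter.rem; xcb_screen_next(&iter)) {
            if (screen_num == 0)
                return iter.data;
            screen_num--;
        }
        return NULL;
    }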