When the bitrate was set to 64, CoreAudio would call
complex_input_data_proc more than once, which in turn would cause
consumed bytes in the input buffer to be "freed" more than once (once
for each additional call of complex_input_data_proc, and once in
aac_encode).
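A minimal sketch of the guard pattern for this kind of bug, using
hypothetical names (encoder_state, input_proc, encode) rather than the
plugin's actual structures: the input callback only records how much
it consumed, and the erase happens exactly once afterwards.

    #include <cstddef>
    #include <vector>

    struct encoder_state {
        std::vector<unsigned char> input; // pending PCM bytes
        size_t consumed = 0;              // bytes handed out this encode call
    };

    // Stand-in for complex_input_data_proc: hand out up to `want`
    // bytes, recording (but not erasing) what was consumed.
    static size_t input_proc(encoder_state &enc, size_t want)
    {
        size_t avail = enc.input.size() - enc.consumed;
        size_t give = want < avail ? want : avail;
        enc.consumed += give;
        return give;
    }

    static void encode(encoder_state &enc)
    {
        enc.consumed = 0;
        // ... the converter may invoke input_proc() more than once ...
        input_proc(enc, 4096);
        input_proc(enc, 4096);
        // Erase consumed bytes exactly once, regardless of how many
        // callback invocations occurred.
        enc.input.erase(enc.input.begin(),
                        enc.input.begin() + enc.consumed);
    }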
In the (unlikely) event of multiple concurrent calls to
input_method_changed, it was possible for the log messages to appear
out of order with respect to which layout was actually active after
the last message was logged.
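A sketch of serializing the switch and its log line under a single
lock so the last message always matches the active layout; the names
here (layout_mutex, activate_layout) are illustrative, not the
platform code's.

    #include <cstdio>
    #include <mutex>

    static std::mutex layout_mutex;

    // Hypothetical stand-in for the real layout-switching call.
    static void activate_layout(const char *layout_name)
    {
        (void)layout_name;
    }

    void input_method_changed(const char *layout_name)
    {
        // Holding the lock across both the switch and the log line
        // keeps messages in the order the layouts become active.
        std::lock_guard<std::mutex> lock(layout_mutex);
        activate_layout(layout_name);
        printf("Input method changed to '%s'\n", layout_name);
    }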
This adds the ability to output the device's audio as desktop audio
(via the WaveOut or DirectSound audio renderers) instead of only
capturing it.
In the future we'll implement audio monitoring, which will make this
feature obsolete, but for the time being I decided to add this option
as a temporary measure so users can play the audio from their devices
via the DirectShow output.
The required audio bitrate is insignificant relative to the video
bitrate, and because a lower-quality encoder may be in use (such as
FFmpeg's AAC encoder), setting the default to 160 is more ideal for
reducing any potential quality loss.
Because async timestamps themselves can be susceptible to minor
jitter from certain types of inputs, increase the allowable jitter
compensation value to ensure that the rendered frame timing from async
video sources stays as close as possible to the compositor.
When the framerate of the source is the same as the framerate of the
compositor, this (combined with the fact that clamped video timing is
now used with async video frames) helps ensure that buffered async
video sources will sync up their rendering to the compositor as
accurately as possible despite jitter from the source's timestamps.
If there is no jitter in the source's timestamps then it'll always sync
up perfectly with the compositor, thanks to clamped video timing.
When playing back buffered async frames, this reduces the probability
that new frames will be missed/skipped due to jitter in the system
timestamps.
If a buffered async source is playing at the same framerate as the
compositor and there is no jitter in the async source's timestamps, then
the async source will play back perfectly in sync with the compositor
thanks to this change, ensuring that there are no skipped or missed
frames in video playback.
The "clamped" video time is the system time per video frame that is
closest to the current system time, but always divisible by the frame
interval. For example, if the last frame system timestamp was 1600 and
the new frame is 2500, but the frame interval is 800, then the
"clamped" video time is 2400.
This clamped value is useful to get the relative system time without any
jitter.
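A minimal sketch of that computation, assuming plain rounding to the
nearest multiple of the frame interval (the real implementation may
instead advance from the previous clamped value):

    #include <cstdint>

    // Round the new system timestamp to the nearest multiple of the
    // frame interval to get the "clamped" video time.
    static uint64_t clamp_video_time(uint64_t ts, uint64_t interval)
    {
        return ((ts + interval / 2) / interval) * interval;
    }

    // With the example above: clamp_video_time(2500, 800)
    //   -> ((2500 + 400) / 800) * 800 = 3 * 800 = 2400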
When buffering is enabled for an async video source, sometimes minor
drift in timestamps or unexpected delays to frames can cause frames to
slowly buffer more and more in memory, in some cases eventually causing
the system to run out of memory.
The circumstances in which this can happen seem to depend on both the
computer and the devices in use. So far, the only known circumstances
in which this happens are with heavily buffered devices, such as
Hauppauge cards, where decoding can sometimes take too long and cause
continual frame playback delay, and thus continual buffering until
memory runs out. I've never been able to replicate it on any of my
machines, however, even after hours of testing.
This patch is a precautionary measure that puts a hard limit on the
number of async frames that can be currently queued to prevent any case
where memory might continually build for whatever reason. If it goes
over the limit, it clears the cache to reset the buffering.
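A minimal sketch of such a hard limit; the constant, the frame type,
and the cache container are placeholders, not the actual source code:

    #include <cstddef>
    #include <deque>

    constexpr size_t MAX_ASYNC_FRAMES = 30; // placeholder limit

    struct frame { /* decoded async video frame */ };
    static std::deque<frame> async_cache;

    void cache_async_frame(frame f)
    {
        // If buffering has run away for any reason, clear the cache
        // so memory cannot build up indefinitely.
        if (async_cache.size() >= MAX_ASYNC_FRAMES)
            async_cache.clear();
        async_cache.push_back(std::move(f));
    }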
I had a user with this problem test this patch, with success and
positive feedback, and the intervals between buffering resets were
long enough that they weren't even noticeable while
streaming/recording.
Ideally when decoding frames (such as from those devices), frame
dropping should be used to ensure playback doesn't incur extra delay,
although this sort of hard limit on the frame cache should still be
implemented regardless just as a safety precaution. For DirectShow
encoded devices I should just switch to faruton's libff for decoding and
enable the frame dropping options. That would probably explain why no
one has ever reported this for the media source, and why reports come
almost exclusively from DirectShow device usage.
Ensures that the "Show Recordings" and "Remux Recordings" file menu
items will open the recordings folder from the currently active
output mode rather than always the simple output mode.
On Windows Vista/7, you cannot really use display capture efficiently
without disabling Aero, so this adds an option to settings that allows
it to be disabled, and causes it to be disabled on startup.
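A sketch of the toggle using the DWM composition API that Vista/7
expose; DwmEnableComposition has no effect on Windows 8+, so version
gating (not shown) would apply:

    #include <windows.h>
    #include <dwmapi.h>
    // link against dwmapi.lib

    // Enable or disable Aero (desktop composition) on Vista/7.
    void set_aero_enabled(bool enable)
    {
        DwmEnableComposition(enable ? DWM_EC_ENABLECOMPOSITION
                                    : DWM_EC_DISABLECOMPOSITION);
    }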
Portable mode can be enabled via command line options (--portable or -p)
or by having any of the following files present in the base directory of
a portable install:
portable_mode
obs_portable_mode
portable_mode.txt
obs_portable_mode.txt
Portable mode is omitted when OBS is built with a Unix program
structure.
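A sketch of the sentinel-file check, assuming the files listed above
sit in the base directory next to the executable:

    #include <fstream>
    #include <string>

    // Returns true if any portable-mode sentinel file exists in the
    // base directory of the install.
    static bool portable_mode_file_exists(const std::string &base_dir)
    {
        static const char *const names[] = {
            "portable_mode",
            "obs_portable_mode",
            "portable_mode.txt",
            "obs_portable_mode.txt",
        };

        for (const char *name : names) {
            std::ifstream f(base_dir + "/" + name);
            if (f.good())
                return true;
        }
        return false;
    }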
Found via UBSan, actual errors (addresses not pruned for illustrative purposes):
"runtime error: store to misaligned address 0x7f9a9178e84c for type
'size_t' (aka 'unsigned long'), which requires 8 byte alignment"
"runtime error: load of misaligned address 0x7f9a9140f2cf for type
'size_t' (aka 'unsigned long'), which requires 8 byte alignment"
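The usual fix for reports like these is to copy through memcpy rather
than dereference an unaligned pointer; a generic sketch (the offending
call sites aren't shown here):

    #include <cstddef>
    #include <cstring>

    // Read/write a size_t at a possibly unaligned address without
    // undefined behavior; memcpy compiles down to the same loads and
    // stores on platforms that permit unaligned access.
    static size_t read_size_unaligned(const void *p)
    {
        size_t v;
        std::memcpy(&v, p, sizeof(v));
        return v;
    }

    static void write_size_unaligned(void *p, size_t v)
    {
        std::memcpy(p, &v, sizeof(v));
    }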
Found via UBSan, actual (sample) error:
"plugins/text-freetype2/text-functionality.c:284:26: runtime error: left
shift of 194 by 24 places cannot be represented in type 'int'"
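The shift overflows because the byte is promoted to signed int before
shifting; casting to an unsigned type first is the standard fix, e.g.:

    #include <cstdint>

    // Well-defined for any byte value, including 194: the operand is
    // widened to uint32_t before the shift instead of to signed int.
    static uint32_t pack_byte_high(uint8_t b)
    {
        return (uint32_t)b << 24;
    }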
In my recent update to add a "show" button to the passworded text
property, I neglected to connect the edit widget to
WidgetInfo::ControlChanged, so it wasn't able to detect when the text
was changed by the user.
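The fix amounts to a single missing connection; a simplified sketch,
with WidgetInfo reduced to a hypothetical stand-in for the real
properties-view class:

    #include <QLineEdit>
    #include <QObject>

    // Hypothetical stand-in; the real ControlChanged re-reads the
    // widget's value when it changes.
    class WidgetInfo : public QObject {
    public:
        void ControlChanged() {}
    };

    void hook_password_edit(QLineEdit *edit, WidgetInfo *info)
    {
        QObject::connect(edit, &QLineEdit::textEdited,
                         info, &WidgetInfo::ControlChanged);
    }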
The Qt5Network classes seem to only support OpenSSL, and because
OpenSSL isn't available on Windows, we would have to distribute it
with the program to get SSL access working. The problem with that is
that OpenSSL is not GPL-compatible, so we cannot distribute OpenSSL
with the program, which means we have to find a better (and preferably
superior) library for accessing remote files, one that can use the
Windows SSPI (which comes with the operating system) for our SSL
needs.
Fortunately, libcurl is probably the best library out there, and can be
compiled with SSPI instead of OpenSSL, so we're just going to switch to
libcurl instead. Originally I thought it didn't support SSPI, otherwise
I would have implemented it sooner.
As a side note, this will make it so we'll be able to get files from
the internet via plugins, which will be quite useful.
The RemoteTextThread class is a QThread that is used to get text
remotely in a separate thread with libcurl. This is intended to replace
the Qt5Network classes because of their dependency on OpenSSL, which we
can't distribute.
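A minimal sketch of the libcurl usage the thread wraps; error handling
is abbreviated, and fetch_text/write_cb are illustrative names:

    #include <curl/curl.h>
    #include <string>

    // Append received bytes to the std::string passed as user data.
    static size_t write_cb(char *data, size_t size, size_t nmemb,
                           void *user)
    {
        static_cast<std::string *>(user)->append(data, size * nmemb);
        return size * nmemb;
    }

    std::string fetch_text(const char *url)
    {
        std::string text;
        CURL *curl = curl_easy_init();
        if (!curl)
            return text;

        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &text);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        return text;
    }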
Reduces the required scrolling when lots of new audio sources are
added (e.g. aux devices being enabled in the same dialog) after the
dialog was opened with just a few audio sources present.
Unfortunately, the "restart required" warning is pushed all the way to
the bottom even if the source list is empty.
The screen index returned from XDefaultScreen is 0-based, and we were
decrementing it before the check to see if it had reached 0 rather than
after, so in the default_screen function it would always end up getting
either the wrong screen or no screen.
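A sketch of the corrected iteration (decrement after the zero check),
assuming the function walks the xcb screen list:

    #include <xcb/xcb.h>

    // Map the 0-based screen number from XDefaultScreen to its
    // xcb_screen_t by checking for zero *before* decrementing.
    static xcb_screen_t *default_screen(xcb_connection_t *conn,
                                        int screen_num)
    {
        xcb_screen_iterator_t it =
            xcb_setup_roots_iterator(xcb_get_setup(conn));

        for (; it.rem; xcb_screen_next(&it)) {
            if (screen_num == 0)
                return it.data;
            screen_num--;
        }
        return nullptr;
    }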
When xcb_query_pointer and xcb_query_pointer_reply were called with no
valid screen, they would fail with an error, making it so that the
mouse buttons could not be properly captured as hotkeys.