Suppress signals of the volume input when setting a value. This stops
the volume control from setting the source volume when it receives the
volume changed event from the source.
Suppress signals of the volume slider when setting a value. This stops
the volume control from setting the source volume when it receives the
volume changed event from the source.
Use a dedicated method for setting the dB label in the volume control
and make sure to call it regardless of whether the volume was changed
through the slider or from somewhere else.
Previously, the design for the interaction between the encoder thread
and the graphics thread was that the encoder thread would signal to the
graphics thread when to start drawing each frame. The original idea
behind this was to prevent mutually cascading stalls of encoding or
graphics rendering (i.e., if rendering took too long, then encoding
would have to catch up, then rendering would have to catch up again, and
so on, cascading upon each other). The ultimate goal was to prevent
encoding from impacting graphics and vice versa.
However, eventually it was realized that there were some fundamental
flaws with this design.
1. Stray frame duplication. You could not guarantee that a frame would
render on time, so sometimes frames would unintentionally be lost if
there was any sort of minor hiccup or, I'm guessing, if the thread took
too long to be scheduled.
2. Frame timing in the rendering thread was less accurate. The only
place where frame timing was accurate was in the encoder thread, and
the graphics thread was at the whim of thread scheduling. On higher
end computers it was typically fine, but it was just generally not
guaranteed that a frame would be rendered when it was supposed to be
rendered.
So the solution (originally proposed by r1ch and paibox) is to instead
keep the encoding and graphics threads separate as usual, but instead of
the encoder thread controlling the graphics thread, the graphics thread
now controls the encoder thread. The encoder thread keeps a limited
cache of frames, then the graphics thread copies frames into the cache
and increments a semaphore to schedule the encoder thread to encode that
data.
In the cache, each frame has an encode counter. If the frame cache is
full (e.g., the encoder taking too long to return frames), it will not
cache a new frame, but instead will just increment the counter on the
last frame in the cache to schedule that frame to encode again, ensuring
that frames are on time and reducing CPU usage by lowering video
complexity. If the graphics thread takes too long to render a frame,
then it will add that frame with the count value set to the total number
of frames that were missed (actual legitimately duplicated frames).
Because the cache gives many frames of breathing room for the encoder to
encode frames, this design helps improve results especially when using
encoding presets that have higher complexity and CPU usage, minimizing
the risk of needlessly skipped or duplicated frames.
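A rough sketch of the handoff described above, assuming a small
fixed-size cache and a POSIX semaphore; the structure and names here are
illustrative rather than the actual libobs code:

#include <pthread.h>
#include <semaphore.h>
#include <stdint.h>
#include <string.h>

#define CACHE_SIZE 4

struct frame_data {
    uint8_t *data[4];    /* plane pointers (simplified placeholder type) */
    uint64_t timestamp;
};

struct cached_frame {
    struct frame_data frame;
    int count;           /* how many times this frame should be encoded */
};

static struct cached_frame cache[CACHE_SIZE];
static int first_frame, num_frames;
static pthread_mutex_t cache_mutex = PTHREAD_MUTEX_INITIALIZER;
static sem_t encode_sem;

void encode_frame(struct frame_data *frame); /* the actual encode call */

/* graphics thread: schedule 'count' encodes of this frame */
static void output_frame(const struct frame_data *frame, int count)
{
    pthread_mutex_lock(&cache_mutex);

    if (num_frames == CACHE_SIZE) {
        /* encoder is behind: re-encode the newest cached frame
         * instead of stalling the graphics thread */
        int last = (first_frame + num_frames - 1) % CACHE_SIZE;
        cache[last].count += count;
    } else {
        int next = (first_frame + num_frames++) % CACHE_SIZE;
        memcpy(&cache[next].frame, frame, sizeof(*frame));
        cache[next].count = count;
    }

    pthread_mutex_unlock(&cache_mutex);

    while (count--)                /* one post per pending encode */
        sem_post(&encode_sem);
}

/* encoder thread: encode one frame per semaphore post */
static void *encoder_thread(void *param)
{
    while (sem_wait(&encode_sem) == 0) {
        pthread_mutex_lock(&cache_mutex);
        struct cached_frame *cf = &cache[first_frame];
        encode_frame(&cf->frame);
        if (--cf->count == 0) {
            first_frame = (first_frame + 1) % CACHE_SIZE;
            num_frames--;
        }
        pthread_mutex_unlock(&cache_mutex);
    }
    return param;
}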
I also managed to sneak in what should be a bit of an optimization to
reduce copying of frame data, though how much of an optimization it
ultimately ends up being is debatable.
So to sum it up, this commit increases accuracy of frame timing,
completely removes stray frame duplication, gives better results for
higher complexity encoding presets, and potentially optimizes the frame
pipeline a tiny bit.
The boolean variables which stored whether frames had been
rendered/downloaded/converted/etc. were not being reset when video
restarted, causing frames to not be sent in the correct order whenever
video was reset. This could lead to minor desync of video/audio.
The 'sent_headers' member variable of the FLV output would not be reset
when the output was restarted, causing important data to not be written,
thus creating an invalid FLV file.
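For illustration, the kind of fix involved, with approximate names (the
real output's start callback and struct differ):

#include <stdbool.h>

struct flv_output {
    bool sent_headers;
    /* ... file handle and other stream state ... */
};

static bool flv_output_start(void *data)
{
    struct flv_output *stream = data;

    /* previously only initialized at creation, so a restarted output
     * skipped writing the header and produced an invalid FLV file */
    stream->sent_headers = false;

    /* ... open the file and begin receiving encoded packets ... */
    return true;
}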
In certain circumstances where the output was stopping, and where data
took a long enough time to send (such as when using an encoding preset
that causes high CPU usage), the output would sometimes still send data
even after it was stopped, typically causing the output to crash.
I unintentionally made it use obs_source::sample_info instead of using
the actual target channel count, which is designated by the OBS output
sample info. obs_source::sample_info is actually used to indicate the
currently set sample information for incoming samples. So if a source
is outputting 5.1 channel 48kHz audio, and OBS is running at stereo
44.1kHz, then the obs_source::sample_info value would be set to
5.1/48kHz, not the other way around. It indicates what the source
itself is running at, not what OBS is running at.
I suppose the variable needs a better name because even I used it
incorrectly despite actually having been the one who wrote it.
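For clarity, a sketch of the corrected approach, written with current
libobs-style names that may not exactly match the code at the time:

#include <obs.h>
#include <media-io/audio-resampler.h>

static audio_resampler_t *create_source_resampler(
        const struct resample_info *incoming)
{
    const struct audio_output_info *out =
        audio_output_get_info(obs_get_audio());

    struct resample_info to = {
        .format          = out->format,
        .samples_per_sec = out->samples_per_sec, /* e.g. 44.1kHz */
        .speakers        = out->speakers,        /* e.g. stereo  */
    };

    /* 'incoming' describes what the source delivers (e.g. 5.1 @ 48kHz),
     * which is the role of obs_source::sample_info; 'to' describes what
     * OBS actually mixes and outputs at */
    return audio_resampler_create(&to, incoming);
}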
This dialog gives options such as increasing audio past 100%, forcing
the audio of a source to mono, and setting the audio sync offset of a
source (which was an oft-requested feature).
The copy_audio_data function really shouldn't be inlined because it's
being called twice. It's somewhat unnecessary; I think I left it inline
by accident.
This changes the way source volume handles transitioning between being
active and inactive states.
The previous way that transitioning handled volume was that it set the
presentation volume of the source and all of its sub-sources to 0.0 if
the source was inactive, and 1.0 if active. Transition sources would
then also set the presentation volume for sub-sources to whatever their
transitioning volume was. However, the problem with this is that the
design didn't take into account if the source or its sub-sources were
active anywhere else, so because of that it would break if that ever
happened, and I didn't realize that when I was designing it.
So instead, this completely overhauls the design of handling
transitioning volume. Each frame, it'll go through all sources and
check whether they're active or inactive and set the base volume
accordingly. If transitions are currently active, it will actually walk
the active source tree and check whether the source is in a
transitioning state somewhere.
- If the source is a sub-source of a transition, and it's not active
outside of the transition, then the transition will control the
volume of the source.
- If the source is a sub-source of a transition, but it's also active
outside of the transition, it'll defer to whichever is louder.
This also adds a new callback to the obs_source_info structure for
transition sources, get_transition_volume, which is called to get the
transitioning volume of a sub-source.
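As a rough sketch, a transition source registering the callback might
look like this; only get_transition_volume comes from this commit, and
the exact signature and the remaining names are assumptions:

#include <obs.h>

struct fade_transition {
    obs_source_t *incoming_source;
    float progress; /* 0.0 -> 1.0 over the course of the transition */
};

/* asked by libobs for how loud 'child' should currently be within this
 * transition */
static float fade_get_transition_volume(void *data, obs_source_t *child)
{
    struct fade_transition *fade = data;

    if (child == fade->incoming_source)
        return fade->progress;        /* fading in  */

    return 1.0f - fade->progress;     /* fading out */
}

struct obs_source_info fade_transition_info = {
    .id                    = "fade_transition",
    .type                  = OBS_SOURCE_TYPE_TRANSITION,
    .get_transition_volume = fade_get_transition_volume,
    /* .create, .destroy, .video_render, etc. omitted */
};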
The reason to keep a reference counter for transitions is due to an
optimization I'm planning on when calculating transition volumes. I'm
planning on walking the source tree to be able to calculate the current
base volume of a source, but *only* if there are transitions active,
because the only time that the volume can be anything other than 1.0
or 0.0 is when there are active transitions, which may change the base
volume of a source.
When the presentation volume is set for a source, it's set for all of
its children and their children. The original intention for doing this
was to be able to use it for transitioning, but honestly it's just bad
design, and I feel there are better ways to handle transitioning volume.
Changed the design from using obs_source::enum_refs to simply
preventing infinite source recursion in general, rather than allowing it
through the enum_refs variable. obs_source_add_child has been changed
so that it now returns a boolean, and if the function fails, it means
that the child cannot be added due to that potential recursion.
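Callers now have to check the result; a small hedged example (the log
text is illustrative, function names per the post-consistency-update API):

#include <obs.h>

static void try_add_child(obs_source_t *parent, obs_source_t *child)
{
    /* per this commit, obs_source_add_child() refuses (returns false)
     * when adding 'child' could create a recursive source hierarchy */
    if (!obs_source_add_child(parent, child))
        blog(LOG_WARNING, "Not adding '%s': would cause recursion",
             obs_source_get_name(child));
}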
Two integers are needlessly converted to floating point for what should
be an integer operation. One of those floats is then used for another
integer operation later, where the original integer value should have
been used. So essentially there was an int -> float -> int conversion
going on, which could lead to potential loss of data due to floating
point precision.
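An illustrative before/after of the kind of round trip being removed
(the actual variables in the commit differ):

#include <stddef.h>
#include <stdint.h>

/* before: int -> float -> int, losing precision for large values */
static size_t index_before(uint64_t ts, uint64_t base_ts, uint32_t frame_size)
{
    float offset = (float)(ts - base_ts);
    return (size_t)(offset / frame_size);
}

/* after: stays in integer arithmetic the whole way */
static size_t index_after(uint64_t ts, uint64_t base_ts, uint32_t frame_size)
{
    return (size_t)((ts - base_ts) / frame_size);
}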
There were also some general 64-bit -> 32-bit conversion warnings.
obs_encoder_getdisplayname declaration was not changed to match the
definition (obs_encoder_get_display_name) when the API consistency
update occurred.
On i3wm, windows aren't unmapped when switching away from a window's
workspace, but it does cause OBS to lose the capture. Because
switching back will not trigger a MapNotify, the capture fails to
restart unless you resize or move the window (ConfigureNotify). An
Expose event is fired by the wm, however, so catching this correctly
restarts the capture.
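In rough terms, the X event handling gains one more case (the handler
name here is a placeholder):

#include <X11/Xlib.h>

void restart_capture(void *capture_data); /* placeholder */

static void process_event(XEvent *event, void *capture_data)
{
    switch (event->type) {
    case MapNotify:
    case ConfigureNotify:
    case Expose: /* i3wm: switching back to the workspace only fires
                    Expose, so restart the capture here as well */
        restart_capture(capture_data);
        break;
    }
}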
Refactor the screen enumeration code a little to make sure xinerama is present
and active before using it. If the extension is present but not active,
enumeration no longer fails.
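Roughly, the check looks like this (plain Xlib Xinerama API; the
plugin's actual helper is structured differently):

#include <X11/Xlib.h>
#include <X11/extensions/Xinerama.h>

static int get_screen_count(Display *dpy)
{
    int event_base, error_base, count;

    /* only use xinerama if the extension is both present and active */
    if (XineramaQueryExtension(dpy, &event_base, &error_base) &&
        XineramaIsActive(dpy)) {
        XineramaScreenInfo *info = XineramaQueryScreens(dpy, &count);
        if (info) {
            XFree(info);
            return count;
        }
    }

    /* fall back to the plain X screen count instead of failing */
    return ScreenCount(dpy);
}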
Add a new helper library to handle the mouse cursor using xcb.
Since porting the old library without either keeping legacy code or
breaking the API would have been non-trivial, this is added as a
completely separate implementation. Once all code is ported over to
use this library, the old one can be removed.