This makes a minor adjustment to the interval at which the inject helper
tries to post the inject message to the target process. Previously it
would only try for 2 seconds; now it tries for up to 4 seconds, calling
PostThreadMessage every half second for the duration.
The reason I did this is that I noticed, on rare occasions, that it
wouldn't hook due to the low interval; usually it was just because the
target process was busy and unable to process its message queue, and
therefore the hook wouldn't go through, because SetWindowsHookEx won't
inject until the set event has occurred. The inject helper program would
just close before the thread message had finally been processed, which
would cancel the SetWindowsHookEx hooking.
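Roughly, the loop now looks something like this (names here are
illustrative, not the actual inject helper code):

    #include <windows.h>

    /* hypothetical helper, not the actual inject helper code */
    static bool post_inject_message(DWORD thread_id, UINT inject_msg,
                                    HANDLE hook_ready_event)
    {
        /* 8 attempts * 500 ms = up to 4 seconds */
        for (int i = 0; i < 8; i++) {
            PostThreadMessage(thread_id, inject_msg, 0, 0);

            /* wait half a second for the hook to signal success */
            if (WaitForSingleObject(hook_ready_event, 500) ==
                WAIT_OBJECT_0)
                return true;
        }

        return false; /* target never processed its queue in time */
    }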
The code neglected to take into account that start_capture can also be
called when the texture updates its size/format in the hook and 'ready'
is signaled again, so it was possible for existing variables in the game
capture structure to be unintentionally overwritten with new ones.
The game capture 'Activate' button is likely to fool users into
thinking it's not actually active if the game capture displays black, so
if it's active, rename the button to 'Reactivate' to hint to the user
that it actually is active.
When a source has a lot of properties, the scroll area containing them
would try to expand to fit them all, often leaving the preview area
super squished. So this just sets a maximum height for the properties
scroll area.
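The change amounts to something like this in Qt (the value here is just
an example, not necessarily what's used):

    #include <QScrollArea>

    static void limitPropertiesHeight(QScrollArea *propertiesArea)
    {
        /* cap the properties area so it can't keep growing to fit
         * every control and squish the preview */
        propertiesArea->setMaximumHeight(250); /* example value */
    }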
The temporary unoptimized code we were using before simply allocated a
new copy of each frame every single time a new async frame was output by
the source plugin. This creates a cache of frames as needed for the
current format/width/height to minimize the allocation and
deallocation. If new frames come in with a different
format/width/height, it'll just clear the cache. This is a fairly
important optimization.
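The idea, in a simplified sketch (the frame type and allocation here are
stand-ins for the actual libobs structures):

    #include <cstdlib>
    #include <vector>

    /* stand-in frame type, purely for illustration */
    struct frame {
        int format, width, height;
        unsigned char *data;
    };

    struct frame_cache {
        struct entry {
            frame *f;
            bool in_use;
        };

        std::vector<entry> entries;
        int format = 0, width = 0, height = 0;

        frame *get_frame(int fmt, int w, int h)
        {
            /* a new format/size invalidates every cached frame */
            if (fmt != format || w != width || h != height) {
                for (entry &e : entries) {
                    free(e.f->data);
                    delete e.f;
                }
                entries.clear();
                format = fmt;
                width = w;
                height = h;
            }

            /* reuse a previously allocated frame if one is free */
            for (entry &e : entries) {
                if (!e.in_use) {
                    e.in_use = true;
                    return e.f;
                }
            }

            /* otherwise allocate a new frame and add it to the cache
             * (4 bytes per pixel assumed purely for the sketch) */
            frame *f = new frame{fmt, w, h,
                    (unsigned char *)malloc((size_t)w * h * 4)};
            entries.push_back({f, true});
            return f;
        }
    };

When the source is done with a frame it just flips in_use back off, so
the same buffer gets reused for the next frame of the same format/size.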
All the async video related variables usually started with async_*, but
there were two that didn't, so I renamed them to follow the same naming
convention.
This is a bit of an optimization to reduce load a little bit if any of
the video capture sources are not currently being displayed on the
screen. They will simply not capture or update their texture data if
they are not currently being shown anywhere.
This doesn't really apply to the mac and windows game capture sources,
because their textures aren't updated on the source's end (they update
inside of the hooks).
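In plugin terms, each source can use the show/hide callbacks for this;
roughly like this simplified sketch (struct and helper names are mine):

    #include <obs-module.h>

    /* simplified source data; real capture sources track more state */
    struct my_capture {
        bool showing;
    };

    static void my_capture_show(void *data)
    {
        ((struct my_capture *)data)->showing = true;
    }

    static void my_capture_hide(void *data)
    {
        ((struct my_capture *)data)->showing = false;
    }

    static void my_capture_tick(void *data, float seconds)
    {
        UNUSED_PARAMETER(seconds);

        struct my_capture *c = (struct my_capture *)data;
        if (!c->showing)
            return; /* don't capture or update texture while hidden */

        /* ...capture and upload new texture data here... */
    }

These get hooked up through the source's show/hide/video_tick callbacks
in its obs_source_info.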
The DirectShow input source would always turn on at first use, whether
the user wanted it to or not. I feel like having an activate/deactivate
option is a really nice thing to have, and it makes configuration feel a
little bit less awkward.
If an async video source stops video for whatever reason, it would get
stuck on the last frame that was played. This was particularly awkward
when I wanted to give the user the ability to deactivate a source such
as a webcam, because it would just freeze on that last frame.
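The fix gives async sources a way to clear that stuck frame; assuming
I'm remembering the libobs API behavior correctly, outputting a NULL
frame is what clears the cached image:

    /* clears the cached async frame so the source goes blank instead
     * of freezing on the last image */
    obs_source_output_video(source, NULL);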
This allows us to change the visible UI name of a property after it's
been created (particularly for a case where I want to change an
'Activate' button to 'Deactivate')
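With the libobs properties API that looks something like this (the
"activate" property name and the `active` flag are assumptions for the
example):

    /* inside the source's properties/update code */
    obs_property_t *btn = obs_properties_get(props, "activate");
    obs_property_set_description(btn,
            active ? "Deactivate" : "Activate");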
There appears to be a bug with displaying the vertical scroll bar widget
where the horizontal scroll bar will show when it's not supposed to.
Fortunately it can be completely disabled.
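Concretely, in Qt:

    #include <QScrollArea>

    static void fixScrollBarBug(QScrollArea *scrollArea)
    {
        /* the horizontal scroll bar can show when it's not supposed
         * to, so disable it outright */
        scrollArea->setHorizontalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
    }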
The regular scroll area can expand horizontally, but the problem with
this is that sometimes the controls within it expand far too wide.
For example, the properties window for window capture can have a list of
windows where the titles of the windows are really really long, and it
causes the properties to extend way too far to the right, making the
window look really unusual.
Another example is the volume controls in the main window, which can
stretch way too far to the right if the name of a source is really long,
making the volume controls difficult to use when that happens.
So this just sets the maximum width of a scroll area's internal widget
to the actual width of the scroll area, preventing it from going off the
side of the scroll area.
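As a sketch, the idea looks something like this QScrollArea subclass
(the class name is mine, not necessarily the actual widget):

    #include <QScrollArea>
    #include <QResizeEvent>
    #include <QWidget>

    class VScrollArea : public QScrollArea {
    public:
        using QScrollArea::QScrollArea;

    protected:
        void resizeEvent(QResizeEvent *event) override
        {
            /* clamp the internal widget to the scroll area's width
             * so oversized controls can't push past the right edge */
            if (widget())
                widget()->setMaximumWidth(event->size().width());

            QScrollArea::resizeEvent(event);
        }
    };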
This crash report dialog is mostly just for the windows crash handling
code. If a crash occurs, the user will be able to view the crash report
and post it on the forums or give it to a developer for debugging
purposes.
A slightly refactored version of R1CH's crash handler, which allows
crash handling for windows and provides stack traces of all threads and
a list of all loaded modules. It also shows the processor, windows
version, and current libobs version.
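For reference, the windows-side mechanism boils down to installing a
top-level exception filter; this is a bare-bones sketch (the real
handler is far more involved, and write_crash_report here is a
hypothetical helper):

    #include <windows.h>

    /* hypothetical helper: formats stack traces of all threads, the
     * module list, and processor/windows/libobs version info into the
     * crash report text */
    extern void write_crash_report(EXCEPTION_POINTERS *exception);

    static LONG WINAPI handle_crash(EXCEPTION_POINTERS *exception)
    {
        write_crash_report(exception);
        return EXCEPTION_EXECUTE_HANDLER;
    }

    void install_crash_handler(void)
    {
        SetUnhandledExceptionFilter(handle_crash);
    }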
I originally had it set the color space and color range in the video
info callback, but I forgot that it's a function that's called after the
encoder is initialized. You can change the color space and color range,
but you have to reconfigure the encoder, and there's no real reason to
do that.
When the encoder is set to scale to a different resolution than the obs
output resolution, make sure it uses the current video colorspace and
range by default.
Uses the output duplicator API in order to get a high performance
monitor capture on windows 8+. This is actually designed to be
interchangeable with regular GDI-based monitor capture (uses the same
source id).
Allow the user to select whether to buffer the source or not. The
settings are auto-detect, on, and off. Auto-detect turns it off for
non-encoded devices, and on for encoded devices.
Webcams, internal devices, and other such things on windows do not
really need to be buffered, and buffering incurs a tiny bit of delay, so
turning off buffering is actually a little better for non-encoded
devices.
I actually kind of hate how strstr returns a non-const even though it
takes a const parameter, but I can understand why they made it that way.
They really should have split it into two functions though, one const
and one non-const or something. But alas, ultimately for a C programmer
who knows what they're doing it isn't a huge deal.
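To illustrate what I mean, C++ actually does split it into the two
overloads I'm describing; the const-preserving one looks like this:

    #include <cstring>

    int main()
    {
        const char *text = "hello world";

        /* C's lone strstr would hand back a mutable char * here; the
         * C++ overloads keep const in, const out */
        const char *found = std::strstr(text, "world");
        return found ? 0 : 1;
    }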
This adds support for the windows 8+ output duplicator feature which
allows the efficient capturing of a specific monitor connected to the
currently used device.
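A heavily condensed sketch of the duplicator flow, with no error
handling (it assumes you already have the IDXGIOutput1 for the target
monitor and the ID3D11Device in use):

    #include <d3d11.h>
    #include <dxgi1_2.h>

    static void capture_one_frame(IDXGIOutput1 *output,
                                  ID3D11Device *device)
    {
        IDXGIOutputDuplication *dup = nullptr;
        if (FAILED(output->DuplicateOutput(device, &dup)))
            return;

        DXGI_OUTDUPL_FRAME_INFO info;
        IDXGIResource *resource = nullptr;

        /* blocks up to 100 ms waiting for the monitor's next frame */
        if (SUCCEEDED(dup->AcquireNextFrame(100, &info, &resource))) {
            /* query resource for an ID3D11Texture2D and copy it
             * into the capture texture here */
            resource->Release();
            dup->ReleaseFrame();
        }

        dup->Release();
    }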
Suppress signals of the volume input when setting a value. This stops
the volume control from setting the source volume when it receives the
volume changed event from the source.
Suppress signals of the volume slider when setting a value. This stops
the volume control from setting the source volume when it receives the
volume changed event from the source.
Use a dedicated method for setting the dB label in the volume control,
and make sure to call it regardless of whether the volume was changed
through the slider or from somewhere else.
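The pattern, roughly (class and member names here are assumptions, not
the exact UI code):

    #include <QSlider>
    #include <QLabel>

    class VolControl {
        QSlider *slider;
        QLabel *dbLabel;

        void updateDbLabel()
        {
            /* formats the current volume in dB; details omitted */
        }

    public:
        void setSliderValue(int value)
        {
            /* block signals so programmatically setting the value
             * doesn't bounce back into obs_source_set_volume */
            slider->blockSignals(true);
            slider->setValue(value);
            slider->blockSignals(false);

            /* dedicated method, called regardless of whether the
             * change came from the slider or somewhere else */
            updateDbLabel();
        }
    };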
Previously, the design for the interaction between the encoder thread
and the graphics thread was that the encoder thread would signal to the
graphics thread when to start drawing each frame. The original idea
behind this was to prevent mutually cascading stalls of encoding or
graphics rendering (i.e., if rendering took too long, then encoding
would have to catch up, then rendering would have to catch up again, and
so on, cascading upon each other). The ultimate goal was to prevent
encoding from impacting graphics and vice versa.
However, eventually it was realized that there were some fundamental
flaws with this design.
1. Stray frame duplication. You could not guarantee that a frame would
render on time, so sometimes frames would unintentionally be lost if
there was any sort of minor hiccup, or (I'm guessing) if the thread
took too long to be scheduled.
2. Frame timing in the rendering thread was less accurate. The only
place where frame timing was accurate was in the encoder thread, and
the graphics thread was at the whim of thread scheduling. On higher
end computers it was typically fine, but it was just generally not
guaranteed that a frame would be rendered when it was supposed to be
rendered.
So the solution (originally proposed by r1ch and paibox) is to instead
keep the encoding and graphics threads separate as usual, but instead of
the encoder thread controlling the graphics thread, the graphics thread
now controls the encoder thread. The encoder thread keeps a limited
cache of frames, then the graphics thread copies frames into the cache
and increments a semaphore to schedule the encoder thread to encode that
data.
In the cache, each frame has an encode counter. If the frame cache is
full (e.g., the encoder taking too long to return frames), it will not
cache a new frame, but instead will just increment the counter on the
last frame in the cache to schedule that frame to encode again, ensuring
that frames are on time and reducing CPU usage by lowering video
complexity. If the graphics thread takes too long to render a frame,
then it will add that frame with the count value set to the total number
of frames that were missed (actual legitimately duplicated frames).
Because the cache gives many frames of breathing room for the encoder to
encode frames, this design helps improve results especially when using
encoding presets that have higher complexity and CPU usage, minimizing
the risk of needlessly skipped or duplicated frames.
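To make the scheme concrete, here's a stripped-down sketch using libobs'
os_sem_* primitives; the types and helpers are simplified stand-ins, and
the real code drains duplicate counts with an inner loop rather than one
semaphore post per encode:

    #include <media-io/video-io.h>  /* libobs struct video_data */
    #include <util/threading.h>     /* libobs os_sem_*, pthread */

    #define MAX_CACHE 4

    struct frame_pipe;

    /* hypothetical helpers for the sketch */
    extern void encode_frame(struct video_data *frame);
    extern void pop_front_frame(struct frame_pipe *pipe);

    struct cached_frame {
        struct video_data frame; /* raw frame data */
        int count;               /* times this frame gets encoded */
    };

    struct frame_pipe {
        struct cached_frame cache[MAX_CACHE];
        int num_frames;
        pthread_mutex_t mutex;
        os_sem_t *sem;
    };

    /* graphics thread: called once per output frame; dup_count > 1
     * means the renderer legitimately missed frames */
    static void push_frame(struct frame_pipe *pipe,
                           const struct video_data *frame,
                           int dup_count)
    {
        pthread_mutex_lock(&pipe->mutex);

        if (pipe->num_frames == MAX_CACHE) {
            /* encoder is behind: don't cache new data, just schedule
             * the newest cached frame to be encoded again */
            pipe->cache[pipe->num_frames - 1].count += dup_count;
        } else {
            struct cached_frame *cf =
                    &pipe->cache[pipe->num_frames++];
            cf->frame = *frame; /* real code copies frame planes */
            cf->count = dup_count;
        }

        pthread_mutex_unlock(&pipe->mutex);

        /* one post per scheduled encode */
        for (int i = 0; i < dup_count; i++)
            os_sem_post(pipe->sem);
    }

    /* encoder thread: one semaphore wait per encode */
    static void *encode_thread(void *param)
    {
        struct frame_pipe *pipe = (struct frame_pipe *)param;

        while (os_sem_wait(pipe->sem) == 0) {
            pthread_mutex_lock(&pipe->mutex);
            struct cached_frame *cf = &pipe->cache[0];

            encode_frame(&cf->frame);

            if (--cf->count == 0)
                pop_front_frame(pipe); /* hypothetical: drop slot 0 */

            pthread_mutex_unlock(&pipe->mutex);
        }

        return NULL;
    }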
I also managed to sneak in what should be a bit of an optimization to
reduce copying of frame data, though how much of an optimization it
ultimately ends up being is debatable.
So to sum it up, this commit increases accuracy of frame timing,
completely removes stray frame duplication, gives better results for
higher complexity encoding presets, and potentially optimizes the frame
pipeline a tiny bit.
The boolean variables which stored whether frames have been
rendered/downloaded/converted/etc were not being reset when video
restarted, causing frames to not be sent in the correct order whenever
video was reset. This could lead to minor desync of video/audio.
The 'sent_headers' member variable of the FLV output would not be reset
when the output was restarted, causing important data to not be written,
thus creating an invalid FLV file.