This adds a new signal to (audio) sources which is emitted whenever new
audio data is received from the source. This enables other code that is
interested in the raw audio data to directly access it when it becomes
available.
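A hedged sketch of how other code might hook this up through the source's
signal handler (given an obs_source_t *source; the signal name "audio_data"
and the callback body are assumptions here, the connection mechanism is the
standard libobs signal API):

static void on_audio_data(void *param, calldata_t *cd)
{
        /* pull the raw audio data out of the calldata here */
        UNUSED_PARAMETER(param);
        UNUSED_PARAMETER(cd);
}

/* somewhere during setup */
signal_handler_t *sh = obs_source_get_signal_handler(source);
signal_handler_connect(sh, "audio_data", on_audio_data, NULL);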
This was an important change because we were originally using a
hard-coded 709/partial range color matrix for the output, which was
causing problems for people wanting to use different formats or color
spaces. This will now automatically generate the color matrix depending
on the format, color space, and range, or use an identity matrix if the
video format is RGB instead of YUV.
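Roughly, the selection works along these lines (a hedged sketch only;
video_format_get_parameters() is the libobs helper, while the is_yuv check
and the enum values shown here are illustrative):

struct matrix4 color_matrix;
float range_min[3], range_max[3];

if (is_yuv) {
        video_format_get_parameters(VIDEO_CS_601, VIDEO_RANGE_PARTIAL,
                        (float *)&color_matrix, range_min, range_max);
} else {
        matrix4_identity(&color_matrix); /* RGB: no conversion needed */
}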
Just for some quick background: D3D's fmod intrinsic is very imprecise.
Floating-point math is naturally inexact, and when the numbers involved
become very large, the result can often be off by 0.1 or more.
However, apparently 0.1 isn't enough of an offset to ensure a proper
value when using the fmod intrinsic and then flooring the value. 0.2
seems to fix the issue and make the image display properly.
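As a rough C analogue of the fix (the imprecision lives in the HLSL fmod
intrinsic, so this only illustrates where the offset goes; the values are
placeholders):

float coord = 259.0f, block_size = 16.0f;     /* illustrative values */

/* if fmod came back as e.g. 2.9 where the exact answer is 3.0, flooring
 * alone would give 2; adding 0.2 before flooring gives 3 as intended */
float wrapped = fmodf(coord, block_size);
int   index   = (int)floorf(wrapped + 0.2f);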
On certain GPUs, if you don't flush and the window is minimized it can
endlessly accumulate memory due to what I'm assuming are driver design
flaws (though I can't know for sure). The flush seems to prevent this
from happening, at least from my tests. It would be nice if this
weren't necessary.
This replaces the old audio meter code, which performed its calculations
in two different places, with the new audio meter api.
The source signal will now emit simple levels instead of dB values,
in order to avoid dB conversion and calculation in the source.
The GUI, on the other hand, now expects simple position values from
the volume meter api, with no knowledge of dB calculations either.
That way all code that handles those conversions is in one place,
with the option to easily add new mappings that can be used
everywhere.
This adds a volume meter object to libobs that can be used by the GUI
or plugins to convert the raw audio level data from sources to values
that can easily be used to display the audio data.
The volume meter object will use the same mapping functions as the
fader object to map dB levels to a scale.
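A hedged sketch of using it from a GUI (given an obs_source_t *source;
function names follow obs-audio-controls.h, and the callback registration
is omitted since its exact prototype isn't covered here):

obs_volmeter_t *meter = obs_volmeter_create(OBS_FADER_CUBIC);
obs_volmeter_attach_source(meter, source);

/* ... hand the levels from the meter's callback to the GUI widget ... */

obs_volmeter_detach_source(meter);
obs_volmeter_destroy(meter);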
In older versions of Visual Studio 2013, Microsoft's WORTHLESS C compiler
has a bug where it will, almost at random, fail to handle variables
declared in the middle of a function and give the warning:
"illegal use of this type as an expression". It was fixed in recent
VS2013 updates, but I'm not about to force everyone to update to it.
Because a vec3 structure can contain a __m128 variable and not the
expected three floats x, y, and z, you must use vec3_set when
setting a value for a vec3 structure to ensure that it uses the proper
intrinsics internally if necessary.
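A minimal example of the pattern:

struct vec3 pos;
vec3_set(&pos, 1.0f, 2.0f, 3.0f); /* instead of assigning x/y/z directly */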
This adds functions for piping a command line program's stdin or stdout.
Note however that this is unidirectional only.
This will be especially useful later on when implementing MP4 output,
because MP4 output has to be piped to prevent unexpected program
termination from corrupting the file.
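A hedged sketch of the API (names as declared in util/pipe.h; the command
line and data are placeholders):

uint8_t buf[4096] = {0};  /* data destined for the child process's stdin */

/* "w" pipes to the program's stdin; "r" would pipe from its stdout */
os_process_pipe_t *pp = os_process_pipe_create("some_program", "w");
if (pp) {
        os_process_pipe_write(pp, buf, sizeof(buf));
        os_process_pipe_destroy(pp);
}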
This adds a new library of audio control functions, mainly for use in
GUIs. For now it includes an implementation of a software fader that can
be attached to sources in order to easily control the volume.
The fader can translate between fader-position, volume in dB and
multiplier with a configurable mapping function.
Currently only a cubic mapping (mul = fader_pos ^ 3) is included, but
different mappings can easily be added.
Due to libobs saving/restoring the source volume from the multiplier,
the volume levels for existing sources will stay the same, and live
changing of the mapping will work without changing the source volume.
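A hedged sketch of the API from the GUI side (given an obs_source_t *source;
names per obs-audio-controls.h):

obs_fader_t *fader = obs_fader_create(OBS_FADER_CUBIC);
obs_fader_attach_source(fader, source);

obs_fader_set_deflection(fader, 0.5f); /* fader position in [0, 1] */
float db = obs_fader_get_db(fader);    /* resulting volume in dB */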
This function greatly simplifies the use of effects by letting you call
it in a simple loop. This reduces boilerplate and makes drawing with
effects much easier. The gs_effect_loop function automatically handles
all the technique calls required for drawing.
---------------------
Before:
gs_technique_t *technique = gs_effect_get_technique(effect, "technique");
size_t passes = gs_technique_begin(technique);
for (size_t pass = 0; pass < passes; pass++) {
        gs_technique_begin_pass(technique, pass);

        [draw]

        gs_technique_end_pass(technique);
}
gs_technique_end(technique);
---------------------
After:
while (gs_effect_loop(effect, "technique")) {
        [draw]
}
If you look at the previous commits, you'll see I had added
obs_source_draw before. For custom drawn sources in particular, each
time obs_source_draw was called it would restart the effect and its
passes, which was not optimal. It should really use the effect functions
for that, so I'll have to add a function to simplify effect usage.
I also realized that including the color matrix parameters in
obs_source_draw made the function kind of messy to use; separating the
color matrix handling out into obs_source_draw_set_color_matrix feels a
lot cleaner.
On top of that, having the ability to set the position would be nice to
have as well, rather than having to mess with the matrix stuff each
time, so I also added that for the sake of convenience.
obs_source_draw will draw a texture sprite, optionally of a specific
size and/or at a specific position, as well as optionally inverted. The
texture used will be set to the 'image' parameter of whatever effect is
currently active.
obs_source_draw_set_color_matrix will set the color matrix value if the
drawing requires color matrices. It will set the 'color_matrix',
'color_range_min', and 'color_range_max' parameters of whatever effect
is currently active.
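A hedged sketch of the pair in use inside a source's video render callback
(the parameter lists approximate the libobs declarations; the matrix,
ranges, texture and size are assumed to be set up elsewhere):

obs_source_draw_set_color_matrix(&color_matrix, &range_min, &range_max);
obs_source_draw(texture, 0, 0, width, height, false);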
Overall, these feel much cleaner to use than the previous iteration.
This function simplifies drawing textures for sources in order to help
reduce boilerplate code. If a source is a custom drawn source, it will
automatically set up the effect to draw the sprite. If it's not custom
drawn, it will simply draw the sprite as normal. If the source uses a
specific color matrix, it will handle that as well.
When the image data is copied into a texture with flipping set to true,
each row has to be copied into the (height - row - 1)th row instead of
the row with the same index. Otherwise it would just create an unflipped
copy.
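An illustrative sketch of that copy (not the actual code):

static void copy_image(uint8_t *dst, const uint8_t *src, uint32_t linesize,
                uint32_t height, bool flip)
{
        for (uint32_t row = 0; row < height; row++) {
                uint32_t dst_row = flip ? (height - row - 1) : row;
                memcpy(dst + dst_row * linesize,
                       src + row * linesize, linesize);
        }
}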
Apparently the audio isn't guaranteed to start up past the first video
frame, so it would trigger that assert (which I'm glad I put in). I
didn't originally have this happen when I was testing because my audio
buffering was not at the default value, so it never triggered.
A blunder on my part, and once again a fine example of why you should
never make assumptions about possible code paths.
This moves the 'flags' variable from the obs_source_frame structure to
the obs_source structure, and allows user flags to be set for a specific
source. Having it set on the obs_source_frame structure didn't make
much sense.
OBS_SOURCE_UNBUFFERED makes it so that the source does not buffer its
async video output in order to try to play it on time. In other words,
frames are played as soon as possible after being received.
Useful when you want a source to play back as quickly as possible
(webcams, certain types of capture devices).
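A hedged example, assuming a setter along the lines of
obs_source_set_flags() and given an obs_source_t *source:

obs_source_set_flags(source, OBS_SOURCE_UNBUFFERED);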
This reverts commit c3f4b0f018.
The obs_source_frame should not need to take flags to do this. This
shouldn't be a setting associated with the frame, but rather a setting
associated with the source itself. This was the wrong approach to
solving this particular problem.
This bug would happen if audio packets started being received before
video packets. It would erroneously cause audio packets to be
completely thrown away, and in certain cases would cause audio and video
to start way out of sync.
My original intention was "don't accept audio until video has started",
but it instead mistakenly had the effect of "don't start audio until a
video packet has been received". This was originally intended as a way
to handle outputs hooking in to active encoders and compensating for
their existing timestamp information.
However, this made me realize that there was a major flaw in the design
for handling this, so I basically rewrote the entire thing.
Now, it does the following steps when inserting packets:
- Insert packets in to the interleaved packet array
- When both audio/video packets are received, prune packets up until the
point in which both audio/video start at the same time
- Resort the interleaved packet array
I have tested this code extensively and it appears to be working well,
regardless of whether or not the encoders were already active with
another output.
In video-io.c, video frames could skip, but when that happened the
frame's timestamp would repeat for the next frame, giving the next frame
a non-monotonic timestamp, and then jump. This could slightly mess up
syncing when the frame is finally given to an output.
Apparently I unintentionally typed received_video = false twice instead
of once for video and once for audio.
This fixes a bug where audio would not start up again on an output that
had recently started and then stopped.
When the output sets a new audio/video encoder, it was not properly
removing itself from the previous audio/video encoders it was associated
with. It was erroneously removing itself from the encoder parameter
instead.
At the start of each render loop, it would get the timestamp and then
assign that timestamp to whatever frame was downloaded. However, the
downloaded frame usually originated a number of frames earlier, so the
wrong timestamp value was being assigned to it. This fixes that issue
by storing the timestamps in a circular buffer.
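An illustrative sketch of the idea (names and the queue depth are
assumptions, not the actual code):

#define DOWNLOAD_DEPTH 3   /* assumed number of in-flight downloads */

static uint64_t ts_ring[DOWNLOAD_DEPTH];
static size_t   ts_write, ts_read;

/* call when a frame is queued for GPU download */
static void push_timestamp(uint64_t timestamp)
{
        ts_ring[ts_write] = timestamp;
        ts_write = (ts_write + 1) % DOWNLOAD_DEPTH;
}

/* call when the downloaded copy of a frame is mapped; returns the
 * timestamp belonging to that frame rather than the current one */
static uint64_t pop_timestamp(void)
{
        uint64_t timestamp = ts_ring[ts_read];
        ts_read = (ts_read + 1) % DOWNLOAD_DEPTH;
        return timestamp;
}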
If audio timestamps are within the operating system timing threshold,
always use those values directly as a timestamp, and do not apply the
regular jump checks and timing adjustments that normally occur.
This potentially fixes an issue with plugins that use OS timestamps
directly as the timestamp values for their audio samples: it bypasses
the conversion of the audio line's timing to system time and uses the
OS timestamp directly instead, preventing those calculations from
affecting the audio timestamp value when OS timestamps are used.
For example, if the first set of audio samples from the audio source
came in delayed while the subsequent samples did not, those first
samples could inadvertently trigger the timing adjustments, which would
then affect all subsequent audio samples.
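A hedged sketch of the check itself (the threshold value and constant name
are assumptions; os_gettime_ns() is the libobs call):

#define TS_SMOOTHING_THRESHOLD 70000000ULL /* assumed value, in nanoseconds */

/* true if 'ts' is close enough to the OS clock to be used directly,
 * skipping the usual jump checks and timing adjustments */
static bool is_os_timestamp(uint64_t ts)
{
        uint64_t now  = os_gettime_ns();
        uint64_t diff = (ts > now) ? (ts - now) : (now - ts);
        return diff < TS_SMOOTHING_THRESHOLD;
}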
This combines the 'direct' timestamp variance threshold with the maximum
timestamp jump threshold (or rather, it removes the maximum timestamp
jump threshold and uses the timestamp variance threshold both for
timestamp jumps and for detecting OS timestamps).
The reason this was done is that a timestamp jump could occur at a
higher threshold than the one used for detecting OS timestamps. If
timestamps fell between those two thresholds, it became a strange
situation where timestamps could end up 'stuck' further back or forward
in time than intended. Better to be consistent and use the same
threshold for both.
Add 'flags' member variable to obs_source_frame structure.
The OBS_VIDEO_UNBUFFERED flag causes the video to play back as soon as
it's received (on the next frame playback), disregarding the timestamp
value for the sake of video playback (note, however, that the video
timestamp is still used for audio synchronization if audio is present
on the source as well).
This is partly a convenience feature, and partly a necessity for certain
plugins (such as the linux v4l plugin) where timestamp information for
the video frames can sometimes be unreliable.
70 milliseconds is a bit too high for the default audio timestamp
smoothing threshold. The full range of error thus becomes 140
milliseconds, which is a bit more than necessary to worry about. For
the time being, I feel it may be worth it to try 50 milliseconds.