Resetting audio while libobs is active is a real pain. I think I'm just
going to do audio resetting later, or maybe just require a restart
regardless, because shutting down audio streams/lines while there are
sources currently active requires recreating all the audio lines for
each audio source. Very painful.
Video fortunately is no big deal, so at least there's that.
Split off activate to activate and show callbacks, and split off
deactivate to deactivate and hide callbacks. Sources didn't previously
have a means of knowing whether they were actually being displayed in
the main view or just happened to be visible somewhere. Now, for things
like transition sources, they have a means of knowing when they have
actually been "activated" so they can initiate their sequence.
A source is now only considered "active" when it's being displayed by
the main view. When a source is shown in the main view, the activate
callback/signal is triggered. When it's no longer being displayed by
the main view, the deactivate callback/signal is triggered.
When a source is visible in any view at all, the show callback/signal
is triggered. If it's no longer visible in any view, then the hide
callback/signal is triggered.
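As a rough sketch of how a source might hook these (assuming the
obs_source_info callback members use the same names as the callbacks
described above; this is illustrative, not a complete source
definition):

    #include <obs-module.h>

    static void my_transition_activate(void *data)
    {
        /* main view started displaying this source; a transition
         * would begin its sequence here */
        UNUSED_PARAMETER(data);
    }

    static void my_transition_show(void *data)
    {
        /* source became visible in *any* view, main or not */
        UNUSED_PARAMETER(data);
    }

    struct obs_source_info my_transition_info = {
        .id       = "my_transition",
        .activate = my_transition_activate,
        .show     = my_transition_show,
        /* .deactivate and .hide mirror the two callbacks above */
    };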
Presentation volume will now only be active when a source is active in
the main view rather than also in auxiliary views.
Also fix a potential bug where parents wouldn't properly increment or
decrement all the activation references of a child source when a child
was added or removed.
Implement a few audio options into the user interface as well as a few
inline audio functions in audio-io.h.
Make it so ffmpeg plugin automatically converts to the desired format.
Use regular interleaved float internally for audio instead of planar
float.
This allows the changing of video settings without having to completely
reset all graphics data. Will recreate internal output/conversion
buffers and such and reset the main preview.
Make it so obs_data settings passed in to *_update are applied on top
of the existing settings rather than fully replacing them. That way you
can update with only certain specific settings, leaving the other
settings untouched. Of course, if you're already using the original
settings pointer in the first place then you've already done that, so
in that case the call is effectively ignored because the settings have
already been applied.
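A quick sketch of what that means in practice, using the current
obs_data/obs_source_update function names (the source variable and the
"bitrate" key are just illustrative):

    /* Only "bitrate" gets replaced in the source's existing settings;
     * every other existing setting is left untouched. */
    obs_data_t *partial = obs_data_create();
    obs_data_set_int(partial, "bitrate", 2500);
    obs_source_update(source, partial);
    obs_data_release(partial);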
- Remove obs_source::type because it became redundant now that the
type is always stored in the obs_source::info variable.
- Apply presentation volumes of 1.0 and 0.0 to sources when they
activate/deactivate, respectively. It also applies that presentation
volume to all sub-sources, with the exception of transition sources.
Transition sources must apply presentation volume manually to their
sub-sources with the new transition functions below.
- Add a "transition_volume" variable to obs_source structure, and add
three functions for handling volume for transitions:
* obs_transition_begin_frame
* obs_source_set_transition_vol
* obs_transition_end_frame
Because the to/from targets of a transition source might both contain
some of the same sources, handling the transitioning of volumes for
that specific situation becomes an issue.
So for transitions, instead of modifying the presentation volumes
directly for both sets of sources, we do this:
- First, call obs_transition_begin_frame at the beginning of each
transition frame, which will reset transition volumes for all
sub-sources to 0. Presentation volumes remain unchanged.
- Call obs_source_set_transition_vol on each sub-source, which will
then add the volume to the transition volume for each source in
that source's tree. Presentation volumes still remain unchanged.
- Then you call obs_transition_end_frame when complete, which will
then finally set the presentation volumes to the transition
volumes.
For example, let's say that there's one source that's within both the
"transitioning from" sources and "transition to" sources. It would
add both the fade in and fade out volumes to that source, and then
when the frame is complete, it would set the presentation volume to
the sum of those two values, rather than setting the presentation
volume for that same source twice, which would cause weird volume
jittering and set the wrong values.
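In code form, a transition's per-frame volume handling ends up looking
roughly like this (the struct and the exact function signatures are
assumed for illustration):

    struct my_transition {
        obs_source_t *source; /* the transition source itself */
        obs_source_t *from;   /* tree being transitioned from */
        obs_source_t *to;     /* tree being transitioned to */
    };

    static void my_transition_frame(struct my_transition *tr, float t)
    {
        /* t goes from 0.0 to 1.0 over the course of the transition */
        obs_transition_begin_frame(tr->source);            /* zero all */
        obs_source_set_transition_vol(tr->from, 1.0f - t); /* adds */
        obs_source_set_transition_vol(tr->to, t);          /* adds */
        obs_transition_end_frame(tr->source); /* commit as presentation */
    }

A source present in both trees accumulates (1.0 - t) + t = 1.0 before
the commit, which is exactly the jitter-free behavior described above.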
Now sources will be properly activated and deactivated when they are in
use or not in use.
Had to figure out a way to handle child sources, and children of
children; I just ended up implementing simple functions that parents
use to signal adding/removal, to help with hierarchical activation and
deactivation of child sources.
To prevent the source activate/deactivate callbacks from being called
more than once, added an activation reference counter. The first
increment will call the activate callback, and the last decrement will
call the deactivate callback.
Added "source-activate" and "source-deactivate" signals to the main obs
signal handler, and "activate" and "deactivate" to individual source
signal handlers.
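For example, a frontend could react to the new global signal roughly
like so (assuming the current obs_get_signal_handler/calldata_get_ptr
names):

    static void on_source_activate(void *param, calldata_t *cd)
    {
        obs_source_t *source = NULL;
        calldata_get_ptr(cd, "source", &source);
        /* fires only when the first activation reference is added */
        UNUSED_PARAMETER(param);
    }

    /* e.g. during frontend initialization: */
    signal_handler_connect(obs_get_signal_handler(), "source-activate",
            on_source_activate, NULL);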
Also, fixed the main window so it properly selects a source when the
current active scene has been changed.
Added a "master" volume for the entire audio subsystem.
Also, added a "presentation" volume for both the master volume and for
each individual source. The presentation volume is used to control
things like transitioning volumes, preventing sources from outputting
any audio when they're inactive, as well as some other uses in the
future.
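The assumption here (illustrative, not a quote of the mixer code) is
that the volumes combine multiplicatively when mixing a source:

    /* illustrative only: assumed multiplicative combination */
    static inline float effective_volume(float master_vol,
            float master_present, float source_vol, float source_present)
    {
        return master_vol * master_present * source_vol * source_present;
    }

So an inactive source with a presentation volume of 0.0 contributes
nothing regardless of its user-set volume.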
If audio came in under the expected timestamp, it originally did a full
reset of the audio timing. However, resetting the audio timing when
this happens is kind of a bad thing. It's better to just clamp the
value to the expected timestamp to ensure seamless audio output.
Also, implement audio timestamp smoothing to ensure audio tries to be as
seamless as possible.
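The idea, sketched out (the threshold name/value is illustrative, not a
libobs constant):

    #include <stdint.h>

    #define TS_SMOOTHING_THRESHOLD_NS 70000000ULL /* ~70 ms, assumed */

    static uint64_t smooth_audio_ts(uint64_t expected_ts, uint64_t actual_ts)
    {
        uint64_t diff = (actual_ts > expected_ts) ?
                (actual_ts - expected_ts) : (expected_ts - actual_ts);

        /* small drift: clamp to the expected timestamp so output
         * stays seamless instead of resetting all audio timing */
        if (diff < TS_SMOOTHING_THRESHOLD_NS)
            return expected_ts;

        /* large jump: accept the new timestamp */
        return actual_ts;
    }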
I actually did compile that last commit and misread the failed projects
as 0. I'm just going to put the conversion stuff in video-io.h because
it requires it anyway, and video-scaler.h already depends on video-io.h
for the video_format enum.
Add a scaler interface (defaults to swscale), and if a separate output
wants to use a different scale or format than the default output format,
allow a scaler instance to be created automatically for that output,
which will then receive the new scaled output.
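Conceptually, a separate raw output would request its own format/size
something like this (the struct/function names are assumed to match the
media-io headers and are meant as illustration only):

    struct video_scale_info request = {
        .format = VIDEO_FORMAT_I420,
        .width  = 1280,
        .height = 720,
    };

    /* passing NULL instead would mean "give me the core output as-is";
     * a non-NULL request causes a scaler instance to be created for
     * this output behind the scenes */
    video_output_connect(video, &request, receive_video_frame, param);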
If there are, for example, multiple audio outputs and they have
different sample rates or channel counts, this will allow automatic
conversion of that audio to the requested formats/channels/rates (but
only if requested).
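Same idea on the audio side, again with assumed/illustrative names:

    struct audio_convert_info request = {
        .samples_per_sec = 44100,
        .format          = AUDIO_FORMAT_16BIT,
        .speakers        = SPEAKERS_STEREO,
    };

    /* conversion is only set up because a non-NULL request was given */
    audio_output_connect(audio, &request, receive_audio_data, param);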
Turns out that on some adapters, due to some sort of internal GPU
precision error, fmod(x, y) can return x when x == y, which is
incorrect (and no, they were actually equal, not off due to precision
errors). This would cause the shader to sample the wrong coordinates on
the edges sometimes. Just adding 0.1 to the x value before it's passed
into fmod and then flooring the result afterward fixes the issue.
- Changed glMapBuffer to glMapBufferRange to allow invalidation (see
the sketch after this list). Using just glMapBuffer alone was causing
some unacceptable stalls.
- Changed dynamic buffers from GL_DYNAMIC_DRAW to GL_STREAM_DRAW
because I had misunderstood the OpenGL specification
- Added _OPENGL and _D3D11 builtin preprocessor macros to effects to
allow special processing if needed
- Added fmod support to shaders (NOTE: D3D and GL do not function
identically with negative numbers when using this. Positive numbers
however function identically)
- Created a planar conversion shader that converts from packed YUV to
planar 420 right on the GPU without any CPU processing. Reduces
required GPU download size to approximately 37.5% of its normal rate
as well. GPU usage down by 10 entire percentage points despite the
extra required pass.
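The sketch promised in the first bullet, showing the mapping change
(buffer/variable names are illustrative; the GL calls are standard):

    glBindBuffer(GL_ARRAY_BUFFER, dynamic_vbo);

    /* GL_MAP_INVALIDATE_BUFFER_BIT tells the driver it can orphan the
     * old contents instead of stalling until the GPU is done with them */
    void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, data_size,
            GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
    memcpy(ptr, vertex_data, data_size);
    glUnmapBuffer(GL_ARRAY_BUFFER);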
Staging surfaces with GL originally copied to a texture and then
downloaded that copied texture, but I realized that there was really no
need to do that. Now instead they'll copy directly from the texture
that's given to them rather than copying to a buffer first.
Secondly, hopefully fix the Mac issue where the only way to perform an
asynchronous texture download is via FBOs and glReadPixels. It's a
really dumb issue with Macs, and the number of "gotchas" and
non-standard internal GL functionality on Mac is really annoying.
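Roughly, the FBO + glReadPixels path looks like this (names are
illustrative; the point is that reading into a pixel-pack buffer lets
the call return without blocking):

    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
            GL_TEXTURE_2D, texture, 0);

    /* with a pixel-pack buffer bound, glReadPixels writes into the
     * buffer asynchronously; the last argument is an offset, not a
     * client pointer */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pack_buffer);
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    /* map pack_buffer later, once the transfer has had time to finish */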
There were a *lot* of warnings; managed to remove most of them.
Also, put warning flags before C_FLAGS and CXX_FLAGS, rather than
after, as -Wall -Wextra was overwriting flags that came before it.
Originally, the rendering system was designed to only display sources
and such, but I realized there would be a flaw; if you wanted to render
the main viewport in a custom way, or maybe even the entire application
as a graphics-based front end, you wouldn't have been able to do that.
Displays have now been separated into viewports and displays. A
viewport is used to store and draw sources; a display is used to handle
draw callbacks. You can even use displays without viewports to draw
custom render displays containing graphics calls if you wish, but
usually they would be used in combination with source viewports at
least.
This requires a tiny bit more work to create simple source displays,
but in the end it's worth it for the added flexibility and options it
brings.
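In practice a custom display boils down to registering a draw callback
(assuming the obs_display_add_draw_callback signature shown here;
display and draw_preview are illustrative names):

    static void draw_preview(void *param, uint32_t cx, uint32_t cy)
    {
        /* issue arbitrary graphics calls here, and/or render a source
         * viewport into this cx-by-cy display */
        UNUSED_PARAMETER(param);
        UNUSED_PARAMETER(cx);
        UNUSED_PARAMETER(cy);
    }

    obs_display_add_draw_callback(display, draw_preview, NULL);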
The API used to be designed in such a way that it expected exports for
each individual source/output/encoder/etc. You would export functions
for each, and it would automatically load those functions from the
module based on a specific naming scheme.
The idea behind this was that I wanted to limit the usage of structures
in the API so only functions could be used. It was an interesting idea
in theory, but this idea turned out to be flawed in a number of ways:
1.) Requiring exports to create sources/outputs/encoders/etc meant that
you could not create them by any other means, which meant that
things like faruton's .net plugin would become difficult.
2.) Export function declarations could not be checked, so if you
created a function with the wrong parameters or parameter types, the
compiler couldn't catch it.
3.) Required overly complex load functions in libobs just to handle it.
It makes much more sense to just have a load function that you call
manually. Complexity is the bane of all good programs.
4.) It required that you have functions of specific names, which looked
and felt somewhat unsightly.
So, to fix these issues, I replaced it with a more commonly used API
scheme, seen commonly in places like kernels and typical C libraries
with abstraction. You simply create a structure that contains the
callback definitions, and you pass it to a function to register that
definition (such as obs_register_source), which you call in the
obs_module_load of the module.
It will also automatically check the structure size and ensure that it
only loads the required values if the structure happened to add new
values in an API change.
The "main" source file for each module must include obs-module.h, and
must use OBS_DECLARE_MODULE() within that source file.
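A minimal module main file under the new scheme would look something
like this (the exact obs_module_load signature is assumed):

    #include <obs-module.h>

    OBS_DECLARE_MODULE()

    extern struct obs_source_info my_source_info; /* callback definitions */

    bool obs_module_load(void)
    {
        obs_register_source(&my_source_info);
        return true;
    }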
Also, started writing some doxygen documentation into the main library
headers. Will add more detailed documentation as I go.