A slightly refactored version of R1CH's crash handler, which adds
crash handling for Windows. It provides stack traces of all threads and
a list of all loaded modules, and also shows the processor, Windows
version, and current libobs version.
Previously, the design for the interaction between the encoder thread
and the graphics thread was that the encoder thread would signal to the
graphics thread when to start drawing each frame. The original idea
behind this was to prevent mutually cascading stalls of encoding or
graphics rendering (i.e., if rendering took too long, then encoding
would have to catch up, then rendering would have to catch up again, and
so on, cascading upon each other). The ultimate goal was to prevent
encoding from impacting graphics and vice versa.
However, eventually it was realized that there were some fundamental
flaws with this design.
1. Stray frame duplication. You could not guarantee that a frame would
render on time, so sometimes frames would unintentionally be lost if
there was any sort of minor hiccup, or (presumably) if the thread took
too long to be scheduled.
2. Frame timing in the rendering thread was less accurate. The only
place where frame timing was accurate was in the encoder thread, and
the graphics thread was at the whim of thread scheduling. On higher
end computers it was typically fine, but it was just generally not
guaranteed that a frame would be rendered when it was supposed to be
rendered.
So the solution (originally proposed by r1ch and paibox) is to instead
keep the encoding and graphics threads separate as usual, but instead of
the encoder thread controlling the graphics thread, the graphics thread
now controls the encoder thread. The encoder thread keeps a limited
cache of frames, then the graphics thread copies frames into the cache
and increments a semaphore to schedule the encoder thread to encode that
data.
In the cache, each frame has an encode counter. If the frame cache is
full (e.g., the encoder is taking too long to return frames), it will
not cache a new frame, but will instead just increment the counter on
the last frame in the cache to schedule that frame to encode again,
ensuring that frames stay on time while reducing CPU usage by lowering
video complexity. If the graphics thread takes too long to render a
frame, it will add that frame with the count value set to the total
number of frames that were missed (actual, legitimately duplicated
frames).
Because the cache gives many frames of breathing room for the encoder to
encode frames, this design helps improve results especially when using
encoding presets that have higher complexity and CPU usage, minimizing
the risk of needlessly skipped or duplicated frames.
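A rough sketch of the mechanism (hypothetical names and sizes, not the
actual libobs code, and locking between the two threads is omitted):

#include <semaphore.h>
#include <stdint.h>

#define CACHE_SIZE 4

struct cached_frame {
    uint8_t  *data;      /* copied frame data                 */
    uint64_t timestamp;  /* presentation timestamp            */
    int      count;      /* how many times this frame encodes */
};

static struct cached_frame cache[CACHE_SIZE];
static int   cache_frames;  /* frames currently waiting to encode */
static sem_t encode_sem;    /* schedules the encoder thread       */

/* Graphics thread: called once per output frame.  'missed' is the
 * number of frames the renderer fell behind by (these become actual,
 * legitimately duplicated frames). */
static void cache_frame(uint8_t *data, uint64_t ts, int missed)
{
    int encodes = 1 + missed;

    if (cache_frames == CACHE_SIZE) {
        /* cache full (encoder falling behind): don't copy a new
         * frame, just schedule the newest cached frame to be
         * encoded again */
        cache[CACHE_SIZE - 1].count += encodes;
    } else {
        cache[cache_frames].data      = data;
        cache[cache_frames].timestamp = ts;
        cache[cache_frames].count     = encodes;
        cache_frames++;
    }

    /* one post per encode the encoder thread now has to perform */
    for (int i = 0; i < encodes; i++)
        sem_post(&encode_sem);
}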
I also managed to sneak in what should be a bit of an optimization to
reduce copying of frame data, though how much of an optimization it
ultimately ends up being is debatable.
So to sum it up, this commit increases accuracy of frame timing,
completely removes stray frame duplication, gives better results for
higher complexity encoding presets, and potentially optimizes the frame
pipeline a tiny bit.
The boolean variables which stored whether frames have been
rendered/downloaded/converted/etc were not being reset when video
restarted, causing frames to not be sent in the correct order whenever
video was reset. This could lead to minor desync of video/audio.
This adds bicubic and lanczos scaling capability to libobs to improve
scaling quality and sharpness when the output resolution has to be
scaled relative to the base resolution. Bilinear is also available,
although bilinear has rather poor quality and causes scaling to appear
blurry.
If the output resolution is close to the base resolution, then bilinear
is used instead as an optimization, as there's no need to use these
shaders if scaling is not in use.
The bicubic and Lanczos effects are also exposed via exported
functions so that plugin modules can use those shaders if desired.
The API change adds a variable 'scale_type' to the obs_video_info
structure that allows the user interface to choose what type of scaling
filter should be used.
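For example, a front-end might request bicubic downscaling along these
lines (the OBS_SCALE_BICUBIC value name is an assumption here, and the
remaining obs_video_info fields still need to be filled out as usual):

#include <obs.h>

static int reset_video_with_scaling(struct obs_video_info *ovi)
{
    ovi->base_width    = 1920;   /* base (canvas) resolution */
    ovi->base_height   = 1080;
    ovi->output_width  = 1280;   /* scaled output resolution */
    ovi->output_height = 720;
    ovi->scale_type    = OBS_SCALE_BICUBIC;  /* the new field */

    return obs_reset_video(ovi);
}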
This was an important change because we were originally using a
hard-coded 709/partial range color matrix for the output, which was
causing problems for people wanting to use different formats or color
spaces. This will now automatically generate the color matrix depending
on the format, color space, and range, or use an identity matrix if the
video format is RGB instead of YUV.
At the start of each render loop, it would get the timestamp, and then
assign that timestamp to whatever frame was downloaded. However, the
downloaded frame was usually rendered a number of frames earlier, so
the wrong timestamp value would be assigned to that frame.
This fixes that issue by storing the timestamps in a circular buffer.
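Conceptually, the fix looks something like this (a hypothetical
sketch, not the actual libobs code; the buffer only needs to be as
large as the download latency in frames):

#include <stdint.h>

#define NUM_TIMESTAMPS 4

static uint64_t timestamps[NUM_TIMESTAMPS];
static int      ts_write, ts_read;

/* render loop: record the timestamp of the frame rendered right now */
static void push_timestamp(uint64_t ts)
{
    timestamps[ts_write] = ts;
    ts_write = (ts_write + 1) % NUM_TIMESTAMPS;
}

/* download complete: the downloaded frame was rendered several loops
 * ago, so pull out the timestamp that was stored for it back then */
static uint64_t pop_timestamp(void)
{
    uint64_t ts = timestamps[ts_read];
    ts_read = (ts_read + 1) % NUM_TIMESTAMPS;
    return ts;
}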
The graphics subsystem was not being freed here; for example, if a
required effect failed to compile, the graphics subsystem would still
be left initialized, just without that required effect. The graphics
subsystem should be completely shut down if required libobs effects
fail to compile.
This fixes a minor flaw with the API where data always had to be
mutable to be usable by the API.
Functions that do not modify the fundamental underlying data of a
structure should be marked as constant, both for safety and to signify
that the parameter is input only and will not be modified by the
function using it.
Typedef pointers are unsafe. If you do:
typedef struct bla *bla_t;
then you cannot use it as a constant, such as: const bla_t, because
the const will apply to the pointer itself rather than to the
underlying data. I admit this was a fundamental mistake that must
be corrected.
All typedefs that were pointer types will now have their pointers
removed from the type itself, and the pointers will be used when they
are actually used as variables/parameters/returns instead.
This does not break ABI though, which is pretty nice.
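To illustrate the difference:

/* Before (the pointer is baked into the typedef):
 *
 *     typedef struct bla *bla_t;
 *     void do_something(const bla_t bla);
 *
 * Here 'const bla_t' means 'struct bla *const': the pointer itself is
 * constant, but the data it points to is still mutable. */

/* After (the typedef no longer contains the pointer): */
typedef struct bla bla_t;

/* The pointer and the const are written where the type is used, so
 * this correctly means 'const struct bla *'. */
void do_something(const bla_t *bla);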
This makes it easier to do two things:
1.) Get the skipped frames count relative to each specific output
2.) Make it so that getting the 'current' log will always contain
information about skipped frames. Before, you'd have to force the
user to restart the program and get the last log, which was really
annoying when you just wanted to see how the encoders were
performing.
Instead of having functions like obs_signal_handler() that can fail to
properly specify their actual intent in the name (does it signal a
handler, or does it return a signal handler?), always prefix functions
that are meant to get information with 'get' to make their
functionality more explicit.
Previous names:             New names:
-----------------------------------------------------------
obs_audio                   obs_get_audio
obs_video                   obs_get_video
obs_signalhandler           obs_get_signal_handler
obs_prochandler             obs_get_proc_handler
obs_source_signalhandler    obs_source_get_signal_handler
obs_source_prochandler      obs_source_get_proc_handler
obs_output_signalhandler    obs_output_get_signal_handler
obs_output_prochandler      obs_output_get_proc_handler
obs_service_signalhandler   obs_service_get_signal_handler
obs_service_prochandler     obs_service_get_proc_handler
API Removed:
- graphics_t obs_graphics();
Replaced With:
- void obs_enter_graphics();
- void obs_leave_graphics();
Description:
obs_graphics() was somewhat of a pointless function. The only time
that it was ever necessary was to pass it as a parameter to
gs_entercontext() followed by a subsequent gs_leavecontext() call after
that. So, I felt that it made a bit more sense just to implement
obs_enter_graphics() and obs_leave_graphics() functions to do the exact
same thing without having to repeat that code. There's really no need
to ever "hold" the graphics pointer, though I suppose that could change
in the future so having a similar function come back isn't out of the
question.
Still, this at least reduces the amount of unnecessary repeated code for
the time being.
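Roughly speaking, code that used to look like this:

gs_entercontext(obs_graphics());
/* ... graphics calls ... */
gs_leavecontext();

now just becomes:

obs_enter_graphics();
/* ... graphics calls ... */
obs_leave_graphics();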
Changed:
- obs_source_gettype
To:
- enum obs_source_type obs_source_get_type(obs_source_t source);
- const char *obs_source_get_id(obs_source_t source);
This function was inconsistent for a number of reasons. First, it
returns both the ID and the type of source (input/transition/filter),
which is inconsistent with the name "get type". Secondly, it used the
'squishy' naming convention, which has just turned out to be bad
practice and causes inconsistencies. So it's now replaced with two
functions that just return the type and the ID.
Changed API:
- char *obs_find_plugin_file(const char *sub_path);
Changed to: char *obs_module_file(const char *file);
Change it so you no longer need to specify a sub-path such as:
obs_find_plugin_file("module_name/file.ext")
Instead, now automatically handle the module data path so all you need
to do is:
obs_module_file("file.ext")
- int obs_load_module(const char *name);
Changed to: int obs_open_module(obs_module_t *module,
const char *path,
const char *data_path);
bool obs_init_module(obs_module_t module);
Change the module loading API so that if the front-end chooses, it can
load modules directly from a specified path, and associate a data
directory with it on the spot.
The module will not be initialized immediately; obs_init_module must
be called on the module pointer in order to fully initialize the
module. This is done so a module can be disabled by the front-end if
it so chooses.
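For example, a front-end that wants to decide per module whether to
enable it might do something like this (the MODULE_SUCCESS return code
and the module_enabled_in_config() helper are assumptions made for the
sake of the example, and the paths are hypothetical):

obs_module_t module;

int code = obs_open_module(&module, "/path/to/plugins/my-plugin.so",
                           "/path/to/plugins/my-plugin/data");
if (code == MODULE_SUCCESS) {
    /* only fully initialize the module if the front-end wants it */
    if (module_enabled_in_config("my-plugin"))
        obs_init_module(module);
}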
New API:
- void obs_add_module_path(const char *bin, const char *data);
These functions allow you to specify new module search paths to add,
and allow you to search through them, or optionally just load all
modules from them. If the string %module% is included, it will be
replaced with the module's name when the path is used for a lookup.
Data paths are now directly added to the module's internal
storage structure, and when obs_find_module_file is used, it will look
up the pointer to the obs_module structure and get its data directory
that way.
Example:
obs_add_module_path("/opt/obs/my-modules/%module%/bin",
"/opt/obs/my-modules/%module%/data");
This would cause it to additionally look for the binary of a
hypothetical module named "foo" at /opt/obs/my-modules/foo/bin/foo.so
(or libfoo.so), and then look for the data in
/opt/obs/my-modules/foo/data.
This gives the front-end more flexibility for handling third-party
plugin modules, or handling all plugin modules in a custom way.
- void obs_find_modules(obs_find_module_callback_t callback, void
*param);
This searches the existing paths for modules and calls the callback
function when any are found. Useful for plugin management and custom
handling of the paths by the front-end if desired.
- void obs_load_all_modules(void);
Searches through the paths and both loads and initializes all modules
automatically, without custom handling (see the example after this
list).
- void obs_enum_modules(obs_enum_module_callback_t callback,
void *param);
Enumerates currently opened modules.
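So, putting the pieces together, a front-end that doesn't need any
custom handling can simply register its search paths and load
everything (the paths here are hypothetical, following the %module%
example above):

obs_add_module_path("/opt/obs/extra-plugins/%module%/bin",
                    "/opt/obs/extra-plugins/%module%/data");
obs_load_all_modules();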
The version macro that a module was compiled against and the actual
core version in use may differ, so this provides a way to compare them
and check for compatibility issues later on.
Changed API functions:
libobs: obs_reset_video
Before, video initialization returned a boolean, but "failed" is too
little information; if it fails due to a lack of device capabilities or
bad video device parameters, the front-end needs to know that.
The OBS Basic UI has also been updated to reflect this API change.
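In the front-end, checking the result ends up looking something like
this (the OBS_VIDEO_* return code names and the show_error() helper
are assumptions made for the example):

struct obs_video_info ovi;
/* ... fill out ovi ... */

int ret = obs_reset_video(&ovi);
if (ret != OBS_VIDEO_SUCCESS) {
    if (ret == OBS_VIDEO_NOT_SUPPORTED)
        show_error("Your graphics hardware does not meet the "
                   "minimum requirements.");
    else
        show_error("Failed to initialize video with the given "
                   "parameters.");
}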
There was no need to call the context free function in the
initialization function, and it's safer to just initialize the memory to
0 before using it (which also negates the need for da_init).
This just ensures that if an obs object is renamed, pointers to its
older names will still be valid. Prevents renames from causing any
invalid memory access.
When the obs object is destroyed, so are the cached names.
The module callback obs_module_set_locale will be called after loading
the module, and any time the locale is manually changed via core API.
When this function is called, the module is expected to load new text
lookup values for all the text it uses based upon the current locale.
This API is used to set the current locale for libobs, which it will set
for all modules when a module is loaded or specifically when the locale
is manually changed.
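Inside a module this ends up looking roughly like the following (the
exact callback signature is assumed from the description above, and
reload_module_text() is a hypothetical helper):

/* Called right after the module is loaded, and again whenever the
 * locale is changed through the core API; the module reloads its text
 * lookup values for the new locale here. */
void obs_module_set_locale(const char *locale)
{
    reload_module_text(locale);
}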
This replaces the older code which simply queried the max volume level
value for any given audio.
I'm still not 100% sure if this is how I want to approach the
problem, particularly whether this should be done in obs_source or in
audio_line, but it can always be moved later if needed.
This uses the calculations by the awesome Bill Hamilton that OBS1 used
for its volume levels. It calculates the current max (level),
magnitude, and current peak. This data then can be used to create
awesome volume meter controls later on.
NOTE: This will probably need optimization; it processes one float at
a time right now.
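A very rough sketch of the idea (this is not the exact OBS1 math; the
decay and smoothing factors here are made up):

#include <math.h>

static float vol_level;  /* current max (level)              */
static float vol_mag;    /* smoothed magnitude               */
static float vol_peak;   /* peak, held until the UI reads it */

/* processes one float sample at a time, as noted above */
static void process_sample(float sample)
{
    float abs_sample = fabsf(sample);

    vol_level *= 0.999f;            /* let the level slowly decay */
    if (abs_sample > vol_level)
        vol_level = abs_sample;

    if (abs_sample > vol_peak)
        vol_peak = abs_sample;

    /* crude exponential smoothing for the magnitude */
    vol_mag = vol_mag * 0.99f + abs_sample * 0.01f;
}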
Also, change some of the naming conventions. I actually need to change
a lot of the naming conventions in general so that all words are
separated by underscores. Kind of a bad practice there on my part.
Not entirely sure how this happened but I *think* that a null source was
somehow being added to the list of user sources for one particular user,
and then I noticed this code does not check to see whether the source is
null or not.
- Add volume control
These volume controls are basically nothing more than sliders. They
look terrible and hopefully will be as temporary as they are
terrible.
- Allow saving of specific non-user sources via obs_load_source and
obs_save_source functions.
- Save data of desktop/mic audio sources (sync data, volume data, etc),
and load the data on startup.
- Make it so that a scene is created by default if it's the first time
using the application. On certain operating systems where supported, a
default capture will be created. Desktop capture on Mac, particularly.
Not sure what to do about Windows, because monitor capture on Windows 7
is completely terrible and is a bad way to start users off.
This saves scenes/sources to JSON on exit, and properly loads them
back up when starting the program again, as well as the currently
active scene.
I had to add a 'load' and 'save' callback to the source interface
structure because I realized that certain sources (such as scenes)
operate differently with their saved data; scenes, for example, would
have to keep track of their settings information constantly, and doing
that just to make saving functional was somewhat unacceptable.
The optional 'load' callback will be called only after settings have
been loaded specifically from file/imported data, and the 'save'
callback will be called only when data actually needs to be saved.
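For a source type that wants them, the callbacks are just two more
members of its registration structure. A sketch (the callback
signatures here are an assumption, mirroring the other source
callbacks):

static void my_source_load(void *data, obs_data_t settings);
static void my_source_save(void *data, obs_data_t settings);

struct obs_source_info my_source = {
    .id   = "my_source",
    /* ... the usual create/destroy/etc. callbacks ... */

    /* called only after settings were loaded from file/imported data */
    .load = my_source_load,

    /* called only when the data actually needs to be saved */
    .save = my_source_save,
};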
I also had to adjust the obs_scene code so that it's a regular input
source type now, and I also modified it so that it doesn't have some
strange custom creation code anymore. The obs_scene_create function is
now simply just a wrapper for obs_source_create. You could even create
a scene with obs_source_create manually as well.
- Updated the services API so that it links up with an output and
the output gets data from that service rather than via settings.
This allows the service context to have control over how an output is
used, and makes it so that the URL/key/etc isn't necessarily some
static setting.
Also, if the service is attached to an output, it will stick around
until the output is destroyed.
- The settings interface has been updated so that it can allow the
usage of service plugins. What this means is that now you can create
a service plugin that can control aspects of the stream, and it
allows each service to create their own user interface if they create
a service plugin module.
- Testing out saving of current service information. Saves/loads from
JSON into obs_data_t, seems to be working quite nicely, and the
service object information is saved/preserved on exit, and loaded
again on startup.
- I agonized over the settings user interface for days, and eventually
I just decided that the only way that users weren't going to be
fumbling over options was to split up the settings into simple/basic
output, pre-configured, and then advanced for advanced use (such as
multiple outputs or services, which I'll implement later).
This was particularly painful to really design right; I wanted more
features and wanted to include everything in one interface, but
ultimately just realized from experience that users are just not
technically knowledgeable about it and will end up fumbling with the
settings rather than getting things done.
Basically, what this means is that casual users only have to enter in
about 3 things to configure their stream: Stream key, audio bitrate,
and video bitrate. I am really happy with this interface for those
types of users, but it definitely won't be sufficient for advanced
usage or for custom outputs, so that stuff will have to be separated.
- Improved the JSON usage for the 'common streaming services' context.
I realized that JSON arrays preserve ordering, while I had been
forgetting that regular objects are optimized for hashing (and thus
unordered). So basically I'm just using arrays now to keep the items
sorted.
Add API for streaming services. The services API simplifies the
creation of custom service features and user interface.
Custom streaming services later on will be able to do things such as:
- Be able to use service-specific APIs via modules, allowing a more
direct means of communicating with the service and requesting or
setting service-specific information
- Get URL/stream key via other means of authentication such as OAuth,
or be able to build custom URLs for services that require that sort
of thing.
- Query information (such as viewer count, chat, follower
notifications, and other information)
- Set channel information (such as current game, current channel title,
activating commercials)
Also, I reduced some repeated code that was used for all libobs objects.
This includes the name of the object, the private data, settings, as
well as the signal and procedure handlers.
I also switched to using linked lists for the global object lists,
rather than using an array of pointers (you could say it was...
pointless). Anyway, the linked list info is also stored in the shared
context data structure.
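Roughly, the shared context data now looks something like this (a
hypothetical sketch, not the actual struct definition):

#include <obs.h>

struct context_data_sketch {
    char             *name;      /* object name               */
    void             *data;      /* the object's private data */
    obs_data_t       settings;   /* object settings           */
    signal_handler_t signals;    /* signal handler            */
    proc_handler_t   procs;      /* procedure handler         */

    /* linked-list info for the global object lists, replacing the
     * old array of pointers */
    struct context_data_sketch *next;
};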
- Implement the RTMP output module. This time around, we just use a
simple FLV muxer, then just write to the stream with RTMP_Write.
Easy and effective.
- Fix the FLV muxer, the muxer now outputs proper FLV packets.
- Output API:
* When using encoders, automatically interleave encoded packets
before sending them to the output.
* Pair encoders and have them automatically wait for the other to
start to ensure sync.
* Change 'obs_output_signal_start_fail' to 'obs_output_signal_stop'
because it was a bit confusing, and doing this makes a lot more
sense for outputs that need to stop suddenly (disconnections/etc).
- Encoder API:
* Remove some unnecessary encoder functions from the actual API and
make them internal. Most of the encoder functions are handled
automatically by outputs anyway, so there's no real need to expose
them and end up inadvertently confusing plugin writers.
* Have audio encoders wait for the video encoder to get a frame, then
start at the exact data point that the first video frame starts to
ensure the most accurate video/audio sync possible.
* Add a required 'frame_size' callback for audio encoders that
returns the expected number of frames desired to encode with. This
way, the libobs encoder API can handle the circular buffering
internally and automatically for the encoder modules, so encoder
writers don't have to do it themselves (see the sketch after this
list).
- Fix a few bugs in the serializer interface. It was passing the wrong
variable for the data in a few cases.
- If a source has video, make obs_source_update defer the actual update
callback until the tick function is called to prevent threading
issues.
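Here is a rough sketch of the kind of buffering that now happens
inside libobs for audio encoders (hypothetical code, not the actual
implementation; planar/interleaved details are ignored):

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct audio_chunker {
    float  *samples;     /* buffered audio samples                 */
    size_t  count;       /* samples currently buffered             */
    size_t  frame_size;  /* value reported by the encoder callback */
};

/* returns true each time a full 'frame_size' chunk is ready, copying
 * it into 'out' and removing it from the internal buffer */
static bool pop_encoder_chunk(struct audio_chunker *c, float *out)
{
    if (c->count < c->frame_size)
        return false;

    memcpy(out, c->samples, c->frame_size * sizeof(float));
    c->count -= c->frame_size;
    memmove(c->samples, c->samples + c->frame_size,
            c->count * sizeof(float));
    return true;
}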