/******************************************************************************
    Copyright (C) 2013-2014 by Hugh Bailey <obs.jim@gmail.com>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#pragma once

#include "util/c99defs.h"
#include "util/darray.h"
#include "util/circlebuf.h"
#include "util/dstr.h"
#include "util/threading.h"
#include "util/platform.h"
#include "util/profiler.h"
#include "callback/signal.h"
#include "callback/proc.h"

#include "graphics/graphics.h"
#include "graphics/matrix4.h"

#include "media-io/audio-resampler.h"
#include "media-io/video-io.h"
#include "media-io/audio-io.h"

#include "obs.h"

#define NUM_TEXTURES 2
#define NUM_CHANNELS 3
#define MICROSECOND_DEN 1000000
#define NUM_ENCODE_TEXTURES 3
#define NUM_ENCODE_TEXTURE_FRAMES_TO_WAIT 1
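/* Returns a packet's DTS converted to microseconds.  Note that this ignores
 * packet->timebase_num, i.e. it assumes a timebase numerator of 1. */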
static inline int64_t packet_dts_usec(struct encoder_packet *packet)
{
	return packet->dts * MICROSECOND_DEN / packet->timebase_den;
}
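/* Callback entries stored by libobs: 'tick' callbacks run once per video
 * frame with the elapsed time in seconds, and 'draw' callbacks run whenever
 * the associated render target is drawn, receiving its dimensions. */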
struct tick_callback {
	void (*tick)(void *param, float seconds);
	void *param;
};

struct draw_callback {
	void (*draw)(void *param, uint32_t cx, uint32_t cy);
	void *param;
};
/* ------------------------------------------------------------------------- */
/* validity checks */
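/* Logs a debug message and returns false if the given pointer is NULL; used
 * via the obs_*_valid macros below to validate API parameters ('f' is the
 * calling function name, 't' the parameter name). */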
static inline bool obs_object_valid(const void *obj, const char *f,
				    const char *t)
{
	if (!obj) {
		blog(LOG_DEBUG, "%s: Null '%s' parameter", f, t);
		return false;
	}

	return true;
}

#define obs_ptr_valid(ptr, func) obs_object_valid(ptr, func, #ptr)
#define obs_source_valid obs_ptr_valid
#define obs_output_valid obs_ptr_valid
#define obs_encoder_valid obs_ptr_valid
#define obs_service_valid obs_ptr_valid
/* ------------------------------------------------------------------------- */
/* modules */
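/* A loaded plugin module: its name and binary/data paths, the OS library
 * handle, and the optional exports (obs_module_load, obs_module_unload,
 * locale handling, version, etc.) resolved from the library when the module
 * is opened.  Modules form a singly linked list via 'next'. */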
struct obs_module {
	char *mod_name;
	const char *file;
	char *bin_path;
	char *data_path;
	void *module;
	bool loaded;

	bool (*load)(void);
	void (*unload)(void);
	void (*post_load)(void);
	void (*set_locale)(const char *locale);
	void (*free_locale)(void);
	uint32_t (*ver)(void);
	void (*set_pointer)(obs_module_t *module);
	const char *(*name)(void);
	const char *(*description)(void);
	const char *(*author)(void);

	struct obs_module *next;
};

extern void free_module(struct obs_module *mod);
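/* A module search path pair added with obs_add_module_path(): 'bin' for
 * module binaries and 'data' for their data directories (the %module% token
 * is replaced with the module name during lookup). */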
struct obs_module_path {
	char *bin;
	char *data;
};

static inline void free_module_path(struct obs_module_path *omp)
{
	if (omp) {
		bfree(omp->bin);
		bfree(omp->data);
	}
}
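/* Appends 'data' to 'path' in 'output' and reports whether the resulting
 * file exists on disk. */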
static inline bool check_path(const char *data, const char *path,
			      struct dstr *output)
{
	dstr_copy(output, path);
	dstr_cat(output, data);

	return os_file_exists(output->array);
}

/* ------------------------------------------------------------------------- */
/* hotkeys */
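/* A registered hotkey: its id, translatable name/description, the callback
 * and data to invoke, the object that registered it, and the id of its
 * partner when it belongs to a hotkey pair.  obs_hotkey_pair holds the two
 * ids and activation callbacks of such a pair. */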
struct obs_hotkey {
	obs_hotkey_id id;
	char *name;
	char *description;

	obs_hotkey_func func;
	void *data;
	int pressed;

	obs_hotkey_registerer_t registerer_type;
	void *registerer;

	obs_hotkey_id pair_partner_id;
};

struct obs_hotkey_pair {
	obs_hotkey_pair_id pair_id;
	obs_hotkey_id id[2];
	obs_hotkey_active_func func[2];
	bool pressed0;
	bool pressed1;
	void *data[2];
};

typedef struct obs_hotkey_pair obs_hotkey_pair_t;

typedef struct obs_hotkeys_platform obs_hotkeys_platform_t;

void *obs_hotkey_thread(void *param);

struct obs_core_hotkeys;
bool obs_hotkeys_platform_init(struct obs_core_hotkeys *hotkeys);
void obs_hotkeys_platform_free(struct obs_core_hotkeys *hotkeys);
bool obs_hotkeys_platform_is_pressed(obs_hotkeys_platform_t *context,
				     obs_key_t key);

const char *obs_get_hotkey_translation(obs_key_t key, const char *def);

struct obs_context_data;
void obs_hotkeys_context_release(struct obs_context_data *context);

void obs_hotkeys_free(void);
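/* Binds a concrete key combination to a registered hotkey, tracking whether
 * the combination is currently pressed and whether its modifiers match. */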
struct obs_hotkey_binding {
	obs_key_combination_t key;
	bool pressed;
	bool modifiers_match;

	obs_hotkey_id hotkey_id;
	obs_hotkey_t *hotkey;
};

struct obs_hotkey_name_map;
void obs_hotkey_name_map_free(void);

/* ------------------------------------------------------------------------- */
/* views */
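/* A view: a mutex-protected set of output channels, each of which can hold a
 * source that is composited into the final output. */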
struct obs_view {
	pthread_mutex_t channels_mutex;
	obs_source_t *channels[MAX_CHANNELS];
};

extern bool obs_view_init(struct obs_view *view);
extern void obs_view_free(struct obs_view *view);

/* ------------------------------------------------------------------------- */
/* displays */
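/* A display: the swap chain a preview is rendered into plus the draw
 * callbacks invoked whenever it is drawn.  Displays are kept in a linked
 * list via 'next'/'prev_next'. */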
struct obs_display {
	bool size_changed;
	bool enabled;
	uint32_t cx, cy;
	uint32_t background_color;
	gs_swapchain_t *swap;
	pthread_mutex_t draw_callbacks_mutex;
	pthread_mutex_t draw_info_mutex;
	DARRAY(struct draw_callback) draw_callbacks;

	struct obs_display *next;
	struct obs_display **prev_next;
};

extern bool obs_display_init(struct obs_display *display,
			     const struct gs_init_data *graphics_data);
extern void obs_display_free(struct obs_display *display);

/* ------------------------------------------------------------------------- */
/* core */
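/* Bookkeeping for frames passed from the graphics thread to the video
 * encoder thread: 'count' is how many times a frame should be encoded, so a
 * duplicated/lagged frame increments the count of the last cached entry
 * instead of adding a new one.  obs_tex_frame is the GPU-encoder equivalent,
 * holding the shared textures for a single frame. */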
struct obs_vframe_info {
	uint64_t timestamp;
	int count;
};

struct obs_tex_frame {
	gs_texture_t *tex;
	gs_texture_t *tex_uv;
	uint32_t handle;
	uint64_t timestamp;
	uint64_t lock_key;
	int count;
	bool released;
};
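/* Core video state: the graphics context, render/output/conversion textures
 * and staging surfaces, the built-in effects, GPU encoder queues and thread,
 * frame timing statistics, scaling/conversion settings, and the current
 * obs_video_info. */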
struct obs_core_video {
	graphics_t *graphics;
	gs_stagesurf_t *copy_surfaces[NUM_TEXTURES][NUM_CHANNELS];
	gs_texture_t *render_texture;
	gs_texture_t *output_texture;
	gs_texture_t *convert_textures[NUM_CHANNELS];
	bool texture_rendered;
	bool textures_copied[NUM_TEXTURES];
	bool texture_converted;
	bool using_nv12_tex;
	struct circlebuf vframe_info_buffer;
	struct circlebuf vframe_info_buffer_gpu;
	gs_effect_t *default_effect;
	gs_effect_t *default_rect_effect;
	gs_effect_t *opaque_effect;
	gs_effect_t *solid_effect;
	gs_effect_t *repeat_effect;
	gs_effect_t *conversion_effect;
	gs_effect_t *bicubic_effect;
	gs_effect_t *lanczos_effect;
	gs_effect_t *area_effect;
	gs_effect_t *bilinear_lowres_effect;
	gs_effect_t *premultiplied_alpha_effect;
	gs_samplerstate_t *point_sampler;
	gs_stagesurf_t *mapped_surfaces[NUM_CHANNELS];
	int cur_texture;
	long raw_active;
	long gpu_encoder_active;
	pthread_mutex_t gpu_encoder_mutex;
	struct circlebuf gpu_encoder_queue;
	struct circlebuf gpu_encoder_avail_queue;
	DARRAY(obs_encoder_t *) gpu_encoders;
	os_sem_t *gpu_encode_semaphore;
	os_event_t *gpu_encode_inactive;
	pthread_t gpu_encode_thread;
	bool gpu_encode_thread_initialized;
	volatile bool gpu_encode_stop;

	uint64_t video_time;
	uint64_t video_frame_interval_ns;
	uint64_t video_avg_frame_time_ns;
	double video_fps;
	video_t *video;
	pthread_t video_thread;
	uint32_t total_frames;
	uint32_t lagged_frames;
	bool thread_initialized;

	bool gpu_conversion;
	const char *conversion_techs[NUM_CHANNELS];
	bool conversion_needed;
	float conversion_width_i;

	uint32_t output_width;
	uint32_t output_height;
	uint32_t base_width;
	uint32_t base_height;
	float color_matrix[16];
	enum obs_scale_type scale_type;

	gs_texture_t *transparent_texture;

	gs_effect_t *deinterlace_discard_effect;
	gs_effect_t *deinterlace_discard_2x_effect;
	gs_effect_t *deinterlace_linear_effect;
	gs_effect_t *deinterlace_linear_2x_effect;
	gs_effect_t *deinterlace_blend_effect;
	gs_effect_t *deinterlace_blend_2x_effect;
	gs_effect_t *deinterlace_yadif_effect;
	gs_effect_t *deinterlace_yadif_2x_effect;

	struct obs_video_info ovi;
};

struct audio_monitor;
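/* Core audio state: the audio output, the per-tick source render order for
 * the audio graph, buffering bookkeeping (buffering is increased dynamically
 * only when audio arrives late), the user volume, and audio monitoring
 * state. */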
struct obs_core_audio {
|
2019-06-22 22:13:45 -07:00
|
|
|
audio_t *audio;
|
2014-02-20 14:53:16 -08:00
|
|
|
|
2019-06-22 22:13:45 -07:00
|
|
|
DARRAY(struct obs_source *) render_order;
|
|
|
|
DARRAY(struct obs_source *) root_nodes;
|
libobs: Implement new audio subsystem
The new audio subsystem fixes two issues:
- First Primary issue it fixes is the ability for parent sources to
intercept the audio of child sources, and do custom processing on
them. The main reason for this was the ability to do custom
cross-fading in transitions, but it's also useful for things such as
side-chain effects, applying audio effects to entire scenes, applying
scene-specific audio filters on sub-sources, and other such
possibilities.
- The secondary issue that needed fixing was audio buffering.
Previously, audio buffering was always a fixed buffer size, so it
would always have exactly a certain number of milliseconds of audio
buffering (and thus output delay). Instead, it now dynamically
increases audio buffering only as necessary, minimizing output delay,
and removing the need for users to have to worry about an audio
buffering setting.
The new design makes it so that audio from the leaves of the scene graph
flow to the root nodes, and can be intercepted by parent sources. Each
audio source handles its own buffering, and each audio tick a specific
number of audio frames are popped from the front of the circular buffer
on each audio source. Composite sources (such as scenes) can access the
audio for child sources and do custom processing or mixing on that
audio. Composite sources use the audio_render callback of sources to do
synchronous or deferred audio processing per audio tick. Things like
scenes now mix audio from their sub-sources.
2015-12-20 03:06:35 -08:00
|
|
|
|
2019-06-22 22:13:45 -07:00
|
|
|
uint64_t buffered_ts;
|
|
|
|
struct circlebuf buffered_timestamps;
|
|
|
|
int buffering_wait_ticks;
|
|
|
|
int total_buffering_ticks;
|
	float user_volume;

	pthread_mutex_t monitoring_mutex;
	DARRAY(struct audio_monitor *) monitors;
	char *monitoring_device_name;
	char *monitoring_device_id;
};

/* user sources, output channels, and displays */
struct obs_core_data {
	struct obs_source *first_source;
	struct obs_source *first_audio_source;
	struct obs_display *first_display;
	struct obs_output *first_output;
	struct obs_encoder *first_encoder;
	struct obs_service *first_service;

	pthread_mutex_t sources_mutex;
	pthread_mutex_t displays_mutex;
	pthread_mutex_t outputs_mutex;
	pthread_mutex_t encoders_mutex;
	pthread_mutex_t services_mutex;
	pthread_mutex_t audio_sources_mutex;
	pthread_mutex_t draw_callbacks_mutex;
	DARRAY(struct draw_callback) draw_callbacks;
	DARRAY(struct tick_callback) tick_callbacks;

	struct obs_view main_view;

	long long unnamed_index;

	obs_data_t *private_data;

	volatile bool valid;
};

/* user hotkeys */
struct obs_core_hotkeys {
	pthread_mutex_t mutex;
	DARRAY(obs_hotkey_t) hotkeys;
	obs_hotkey_id next_id;
	DARRAY(obs_hotkey_pair_t) hotkey_pairs;
	obs_hotkey_pair_id next_pair_id;

	pthread_t hotkey_thread;
	bool hotkey_thread_initialized;
	os_event_t *stop_event;
	bool thread_disable_press;
	bool strict_modifiers;
	bool reroute_hotkeys;
	DARRAY(obs_hotkey_binding_t) bindings;

	obs_hotkey_callback_router_func router_func;
	void *router_func_data;

	obs_hotkeys_platform_t *platform_context;

	pthread_once_t name_map_init_token;
	struct obs_hotkey_name_map *name_map;

	signal_handler_t *signals;

	char *translations[OBS_KEY_LAST_VALUE];
	char *mute;
	char *unmute;
	char *push_to_mute;
	char *push_to_talk;
	char *sceneitem_show;
	char *sceneitem_hide;
};

struct obs_core {
	struct obs_module *first_module;
	DARRAY(struct obs_module_path) module_paths;

	DARRAY(struct obs_source_info) source_types;
	DARRAY(struct obs_source_info) input_types;
	DARRAY(struct obs_source_info) filter_types;
	DARRAY(struct obs_source_info) transition_types;
	DARRAY(struct obs_output_info) output_types;
	DARRAY(struct obs_encoder_info) encoder_types;
	DARRAY(struct obs_service_info) service_types;
	DARRAY(struct obs_modal_ui) modal_ui_callbacks;
	DARRAY(struct obs_modeless_ui) modeless_ui_callbacks;

	signal_handler_t *signals;
	proc_handler_t *procs;

	char *locale;
	char *module_config_path;
	bool name_store_owned;
	profiler_name_store_t *name_store;

	/* segmented into multiple sub-structures to keep things a bit more
	 * clean and organized */
	struct obs_core_video video;
	struct obs_core_audio audio;
	struct obs_core_data data;
	struct obs_core_hotkeys hotkeys;
};

extern struct obs_core *obs;
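
/* Illustrative sketch (not from the original header): internal code reaches
 * each subsystem through the single global `obs` pointer declared above,
 * i.e. the video, audio, data, and hotkeys sub-structures.  A hypothetical
 * helper that walks the public source list under its mutex might look like
 * this; the cast through context.next assumes the linked-list links live in
 * the obs_context_data member placed first in struct obs_source, as the
 * definitions later in this header suggest:
 *
 *     static void for_each_source(void (*cb)(struct obs_source *source))
 *     {
 *             struct obs_source *source;
 *
 *             pthread_mutex_lock(&obs->data.sources_mutex);
 *             for (source = obs->data.first_source; source;
 *                  source = (struct obs_source *)source->context.next)
 *                     cb(source);
 *             pthread_mutex_unlock(&obs->data.sources_mutex);
 *     }
 */
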
extern void *obs_graphics_thread(void *param);
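
/* Sketch (illustrative only): obs_graphics_thread is the entry point of the
 * dedicated graphics/compositing thread, so startup code would spawn it
 * roughly as below; the exact call site and the param passed are
 * assumptions, and error handling is trimmed:
 *
 *     pthread_t thread;
 *     if (pthread_create(&thread, NULL, obs_graphics_thread, NULL) != 0)
 *             blog(LOG_ERROR, "Failed to create graphics thread");
 */
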
extern gs_effect_t *obs_load_effect(gs_effect_t **effect, const char *file);
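
/* Sketch (illustrative only): obs_load_effect appears to lazily compile an
 * effect file into the cached slot passed by address and return the cached
 * pointer on later calls.  A hypothetical use with one of the deinterlace
 * effect slots declared above (the file name here is illustrative, not the
 * actual shipped name):
 *
 *     gs_effect_t *effect =
 *             obs_load_effect(&obs->video.deinterlace_blend_effect,
 *                             "deinterlace_blend.effect");
 *     if (effect)
 *             gs_effect_get_param_by_name(effect, "image");
 */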

extern bool audio_callback(void *param, uint64_t start_ts_in,
			   uint64_t end_ts_in, uint64_t *out_ts,
			   uint32_t mixers, struct audio_output_data *mixes);
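
/* Sketch (illustrative only): audio_callback has the shape of an
 * audio-output input callback, so it is presumably handed to the audio
 * subsystem when the core audio output is opened.  Assuming the
 * audio_output_info fields from media-io/audio-io.h (an assumption, check
 * the actual struct), the wiring would look roughly like:
 *
 *     struct audio_output_info aoi = {
 *             .name = "Audio",
 *             .samples_per_sec = 48000,
 *             .format = AUDIO_FORMAT_FLOAT_PLANAR,
 *             .speakers = SPEAKERS_STEREO,
 *             .input_callback = audio_callback,
 *             .input_param = NULL,
 *     };
 *     audio_output_open(&obs->audio.audio, &aoi);
 */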

extern void
start_raw_video(video_t *video, const struct video_scale_info *conversion,
		void (*callback)(void *param, struct video_data *frame),
		void *param);
extern void stop_raw_video(video_t *video,
			   void (*callback)(void *param,
					    struct video_data *frame),
			   void *param);
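
/* Sketch (illustrative only): start_raw_video/stop_raw_video register and
 * unregister a raw-frame consumer on a video output, and are expected to be
 * paired with the same callback/param.  `video`, the NULL conversion, and
 * the consumer below are placeholders for illustration:
 *
 *     static void receive_frame(void *param, struct video_data *frame)
 *     {
 *             UNUSED_PARAMETER(param);
 *             blog(LOG_DEBUG, "raw frame timestamp: %llu",
 *                  (unsigned long long)frame->timestamp);
 *     }
 *
 *     start_raw_video(video, NULL, receive_frame, NULL);
 *     ...
 *     stop_raw_video(video, receive_frame, NULL);
 */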

/* ------------------------------------------------------------------------- */
/* obs shared context data */

struct obs_context_data {
	char *name;
	void *data;
	obs_data_t *settings;
	signal_handler_t *signals;
	proc_handler_t *procs;
	enum obs_obj_type type;

	DARRAY(obs_hotkey_id) hotkeys;
	DARRAY(obs_hotkey_pair_id) hotkey_pairs;
	obs_data_t *hotkey_data;

	DARRAY(char *) rename_cache;
	pthread_mutex_t rename_cache_mutex;

	pthread_mutex_t *mutex;
	struct obs_context_data *next;
	struct obs_context_data **prev_next;

	bool private;
};

extern bool obs_context_data_init(struct obs_context_data *context,
				  enum obs_obj_type type, obs_data_t *settings,
				  const char *name, obs_data_t *hotkey_data,
				  bool private);
extern void obs_context_data_free(struct obs_context_data *context);

extern void obs_context_data_insert(struct obs_context_data *context,
				    pthread_mutex_t *mutex, void *first);
extern void obs_context_data_remove(struct obs_context_data *context);

extern void obs_context_data_setname(struct obs_context_data *context,
				     const char *name);
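
/* Sketch (illustrative only): the obs_context_data_* helpers factor out the
 * lifecycle shared by sources/outputs/encoders/services.  Based on the
 * declarations above, an object would be expected to do roughly the
 * following; the mutex/list pairing and the surrounding allocation are
 * assumptions for illustration:
 *
 *     struct obs_source *source = bzalloc(sizeof(*source));
 *
 *     if (!obs_context_data_init(&source->context, OBS_OBJ_TYPE_SOURCE,
 *                                settings, name, hotkey_data, false)) {
 *             bfree(source);
 *             return NULL;
 *     }
 *     obs_context_data_insert(&source->context, &obs->data.sources_mutex,
 *                             &obs->data.first_source);
 *
 *     ...
 *
 *     obs_context_data_remove(&source->context);
 *     obs_context_data_free(&source->context);
 *     bfree(source);
 */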

/* ------------------------------------------------------------------------- */
/* ref-counting */

struct obs_weak_ref {
	volatile long refs;
	volatile long weak_refs;
};

static inline void obs_ref_addref(struct obs_weak_ref *ref)
{
	os_atomic_inc_long(&ref->refs);
}

static inline bool obs_ref_release(struct obs_weak_ref *ref)
{
	return os_atomic_dec_long(&ref->refs) == -1;
}

static inline void obs_weak_ref_addref(struct obs_weak_ref *ref)
{
	os_atomic_inc_long(&ref->weak_refs);
}

static inline bool obs_weak_ref_release(struct obs_weak_ref *ref)
{
	return os_atomic_dec_long(&ref->weak_refs) == -1;
}

static inline bool obs_weak_ref_get_ref(struct obs_weak_ref *ref)
{
	long owners = ref->refs;
	while (owners > -1) {
		if (os_atomic_compare_swap_long(&ref->refs, owners, owners + 1))
			return true;

		owners = ref->refs;
	}

	return false;
}
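
/* Sketch (illustrative only): the counters above start at zero for the first
 * owner, so obs_ref_release()/obs_weak_ref_release() return true only once
 * the count drops below zero, i.e. when the last reference is gone.
 * obs_weak_ref_get_ref() upgrades a weak reference to a strong one with a
 * compare-and-swap loop that refuses once refs has already fallen to -1.
 * A hypothetical caller holding a `struct obs_weak_source *weak` (declared
 * further down); use_source/destroy_source are placeholders:
 *
 *     struct obs_source *source = NULL;
 *
 *     if (obs_weak_ref_get_ref(&weak->ref))
 *             source = weak->source;
 *
 *     if (source) {
 *             use_source(source);
 *             if (obs_ref_release(&source->control->ref))
 *                     destroy_source(source);
 *     }
 */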

/* ------------------------------------------------------------------------- */
/* sources */

struct async_frame {
	struct obs_source_frame *frame;
	long unused_count;
	bool used;
};

enum audio_action_type {
	AUDIO_ACTION_VOL,
	AUDIO_ACTION_MUTE,
	AUDIO_ACTION_PTT,
	AUDIO_ACTION_PTM,
};

struct audio_action {
	uint64_t timestamp;
	enum audio_action_type type;
	union {
		float vol;
		bool set;
	};
};
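
/* Sketch (illustrative only): audio_action looks like a timestamped command
 * (volume, mute, push-to-talk/mute) queued against a source and applied when
 * its audio is processed.  Assuming the darray/mutex pairing used by
 * obs_source below, queuing a deferred volume change might look like:
 *
 *     struct audio_action action = {
 *             .timestamp = os_gettime_ns(),
 *             .type = AUDIO_ACTION_VOL,
 *             .vol = 0.5f,
 *     };
 *
 *     pthread_mutex_lock(&source->audio_actions_mutex);
 *     da_push_back(source->audio_actions, &action);
 *     pthread_mutex_unlock(&source->audio_actions_mutex);
 */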

struct obs_weak_source {
	struct obs_weak_ref ref;
	struct obs_source *source;
};

struct audio_cb_info {
	obs_source_audio_capture_t callback;
	void *param;
};
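
/* Sketch (illustrative only): audio_cb_info pairs a raw audio-capture
 * callback with its opaque parameter; obs_source keeps a list of them
 * (audio_cb_list) guarded by audio_cb_mutex.  Delivering captured audio to
 * every registered callback would presumably look like the loop below,
 * where `data` and `muted` stand in for the current audio block and mute
 * state (the exact callback argument list is an assumption):
 *
 *     pthread_mutex_lock(&source->audio_cb_mutex);
 *     for (size_t i = 0; i < source->audio_cb_list.num; i++) {
 *             struct audio_cb_info info = source->audio_cb_list.array[i];
 *             info.callback(info.param, source, &data, muted);
 *     }
 *     pthread_mutex_unlock(&source->audio_cb_mutex);
 */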

struct obs_source {
	struct obs_context_data context;
	struct obs_source_info info;
	struct obs_weak_source *control;

	/* general exposed flags that can be set for the source */
	uint32_t flags;
	uint32_t default_flags;
	uint32_t last_obs_ver;

	/* indicates ownership of the info.id buffer */
	bool owns_info_id;

	/* signals to call the source update in the video thread */
	bool defer_update;

	/* ensures show/hide are only called once */
	volatile long show_refs;

	/* ensures activate/deactivate are only called once */
	volatile long activate_refs;

	/* used to indicate that the source has been removed and all
	 * references to it should be released (not exactly how I would prefer
	 * to handle things but it's the best option) */
	bool removed;

	bool active;
	bool showing;

	/* used to temporarily disable sources if needed */
	bool enabled;

	/* timing (if video is present, is based upon video) */
	volatile bool timing_set;
	volatile uint64_t timing_adjust;
	uint64_t resample_offset;
	uint64_t last_audio_ts;
	uint64_t next_audio_ts_min;
	uint64_t next_audio_sys_ts_min;
	uint64_t last_frame_ts;
	uint64_t last_sys_timestamp;
	bool async_rendered;

	/* audio */
	bool audio_failed;
	bool audio_pending;
	bool pending_stop;
	bool audio_active;
	bool user_muted;
	bool muted;
	struct obs_source *next_audio_source;
	struct obs_source **prev_next_audio_source;
	uint64_t audio_ts;
	struct circlebuf audio_input_buf[MAX_AUDIO_CHANNELS];
	size_t last_audio_input_buf_size;
	DARRAY(struct audio_action) audio_actions;
	float *audio_output_buf[MAX_AUDIO_MIXES][MAX_AUDIO_CHANNELS];
	float *audio_mix_buf[MAX_AUDIO_CHANNELS];
	struct resample_info sample_info;
	audio_resampler_t *resampler;
	pthread_mutex_t audio_actions_mutex;
	pthread_mutex_t audio_buf_mutex;
	pthread_mutex_t audio_mutex;
	pthread_mutex_t audio_cb_mutex;
	DARRAY(struct audio_cb_info) audio_cb_list;
	struct obs_audio_data audio_data;
	size_t audio_storage_size;
	uint32_t audio_mixers;
	float user_volume;
	float volume;
	int64_t sync_offset;
	int64_t last_sync_offset;
	float balance;

	/* async video data */
	gs_texture_t *async_textures[MAX_AV_PLANES];
	gs_texrender_t *async_texrender;
	struct obs_source_frame *cur_async_frame;
	bool async_gpu_conversion;
	enum video_format async_format;
	bool async_full_range;
	enum video_format async_cache_format;
	bool async_cache_full_range;
	enum gs_color_format async_texture_formats[MAX_AV_PLANES];
	int async_channel_count;
	bool async_flip;
	bool async_active;
	bool async_update_texture;
	bool async_unbuffered;
	bool async_decoupled;
	struct obs_source_frame *async_preload_frame;
	DARRAY(struct async_frame) async_cache;
	DARRAY(struct obs_source_frame *) async_frames;
	pthread_mutex_t async_mutex;
	uint32_t async_width;
	uint32_t async_height;
	uint32_t async_cache_width;
	uint32_t async_cache_height;
	uint32_t async_convert_width[MAX_AV_PLANES];
	uint32_t async_convert_height[MAX_AV_PLANES];

	/* async video deinterlacing */
	uint64_t deinterlace_offset;
	uint64_t deinterlace_frame_ts;
	gs_effect_t *deinterlace_effect;
	struct obs_source_frame *prev_async_frame;
	gs_texture_t *async_prev_textures[MAX_AV_PLANES];
	gs_texrender_t *async_prev_texrender;
	uint32_t deinterlace_half_duration;
	enum obs_deinterlace_mode deinterlace_mode;
	bool deinterlace_top_first;
	bool deinterlace_rendered;

|
|
|
        /* filters */
        struct obs_source *filter_parent;
        struct obs_source *filter_target;
        DARRAY(struct obs_source *) filters;
        pthread_mutex_t filter_mutex;
        gs_texrender_t *filter_texrender;
        enum obs_allow_direct_render allow_direct;
        bool rendering_filter;
        /* source-specific hotkeys */
        obs_hotkey_pair_id mute_unmute_key;
        obs_hotkey_id push_to_mute_key;
        obs_hotkey_id push_to_talk_key;
        bool push_to_mute_enabled;
        bool push_to_mute_pressed;
        bool user_push_to_mute_pressed;
        bool push_to_talk_enabled;
        bool push_to_talk_pressed;
        bool user_push_to_talk_pressed;
        uint64_t push_to_mute_delay;
        uint64_t push_to_mute_stop_time;
        uint64_t push_to_talk_delay;
        uint64_t push_to_talk_stop_time;

libobs: Implement transition sources
Transition sources are implemented by registering a source type as
OBS_SOURCE_TYPE_TRANSITION. They're automatically marked as video
composite sources, and video_render/audio_render callbacks must be set
when registering the source. get_width and get_height callbacks are
unused for these types of sources, as transitions automatically handle
width/height behind the scenes with the transition settings.
In the video_render callback, the helper function
obs_transition_video_render is used to assist in automatically
processing and rendering the video. A render callback is passed to the
function, which in turn hands the callback the to/from textures that
are automatically rendered in the back-end.
Similarly, in the audio_render callback, the helper function
obs_transition_audio_render is used to assist in automatically
processing and rendering the audio. Two mix callbacks are used to
handle how the source/destination sources are mixed together. To ensure
the best possible quality, audio processing is per-sample.
Transitions can be set to automatically resize, or they can be set to
have a fixed size. Sources within transitions can be made to scale to
the transition size (with or without aspect ratio), or to not scale
unless they're bigger than the transition. They can have a specific
alignment within the transition, or they just default to top-left.
These features are implemented for the purpose of extending transitions
to also act as "switch" sources later, where you can switch to/from two
different sources using the transition animation.
Planned (but not yet implemented and lower priority) features:
- "Switch" transitions which allow the ability to switch back and forth
between two sources with a transitioning animation without discarding
the references
- Easing options to allow the option to transition with a bezier or
custom curve
- Manual transitioning to allow the front-end/user to manually control
the transition offset

        /* transitions */
        uint64_t transition_start_time;
        uint64_t transition_duration;
        pthread_mutex_t transition_tex_mutex;
        gs_texrender_t *transition_texrender[2];
        pthread_mutex_t transition_mutex;
        obs_source_t *transition_sources[2];
        float transition_manual_clamp;
        float transition_manual_torque;
        float transition_manual_target;
        float transition_manual_val;
        bool transitioning_video;
        bool transitioning_audio;
        bool transition_source_active[2];
        uint32_t transition_alignment;
        uint32_t transition_actual_cx;
        uint32_t transition_actual_cy;
        uint32_t transition_cx;
        uint32_t transition_cy;
        uint32_t transition_fixed_duration;
        bool transition_use_fixed_duration;
        enum obs_transition_mode transition_mode;
        enum obs_transition_scale_type transition_scale_type;
        struct matrix4 transition_matrices[2];

        struct audio_monitor *monitor;
        enum obs_monitoring_type monitoring_type;

        obs_data_t *private_settings;
};
extern struct obs_source_info *get_source_info(const char *id);
libobs: Add services API, reduce repeated code
Add API for streaming services. The services API simplifies the
creation of custom service features and user interface.
Custom streaming services later on will be able to do things such as:
- Be able to use service-specific APIs via modules, allowing a more
direct means of communicating with the service and requesting or
setting service-specific information
- Get URL/stream key via other means of authentication such as OAuth,
or be able to build custom URLs for services that require that sort
of thing.
- Query information (such as viewer count, chat, follower
notifications, and other information)
- Set channel information (such as current game, current channel title,
activating commercials)
Also, I reduced some repeated code that was used for all libobs
objects. This includes the name of the object, the private data,
settings, as well as the signal and procedure handlers.
I also switched to using linked lists for the global object lists,
rather than using an array of pointers (you could say it was
pointless). Anyway, the linked list info is also stored in the shared
context data structure.
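As a rough illustration of the registration side of this API, a module might
register a custom service roughly as in the sketch below. The "my_rtmp_service"
id and the my_service_* callbacks are hypothetical, and only the core
obs_service_info callbacks are shown; service-specific callbacks (URL/key
getters and the like) are omitted and may differ by libobs version.

static const char *my_service_get_name(void *type_data)
{
        UNUSED_PARAMETER(type_data);
        return "My Streaming Service";
}

static void *my_service_create(obs_data_t *settings, obs_service_t *service)
{
        UNUSED_PARAMETER(settings);
        UNUSED_PARAMETER(service);
        return bzalloc(sizeof(int)); /* per-instance data would live here */
}

static void my_service_destroy(void *data)
{
        bfree(data);
}

static struct obs_service_info my_service = {
        .id = "my_rtmp_service",
        .get_name = my_service_get_name,
        .create = my_service_create,
        .destroy = my_service_destroy,
};

/* called from the module's obs_module_load() */
void my_module_register_service(void)
{
        obs_register_service(&my_service);
}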

extern bool obs_source_init_context(struct obs_source *source,
                                    obs_data_t *settings, const char *name,
                                    obs_data_t *hotkey_data, bool private);

extern bool obs_transition_init(obs_source_t *transition);
extern void obs_transition_free(obs_source_t *transition);
extern void obs_transition_tick(obs_source_t *transition, float t);
extern void obs_transition_enum_sources(obs_source_t *transition,
                                        obs_source_enum_proc_t enum_callback,
                                        void *param);
extern void obs_transition_save(obs_source_t *source, obs_data_t *data);
extern void obs_transition_load(obs_source_t *source, obs_data_t *data);
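To tie these helpers back to the transition-source registration flow described
above, a minimal transition source might look like the sketch below. The
"my_cut_transition" id and the my_cut_* callbacks are hypothetical; the blend
shown is just a trivial hard cut at the halfway point, and audio_render would
use obs_transition_audio_render in the same fashion.

static const char *my_cut_get_name(void *type_data)
{
        UNUSED_PARAMETER(type_data);
        return "My Cut Transition";
}

static void *my_cut_create(obs_data_t *settings, obs_source_t *source)
{
        UNUSED_PARAMETER(settings);
        return source; /* use the transition itself as per-instance data */
}

static void my_cut_destroy(void *data)
{
        UNUSED_PARAMETER(data);
}

static void my_cut_callback(void *data, gs_texture_t *a, gs_texture_t *b,
                            float t, uint32_t cx, uint32_t cy)
{
        /* trivial blend: draw the outgoing source for the first half of the
         * transition and the incoming source for the second half */
        gs_texture_t *tex = (t < 0.5f) ? a : b;
        gs_effect_t *effect = obs_get_base_effect(OBS_EFFECT_DEFAULT);

        UNUSED_PARAMETER(data);

        gs_effect_set_texture(gs_effect_get_param_by_name(effect, "image"),
                              tex);
        while (gs_effect_loop(effect, "Draw"))
                gs_draw_sprite(tex, 0, cx, cy);
}

static void my_cut_video_render(void *data, gs_effect_t *effect)
{
        UNUSED_PARAMETER(effect);
        obs_transition_video_render(data, my_cut_callback);
}

static struct obs_source_info my_cut_transition = {
        .id = "my_cut_transition",
        .type = OBS_SOURCE_TYPE_TRANSITION,
        .get_name = my_cut_get_name,
        .create = my_cut_create,
        .destroy = my_cut_destroy,
        .video_render = my_cut_video_render,
};

The module would then pass &my_cut_transition to obs_register_source() from
its obs_module_load().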

struct audio_monitor *audio_monitor_create(obs_source_t *source);
void audio_monitor_reset(struct audio_monitor *monitor);
extern void audio_monitor_destroy(struct audio_monitor *monitor);

extern obs_source_t *obs_source_create_set_last_ver(const char *id,
                                                    const char *name,
                                                    obs_data_t *settings,
                                                    obs_data_t *hotkey_data,
                                                    uint32_t last_obs_ver);

extern void obs_source_destroy(struct obs_source *source);

enum view_type {
        MAIN_VIEW,
        AUX_VIEW,
};

static inline void obs_source_dosignal(struct obs_source *source,
                                       const char *signal_obs,
                                       const char *signal_source)
{
        struct calldata data;
        uint8_t stack[128];

        calldata_init_fixed(&data, stack, sizeof(stack));
        calldata_set_ptr(&data, "source", source);
        if (signal_obs && !source->context.private)
                signal_handler_signal(obs->signals, signal_obs, &data);
        if (signal_source)
                signal_handler_signal(source->context.signals, signal_source,
                                      &data);
}
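For reference, libobs emits its paired global/per-source signals through this
helper roughly as in the fragment below ('source' being the source the event
concerns; the exact signal names follow the "source_*" convention used
elsewhere in libobs):

        /* emit the global "source_create" signal plus the source's own
         * "create" signal */
        obs_source_dosignal(source, "source_create", "create");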

/* maximum timestamp variance in nanoseconds */
#define MAX_TS_VAR 2000000000ULL

static inline bool frame_out_of_bounds(const obs_source_t *source, uint64_t ts)
{
        if (ts < source->last_frame_ts)
                return ((source->last_frame_ts - ts) > MAX_TS_VAR);
        else
                return ((ts - source->last_frame_ts) > MAX_TS_VAR);
}

static inline enum gs_color_format
convert_video_format(enum video_format format)
{
        switch (format) {
        case VIDEO_FORMAT_RGBA:
                return GS_RGBA;
        case VIDEO_FORMAT_BGRA:
        case VIDEO_FORMAT_I40A:
        case VIDEO_FORMAT_I42A:
        case VIDEO_FORMAT_YUVA:
        case VIDEO_FORMAT_AYUV:
                return GS_BGRA;
        default:
                return GS_BGRX;
        }
}

extern void obs_source_activate(obs_source_t *source, enum view_type type);
extern void obs_source_deactivate(obs_source_t *source, enum view_type type);
extern void obs_source_video_tick(obs_source_t *source, float seconds);

libobs: Refactor source volume transition design
This changes the way source volume handles transitioning between being
active and inactive states.
The previous way that transitioning handled volume was that it set the
presentation volume of the source and all of its sub-sources to 0.0 if
the source was inactive, and 1.0 if active. Transition sources would
then also set the presentation volume for sub-sources to whatever their
transitioning volume was. However, the problem with this is that the
design didn't take into account whether the source or its sub-sources
were active anywhere else, so it would break if that ever happened, and
I didn't realize that when I was designing it.
So instead, this completely overhauls the design of handling
transitioning volume. Each frame, it'll go through all sources and
check whether they're active or inactive and set the base volume
accordingly. If transitions are currently active, it will actually walk
the active source tree and check whether the source is in a
transitioning state somewhere.
- If the source is a sub-source of a transition, and it's not active
outside of the transition, then the transition will control the
volume of the source.
- If the source is a sub-source of a transition, but it's also active
outside of the transition, it'll defer to whichever is louder.
This also adds a new callback to the obs_source_info structure for
transition sources, get_transition_volume, which is called to get the
transitioning volume of a sub-source.
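The new callback mentioned above lives in obs_source_info; a transition that
fades audio linearly might implement it roughly as below. The struct and field
names are hypothetical, and the callback signature is assumed from the public
headers rather than guaranteed here.

struct my_transition {
        obs_source_t *source_a;  /* outgoing source */
        float transition_point;  /* transition progress, 0.0 .. 1.0 */
};

static float my_transition_get_transition_volume(void *data,
                                                 obs_source_t *source)
{
        struct my_transition *tr = data;

        /* fade the outgoing source down and the incoming source up */
        if (source == tr->source_a)
                return 1.0f - tr->transition_point;
        return tr->transition_point;
}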

extern float obs_source_get_target_volume(obs_source_t *source,
                                          obs_source_t *target);

extern void obs_source_audio_render(obs_source_t *source, uint32_t mixers,
                                    size_t channels, size_t sample_rate,
                                    size_t size);

extern void add_alignment(struct vec2 *v, uint32_t align, int cx, int cy);

extern struct obs_source_frame *filter_async_video(obs_source_t *source,
                                                   struct obs_source_frame *in);
extern bool update_async_texture(struct obs_source *source,
                                 const struct obs_source_frame *frame,
                                 gs_texture_t *tex, gs_texrender_t *texrender);
extern bool update_async_textures(struct obs_source *source,
                                  const struct obs_source_frame *frame,
                                  gs_texture_t *tex[MAX_AV_PLANES],
                                  gs_texrender_t *texrender);
extern bool set_async_texture_size(struct obs_source *source,
                                   const struct obs_source_frame *frame);
extern void remove_async_frame(obs_source_t *source,
                               struct obs_source_frame *frame);

extern void set_deinterlace_texture_size(obs_source_t *source);
extern void deinterlace_process_last_frame(obs_source_t *source,
                                           uint64_t sys_time);
extern void deinterlace_update_async_video(obs_source_t *source);
extern void deinterlace_render(obs_source_t *s);
/* ------------------------------------------------------------------------- */
/* outputs */

libobs: Add encoded output delay support
This feature allows a user to delay an output (as long as the output
itself supports it). Needless to say, this is intended for live streams,
where users may want to delay their streams to prevent stream sniping,
cheating, and other such things.
The design this time was a bit more elaborate, but still simple: the
user can now schedule stops/starts without having to wait for the
stream itself to stop before being able to take any action.
Optionally, they can also forcibly stop the stream (and the delay) in
case something happens which they might not want to be streamed.
Additionally, a new option was added to preserve the stream cutoff
point on disconnections/reconnections, so that if you get disconnected
while streaming, when it reconnects, it will reconnect right at the
point where it left off. This will probably be quite useful for a
number of applications in addition to regular delay, such as setting
the delay to 1 second and then using this feature to keep, for example,
a critical stream such as a tournament stream from getting any of its
stream data cut off. However, using this feature will of course cause
the stream data to buffer and increase delay (and memory usage) while
it's in the process of reconnecting.
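From the front end's perspective the feature boils down to one call made
before starting the output. A hedged sketch ("rtmp_output" is just an example
output id; error handling is trimmed):

/* enable a 30-second stream delay and preserve the cutoff point across
 * disconnects/reconnects */
static obs_output_t *start_delayed_stream(obs_data_t *settings)
{
        obs_output_t *output = obs_output_create("rtmp_output",
                                                 "delayed stream", settings,
                                                 NULL);
        if (!output)
                return NULL;

        obs_output_set_delay(output, 30, OBS_OUTPUT_DELAY_PRESERVE);
        obs_output_start(output);
        return output;
}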

enum delay_msg {
        DELAY_MSG_PACKET,
        DELAY_MSG_START,
        DELAY_MSG_STOP,
};

struct delay_data {
        enum delay_msg msg;
        uint64_t ts;
        struct encoder_packet packet;
};

typedef void (*encoded_callback_t)(void *data, struct encoder_packet *packet);

struct obs_weak_output {
        struct obs_weak_ref ref;
        struct obs_output *output;
};

#define CAPTION_LINE_CHARS (32)
#define CAPTION_LINE_BYTES (4 * CAPTION_LINE_CHARS)
struct caption_text {
        char text[CAPTION_LINE_BYTES + 1];
        double display_duration;
        struct caption_text *next;
};

struct pause_data {
        pthread_mutex_t mutex;
        uint64_t last_video_ts;
        uint64_t ts_start;
        uint64_t ts_end;
        uint64_t ts_offset;
};

extern bool video_pause_check(struct pause_data *pause, uint64_t timestamp);
extern bool audio_pause_check(struct pause_data *pause, struct audio_data *data,
                              size_t sample_rate);
extern void pause_reset(struct pause_data *pause);

struct obs_output {
        struct obs_context_data context;
        struct obs_output_info info;
        struct obs_weak_output *control;

        /* indicates ownership of the info.id buffer */
        bool owns_info_id;

        bool received_video;
        bool received_audio;
        volatile bool data_active;
        volatile bool end_data_capture_thread_active;
        int64_t video_offset;
        int64_t audio_offsets[MAX_AUDIO_MIXES];
        int64_t highest_audio_ts;
        int64_t highest_video_ts;
        pthread_t end_data_capture_thread;
        os_event_t *stopping_event;
        pthread_mutex_t interleaved_mutex;
        DARRAY(struct encoder_packet) interleaved_packets;
        int stop_code;

        int reconnect_retry_sec;
        int reconnect_retry_max;
        int reconnect_retries;
        int reconnect_retry_cur_sec;
        pthread_t reconnect_thread;
        os_event_t *reconnect_stop_event;
        volatile bool reconnecting;
        volatile bool reconnect_thread_active;

        uint32_t starting_drawn_count;
        uint32_t starting_lagged_count;
        uint32_t starting_frame_count;

        int total_frames;

        volatile bool active;
        volatile bool paused;
        video_t *video;
        audio_t *audio;
        obs_encoder_t *video_encoder;
        obs_encoder_t *audio_encoders[MAX_AUDIO_MIXES];
        obs_service_t *service;
        size_t mixer_mask;

        struct pause_data pause;

        struct circlebuf audio_buffer[MAX_AUDIO_MIXES][MAX_AV_PLANES];
        uint64_t audio_start_ts;
        uint64_t video_start_ts;
        size_t audio_size;
        size_t planes;
        size_t sample_rate;
        size_t total_audio_frames;

        uint32_t scaled_width;
        uint32_t scaled_height;

        bool video_conversion_set;
        bool audio_conversion_set;
        struct video_scale_info video_conversion;
        struct audio_convert_info audio_conversion;

        pthread_mutex_t caption_mutex;
        double caption_timestamp;
        struct caption_text *caption_head;
        struct caption_text *caption_tail;

        bool valid;

        uint64_t active_delay_ns;
        encoded_callback_t delay_callback;
        struct circlebuf delay_data; /* struct delay_data */
        pthread_mutex_t delay_mutex;
        uint32_t delay_sec;
        uint32_t delay_flags;
        uint32_t delay_cur_flags;
        volatile long delay_restart_refs;
        volatile bool delay_active;
        volatile bool delay_capturing;

        char *last_error_message;

        float audio_data[MAX_AUDIO_CHANNELS][AUDIO_OUTPUT_FRAMES];
};

static inline void do_output_signal(struct obs_output *output,
                                    const char *signal)
{
        struct calldata params = {0};
        calldata_set_ptr(&params, "output", output);
        signal_handler_signal(output->context.signals, signal, &params);
        calldata_free(&params);
}

extern void process_delay(void *data, struct encoder_packet *packet);
extern void obs_output_cleanup_delay(obs_output_t *output);
extern bool obs_output_delay_start(obs_output_t *output);
extern void obs_output_delay_stop(obs_output_t *output);
extern bool obs_output_actual_start(obs_output_t *output);
extern void obs_output_actual_stop(obs_output_t *output, bool force,
                                   uint64_t ts);

extern const struct obs_output_info *find_output(const char *id);
Implement encoder usage with outputs
- Make it so that encoders can be assigned to outputs. If an encoder
is destroyed, it will automatically remove itself from that output.
I specifically didn't want to do reference counting because it leaves
too much potential for unchecked references and it just felt like it
would be more trouble than it's worth.
- Add a 'flags' value to the output definition structure. This lets
the output specify if it uses video/audio, and whether the output is
meant to be used with OBS encoders or not.
- Remove boilerplate code for outputs. This makes it easier to program
outputs. The boilerplate code before mostly just involved connecting
to the audio/video data streams directly in each output plugin.
Instead of doing that, simply add plugin callback functions for
receiving video/audio (either encoded or non-encoded, whichever it's
set to use), and then call obs_output_begin_data_capture and
obs_output_end_data_capture to automatically handle setting up
connections to raw or encoded video/audio streams for the plugin.
- Remove 'active' function from output callbacks, as it's no longer
really needed now that the libobs output context automatically knows
when the output is active or not.
- Make it so that an encoder cannot be destroyed until all data
connections to the encoder have been removed.
- Change the 'start' and 'stop' functions in the encoder interface to
just an 'initialize' callback, which initializes the encoder.
- Make it so that the encoder must be initialized first before the data
stream can be started. The reason why initialization was separated
from starting the encoder stream was because we need to be able to
check that the settings used with the encoder *can* be used first.
This problem was especially annoying if you had both video/audio
encoding. Before, you'd have to check the return value from
obs_encoder_start, and if that second encoder fails, then you
basically had to stop the first encoder again, making for
unnecessary boilerplate code whenever starting up two encoders.
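In practice the pattern described above looks roughly like the sketch below
inside an encoded output plugin; the my_output_* names and the struct are
hypothetical, and real outputs do more validation and signalling.

struct my_output {
        obs_output_t *output;
};

static bool my_output_start(void *data)
{
        struct my_output *stream = data;

        if (!obs_output_can_begin_data_capture(stream->output, 0))
                return false;
        if (!obs_output_initialize_encoders(stream->output, 0))
                return false;

        /* libobs now connects the encoded video/audio streams and begins
         * calling the output's packet callback */
        return obs_output_begin_data_capture(stream->output, 0);
}

static void my_output_stop(void *data, uint64_t ts)
{
        struct my_output *stream = data;
        UNUSED_PARAMETER(ts);
        obs_output_end_data_capture(stream->output);
}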

extern void obs_output_remove_encoder(struct obs_output *output,
                                      struct obs_encoder *encoder);

extern void
obs_encoder_packet_create_instance(struct encoder_packet *dst,
                                   const struct encoder_packet *src);
void obs_output_destroy(obs_output_t *output);
/* ------------------------------------------------------------------------- */
/* encoders */

struct obs_weak_encoder {
        struct obs_weak_ref ref;
        struct obs_encoder *encoder;
};

struct encoder_callback {
Implement encoder interface (still preliminary)
- Implement OBS encoder interface. It was previously incomplete, but
now is reaching some level of completion, though probably should
still be considered preliminary.
I had originally implemented it so that encoders only have a 'reset'
function to reset their parameters, but I felt that having both a
'start' and 'stop' function would be useful.
Encoders are now assigned to a specific video/audio media output each
rather than implicitly assigned to the main obs video/audio
contexts. This allows separate encoder contexts that aren't
necessarily assigned to the main video/audio context (which is useful
for things such as recording specific sources). Will probably have
to do this for regular obs outputs as well.
When creating an encoder, you must now explicitly state whether that
encoder is an audio or video encoder.
Audio and video can optionally be automatically converted depending
on what the encoder specifies.
When something 'attaches' to an encoder, the first attachment starts
the encoder, and the encoder automatically attaches to the media
output context associated with it. Subsequent attachments won't have
the same effect, they will just start receiving the same encoder data
when the next keyframe plays (along with SEI if any). When detaching
from the encoder, the last detachment will fully stop the encoder and
detach the encoder from the media output context associated with the
encoder.
SEI must actually be exported separately; because new encoder
attachments may not always be at the beginning of the stream, the
first keyframe they get must have that SEI data in it. If the
encoder has SEI data, it needs only add one small function to simply
query that SEI data, and then that data will be handled automatically
by libobs for all subsequent encoder attachments.
- Implement x264 encoder plugin, move x264 files to separate plugin to
separate necessary dependencies.
- Change video/audio frame output structures to not use const
qualifiers to prevent issues with non-const function usage elsewhere.
This was an issue when writing the x264 encoder, as the x264 encoder
expects non-const frame data.
Change stagesurf_map to return a non-const data type to prevent this
as well.
- Change full range parameter of video scaler to be an enum rather than
boolean
        bool sent_first_packet;
        void (*new_packet)(void *param, struct encoder_packet *packet);
        void *param;
};
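Following the encoder-interface notes above, registering an encoder uses the
same pattern as sources and outputs. A minimal sketch of a video encoder stub
follows; the "my_encoder" id and the my_enc_* callbacks are hypothetical, and
the encode body is left empty.

static const char *my_enc_get_name(void *type_data)
{
        UNUSED_PARAMETER(type_data);
        return "My Video Encoder";
}

static void *my_enc_create(obs_data_t *settings, obs_encoder_t *encoder)
{
        UNUSED_PARAMETER(settings);
        return encoder; /* real encoders allocate private data here */
}

static void my_enc_destroy(void *data)
{
        UNUSED_PARAMETER(data);
}

static bool my_enc_encode(void *data, struct encoder_frame *frame,
                          struct encoder_packet *packet, bool *received_packet)
{
        /* compress 'frame', fill out 'packet', and set *received_packet
         * when a packet was produced */
        UNUSED_PARAMETER(data);
        UNUSED_PARAMETER(frame);
        UNUSED_PARAMETER(packet);
        *received_packet = false;
        return true;
}

static struct obs_encoder_info my_encoder = {
        .id = "my_encoder",
        .type = OBS_ENCODER_VIDEO,
        .codec = "h264",
        .get_name = my_enc_get_name,
        .create = my_enc_create,
        .destroy = my_enc_destroy,
        .encode = my_enc_encode,
        /* audio encoders additionally provide a frame-size callback so
         * libobs can handle circular buffering for them */
};

As with sources, the module registers it with obs_register_encoder(&my_encoder)
from obs_module_load().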

struct obs_encoder {
        struct obs_context_data context;
        struct obs_encoder_info info;
        struct obs_weak_encoder *control;

        /* allows re-routing to another encoder */
        struct obs_encoder_info orig_info;

        pthread_mutex_t init_mutex;

        uint32_t samplerate;
        size_t planes;
        size_t blocksize;
        size_t framesize;
        size_t framesize_bytes;

	size_t mixer_idx;

(API Change) Add support for multiple audio mixers
API changed:
--------------------------
void obs_output_set_audio_encoder(
		obs_output_t *output,
		obs_encoder_t *encoder);
obs_encoder_t *obs_output_get_audio_encoder(
		const obs_output_t *output);
obs_encoder_t *obs_audio_encoder_create(
		const char *id,
		const char *name,
		obs_data_t *settings);
Changed to:
--------------------------
/* 'idx' specifies the track index of the output */
void obs_output_set_audio_encoder(
		obs_output_t *output,
		obs_encoder_t *encoder,
		size_t idx);
/* 'idx' specifies the track index of the output */
obs_encoder_t *obs_output_get_audio_encoder(
		const obs_output_t *output,
		size_t idx);
/* 'mixer_idx' specifies the mixer index to capture audio from */
obs_encoder_t *obs_audio_encoder_create(
		const char *id,
		const char *name,
		obs_data_t *settings,
		size_t mixer_idx);
Overview
--------------------------
This feature allows multiple audio mixers to be used at a time.  This
capability was added with surprisingly little extra overhead.  Audio
will not be mixed unless it's assigned to a specific mixer, and mixers
will not mix unless they have an active mix connection.
Mostly this will be useful for separating out specific audio for
recording versus streaming, but it will also be useful for certain
streaming services that support multiple audio streams via RTMP.
I didn't want to use a variable number of mixers due to the desire to
reduce heap allocations, so currently I set the limit to 4 simultaneous
mixers; this number can be increased later if needed, but honestly I
feel like it's just the right number to use.
Sources:
Sources can now specify which audio mixers their audio is mixed to;
this can be a single mixer or multiple mixers at a time.  The
obs_source_set_audio_mixers function sets the audio mixers which an
audio source applies to.  For example, 0xF would mean that the source
applies to all four mixers.
Audio Encoders:
Audio encoders must now specify which specific audio mixer they use
when they encode audio data.
Outputs:
Outputs that use encoders can now support multiple audio tracks at once
if they have the OBS_OUTPUT_MULTI_TRACK capability flag set.  This is
mostly only useful for certain types of RTMP transmissions, though it
may also be useful later on for file formats that support multiple
audio tracks.  (A usage sketch follows the struct definition below.)

	uint32_t scaled_width;
	uint32_t scaled_height;
	enum video_format preferred_format;

	volatile bool active;
	volatile bool paused;
	bool initialized;

	/* indicates ownership of the info.id buffer */
	bool owns_info_id;

	uint32_t timebase_num;
	uint32_t timebase_den;

	int64_t cur_pts;

	struct circlebuf audio_input_buffer[MAX_AV_PLANES];
	uint8_t *audio_output_buffer[MAX_AV_PLANES];

	/* if a video encoder is paired with an audio encoder, make it start
	 * up at the specific timestamp.  if this is the audio encoder,
	 * wait_for_video makes it wait until it's ready to sync up with
	 * video */
	bool wait_for_video;
	bool first_received;
	struct obs_encoder *paired_encoder;
	int64_t offset_usec;
	uint64_t first_raw_ts;
	uint64_t start_ts;

	pthread_mutex_t outputs_mutex;
	DARRAY(obs_output_t *) outputs;

	bool destroy_on_stop;

	/* stores the video/audio media output pointer: video_t * or audio_t * */
	void *media;

Implement encoder interface (still preliminary)
- Implement OBS encoder interface.  It was previously incomplete, but
  now is reaching some level of completion, though it should probably
  still be considered preliminary.
  I had originally implemented it so that encoders only have a 'reset'
  function to reset their parameters, but I felt that having both a
  'start' and 'stop' function would be useful.
  Encoders are now each assigned to a specific video/audio media output
  rather than implicitly assigned to the main obs video/audio contexts.
  This allows separate encoder contexts that aren't necessarily
  assigned to the main video/audio context (which is useful for things
  such as recording specific sources).  Will probably have to do this
  for regular obs outputs as well.
  When creating an encoder, you must now explicitly state whether that
  encoder is an audio or video encoder.
  Audio and video can optionally be converted automatically depending
  on what the encoder specifies.
  When something 'attaches' to an encoder, the first attachment starts
  the encoder, and the encoder automatically attaches to the media
  output context associated with it.  Subsequent attachments won't have
  the same effect; they just start receiving the same encoder data when
  the next keyframe plays (along with SEI, if any).  When detaching
  from the encoder, the last detachment fully stops the encoder and
  detaches the encoder from the media output context associated with
  it.
  SEI must actually be exported separately; because new encoder
  attachments may not always be at the beginning of the stream, the
  first keyframe they get must have that SEI data in it.  If the
  encoder has SEI data, it only needs to add one small function that
  queries that SEI data, and that data will then be handled
  automatically by libobs for all subsequent encoder attachments.  (A
  small obs_encoder_info sketch appears further below, before the
  start/stop declarations.)
- Implement the x264 encoder plugin; move the x264 files to a separate
  plugin to separate the necessary dependencies.
- Change the video/audio frame output structures to not use const
  qualifiers, to prevent issues with non-const function usage
  elsewhere.  This was an issue when writing the x264 encoder, as the
  x264 encoder expects non-const frame data.
  Change stagesurf_map to return a non-const data type to prevent this
  as well.
- Change the full range parameter of the video scaler to be an enum
  rather than a boolean.

	pthread_mutex_t callbacks_mutex;
	DARRAY(struct encoder_callback) callbacks;

	struct pause_data pause;

	const char *profile_encoder_encode_name;
};
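
As a usage sketch for the multi-track audio API described in the mixer
notes above: the snippet below creates a second AAC track bound to
mixer index 1 and assigns it to audio track 1 of an output.  It is only
a sketch; the "ffmpeg_aac" id, the NULL settings, and the mixer/track
indices are placeholder values.

/* Sketch only (public API from obs.h).  The obs_audio_encoder_create()
 * call follows the signature quoted in the mixer notes above (newer
 * libobs adds a trailing hotkey_data argument). */
#include <obs.h>

static void example_add_second_audio_track(obs_output_t *output,
					   obs_source_t *mic_source)
{
	obs_encoder_t *track2_aac = obs_audio_encoder_create(
			"ffmpeg_aac", "track 2 aac",
			NULL /* settings */, 1 /* mixer_idx */);

	/* route the microphone source to mixers 0 and 1 (bitmask 0x3) */
	obs_source_set_audio_mixers(mic_source, 0x3);

	/* capture that mixer on audio track index 1 of the output */
	obs_output_set_audio_encoder(output, track2_aac, 1);

	/* release the encoder with obs_encoder_release() once it is no
	 * longer needed */
}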
extern struct obs_encoder_info *find_encoder(const char *id);

extern bool obs_encoder_initialize(obs_encoder_t *encoder);
extern void obs_encoder_shutdown(obs_encoder_t *encoder);

Implement RTMP module (still needs drop code)
- Implement the RTMP output module.  This time around, we just use a
  simple FLV muxer, then write to the stream with RTMP_Write.  Easy and
  effective.
- Fix the FLV muxer; the muxer now outputs proper FLV packets.
- Output API:
  * When using encoders, automatically interleave encoded packets
    before sending them to the output.
  * Pair encoders and have them automatically wait for the other to
    start, to ensure sync.
  * Change 'obs_output_signal_start_fail' to 'obs_output_signal_stop',
    because it was a bit confusing, and doing this makes a lot more
    sense for outputs that need to stop suddenly (disconnections/etc.).
- Encoder API:
  * Remove some unnecessary encoder functions from the actual API and
    make them internal.  Most of the encoder functions are handled
    automatically by outputs anyway, so there's no real need to expose
    them and end up inadvertently confusing plugin writers.
  * Have audio encoders wait for the video encoder to get a frame, then
    start at the exact data point that the first video frame starts, to
    ensure the most accurate sync of video/audio possible.
  * Add a required 'frame_size' callback for audio encoders that
    returns the number of audio frames expected per encode call.  This
    way, the libobs encoder API can handle the circular buffering
    internally and automatically for the encoder modules, so encoder
    writers don't have to do it themselves.
- Fix a few bugs in the serializer interface.  It was passing the wrong
  variable for the data in a few cases.
- If a source has video, make obs_source_update defer the actual update
  callback until the tick function is called, to prevent threading
  issues.
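
The following is a minimal sketch of the audio-encoder callbacks
referenced above (the 'frame_size' callback) and in the
encoder-interface notes (the SEI query).  The field names (get_name,
get_frame_size, get_sei_data) follow the present-day obs_encoder_info
layout and are assumptions here; a real encoder would also implement
create/destroy/encode and register itself with obs_register_encoder()
from its module's obs_module_load().

/* Standalone plugin-side sketch, not part of this header. */
#include <obs-module.h>

static const char *example_aac_getname(void *type_data)
{
	UNUSED_PARAMETER(type_data);
	return "Example AAC Encoder";
}

/* libobs buffers raw audio internally and hands the encoder exactly
 * this many audio frames per encode call */
static size_t example_aac_frame_size(void *data)
{
	UNUSED_PARAMETER(data);
	return 1024; /* typical AAC frame size */
}

/* queried so that late encoder attachments still receive SEI data with
 * their first keyframe */
static bool example_aac_sei(void *data, uint8_t **sei, size_t *size)
{
	UNUSED_PARAMETER(data);
	*sei = NULL;
	*size = 0;
	return false; /* this example carries no SEI */
}

struct obs_encoder_info example_aac_encoder = {
	.id = "example_aac",
	.type = OBS_ENCODER_AUDIO,
	.codec = "aac",
	.get_name = example_aac_getname,
	.get_frame_size = example_aac_frame_size,
	.get_sei_data = example_aac_sei,
	/* .create, .destroy and .encode omitted for brevity */
};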
extern void obs_encoder_start(obs_encoder_t *encoder,
			      void (*new_packet)(void *param,
						 struct encoder_packet *packet),
			      void *param);
extern void obs_encoder_stop(obs_encoder_t *encoder,
			     void (*new_packet)(void *param,
						struct encoder_packet *packet),
			     void *param);
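
A sketch of how these internal start/stop helpers are meant to be
driven: a packet callback plus an opaque param, registered and later
unregistered with the same pair.  The interleave_packet helper and
example_hook_encoder function are hypothetical; the real call sites
live in the libobs output code.

static void interleave_packet(void *param, struct encoder_packet *packet)
{
	struct obs_output *output = param;
	/* a real output would interleave/queue the encoded packet here */
	(void)output;
	(void)packet;
}

static void example_hook_encoder(obs_encoder_t *encoder,
				 struct obs_output *output)
{
	obs_encoder_start(encoder, interleave_packet, output);
	/* ... later, when the output stops ... */
	obs_encoder_stop(encoder, interleave_packet, output);
}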

Implement encoder usage with outputs
- Make it so that encoders can be assigned to outputs.  If an encoder
  is destroyed, it will automatically remove itself from that output.
  I specifically didn't want to do reference counting because it leaves
  too much potential for unchecked references, and it just felt like it
  would be more trouble than it's worth.
- Add a 'flags' value to the output definition structure.  This lets
  the output specify whether it uses video/audio, and whether the
  output is meant to be used with OBS encoders or not.
- Remove boilerplate code for outputs.  This makes it easier to program
  outputs.  The boilerplate code involved before mostly consisted of
  connecting to the audio/video data streams directly in each output
  plugin.
  Instead of doing that, simply add plugin callback functions for
  receiving video/audio (either encoded or non-encoded, whichever it's
  set to use), and then call obs_output_begin_data_capture and
  obs_output_end_data_capture to automatically handle setting up
  connections to raw or encoded video/audio streams for the plugin.
- Remove the 'active' function from output callbacks, as it's no longer
  really needed now that the libobs output context automatically knows
  when the output is active or not.
- Make it so that an encoder cannot be destroyed until all data
  connections to the encoder have been removed.
- Change the 'start' and 'stop' functions in the encoder interface to
  just an 'initialize' callback, which initializes the encoder.
- Make it so that the encoder must be initialized before the data
  stream can be started.  The reason initialization was separated from
  starting the encoder stream is that we need to be able to check that
  the settings used with the encoder *can* be used first.
  This problem was especially annoying if you had both video and audio
  encoding.  Before, you'd have to check the return value from
  obs_encoder_start, and if the second encoder failed, you basically
  had to stop the first encoder again, making for unnecessary
  boilerplate code whenever starting up two encoders.

extern void obs_encoder_add_output(struct obs_encoder *encoder,
				   struct obs_output *output);
extern void obs_encoder_remove_output(struct obs_encoder *encoder,
				      struct obs_output *output);
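
To illustrate the reduced output boilerplate described in the notes
above, here is a rough sketch of an encoded output's callbacks
bracketing capture with obs_output_begin_data_capture /
obs_output_end_data_capture.  The context struct, the callback shapes,
and the OBS_OUTPUT_* flags follow the public output API and are
assumptions here, not definitions from this header.

struct example_output {
	obs_output_t *output; /* handed to us by the 'create' callback */
};

static bool example_output_start(void *data)
{
	struct example_output *ctx = data;

	/* connect to the remote host here, then ask libobs to begin
	 * delivering encoded packets */
	return obs_output_begin_data_capture(ctx->output, 0);
}

static void example_output_stop(void *data, uint64_t ts)
{
	struct example_output *ctx = data;
	(void)ts;
	obs_output_end_data_capture(ctx->output);
}

static void example_output_packet(void *data, struct encoder_packet *packet)
{
	/* write the already-encoded, interleaved packet to the wire */
	(void)data;
	(void)packet;
}

/* in the plugin's obs_output_info (sketch):
 *   .flags          = OBS_OUTPUT_AV | OBS_OUTPUT_ENCODED,
 *   .start          = example_output_start,
 *   .stop           = example_output_stop,
 *   .encoded_packet = example_output_packet          */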

libobs: Add services API, reduce repeated code
Add an API for streaming services.  The services API simplifies the
creation of custom service features and user interface.
Custom streaming services later on will be able to do things such as:
- Use service-specific APIs via modules, allowing a more direct means
  of communicating with the service and requesting or setting
  service-specific information
- Get the URL/stream key via other means of authentication such as
  OAuth, or build custom URLs for services that require that sort of
  thing
- Query information (such as viewer count, chat, follower
  notifications, and other information)
- Set channel information (such as the current game, the current
  channel title, or activating commercials)
Also, I reduced some repeated code that was used for all libobs
objects.  This includes the name of the object, the private data,
settings, as well as the signal and procedure handlers.
I also switched to using linked lists for the global object lists,
rather than using an array of pointers (you could say it was...
pointless).  Anyway, the linked list info is also stored in the shared
context data structure.
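
Purely as an illustration of the shared context data mentioned above,
the sketch below shows the kind of fields such a structure carries
(name, private data, settings, signal/procedure handlers, and intrusive
linked-list pointers).  The real struct obs_context_data is defined
earlier in this header and differs in detail.

struct example_shared_context {
	char *name;                /* object name          */
	void *data;                /* plugin private data  */
	obs_data_t *settings;      /* per-object settings  */
	signal_handler_t *signals; /* signal handler       */
	proc_handler_t *procs;     /* procedure handler    */

	/* intrusive links for the global per-type object list */
	struct example_shared_context *next;
	struct example_shared_context **prev_next;
};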
extern bool start_gpu_encode(obs_encoder_t *encoder);
extern void stop_gpu_encode(obs_encoder_t *encoder);

extern bool do_encode(struct obs_encoder *encoder, struct encoder_frame *frame);
extern void send_off_encoder_packet(obs_encoder_t *encoder, bool success,
				    bool received, struct encoder_packet *pkt);

void obs_encoder_destroy(obs_encoder_t *encoder);

/* ------------------------------------------------------------------------- */
/* services */

struct obs_weak_service {
	struct obs_weak_ref ref;
	struct obs_service *service;
};

struct obs_service {
	struct obs_context_data context;
	struct obs_service_info info;
	struct obs_weak_service *control;

	/* indicates ownership of the info.id buffer */
	bool owns_info_id;

	bool active;
	bool destroy;
	struct obs_output *output;
};

obs-studio UI: Implement stream settings UI
- Updated the services API so that it links up with an output and the
  output gets data from that service rather than via settings.
  This allows the service context to have control over how an output is
  used, and makes it so that the URL/key/etc. isn't necessarily some
  static setting.
  Also, if the service is attached to an output, it will stick around
  until the output is destroyed.
- The settings interface has been updated so that it can allow the
  usage of service plugins.  What this means is that you can now create
  a service plugin that can control aspects of the stream, and it
  allows each service to create its own user interface if it creates a
  service plugin module.
- Testing out saving of current service information.  It saves/loads
  from JSON into obs_data_t, seems to be working quite nicely, and the
  service object information is saved/preserved on exit and loaded
  again on startup.
- I agonized over the settings user interface for days, and eventually
  I just decided that the only way users weren't going to be fumbling
  over options was to split the settings into simple/basic output,
  pre-configured, and then advanced for advanced use (such as multiple
  outputs or services, which I'll implement later).
  This was particularly painful to really design right; I wanted more
  features and wanted to include everything in one interface, but
  ultimately I just realized from experience that users are not
  technically knowledgeable about it and will end up fumbling with the
  settings rather than getting things done.
  Basically, what this means is that casual users only have to enter
  about 3 things to configure their stream: stream key, audio bitrate,
  and video bitrate.  I am really happy with this interface for those
  types of users, but it definitely won't be sufficient for advanced
  usage or for custom outputs, so that stuff will have to be separated.
- Improved the JSON usage for the 'common streaming services' context.
  I realized that JSON arrays are there to ensure sorting, while
  forgetting that general items are optimized for hashing.  So
  basically I'm just using arrays now to sort items in it.
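
A small sketch of the frontend flow described above, using public obs.h
calls: attach a service to an output so the output pulls its URL/key
from the service context, then persist the service settings as JSON.
The "service.json" path is only an example.

/* Standalone frontend-side sketch, not part of this header. */
#include <obs.h>

static void example_save_service(obs_output_t *output, obs_service_t *service)
{
	/* the output now gets its stream data from the service context */
	obs_output_set_service(output, service);

	/* persist the service settings so they can be restored on startup */
	obs_data_t *settings = obs_service_get_settings(service);
	obs_data_save_json(settings, "service.json");
	obs_data_release(settings);
}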
extern const struct obs_service_info *find_service(const char *id);

extern void obs_service_activate(struct obs_service *service);
extern void obs_service_deactivate(struct obs_service *service, bool remove);
extern bool obs_service_initialize(struct obs_service *service,
				   struct obs_output *output);

void obs_service_destroy(obs_service_t *service);
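
Finally, a sketch of the order in which libobs might drive the internal
service helpers declared above when an output with an attached service
starts and stops; the function here is illustrative, and the real call
sites live in the output code.

static bool example_start_with_service(struct obs_output *output,
				       struct obs_service *service)
{
	if (!obs_service_initialize(service, output))
		return false; /* service rejected the output/settings */

	obs_service_activate(service);
	/* ... the output connects and streams ... */
	obs_service_deactivate(service, false);
	return true;
}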