It's a sad day when I realize that I did not add any null pointer
checking to any of the functions in this file. Discovered it while
checking all the different languages; it happened when there was a
missing locale file for a certain module that hadn't had the language
uploaded yet.
These macros are used as easy helper functions to load/unload module
locale data based upon the text_lookup system. You simply place the
OBS_MODULE_USE_DEFAULT_LOCALE macro once in the module, call
OBS_MODULE_FREE_DEFAULT_LOCALE in obs_module_unload, and then
call obs_module_text anywhere in your module where you need to look up
text.
By default, it will look for a locale directory in your module's data
directory, and look for language files within it (INI locale format).
This function is used to simplify the process when using the default
locale handling for modules. It will automatically search in the plugin
data directory associated with the specified module, load the default
locale text (for example, English if its default language is English),
and then load the set locale on top of the default locale, causing text
to fall back to the default locale whenever the desired locale text is
not found.
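(Rough sketch of how a module is expected to use this, assuming the macro
and function names described above; the module name, default locale, and
lookup string are made up, and I'm assuming OBS_MODULE_FREE_DEFAULT_LOCALE
takes no arguments:)

    #include <obs-module.h>

    OBS_DECLARE_MODULE()
    OBS_MODULE_USE_DEFAULT_LOCALE("my-plugin", "en-US")

    bool obs_module_load(void)
    {
        /* looks the string up in the currently loaded locale data */
        const char *name = obs_module_text("MyPlugin.Name");
        (void)name;
        return true;
    }

    void obs_module_unload(void)
    {
        /* frees the default locale lookup data */
        OBS_MODULE_FREE_DEFAULT_LOCALE();
    }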
Total bytes, total frames, and frames dropped. Total frames is
generated automatically, but total bytes and total dropped frames are
returned via callbacks.
Before, it would assign the encoder/media callbacks directly to the
output's callbacks; instead of doing that, it now goes through
intermediary functions for the sake of counting the frames.
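(A sketch of how a UI might poll these statistics; the accessor names here
follow the current libobs naming and are assumed rather than exact:)

    #include <obs.h>
    #include <stdio.h>

    void log_output_stats(obs_output_t *output)
    {
        /* total frames is tracked automatically; bytes and dropped frames
         * come from the output's callbacks */
        uint64_t bytes   = obs_output_get_total_bytes(output);
        int      frames  = obs_output_get_total_frames(output);
        int      dropped = obs_output_get_frames_dropped(output);

        printf("sent %llu bytes, %d frames (%d dropped)\n",
                (unsigned long long)bytes, frames, dropped);
    }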
Usually if you are reconnecting after a network outage, it will give a
different code (such as OBS_OUTPUT_CONNECT_FAILED). So, if already
reconnecting, ignore the code unless it's OBS_OUTPUT_SUCCESS.
I was implementing a pushing/popping attributes function like with GL,
but I realized that for our particular purposes (and actually for most
purposes) its usage was somewhat.. niche. I may still implement
pushing/popping of attributes in the future, though right now I feel
using a function to reset the state is sufficient for our purposes.
There was no need to call the context free function in the
initialization function, and it's safer to just initialize the memory to
0 before using (which also negates the need for da_init)
This just ensures that if an obs object is renamed that the pointer to
older names will still be valid. Prevents renames from causing any
invalid memory access.
When the obs object is destroyed, so are the cached names.
The core itself now provides reconnection options (enabled by default, 2
second timeout between reconnects, 20 retries max until actual
disconnection occurs). This will make things easier for both module
developers and UI developers.
Reconnecting treats the stream as though it were still active, and
signals are sent when reconnecting and upon successful reconnection.
Need to implement user interface information for reconnections.
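(Sketch of how a frontend might hook these; the signal names "reconnect" and
"reconnect_success" and the "output" calldata parameter are assumptions:)

    #include <obs.h>

    static void on_reconnect(void *data, calldata_t *cd)
    {
        obs_output_t *output = calldata_ptr(cd, "output");
        /* update the UI here, e.g. show "Reconnecting..." */
        UNUSED_PARAMETER(data);
        UNUSED_PARAMETER(output);
    }

    void hook_reconnect_signals(obs_output_t *output, void *ui)
    {
        signal_handler_t *sh = obs_output_get_signal_handler(output);
        signal_handler_connect(sh, "reconnect", on_reconnect, ui);
        signal_handler_connect(sh, "reconnect_success", on_reconnect, ui);
    }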
This implements the 'frame skipping' mechanism to forcibly cause frames
to be duplicated in order to reduce encoder complexity so the encoder
can catch up to the video, otherwise it will continue to be
progressively behind and will cause a desync of the video.
Typically, if a user gets this issue, they should turn down their
settings. For the love of god do not tell them that 'frames are
skipping', just tell them that CPU usage is high, and that they should
consider turning down their settings.
MagickCore is provided here as an alternative to FFmpeg in case FFmpeg
is not easily supported (for example, Debian-based Linux distros).
NOTE: CMake configuration needs to be changed in order to allow
MagickCore image support.
NOTE: In texture_setimage, I had to move variables to the top of the
scope because Microsoft's C compiler will give the legacy C90 error of:
'illegal use of this type as an expression'.
To sum it up, Microsoft's C compiler is still utter garbage.
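(Purely illustrative example of the workaround -- declarations hoisted to the
top of the block so a C90-only compiler accepts them; the function and
parameters are made up:)

    #include <stdint.h>
    #include <string.h>

    void copy_rows_c90(uint8_t *dst, const uint8_t *src, uint32_t height,
            uint32_t dst_pitch, uint32_t src_pitch)
    {
        uint32_t row;       /* declared up front to satisfy C90 */
        uint32_t copy_size;

        copy_size = (dst_pitch < src_pitch) ? dst_pitch : src_pitch;

        for (row = 0; row < height; row++)
            memcpy(dst + row * dst_pitch, src + row * src_pitch, copy_size);
    }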
Similar to the shader functions, the effect parameter functions take
the effect as a parameter. However, the effect parameter is pretty
pointless, because the effect parameter.. parameter stores the effect
pointer internally.
...I'm actually concerned that I went a bit overkill trying to prevent
backwards compatibility issues with this abstraction design, because
this is a large number of files that have to be modified just to add a
single graphics subsystem export. Someone's going to strangle me, and
when you know that someone might strangle you, that means that you did
something wrong. We'll have to look into simplifying this in the
future without killing backward compatibility safety.
The module callback obs_module_set_locale will be called after loading
the module, and any time the locale is manually changed via core API.
When this function is called, the module is expected to load new text
lookup values for all the text it uses based upon the current locale.
The locale parameter was a mistake, because it puts extra needless
burden upon the module developer to have to handle this variable for
each and every single callback function. The parameter is being removed
in favor of a single centralized module callback function that
specifically updates locale information for a module only when needed.
Having the value stored here is somewhat pointless, so this is one step
in fixing the locale handling. Locale should be handled by the modules
themselves with their own loaded locale lookup information.
This API is used to set the current locale for libobs, which it will set
for all modules when a module is loaded or specifically when the locale
is manually changed.
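(Sketch of both sides, assuming the names above; reload_module_locale() is a
hypothetical helper that rebuilds the module's text lookup:)

    #include <obs-module.h>

    extern void reload_module_locale(const char *locale); /* hypothetical */

    /* module side: called after load, and again whenever the locale is
     * changed through the core API */
    void obs_module_set_locale(const char *locale)
    {
        reload_module_locale(locale);
    }

    /* core/UI side (for reference):
     *     obs_set_locale("de-DE");  -- pushes the new locale to all modules
     */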
These functions were mostly related to being able to set true fullscreen
mode -- however, this has no place for our purposes, and these functions
were just sitting empty and unused, so they should be removed.
Besides, fullscreen mode only applies to the Windows operating system.
This variable is currently somewhat pointless, I was originally going to
use it to tell the graphics subsystem to completely rebuild the internal
vertex buffers, but it would be bad/inefficient to allow that
functionality.
These are meant to reflect auto-detection configuration changes that
should not be written to the config, for example, frame rate changes
for a camera where the (user-/config-file-)configured frame rate isn't
available but a similar frame rate can be automatically chosen
Default values are now permanently stored in the obs_data_items and
can be accessed via the new get_default functions.
Also, default values are no longer serialized to JSON, to ease the
transition to new default values.
This allows, for example, disconnected devices for dshow/avcapture to
be listed (and to stay selected in the user config) while making them
unavailable for selection in new sources
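(Sketch of the default-value behavior; function names follow the current
libobs naming, which may differ slightly from the names at the time:)

    #include <obs.h>
    #include <stdio.h>

    void defaults_example(void)
    {
        obs_data_t *settings = obs_data_create();

        /* the default is stored with the item itself... */
        obs_data_set_default_int(settings, "bitrate", 2500);

        /* ...and can be read back explicitly... */
        long long def = obs_data_get_default_int(settings, "bitrate");

        /* ...while a normal get falls back to it when no user value is set;
         * defaults are not written out when serializing to JSON */
        long long val = obs_data_get_int(settings, "bitrate");

        printf("default=%lld effective=%lld\n", def, val);
        obs_data_release(settings);
    }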
The 'initialize' callback is used before the encoders/output start up so
it can adjust encoder settings to required values if needed.
Also added the function 'obs_encoder_active' that returns true or false
depending on whether that encoder is active or not.
Structures with anonymous unions would cause a warning when you do a
brace assignment on them.
Also fixed some unused parameters and removed some unused variables.
So, scene editing was interesting (and by interesting I mean
excruciating). I almost implemented 'manipulator' visuals (a la 3ds Max,
for example), and used 3 modes for controlling position/rotation/size,
but in a 2D editor it felt clunky, so I defaulted back to simple
click-and-drag for movement, and then took a similar though slightly
different-looking approach for handling scaling and resizing.
I also added a number of menu item helpers related to positioning,
scaling, rotating, flipping, and resetting the transform back to
default.
There is also a new 'transform' dialog (accessible via menu) which will
allow you to manually edit every single transform variable of a scene
item directly if desired.
If a scene item does not have bounds active, pulling on the sides of a
source will resize it via its base scale rather than by the bounding box
system (if the source resizes, that scale will apply). If bounds are
active, it will modify the bounding box only instead.
How a source scales when a bounding box is active depends on the type of
bounds being used. You can set it to scale to the inner bounds, the
outer bounds, scale to bounds width only, scale to bounds height only,
and a setting to stretch to bounds (which forces a source to always draw
at the bounding box size rather than be affected by its internal size).
You can also set it to be used as a 'maximum' size, so that the source
doesn't necessarily get scaled unless it extends beyond the bounds.
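(Sketch of configuring the bounds system on a scene item; the enum and
function names are assumed from the description above:)

    #include <obs.h>
    #include <graphics/vec2.h>

    void set_item_bounds(obs_sceneitem_t *item)
    {
        struct vec2 bounds;
        vec2_set(&bounds, 1280.0f, 720.0f);

        /* scale the source to fit inside the box; other types include
         * OBS_BOUNDS_SCALE_OUTER, OBS_BOUNDS_SCALE_TO_WIDTH,
         * OBS_BOUNDS_SCALE_TO_HEIGHT, OBS_BOUNDS_STRETCH, and
         * OBS_BOUNDS_MAX_ONLY */
        obs_sceneitem_set_bounds_type(item, OBS_BOUNDS_SCALE_INNER);
        obs_sceneitem_set_bounds(item, &bounds);
        obs_sceneitem_set_bounds_alignment(item, OBS_ALIGN_CENTER);
    }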
Like in OBS1, objects will snap to the edges unless the control key is
pressed. However, this will now happen even if the object is rotated or
oriented in any strange way. Snapping will also occur when stretching
or changing the bounding box size.
There are a ridiculous number of features related to scaling and
positioning due to requests by a number of people who complained that
they hated the way that OBS1 would always resize their sources when the
source's base size changed. There were also people who wanted more
control for how the resizing was handled, or the ability to completely
prevent resizing entirely if desired. So I made it so that you can
optionally use a 'bounds' system, which allows you to specify different
styles of controlling resizing.
If disabled, the source will always automatically resize and only the
base scale is applied. If enabled, you have a variety of different ways
to limit/control how it can resize within the bounds, or make it so it
can't resize at all. You can also control alignment within that
bounding box, so you can make it so that a source always aligns to a
side or corner of the box.
I also added an alignment value which changes how the source is oriented
relative to the position of the scene item. For example, setting
bottom-right alignment will make it so that the position of the item is
the bottom right corner of the source. When the source resizes, it
will resize leftward and upward in that case, which solves the problem
of how a source resizes relative to a desired position.
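(Tiny sketch of that alignment value; the function and flag names are
assumed:)

    #include <obs.h>

    /* the item's position now marks the bottom-right corner of the source,
     * so resizes grow leftward and upward */
    void align_bottom_right(obs_sceneitem_t *item)
    {
        obs_sceneitem_set_alignment(item, OBS_ALIGN_BOTTOM | OBS_ALIGN_RIGHT);
    }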
I encountered a situation where I wanted to delete a callback for a
signal while inside of that signal. However, it would hard lock, and
even after that, it would mess up the loop for the callback list.
So, change the mutex of the individual signals to a recursive-style
mutex, and then, if a callback of a signal is deleted while currently in
that signal, just mark it for deletion, which will happen after the
signal is complete.
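(The standard pthreads pattern for a recursive mutex, shown as a sketch of
the change described above:)

    #include <pthread.h>

    int init_signal_mutex(pthread_mutex_t *mutex)
    {
        pthread_mutexattr_t attr;

        if (pthread_mutexattr_init(&attr) != 0)
            return -1;

        /* recursive: the same thread may re-lock while inside a signal */
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        return pthread_mutex_init(mutex, &attr);
    }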
This replaces the older code which simply queried the max volume level
value for any given audio.
I'm still not 100% sure on if this is how I want to approach the
problem, particularly, whether this should be done in obs_source or in
audio_line, but it can always be moved later if needed.
This uses the calculations by the awesome Bill Hamilton that OBS1 used
for its volume levels. It calculates the current max (level),
magnitude, and current peak. This data then can be used to create
awesome volume meter controls later on.
NOTE: Will probably need optimization, does one float at a time right
now.
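(Simplified sketch of the idea -- not the exact OBS1-derived math -- walking
the float samples one at a time and tracking a peak level plus an RMS-style
magnitude:)

    #include <math.h>
    #include <stddef.h>

    void calc_levels(const float *samples, size_t count,
            float *level, float *magnitude)
    {
        float max_val = 0.0f;
        float sum_sq = 0.0f;
        size_t i;

        for (i = 0; i < count; i++) {
            float val = fabsf(samples[i]);
            if (val > max_val)
                max_val = val;
            sum_sq += val * val;
        }

        *level = max_val;
        *magnitude = count ? sqrtf(sum_sq / (float)count) : 0.0f;
    }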
Also, change some of the naming conventions. I actually need to change
a lot of the naming conventions in general so that all words are
separated by underscores. Kind of a bad practice there on my part.
There was a fundamental flaw with the string type conversion functions
where the sizes were not being properly accounted for. They were using
the 'len' value as the size for the output rather than only for the
input, which was bad because the output could have more or fewer
characters than the input.
When a source's private data is being created by a module, it wasn't
able to call most source functions because most functions rely on the
obs_source_info part of the context to be set. This fixes that issue.
It was strange that this wasn't already the case because the other
context types already did the same thing.
This uses the reverse planar YUV 4:2:0 conversion shader to output a YUV
texture without having to convert it via CPU. Again, this will reduce
video upload bandwidth usage to 37.5% of the original rate. I suspect
this will be particularly useful for when an FFmpeg or libav input
plugin for playing videos is made.
NOTE: There's an issue with certain texture sizes that I haven't been
able to identify yet: if the full size of the texture data divided by
the base texture width is an uneven number, the V chroma plane seems
like it can potentially shift, though I've only had this happen with a
C920 at 160x90 resolution. Almost all resolutions tend to be even.
Needs further testing with more devices that support planar YUV 4:2:0
output.
This adds button support to properties, which will allow a properties
pane to let the user click a button to activate something with a
particular obs context. When pressed, the button will execute the
callback given, with the context's private data as a parameter.
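(Sketch of a button property; the callback signature is assumed from the
description above, and the property names/private data are made up:)

    #include <obs-module.h>

    struct my_source;   /* hypothetical private data type */

    static bool refresh_clicked(obs_properties_t *props,
            obs_property_t *property, void *data)
    {
        struct my_source *ctx = data;
        /* react to the click using the context here */
        UNUSED_PARAMETER(props);
        UNUSED_PARAMETER(property);
        UNUSED_PARAMETER(ctx);
        return false;   /* returning true would refresh the properties */
    }

    obs_properties_t *get_properties_example(void)
    {
        obs_properties_t *props = obs_properties_create();
        obs_properties_add_button(props, "refresh", "Refresh Device List",
                refresh_clicked);
        return props;
    }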
Character conversion functions did not previously ask for a maximum
buffer size for their 'dst' parameter; it's unsafe to assume a given
destination buffer has enough size to accommodate a conversion.
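(Sketch of the safer form; the exact signature of os_utf8_to_wcs is an
assumption here -- input string and length, plus an explicit destination
capacity:)

    #include <util/platform.h>   /* assumed header for the conversion helpers */
    #include <string.h>
    #include <wchar.h>

    void convert_example(const char *utf8)
    {
        wchar_t wide[256];

        /* the destination capacity is passed explicitly instead of being
         * inferred from the input length */
        size_t written = os_utf8_to_wcs(utf8, strlen(utf8), wide, 256);
        (void)written;
    }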
Added GitHub Gist API uploading to the help menu to help make problems a
bit easier to debug in the future. It's somewhat vital that this
functionality be implemented before any release in order to analyze any
given problem a user may be experiencing.
This fixes an issue reported by valgrind where overlapping memory
was copied with memcpy.
This also removes a redundant assignment where the array size was
explicitly set to zero when it was already zero.
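(Small illustration of the fix: memcpy is undefined for overlapping regions,
which is what valgrind flagged, so memmove is used instead:)

    #include <string.h>

    void shift_left(char *buf, size_t len, size_t offset)
    {
        /* source and destination overlap, so memmove rather than memcpy */
        memmove(buf, buf + offset, len - offset);
    }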
This doesn't add FLV file output to the user interface yet, but we'll
get around to that eventually. This just adds an FLV output type.
Also, removed ftello/fseeko because off_t is a really annoying data
type, and I'd rather have a firm int64_t for large sizes, so I renamed
them to os_fseeki64 and os_ftelli64 instead, and changed the file size
function to return an int64_t.
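(Sketch using the renamed helpers; signatures are assumed to mirror
fseek/ftell but with int64_t offsets:)

    #include <util/platform.h>   /* assumed header */
    #include <stdio.h>
    #include <stdint.h>

    int64_t file_size_example(FILE *file)
    {
        int64_t size;

        if (os_fseeki64(file, 0, SEEK_END) != 0)
            return -1;
        size = os_ftelli64(file);
        os_fseeki64(file, 0, SEEK_SET);   /* restore position */
        return size;
    }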
Not entirely sure how this happened but I *think* that a null source was
somehow being added to the list of user sources for one particular user,
and then I noticed this code does not check to see whether the source is
null or not.
Just use platform-nix.c code for general stuff that mac is compliant
with, and put a define around everything else. Take that code out of
platform-cocoa.m.
Added os_opendir, os_readdir, and os_closedir to be able to query
available files within a directory.
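(Sketch of listing a directory with the new helpers; the struct os_dirent
field names (d_name, directory) are assumptions:)

    #include <util/platform.h>   /* assumed header */
    #include <stdio.h>

    void list_dir_example(const char *path)
    {
        os_dir_t *dir = os_opendir(path);
        struct os_dirent *ent;

        if (!dir)
            return;

        while ((ent = os_readdir(dir)) != NULL) {
            if (!ent->directory)
                printf("%s\n", ent->d_name);
        }

        os_closedir(dir);
    }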
First, if the private data of the source fails to be created, then do
not destroy the source. If the source is destroyed, all the user's data
associated with that source is lost, which could end up being a
potential problem. Instead, let it linger as a 'dead' source until the
user chooses to fix the problem (though this should never really happen,
the source module functions should be programmed to handle this
scenario)
Secondly, rename new_frame_ready to ready_async_frame, and fix a
potential memory leak with it.
obs_source_output_video can cause cached frames to be freed twice if
called with a partially destroyed source, among other undesirable
effects; freeing the source private data right after the destroy signal
has been processed ensures proper behavior
- Add volume control
These volume controls are basically nothing more than sliders. They
look terrible and hopefully will be as temporary as they are
terrible.
- Allow saving of specific non-user sources via obs_load_source and
obs_save_source functions.
- Save data of desktop/mic audio sources (sync data, volume data, etc),
and load the data on startup.
- Make it so that a scene is created by default if it's the first time
  using the application. On certain operating systems where supported, a
  default capture will be created. Desktop capture on Mac, particularly.
  Not sure what to do about Windows because monitor capture on Windows 7
  is completely terrible and is bad to start users off with.
If a source with async video wasn't currently active, it would endlessly
buffer the video data, which would cause memory to grow endlessly until
available memory was exhausted.
This really needs to be replaced with a proper caching mechanism at some
point.
This saves scenes/sources to JSON on exit, and properly loads them back
up when starting the program again, as well as the currently active
scene.
I had to add a 'load' and 'save' callback to the source interface
structure because I realized that certain sources (such as scenes)
operate differently with their saved data; scenes, for example, would
have to keep track of their settings information constantly, and that
was somewhat unacceptable for making it functional.
The optional 'load' callback will be called only after having loaded
settings specifically from file/imported data, and the 'save' callback
will be called only when data actually needs to be saved.
I also had to adjust the obs_scene code so that it's a regular input
source type now, and I also modified it so that it doesn't have some
strange custom creation code anymore. The obs_scene_create function is
now simply just a wrapper for obs_source_create. You could even create
a scene with obs_source_create manually as well.
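(Sketch of a source definition using the optional 'load'/'save' callbacks;
the callback shape is assumed from the description above and the source id
is made up:)

    #include <obs-module.h>

    static void my_source_load(void *data, obs_data_t *settings)
    {
        /* called only after settings were loaded from file/imported data */
        UNUSED_PARAMETER(data);
        UNUSED_PARAMETER(settings);
    }

    static void my_source_save(void *data, obs_data_t *settings)
    {
        /* called only when the data actually needs to be saved */
        UNUSED_PARAMETER(data);
        UNUSED_PARAMETER(settings);
    }

    struct obs_source_info my_source_info = {
        .id   = "my_source",
        .type = OBS_SOURCE_TYPE_INPUT,
        /* ...create/destroy/get_name and the rest omitted... */
        .load = my_source_load,
        .save = my_source_save,
    };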
The 'wait' constant was a terrible means of trying to ensure that the
packets were interleaved. Instead, calculate the current highest
timestamps of each encoder that's present in the interleaved buffer, and
use that as a means of detecting whether the current packet should be
sent off. This will guarantee sorting without relying on some arbitrary
constant that 'assumes' that it'll be interleaved. It also avoids
buffering any more than what is needed to interleave.
- Updated the services API so that it links up with an output and
the output gets data from that service rather than via settings.
This allows the service context to have control over how an output is
used, and makes it so that the URL/key/etc isn't necessarily some
static setting.
Also, if the service is attached to an output, it will stick around
until the output is destroyed.
- The settings interface has been updated so that it can allow the
usage of service plugins. What this means is that now you can create
a service plugin that can control aspects of the stream, and it
allows each service to create their own user interface if they create
a service plugin module.
- Testing out saving of current service information. Saves/loads from
  JSON into obs_data_t, seems to be working quite nicely, and the
service object information is saved/preserved on exit, and loaded
again on startup.
- I agonized over the settings user interface for days, and eventually
I just decided that the only way that users weren't going to be
  fumbling over options was to split up the settings into simple/basic
output, pre-configured, and then advanced for advanced use (such as
multiple outputs or services, which I'll implement later).
This was particularly painful to really design right, I wanted more
features and wanted to include everything in one interface but
ultimately just realized from experience that users are just not
  technically knowledgeable about it and will end up fumbling with the
settings rather than getting things done.
Basically, what this means is that casual users only have to enter in
about 3 things to configure their stream: Stream key, audio bitrate,
and video bitrate. I am really happy with this interface for those
types of users, but it definitely won't be sufficient for advanced
usage or for custom outputs, so that stuff will have to be separated.
- Improved the JSON usage for the 'common streaming services' context.
  I realized that JSON arrays are there to ensure sorting, having
  forgotten that general items are optimized for hashing. So
  basically I'm just using arrays now to sort items in it.
Add API for streaming services. The services API simplifies the
creation of custom service features and user interface.
Custom streaming services later on will be able to do things such as:
- Be able to use service-specific APIs via modules, allowing a more
direct means of communicating with the service and requesting or
setting service-specific information
- Get URL/stream key via other means of authentication such as OAuth,
or be able to build custom URLs for services that require that sort
of thing.
- Query information (such as viewer count, chat, follower
notifications, and other information)
- Set channel information (such as current game, current channel title,
activating commercials)
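(A bare-bones sketch of what a service plugin might look like; the
obs_service_info field names here follow the current libobs API and are
assumptions, and the URL/key values are obviously made up:)

    #include <obs-module.h>
    #include <util/bmem.h>

    static const char *my_service_name(void *type_data)
    {
        UNUSED_PARAMETER(type_data);
        return "My Streaming Service";
    }

    static void *my_service_create(obs_data_t *settings, obs_service_t *service)
    {
        UNUSED_PARAMETER(settings);
        UNUSED_PARAMETER(service);
        return bzalloc(1);
    }

    static void my_service_destroy(void *data)
    {
        bfree(data);
    }

    static const char *my_service_url(void *data)
    {
        UNUSED_PARAMETER(data);
        return "rtmp://live.example.com/app";   /* could come from OAuth, etc. */
    }

    static const char *my_service_key(void *data)
    {
        UNUSED_PARAMETER(data);
        return "stream-key";
    }

    struct obs_service_info my_service = {
        .id       = "my_service",
        .get_name = my_service_name,
        .create   = my_service_create,
        .destroy  = my_service_destroy,
        .get_url  = my_service_url,
        .get_key  = my_service_key,
    };

    /* registered from obs_module_load() via obs_register_service(&my_service) */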
Also, I reduced some repeated code that was used for all libobs objects.
This includes the name of the object, the private data, settings, as
well as the signal and procedure handlers.
I also switched to using linked lists for the global object lists,
rather than using an array of pointers (you could say it was..
pointless.) ..Anyway, the linked list info is also stored in the shared
context data structure.
Just wanted the ability to be able to add private data to the properties
data. Makes it a little easier to manage data if you get updates from
controls.
Before, async video sources would flicker because they were only being
drawn when they were updated. So when updated, they'd draw that frame,
then it would stop drawing it until it updated again. This fixes that
issue and they should now draw properly.
Also, fix a few other minor bugs and issues relating to async video,
and make it so that non-async video filters can be properly applied to
them.
For the purposes of testing, change the 'test-random' source to an async
video source that updates every quarter of a second with a new random
face.
Also fix a bug where non-async video sources wouldn't have filter
effects applied properly.
A little bit of history about frame dropping:
I did a large number of experiments with frame dropping in old versions
of OBS1, and it's not an easy thing to deal with. I tried just about
everything from standard i-frame delay, to large buffers, to dumping
packets, to super-unnecessarily-complex things that just ended up
causing more problems than they were worth.
When I did my experiments, I found that the most ideal frame drop system
(in terms of reducing the amount of total data that needed to be
dropped) was in the 0.4xx days where I had a 3 second frame-drop buffer
where I could calculate the actual buffer size in bytes, and then
intelligently choose packets in that buffer to trim it down to a specific
size while minimizing the number of p-frames and i-frames dropped, and
preventing the actual impact of dropped frames on the stream. The
downside of it was that it required too much extra latency, and far too
many people complained about it, so it was removed in favor of the
current system.
The current system I refer to simply as 'packet dumping', which, when
combined with low keyframe intervals (like most services use these
days), is the next-best method from my experience. Just dump the buffer
when you reach a threshold of buffering (which I prefer to measure with
time rather than in size), then wait for a new i-frame. Simple,
effective, and reduces the risk of consecutive buffering, while still
having fairly low impact on the stream output due to the low keyframe
interval of services.
By the way, audio will not (and should not ever) be dropped, lest you
end up with syncing issues (among other nasty things) specific to server
implementation.
- Fix an issue that could occur when using more than one video encoder.
  Audio/video would not sync up correctly because they were expected to
  be paired with a particular encoder. This simply adds a little
  helper variable to encoder packets that specifies the system time in
  microseconds. We then use that system time to sync them up.
- Fix an issue with x264 with fractional FPS rates (29.97 and 59.94
particularly) where it would create ridiculously large stream
outputs. The problem was that you shouldn't set the timebase_*
variables in the x264 params manually, let x264 handle the default
values for it and leave them at 0.
- Make x264 use CFR output, because there's no reason to ever use VFR
in this case.
- Implement the RTMP output module. This time around, we just use a
simple FLV muxer, then just write to the stream with RTMP_Write.
Easy and effective.
- Fix the FLV muxer, the muxer now outputs proper FLV packets.
- Output API:
* When using encoders, automatically interleave encoded packets
    before sending them to the output.
* Pair encoders and have them automatically wait for the other to
start to ensure sync.
* Change 'obs_output_signal_start_fail' to 'obs_output_signal_stop'
because it was a bit confusing, and doing this makes a lot more
sense for outputs that need to stop suddenly (disconnections/etc).
- Encoder API:
* Remove some unnecessary encoder functions from the actual API and
make them internal. Most of the encoder functions are handled
automatically by outputs anyway, so there's no real need to expose
them and end up inadvertently confusing plugin writers.
* Have audio encoders wait for the video encoder to get a frame, then
start at the exact data point that the first video frame starts to
    ensure the most accurate sync of video/audio possible.
* Add a required 'frame_size' callback for audio encoders that
returns the expected number of frames desired to encode with. This
way, the libobs encoder API can handle the circular buffering
internally automatically for the encoder modules, so encoder
writers don't have to do it themselves.
- Fix a few bugs in the serializer interface. It was passing the wrong
variable for the data in a few cases.
- If a source has video, make obs_source_update defer the actual update
callback until the tick function is called to prevent threading
issues.
I was getting cases where the CPU cache was causing issues with the
allocation counter. For the longest time I thought I was doing something
wrong, but when the allocation counter went below 0, I realized it was
because I didn't use atomics for incrementing/decrementing the
allocation counter variable. The allocation counter should now always
have the correct value.
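(Sketch of the fix, using the os_atomic_* helpers; the exact helper names
and header are assumptions:)

    #include <util/threading.h>   /* assumed header for the atomic helpers */

    static volatile long num_allocs = 0;

    /* increments/decrements can now race across cores without losing updates */
    void count_alloc(void)
    {
        os_atomic_inc_long(&num_allocs);
    }

    void count_free(void)
    {
        os_atomic_dec_long(&num_allocs);
    }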
- Add interleaving of video/audio packets for outputs that are encoded
and expect both video and audio data, sorting the packets and sending
them to the output when both video and audio is received.
- Combine create and initialize callbacks for the encoder API callback
interface.
Improve the properties API so that it can actually respond somewhat to
user input. Maybe later this might be further improved or replaced with
something script-based.
When creating a property, you can now add a callback to that property
that notifies when the property has been changed in the user interface.
Return true if you want the properties to be refreshed, or false if not.
Though now that I think about it I doubt there would ever be a case
where you would have this callback and *not* refresh the properties.
Regardless, this allows functions to change the values of properties or
settings, or enable/disable/hide other property controls from view
dynamically.
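(Sketch of such a callback; the property/setting names are made up and the
callback signature is assumed from the description above:)

    #include <obs-module.h>

    static bool use_custom_modified(obs_properties_t *props,
            obs_property_t *property, obs_data_t *settings)
    {
        bool custom = obs_data_get_bool(settings, "use_custom");

        /* show or hide another control based on the new value */
        obs_property_set_visible(obs_properties_get(props, "custom_path"),
                custom);

        UNUSED_PARAMETER(property);
        return true;   /* refresh the properties view */
    }

    void add_modified_callback(obs_properties_t *props)
    {
        obs_property_t *p = obs_properties_get(props, "use_custom");
        obs_property_set_modified_callback(p, use_custom_modified);
    }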
- Add start/stop code to obs-output module
- Use a circular buffer for the buffered encoder packets instead of a
dynamic array
- Add pthreads.lib as a dependency to obs-output module on windows in
visual studio project files
- Fix a Windows export bug for the AVC parsing functions.
  Also, rename those functions to be more consistent with each other.
- Make outputs use a single function for encoded data rather than
multiple functions
- Add the ability to make 'text' properties be passworded
- obs-outputs module: Add preliminary code to send out data, and add
an FLV muxer. This time we don't really need to build the packets
ourselves, we can just use the FLV muxer and send it directly to
RTMP_Write and it should automatically parse the entire stream for us
without us having to do much manual code at all. We'll see how it
goes.
- libobs: Add AVC NAL packet parsing code
- libobs/media-io: Add quick helper functions for audio/video to get
the width/height/fps/samplerate/etc rather than having to query the
info structures each time.
- libobs (obs-output.c): Change 'connect' signal to 'start' and 'stop'
signals. 'start' now specifies an error code rather than whether it
simply failed, that way the client can actually know *why* a failure
occurred. Added those error codes to obs-defs.h.
- libobs: Add a few functions to duplicate/free encoder packets
The serializer code is meant to be used as a means of reading/writing
data from any arbitrary type of input/output.
The array output serializer makes it so we can stream data to a dynamic
array on the fly.
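(Sketch of streaming a few values into a growing byte array; the header and
function names are assumptions based on the description above:)

    #include <util/serializer.h>         /* assumed header */
    #include <util/array-serializer.h>   /* assumed header */

    void serializer_example(void)
    {
        struct array_output_data out;
        struct serializer s;

        array_output_serializer_init(&s, &out);

        s_w8(&s, 0x17);          /* write a single byte */
        s_write(&s, "FLV", 3);   /* write a raw buffer */

        /* the serialized bytes now live in a dynamic array inside 'out' */

        array_output_serializer_free(&out);
    }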
- Add dummy GL texture support to allow libobs texture references to be
  created for GL without an actual texture object being created
- Add a texture_getobj function to allow the retrieval of the
context-specific object, such as the D3D texture pointer, or the
OpenGL texture object handle.
- Also cleaned up the export stuff. I realized it was all totally
superfluous. Kind of a dumb moment, but nice to clean it up
regardless.
- Make it so that encoders can be assigned to outputs. If an encoder
is destroyed, it will automatically remove itself from that output.
I specifically didn't want to do reference counting because it leaves
too much potential for unchecked references and it just felt like it
would be more trouble than it's worth.
- Add a 'flags' value to the output definition structure. This lets
the output specify if it uses video/audio, and whether the output is
meant to be used with OBS encoders or not.
- Remove boilerplate code for outputs. This makes it easier to program
outputs. The boilerplate code involved before was mostly just
involving connecting to the audio/video data streams directly in each
output plugin.
Instead of doing that, simply add plugin callback functions for
receiving video/audio (either encoded or non-encoded, whichever it's
set to use), and then call obs_output_begin_data_capture and
obs_output_end_data_capture to automatically handle setting up
connections to raw or encoded video/audio streams for the plugin.
- Remove 'active' function from output callbacks, as it's no longer
really needed now that the libobs output context automatically knows
when the output is active or not.
- Make it so that an encoder cannot be destroyed until all data
connections to the encoder have been removed.
- Change the 'start' and 'stop' functions in the encoder interface to
just an 'initialize' callback, which initializes the encoder.
- Make it so that the encoder must be initialized first before the data
stream can be started. The reason why initialization was separated
from starting the encoder stream was because we need to be able to
check that the settings used with the encoder *can* be used first.
This problem was especially annoying if you had both video/audio
encoding. Before, you'd have to check the return value from
obs_encoder_start, and if that second encoder fails, then you
basically had to stop the first encoder again, making for
unnecessary boilerplate code whenever starting up two encoders.
- Add a properties window for sources so that you can now actually edit
  the settings for sources. Also, display the source by itself in the
  window (Note: not working on Mac, and possibly not working on Linux).
When changing the settings for a source, it will call
obs_source_update on that source when you have modified any values
automatically.
- Add a properties 'widget', eventually I want to turn this in to a
regular nice properties view like you'd see in the designer, but
right now it just uses a form layout in a QScrollArea with regular
controls to display the properties. It's clunky but works for the
time being.
- Make it so that swap chains and the main graphics subsystem will
automatically use at least one backbuffer if none was specified
- Fix bug where displays weren't added to the main display array
- Make it so that you can get the properties of a source via the actual
pointer of a source/encoder/output in addition to being able to look
up properties via identifier.
- When registering source types, check for required functions (wasn't
doing it before). getheight/getwidth should not be optional if it's
a video source as well.
- Add an RAII OBSObj wrapper to obs.hpp for non-reference-counted
libobs pointers
- Add an RAII OBSSignal wrapper to obs.hpp for libobs signals to
automatically disconnect them on destruction
- Move the "scale and center" calculation in window-basic-main.cpp to
its own function and in its own source file
- Add an 'update' callback to WASAPI audio sources
Microsoft's garbage compiler just doesn't even.. read the names of
enums. It sees an enum and goes "durr, that's an int" without even
properly evaluating it. Just total garbage, as per usual.
Also, rename atomic functions to be consistent with the rest of the
platform/threading functions, and move atomic functions to threading*
files rather than platform* files
- Implement OBS encoder interface. It was previously incomplete, but
now is reaching some level of completion, though probably should
still be considered preliminary.
I had originally implemented it so that encoders only have a 'reset'
function to reset their parameters, but I felt that having both a
'start' and 'stop' function would be useful.
Encoders are now assigned to a specific video/audio media output each
  rather than implicitly assigned to the main obs video/audio
contexts. This allows separate encoder contexts that aren't
necessarily assigned to the main video/audio context (which is useful
for things such as recording specific sources). Will probably have
  to do this for regular obs outputs as well. (A small usage sketch
  follows at the end of these notes.)
  When creating an encoder, you must now explicitly state whether that
  encoder is an audio or video encoder.
Audio and video can optionally be automatically converted depending
on what the encoder specifies.
When something 'attaches' to an encoder, the first attachment starts
the encoder, and the encoder automatically attaches to the media
output context associated with it. Subsequent attachments won't have
  the same effect; they will just start receiving the same encoder data
when the next keyframe plays (along with SEI if any). When detaching
from the encoder, the last detachment will fully stop the encoder and
detach the encoder from the media output context associated with the
encoder.
SEI must actually be exported separately; because new encoder
attachments may not always be at the beginning of the stream, the
first keyframe they get must have that SEI data in it. If the
encoder has SEI data, it needs only add one small function to simply
query that SEI data, and then that data will be handled automatically
by libobs for all subsequent encoder attachments.
- Implement x264 encoder plugin, move x264 files to separate plugin to
separate necessary dependencies.
- Change video/audio frame output structures to not use const
qualifiers to prevent issues with non-const function usage elsewhere.
This was an issue when writing the x264 encoder, as the x264 encoder
expects non-const frame data.
Change stagesurf_map to return a non-const data type to prevent this
as well.
- Change full range parameter of video scaler to be an enum rather than
boolean
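(A small usage sketch of the encoder-assignment model described in the
encoder interface notes above; function names follow the current libobs API
and the encoder ids/settings are hypothetical:)

    #include <obs.h>

    void setup_encoders_example(obs_output_t *output, obs_data_t *vsettings,
            obs_data_t *asettings)
    {
        obs_encoder_t *venc = obs_video_encoder_create("obs_x264",
                "stream_video", vsettings, NULL);
        obs_encoder_t *aenc = obs_audio_encoder_create("ffmpeg_aac",
                "stream_audio", asettings, 0, NULL);

        /* explicit media assignment rather than an implicit global context */
        obs_encoder_set_video(venc, obs_get_video());
        obs_encoder_set_audio(aenc, obs_get_audio());

        obs_output_set_video_encoder(output, venc);
        obs_output_set_audio_encoder(output, aenc, 0);
    }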