When using multiple video encoders together with a single audio encoder,
the audio wouldn't be in sync.
This occurred because the dts_usec variable of the encoder packet
(which is based on system time) would always be reset to a value based
upon the dts (which is not guaranteed to be based on system time) in
the apply_interleaved_packet_offset function. This in turn caused the
starting audio/video offsets, which are required to calculate sync, to
be miscalculated.
So instead of calling that function unnecessarily, separate the check
for whether audio/video has been received into a new function, and only
start applying the interleaved offsets after audio and video have
actually started up and the starting offsets have been calculated.
Instead of having services automatically apply encoder settings on
initialization (whether the output wants to or not), make it something
that must be explicitly called by the developer. There are cases where
the developer may not wish to apply the service-specific settings, or
may wish to override them for whatever reason.
On Windows, use the Windows UTF conversion functions, because the
existing UTF code is meant for 32-bit wide characters, while the
Windows conversion functions properly handle 16-bit wide characters.
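As a rough illustration (a sketch, not the actual libobs code), a
UTF-8 to UTF-16 conversion on Windows might look like this:

    #ifdef _WIN32
    #include <windows.h>

    /* Convert UTF-8 to UTF-16 via the Windows API, which correctly
     * produces 16-bit wide characters (including surrogate pairs).
     * Returns the number of wide characters written, excluding the
     * null terminator, or 0 on failure. */
    static size_t utf8_to_wide(const char *str, wchar_t *dst,
            int dst_size)
    {
        int len = MultiByteToWideChar(CP_UTF8, 0, str, -1, dst,
                dst_size);
        return len > 0 ? (size_t)len - 1 : 0;
    }
    #endif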
Adds an additional search path for UI-independent and
installation-independent plugins on Windows/Mac.
Windows:
%appdata%/obs-plugins/
Mac:
~/Library/Application Support/obs-plugins/
Plugin directory format is [module]/bin and [module]/data.
On Windows, for 32-bit binaries:
[module]/bin/32bit
and 64-bit binaries:
[module]/bin/64bit
Before:                  After:
obs_service_gettype      obs_service_get_type
It seems there was an API function that was missed when we were doing
our big API consistency update. Unsquishes obs_service_gettype to
obs_service_get_type.
I didn't think it would ever need to be exported, but this function is
actually useful for applying settings to properties (to call all of
their update callbacks based upon the settings) without necessarily
having to have an object associated with it.
API changed:
--------------------------
void obs_output_set_audio_encoder(
        obs_output_t *output,
        obs_encoder_t *encoder);

obs_encoder_t *obs_output_get_audio_encoder(
        const obs_output_t *output);

obs_encoder_t *obs_audio_encoder_create(
        const char *id,
        const char *name,
        obs_data_t *settings);
Changed to:
--------------------------
/* 'idx' specifies the track index of the output */
void obs_output_set_audio_encoder(
        obs_output_t *output,
        obs_encoder_t *encoder,
        size_t idx);

/* 'idx' specifies the track index of the output */
obs_encoder_t *obs_output_get_audio_encoder(
        const obs_output_t *output,
        size_t idx);

/* 'mixer_idx' specifies the mixer index to capture audio from */
obs_encoder_t *obs_audio_encoder_create(
        const char *id,
        const char *name,
        obs_data_t *settings,
        size_t mixer_idx);
Overview
--------------------------
This feature allows multiple audio mixers to be used at a time.
Surprisingly, this capability was added with very little extra
overhead. Audio will not be mixed unless it's assigned to a specific
mixer, and mixers will not mix unless they have an active mix
connection.
Mostly this will be useful for being able to separate out specific audio
for recording versus streaming, but will also be useful for certain
streaming services that support multiple audio streams via RTMP.
I didn't want to use a variable number of mixers due to the desire to
reduce heap allocations, so currently I set the limit to 4 simultaneous
mixers; this number can be increased later if needed, but honestly I
feel like it's just the right number to use.
Sources:
Sources can now specify which audio mixers their audio is mixed to; this
can be a single mixer or multiple mixers at a time. The
obs_source_set_audio_mixers function sets the audio mixers which an
audio source applies to; the value is a bitmask, so for example 0xF
would mean that the source applies to all four mixers.
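For example (a sketch; the mixers value is a bitmask with one bit per
mixer):

    /* route this source's audio to mixers 1 and 2 only */
    obs_source_set_audio_mixers(source, (1 << 0) | (1 << 1));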
Audio Encoders:
Audio encoders now must specify which specific audio mixer they use when
they encode audio data.
Outputs:
Outputs that use encoders can now support multiple audio tracks at once
if they have the OBS_OUTPUT_MULTI_TRACK capability flag set. This is
mostly only useful for certain types of RTMP transmissions, though may
be useful for file formats that support multiple audio tracks as well
later on.
The comment says "these are different", but doesn't state why.
Actually, I should really rename the output flags so they're not flags,
but instead just "caps", because that's really all that they are.
obs_data_apply is used to apply the changes of a source object into a
destination object. The problem with this, however, is that if
sub-objects are in use, it currently just copies the pointer of the
sub-object, meaning that the source and destination will both share the
same sub-object via reference. If anything modifies that sub-object
data, it'll modify it for both objects, which was not intended.
Instead of copying the object pointer, create a new copy and then
recursively repeat the process to ensure the data is always completely
separate.
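Conceptually, the fix does something like this for each sub-object
item (apply_recursively is a hypothetical helper, not the actual
internal function):

    /* instead of sharing the pointer via obs_data_set_obj(dst, name,
     * src_sub), make an independent copy and recurse into it: */
    obs_data_t *copy = obs_data_create();
    apply_recursively(copy, src_sub);
    obs_data_set_obj(dst, name, copy);
    obs_data_release(copy);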
Changed:
    char *os_get_config_path(const char *name);
To:
    int os_get_config_path(char *dst, size_t size, const char *name);
Also added:
    char *os_get_config_path_ptr(const char *name);
I don't like this function returning an allocation by default.
Similarly to what was done with the wide character conversion functions,
this function now operates on an array argument, and if you really want
to just get a pointer for convenience, you use the *_ptr version of the
function that clearly indicates that it's returning an allocation.
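Usage of the array-based version might look like this (a sketch;
assumes a positive return value indicates success):

    char path[512];
    if (os_get_config_path(path, sizeof(path), "obs-studio") > 0) {
        /* use 'path'; nothing to free */
    }

    /* or, if an allocation really is more convenient: */
    char *p = os_get_config_path_ptr("obs-studio");
    /* ... */
    bfree(p);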
The gs_enum_adapters function is an optional implementation to allow
enumeration of available graphics adapters that can be used with the
program. The ID associated with the adapter can be an index or a hash
depending on the implementation.
Direct3D textures are usually aligned to a specific pitch, so their
internal width is often not equal to the expected output width; this
means that if we want to use it on our texture output, we must de-align
the texture while copying the texture data.
However, I unintentionally messed up the calculation at some point with
RGBA textures: the size variable was supposed to be multiplied by 4
(for RGBA), while the code was still expecting single-channel data.
So, if the texture width was something like 1332, the source (DirectX)
texture line size would be somewhere at or above 5328 (because it's
RGBA), the destination would be at 1332 (YUV luma plane), and it would
unintentionally treat 3996 (or 5328 - 1332) bytes as unused alignment
data. So this fixes that miscalculation.
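The corrected copy essentially looks like this (a simplified sketch):

    #include <stdint.h>
    #include <string.h>

    /* Copy a pitch-aligned RGBA source texture line by line, using
     * the actual RGBA line size (width * 4) rather than a
     * single-channel width, discarding the alignment padding at the
     * end of each source row. */
    static void copy_dealigned_rgba(uint8_t *dst, const uint8_t *src,
            uint32_t width, uint32_t height, uint32_t src_pitch)
    {
        const size_t line = (size_t)width * 4;
        for (uint32_t y = 0; y < height; y++)
            memcpy(dst + y * line, src + y * src_pitch, line);
    }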
Refer to https://www.opengl.org/registry/doc/GLSLangSpec.4.10.6.clean.pdf
for a list of current (reserved) keywords.
In the future, the shader compiler in libobs-opengl should probably
take care of avoiding those name conflicts (bonus points for
transparently remapping the names of effect parameters).
Instead of returning a valid string value when there are no more strings
available in the list, return NULL to indicate failure. An empty string
should really be allowed to be a valid value for the list.
The return value of os_sleepto_ns is true if it waited to the specified
time, and false if the current time is past the specified time. So it
basically returns true if it successfully waited.
I just didn't check the return value properly here, so it ended up just
setting the count of frames to 1 if overshot, ultimately causing sync
issues.
The temporary unoptimized code we were using before just completely
allocated a new copy of each frame every single time a new async frame
was output by the source plugin. This just creates a cache of frames as
needed for the current format/width/height to minimize the allocation
and deallocation. If new frames come in that are of a different
format/width/height, it'll just clear the cache. This is a fairly
important optimization.
All the async video related stuff usually started with async_*, and
there were two that didn't. So I just renamed them so they have the
same naming convention.
If an async video source stops video for whatever reason, it would get
stuck on the last frame that was played. This was particularly awkward
when I wanted to give the user the ability to deactivate a source such
as a webcam because it would get stuck on the last frame.
This allows us to change the visible UI name of a property after it's
been created (particularly for a case where I want to change an
'Activate' button to 'Deactivate').
A slightly refactored version of R1CH's crash handler, which allows
crash handling on Windows and provides stack traces of all threads and
a list of all loaded modules. Also shows the processor, Windows
version, and current libobs version.
When the encoder is set to scale to a different resolution than the
OBS output resolution, make sure it uses the current video colorspace
and range by default.
I actually kind of hate how strstr returns a non-const even though it
takes a const parameter, but I can understand why they made it that
way. They really should have split it into two functions though, one
const and one non-const or something. But alas, ultimately for a C
programmer who knows what they're doing it isn't a huge deal.
This adds support for the Windows 8+ output duplicator feature which
allows the efficient capturing of a specific monitor connected to the
currently used device.
Previously, the design for the interaction between the encoder thread
and the graphics thread was that the encoder thread would signal to the
graphics thread when to start drawing each frame. The original idea
behind this was to prevent mutually cascading stalls of encoding or
graphics rendering (i.e., if rendering took too long, then encoding
would have to catch up, then rendering would have to catch up again, and
so on, cascading upon each other). The ultimate goal was to prevent
encoding from impacting graphics and vice versa.
However, eventually it was realized that there were some fundamental
flaws with this design.
1. Stray frame duplication. You could not guarantee that a frame would
render on time, so sometimes frames would unintentionally be lost if
there was any sort of minor hiccup or, I'm guessing, if the thread took
too long to be scheduled.
2. Frame timing in the rendering thread was less accurate. The only
place where frame timing was accurate was in the encoder thread, and
the graphics thread was at the whim of thread scheduling. On higher
end computers it was typically fine, but it was just generally not
guaranteed that a frame would be rendered when it was supposed to be
rendered.
So the solution (originally proposed by r1ch and paibox) is to instead
keep the encoding and graphics threads separate as usual, but instead of
the encoder thread controlling the graphics thread, the graphics thread
now controls the encoder thread. The encoder thread keeps a limited
cache of frames, then the graphics thread copies frames into the cache
and increments a semaphore to schedule the encoder thread to encode
that data.
In the cache, each frame has an encode counter. If the frame cache is
full (e.g., the encoder taking too long to return frames), it will not
cache a new frame, but instead will just increment the counter on the
last frame in the cache to schedule that frame to encode again, ensuring
that frames are on time and reducing CPU usage by lowering video
complexity. If the graphics thread takes too long to render a frame,
then it will add that frame with the count value set to the total
number of frames that were missed (actual legitimately duplicated
frames).
Because the cache gives many frames of breathing room for the encoder to
encode frames, this design helps improve results especially when using
encoding presets that have higher complexity and CPU usage, minimizing
the risk of needlessly skipped or duplicated frames.
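A rough sketch of the new interaction (all helper names here are
hypothetical, not the actual implementation):

    /* graphics thread: copy the frame in, or re-schedule the last */
    if (cache_full(&cache))
        cache_last(&cache)->count++;    /* duplicate the last frame */
    else
        cache_push_copy(&cache, frame); /* copy frame into the cache */
    os_sem_post(encode_sem);            /* wake the encoder thread */

    /* encoder thread: drain the cache as encodes are scheduled */
    while (active) {
        os_sem_wait(encode_sem);
        struct cached_frame *front = cache_front(&cache);
        encode_frame(front);
        if (--front->count == 0)
            cache_pop(&cache);
    }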
I also managed to sneak in what should be a bit of an optimization to
reduce copying of frame data, though how much of an optimization it
ultimately ends up being is debatable.
So to sum it up, this commit increases accuracy of frame timing,
completely removes stray frame duplication, gives better results for
higher complexity encoding presets, and potentially optimizes the frame
pipeline a tiny bit.
The boolean variables which stored whether frames have been
rendered/downloaded/converted/etc were not being reset when video
restarted, causing frames to not be sent in the correct order whenever
video was reset. This could lead to minor desync of video/audio.
In certain circumstances where the output was stopping, and where data
took a long enough time to send (such as when using an encoding preset
that causes high CPU usage), the output would sometimes still send data
even after it was stopped, typically causing the output to crash.
I unintentionally made it use obs_source::sample_info instead of using
the actual target channel count, which is designated by the OBS output
sampler info. obs_source::sample_info is actually used to indicate the
currently set sampler information for incoming samples. So if a source
is outputting 5.1 channel 48kHz audio, and OBS is running at stereo
44.1kHz, then the obs_source::sample_info value would be set to
5.1/48kHz, not the other way around. It indicates what the source
itself is running at, not what OBS is running at.
I suppose the variable needs a better name because even I used it
incorrectly despite actually having been the one who wrote it.
The copy_audio_data function really shouldn't be inlined because it's
being called twice. It's somewhat unnecessary; I think I left it
inline by accident.
This changes the way source volume handles transitioning between being
active and inactive states.
The previous way that transitioning handled volume was that it set the
presentation volume of the source and all of its sub-sources to 0.0 if
the source was inactive, and 1.0 if active. Transition sources would
then also set the presentation volume for sub-sources to whatever their
transitioning volume was. However, the problem with this is that the
design didn't take into account whether the source or its sub-sources
were active anywhere else; because of that, it would break if that ever
happened, and I didn't realize that when I was designing it.
So instead, this completely overhauls the design of handling
transitioning volume. Each frame, it'll go through all sources and
check whether they're active or inactive and set the base volume
accordingly. If transitions are currently active, it will actually walk
the active source tree and check whether the source is in a
transitioning state somewhere.
- If the source is a sub-source of a transition, and it's not active
outside of the transition, then the transition will control the
volume of the source.
- If the source is a sub-source of a transition, but it's also active
outside of the transition, it'll defer to whichever is louder.
This also adds a new callback to the obs_source_info structure for
transition sources, get_transition_volume, which is called to get the
transitioning volume of a sub-source.
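A sketch of what a transition module might do with it (the callback
signature here is illustrative, not authoritative):

    static float fade_transition_volume(void *data,
            obs_source_t *source)
    {
        struct fade_data *fade = data;
        /* fade the 'from' source out while fading the 'to' one in */
        return (source == fade->from) ? 1.0f - fade->progress
                                      : fade->progress;
    }

    /* registered in the transition's obs_source_info:
     *     .get_transition_volume = fade_transition_volume, */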
The reason to keep a reference counter for transitions is due to an
optimization I'm planning on when calculating transition volumes. I'm
planning on walking the source tree to be able to calculate the current
base volume of a source, but *only* if there are transitions active,
because the only time that the volume can be anything other than 1.0
or 0.0 is when there are active transitions, which may change the base
volume of a source.
When the presentation volume is set for a source, it's set for all of
its children and their children. The original intention for doing this
was to be able to use it for transitioning, but honestly it's just bad
design, and I feel there are better ways to handle transitioning volume.
Changed the design from using obs_source::enum_refs to just simply
preventing infinite source recursion in general, rather than allowing it
through the enum_refs variable. obs_source_add_child has been changed
so that it now returns a boolean, and if the function fails, it means
that the child cannot be added due to that potential recursion.
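For example:

    if (!obs_source_add_child(parent, child)) {
        /* adding the child would create a circular reference;
         * handle the failure instead of recursing infinitely */
    }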
Two integers are needlessly converted to floating points for what should
be an integer operation. One of those floats is then used for another
integer operation later, where the original integer value should have
been used. So essentially there was an int -> float -> int conversion
going on, which could lead to potential loss of data due to floating
point precision.
There were also some general 64bit -> 32bit conversion warnings.
obs_encoder_getdisplayname declaration was not changed to match the
definition (obs_encoder_get_display_name) when the API consistency
update occurred.
If an encoder did not possess any SEI data, it would never send data at
all because the sent_first_packet wasn't set despite the first packet
being sent.
Added obs_avc_keyframe, which returns whether an AVC packet is a
keyframe or not. This function is particularly useful when writing
custom encoder plugins.
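For example, a custom encoder might use it when filling out packets
(a sketch; assumes the packet holds Annex B AVC data):

    packet->keyframe = obs_avc_keyframe(packet->data, packet->size);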
I encountered some cases where I needed to use these enumerations
outside of the file, so this allows other modules to use AVC
enumerations without having to redefine them each time. Especially
useful for custom encoder modules.
I neglected to surround some files with extern "C", so if something
written in C++ used the files, the function names would be mangled by
the C++ compiler and fail to resolve correctly.
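In other words, headers intended to be usable from C++ need the
standard guard:

    #ifdef __cplusplus
    extern "C" {
    #endif

    /* ... C declarations ... */

    #ifdef __cplusplus
    }
    #endif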
This adds bicubic and lanczos scaling capability to libobs to improve
scaling quality and sharpness when the output resolution has to be
scaled relative to the base resolution. Bilinear is also available,
although bilinear has rather poor quality and causes scaling to appear
blurry.
If the output resolution is close to the base resolution, then bilinear
is used instead as an optimization, as there's no need to use these
shaders if scaling is not in use.
The Bicubic and Lanczos effects are also exposed via exported function
to allow the ability to use those shaders in plugin modules if desired.
The API change adds a variable 'scale_type' to the obs_video_info
structure that allows the user interface to choose what type of scaling
filter should be used.
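For example (a sketch; the enum value name is assumed):

    struct obs_video_info ovi;
    /* ... fill in the other members ... */
    ovi.scale_type = OBS_SCALE_BICUBIC;
    obs_reset_video(&ovi);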
Remove the calculation of volume levels and the corresponding signal
from obs_source since this is now handled in the volume meter.
Code that is interested in the volume levels can either use the
volmeter provided from obs_audio_controls or use the audio_data signal
to gain access to the raw audio data.
Signal updated volume levels when they become available in the volume
meter. The frequency of the updates can be adjusted by setting a
different update interval.
Remove the signal handler for the volume_level signal of audio
sources from the volume meter in anticipation of using the levels
calculated in the volume meter itself.
Add a property to the volume meter that specifies the length of the
interval in which the audio data should be sampled before the
audio_levels signal is emitted.
This adds a new signal to (audio) sources which is emitted whenever new
audio data is received from the source. This enables other code that is
interested in the raw audio data to directly access it when it becomes
available.
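Code interested in the raw audio might connect to it like this (a
sketch; the callback follows the usual signal conventions):

    static void on_audio_data(void *param, calldata_t *data)
    {
        /* pull the raw audio out of the calldata and process it */
    }

    signal_handler_t *sh = obs_source_get_signal_handler(source);
    signal_handler_connect(sh, "audio_data", on_audio_data, NULL);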
This was an important change because we were originally using a
hard-coded 709/partial range color matrix for the output, which was
causing problems for people wanting to use different formats or color
spaces. This will now automatically generate the color matrix depending
on the format, color space, and range, or use an identity matrix if the
video format is RGB instead of YUV.
Just for a quick background: D3D's fmod intrinsic is very imprecise.
Naturally floating points aren't precise at all, and when the numbers
you're dealing with become very large, it can often be off by 0.1 or
more.
However, apparently 0.1 isn't enough of an offset to ensure a proper
value when using the fmod intrinsic and then flooring the value. 0.2
seems to fix the issue and make the image display properly.
On certain GPUs, if you don't flush and the window is minimized it can
endlessly accumulate memory due to what I'm assuming are driver design
flaws (though I can't know for sure). The flush seems to prevent this
from happening, at least from my tests. It would be nice if this
weren't necessary.
This replaces the old code for the audio meter that was using
calculations in two different places with the new audio meter api.
The source signal will now emit simple levels instead of dB values,
in order to avoid dB conversion and calculation in the source.
The GUI on the other hand now expects simple position values from
the volume meter API, with no knowledge of dB calculations either.
That way all code that handles those conversions is in one place,
with the option to easily add new mappings that can be used
everywhere.
This adds a volume meter object to libobs that can be used by the GUI
or plugins to convert the raw audio level data from sources to values
that can easily be used to display the audio data.
The volume meter object will use the same mapping functions as the
fader object to map dB levels to a scale.
In older versions of Visual Studio 2013, Microsoft's WORTHLESS C
compiler has a bug where it will, almost at random, fail to handle
variables declared in the middle of a function and give the warning:
"illegal use of this type as an expression". It was fixed in recent
VS2013 updates, but I'm not about to force everyone to update to it.
Because a vec3 structure can contain a __m128 variable and not the
expected three floats x, y, and z, you must use vec3_set when
setting a value for a vec3 structure to ensure that it uses the proper
intrinsics internally if necessary.
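For example:

    struct vec3 pos;
    vec3_set(&pos, 1.0f, 2.0f, 3.0f); /* safe regardless of whether
                                       * the storage is __m128 */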
This adds functions for piping a command line program's stdin or stdout.
Note however that this is unidirectional only.
This will be especially useful later on when implementing MP4 output,
because MP4 output has to be piped to prevent unexpected program
termination from corrupting the file.
This adds a new library of audio control functions, mainly for use in
GUIs. For now it includes an implementation of a software fader that
can be attached to sources in order to easily control the volume.
The fader can translate between fader-position, volume in dB and
multiplier with a configurable mapping function.
Currently only a cubic mapping (mul = fader_pos ^ 3) is included, but
different mappings can easily be added.
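The cubic mapping itself is trivial (a sketch):

    /* map a fader position in [0, 1] to a volume multiplier */
    static float cubic_fader_to_mul(float pos)
    {
        return pos * pos * pos; /* mul = fader_pos ^ 3 */
    }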
Due to libobs saving/restoring the source volume from the multiplier,
the volume levels for existing sources will stay the same, and live
changing of the mapping will work without changing the source volume.
This function greatly simplifies the use of effects by making it so you
can call this function in a simple loop. This reduces boilerplate and
makes drawing with effects much easier. The gs_effect_loop function
will now automatically handle all the functions required to do drawing.
---------------------
Before:
gs_technique_t *technique = gs_effect_get_technique(effect,
        "technique");
size_t passes = gs_technique_begin(technique);
for (size_t pass = 0; pass < passes; pass++) {
        gs_technique_begin_pass(technique, pass);

        [draw]

        gs_technique_end_pass(technique);
}
gs_technique_end(technique);
---------------------
After:
while (gs_effect_loop(effect, "technique")) {
        [draw]
}
If you look at the previous commits, you'll see I had added
obs_source_draw before. For custom drawn sources in particular, each
time obs_source_draw was called, it would restart the effect and its
passes for each draw call, which was not optimal. It should really use
the effect functions for that. I'll have to add a function to simplify
effect usage.
I also realized that including the color matrix parameters in
obs_source_draw made the function kind of messy to use; instead,
separating the color matrix stuff out to
obs_source_draw_set_color_matrix feels a lot more clean.
On top of that, having the ability to set the position would be nice to
have as well, rather than having to mess with the matrix stuff each
time, so I also added that for the sake of convenience.
obs_source_draw will draw a texture sprite, optionally of a specific
size and/or at a specific position, as well as optionally inverted. The
texture used will be set to the 'image' parameter of whatever effect is
currently active.
obs_source_draw_set_color_matrix will set the color matrix value if the
drawing requires color matrices. It will set the 'color_matrix',
'color_range_min', and 'color_range_max' parameters of whatever effect
is currently active.
Overall, these feel much more clean to use than the previous iteration.
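For instance, a custom-drawn source's video_render callback can now be
as simple as this sketch (assuming a width/height of 0 means "use the
texture size", and that the source requires a color matrix):

    obs_source_draw_set_color_matrix(&color_matrix, &color_range_min,
            &color_range_max);
    obs_source_draw(texture, 0, 0, 0, 0, false);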
This function simplifies drawing textures for sources in order to help
reduce boilerplate code. If a source is a custom drawn source, it will
automatically set up the effect to draw the sprite. If it's not a
custom drawn source, it will simply draw the sprite as per normal. If
the source uses a specific color matrix, it will also handle that as
well.
When the image data is copied into a texture with flipping set to
true, each row has to be copied into the (height - row - 1)th row
instead of the row with the same number. Otherwise it will just create
an unflipped copy.
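That is, the copy essentially becomes (simplified):

    #include <stdint.h>
    #include <string.h>

    static void copy_flipped(uint8_t *dst, const uint8_t *src,
            uint32_t linesize, uint32_t height)
    {
        for (uint32_t y = 0; y < height; y++)
            memcpy(dst + (size_t)(height - y - 1) * linesize,
                   src + (size_t)y * linesize, linesize);
    }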
Apparently the audio isn't guaranteed to start up past the first video
frame, so it would trigger that assert (which I'm glad I put in). I
didn't originally have this happen when I was testing because my audio
buffering was not at the default value and didn't trigger it to occur.
A blunder on my part, and once again a fine example of how you should
never make assumptions about possible code paths.
This moves the 'flags' variable from the obs_source_frame structure to
the obs_source structure, and allows user flags to be set for a specific
source. Having it set on the obs_source_frame structure didn't make
much sense.
OBS_SOURCE_UNBUFFERED makes it so that the source does not buffer its
async video output in order to try to play it on time. In other words,
frames are played as soon as possible after being received.
Useful when you want a source to play back as quickly as possible
(webcams, certain types of capture devices)
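For example (a sketch using the flag name from this commit):

    /* play frames as soon as they arrive instead of buffering them */
    obs_source_set_flags(source, OBS_SOURCE_UNBUFFERED);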
This reverts commit c3f4b0f018.
The obs_source_frame should not need to take flags to do this. This
shouldn't be a setting associated with the frame, but rather a setting
associated with the source itself. This was the wrong approach to
solving this particular problem.
This bug would happen if audio packets started being received before
video packets. It would erroneously cause audio packets to be
completely thrown away, and in certain cases would cause audio and video
to start way out of sync.
My original intention was "don't accept audio until video has started",
but instead mistakenly had the effect of "don't start audio until a
video packet has been received". This was originally intended as a
way to handle outputs hooking into active encoders and compensating
for their existing timestamp information.
However, this made me realize that there was a major flaw in the design
for handling this, so I basically rewrote the entire thing.
Now, it does the following steps when inserting packets:
- Insert packets into the interleaved packet array
- When both audio/video packets are received, prune packets up until
  the point at which both audio/video start at the same time
- Resort the interleaved packet array
I have tested this code extensively and it appears to be working well,
regardless of whether or not the encoders were already active with
another output.
In video-io.c, video frames could skip, but what would happen is the
frame's timestamp would repeat for the next frame, giving the next
frame a non-monotonic timestamp, and then jump. This could mess up
syncing slightly when the frame is finally given to an output.
Apparently I unintentionally typed received_video = false twice,
instead of setting it once for video and once for audio.
This fixes a bug where audio would not start up again on an output that
had recently started and then stopped.
When the output sets a new audio/video encoder, it was not properly
removing itself from the previous audio/video encoders it was associated
with. It was erroneously removing itself from the encoder parameter
instead.
At the start of each render loop, it would get the timestamp, and then
it would assign that timestamp to whatever frame was downloaded.
However, the frame that was downloaded was usually from a number of
frames ago, so it would assign the wrong timestamp value to that frame.
This fixes that issue by storing the timestamps in a circular buffer.
If audio timestamps are within the operating system timing threshold,
always use those values directly as a timestamp, and do not apply the
regular jump checks and timing adjustments that normally occur.
This potentially fixes an issue with plugins that use OS timestamps
directly as timestamp values for their audio samples: it bypasses the
timing conversions to system time for the audio line and uses the value
directly as the timestamp. It prevents those calculations from
potentially affecting the audio timestamp value when OS timestamps are
used.
For example, if the first set of audio samples from the audio source
came in delayed, while the subsequent samples were not delayed, those
first samples could have potentially inadvertently triggered the timing
adjustments, which would affect all subsequent audio samples.
This combines the 'direct' timestamp variance threshold with the maximum
timestamp jump threshold (or rather just removes the max timestamp jump
threshold and uses the timestamp variance threshold for both timestamp
jumps and detecting timestamps).
The reason why this was done was because a timestamp jump could occur at
a higher threshold than the threshold used for detecting OS timestamps
within a certain threshold. If timestamps got between those two
thresholds it kind of became a weird situation where timestamps could be
sort of 'stuck' back or forward in time more than intended. Better to
be consistent and use the same threshold for both values.
Add 'flags' member variable to obs_source_frame structure.
The OBS_VIDEO_UNBUFFERED flag causes the video to play back as soon as
it's received (in the next frame playback), causing it to disregard the
timestamp value for the sake of video playback (however, note that the
video timestamp is still used for audio synchronization if audio is
present on the source as well).
This is partly a convenience feature, and partly a necessity for certain
plugins (such as the linux v4l plugin) where timestamp information for
the video frames can sometimes be unreliable.
70 milliseconds is a bit too high for the default audio timestamp
smoothing threshold. The full range of error thus becomes 140
milliseconds, which is a bit more than necessary to worry about. For
the time being, I feel it may be worth it to try 50 milliseconds.
The graphics subsystem was not being freed here; for example, if a
required effect failed to compile, initialization would still 'succeed'
with the graphics subsystem active sans the required effect. The
graphics subsystem should be completely shut down if required libobs
effects fail to compile.
Due to a small error in the timestamp smoothing code, the timestamp of
audio packets that arrived too early was always set to the next
expected timestamp, even if the difference was bigger than the
smoothing threshold. This would cause obs to simply append all audio
data to the buffer even if the real timestamp was way smaller than the
next that was expected.
This should reduce corruption problems with, for example, the
pulseaudio plugin, which resends data under certain conditions.
As stated in the sysinfo manpage, the totalram field in the sysinfo
structure is in mem_unit sizes since Linux 2.3.23. To get the actual
memory in the system, the totalram value has to be multiplied by the
mem_unit size.
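In other words (standard sysinfo usage, not the exact patch):

    #include <stdint.h>
    #include <sys/sysinfo.h>

    static uint64_t get_total_memory_bytes(void)
    {
        struct sysinfo si;
        if (sysinfo(&si) != 0)
            return 0;
        /* totalram is expressed in units of mem_unit bytes */
        return (uint64_t)si.totalram * si.mem_unit;
    }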
obs_source_update_properties should be called by sources when property
values change, e.g. a capture device source would use this when it
detects a new capture device (in case its properties contain a list of
available capture devices or similar)
This allows for easier comparison between two obs_data_t, and it will
make

    const char *json1 = obs_data_get_json(data);
    obs_data_t *data_ = obs_data_create_from_json(json1);
    const char *json2 = obs_data_get_json(data_);

produce the same string in json1 and json2.
This fixes a minor flaw with the API where data had to always be
mutable to be usable by the API.
Functions that do not modify the fundamental underlying data of a
structure should be marked as constant, both for safety and to signify
that the parameter is input only and will not be modified by the
function using it.
Typedef pointers are unsafe. If you do:
typedef struct bla *bla_t;
then you cannot use it as a constant, such as: const bla_t, because
that constant will be to the pointer itself rather than to the
underlying data. I admit this was a fundamental mistake that must
be corrected.
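To illustrate the difference:

    struct bla { int x; };

    typedef struct bla *bla_ptr_t; /* old style: pointer in typedef */
    typedef struct bla bla_type_t; /* new style: plain struct typedef */

    void example(const bla_ptr_t a, const bla_type_t *b)
    {
        a->x = 1;   /* compiles: only the pointer itself is const */
        /* b->x = 1;   error: the pointed-to data really is const */
    }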
All typedefs that were pointer types will now have their pointers
removed from the type itself, and the pointers will be used when they
are actually used as variables/parameters/returns instead.
This does not break ABI though, which is pretty nice.
This is not a COM pointer; it should not release/close the handle when
an & operator is used, it should only return the handle value. Clearing
is only used on assignment.
This helps ensure that an asynchronous video source is played as close
to its framerate as possible, reduces the risk of duplication as
much as possible, and helps to ensure that playback is as smooth as
possible.
This prevents multiple needless calls to obs_source_get_frame and other
functions. If the texture has already been processed, then just render
it as-is in any subsequent calls to obs_source_video_render.
This is actually unnecessary now that there's a hard limit on the
maximum offset in which audio can be inserted.
This also assumes too much about the audio; it assumes audio is always
on, whereas with some devices (such as the Elgato) audio is not on
until the stream starts, and when the video has already incremented the
counter.
Audio that goes below the minimum expected timing (current time -
buffering time) is automatically removed. However, delayed audio is
not removed regardless of its delay. This puts a hard cap of 6 seconds
from the current time on the maximum delay audio can have. This will
also prevent the circular buffer from dynamically growing too large.
Doing timestamp smoothing in obs-source.c is good because timestamps
can typically operate on a different timebase; however, obs-source.c
can also change that timebase dynamically (such as with async video and
unexpected timestamp jumps), so to ensure that audio is seamless in the
output as well, perform timestamp smoothing in audio-io.c too, as an
extra precautionary measure.
If the audio didn't start at the 0 timestamp, it would misinterpret it
as a timestamp jump because obs_source::next_audio_ts_min is set to 0 on
creation. Timestamp starting values should be allowed to start at any
arbitrary value.
This makes it easier to do two things:
1.) Get the skipped frames count relative to each specific output
2.) Make it so that getting the 'current' log will always contain
information about skipped frames. Before, you'd have to force the
user to restart the program and get the last log, which was really
annoying when you just wanted to see how the encoders were
performing.
It would try to move data from the old pointer even if the pointer was
changed via realloc, which would cause it to copy data from freed
memory. Instead, just get the position of the data and call memmove to
move it up.
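Roughly, the difference is (a sketch):

    /* wrong: 'old_data' may already be freed by the realloc */
    data = realloc(old_data, new_size);
    memcpy(data, old_data + offset, count);

    /* right: compute the position relative to the new pointer, and
     * use memmove since the regions may overlap */
    data = realloc(old_data, new_size);
    memmove(data, data + offset, count);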
On release of obs_data, if the default/autoselect values pointed
toward a sub-object or a sub-array, it would look up the data for the
regular user value. (Palana must have forgotten to change these
functions around when adding the default/autoselect functionality.)
Multiplication of the matrices was being done in the wrong direction.
This caused source transformations to come out looking incorrect; for
example, the linux-xshm source's cursor would not be drawn correctly or
in the right position if the source was moved/scaled/rotated. The
problem just turned out to be that the gs_matrix_* functions were
multiplying in the wrong direction. Reverse the direction of
multiplication, and the problem is solved.
Adds the following function:
------------------------------
obs_properties_add_font
This function creates a 'font' property to allow selection of a system
font. Implementation by the UI should treat the setting as an obs_data
sub-object with four sub-items:
- face: face name (string)
- style: style name (string)
- size: size (integer)
- flags: font flags (integer)
'flags' can be any combination of the following values:
- OBS_FONT_BOLD
- OBS_FONT_ITALIC
- OBS_FONT_UNDERLINE
- OBS_FONT_STRIKEOUT
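Typical usage in a module might look like this (a sketch):

    obs_properties_add_font(props, "font", "Font");

    /* later, reading the setting from the obs_data sub-object: */
    obs_data_t *font = obs_data_get_obj(settings, "font");
    const char *face = obs_data_get_string(font, "face");
    long long size = obs_data_get_int(font, "size");
    bool bold = (obs_data_get_int(font, "flags") & OBS_FONT_BOLD) != 0;
    obs_data_release(font);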
API functions added:
-----------------------------------------------
obs_output_set_preferred_size
obs_output_get_width
obs_output_get_height
obs_encoder_set_scaled_size
obs_encoder_get_width
obs_encoder_get_height
These functions allow for easier means of setting a custom resolution on
an output or encoder.
If an output uses an encoder and you set the preferred width/height
using the output, then the output will attempt to set the scaled
width/height for the encoder it's currently using.
Outputs and encoders now should use these functions to determine the
width/height of the raw frame data instead of using the video-io
functions.
This is sort of hard to explain: the scale_video_output function was
overwriting the current frame. If scaling was disabled, it would do
nothing, and return success, and all would be well. If it was enabled,
it would then call the scaler, and then replace the contents of the
'data' function parameter with the scaled frame data. The problem with
this is that I was passing video_output::cur_frame directly, which
overwrites its previous value with the scaled frame data. Then if
cur_frame was not updated on time, it would end up trying to scale the
previously scaled image, if that makes sense; it would call the video
scaler with the same frame for both the source and destination.
So the simple fix was to simply use a local variable and pass that in as
a parameter to prevent this bug from occurring.
For the sake of consistency, renamed these functions to include _value
at the end.
Renamed:                      To:
-------------------------------------------------------
obs_data_has_default          obs_data_has_default_value
obs_data_has_autoselect       obs_data_has_autoselect_value
obs_data_item_has_default     obs_data_item_has_default_value
obs_data_item_has_autoselect  obs_data_item_has_autoselect_value
Instead of having functions like obs_signal_handler() that fail to
properly specify their actual intent in the name (does it signal a
handler, or does it return a signal handler?), always prefix functions
that are meant to get information with 'get' to make their
functionality more explicit.
Previous names:             New names:
-----------------------------------------------------------
obs_audio                   obs_get_audio
obs_video                   obs_get_video
obs_signalhandler           obs_get_signal_handler
obs_prochandler             obs_get_proc_handler
obs_source_signalhandler    obs_source_get_signal_handler
obs_source_prochandler      obs_source_get_proc_handler
obs_output_signalhandler    obs_output_get_signal_handler
obs_output_prochandler      obs_output_get_proc_handler
obs_service_signalhandler   obs_service_get_signal_handler
obs_service_prochandler     obs_service_get_proc_handler
API Removed:
- graphics_t obs_graphics();
Replaced With:
- void obs_enter_graphics();
- void obs_leave_graphics();
Description:
obs_graphics() was somewhat of a pointless function. The only time
that it was ever necessary was to pass it as a parameter to
gs_entercontext() followed by a subsequent gs_leavecontext() call after
that. So, I felt that it made a bit more sense just to implement
obs_enter_graphics() and obs_leave_graphics() functions to do the exact
same thing without having to repeat that code. There's really no need
to ever "hold" the graphics pointer, though I suppose that could change
in the future so having a similar function come back isn't out of the
question.
Still, this at least reduces the amount of unnecessary repeated code for
the time being.
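Usage is now simply, for example (a sketch):

    obs_enter_graphics();
    gs_texture_t *tex = gs_texture_create(width, height, GS_RGBA, 1,
            NULL, 0);
    /* ... */
    gs_texture_destroy(tex);
    obs_leave_graphics();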
Changed:
- obs_source_gettype
To:
- enum obs_source_type obs_source_get_type(obs_source_t source);
- const char *obs_source_get_id(obs_source_t source);
This function was inconsistent for a number of reasons. First, it
returned both the ID and the type of source (input/transition/filter),
which is inconsistent with the name "get type". Secondly, it used the
'squishy' naming convention, which has turned out to be bad practice
and causes inconsistencies. So it's now replaced with two functions
that just return the type and the ID.
Prefix with obs_ for the sake of consistency
Renamed enums:
- order_movement (now obs_order_movement)
Affected functions:
- obs_source_filter_setorder
- obs_sceneitem_setorder
Renamed functions:
- obs_source_getframe (rename to obs_source_get_frame)
- obs_source_releaseframe (rename to obs_source_release_frame)
For the sake of consistency and helping to get rid of the "squishy
function name" issue
The bug here is that when conversion is active, the source video frame
is initialized with the destination height/width/format instead of the
source height/width/format.
With the recent change to module handling by BtbN, I felt that having
this information might be useful in case someone is actually using make
install to set up their libraries.
This functionality can now be handled automatically because locale can
now be freed separately from obs_module_unload with
obs_module_free_locale, which is called automatically when the module
is being freed.
Changed API:
- char *obs_find_plugin_file(const char *sub_path);
Changed to: char *obs_module_file(const char *file);
Change it so you no longer need to specify a sub-path such as:
      obs_find_plugin_file("module_name/file.ext")
  Instead, now automatically handle the module data path so all you
  need to do is:
      obs_module_file("file.ext")
- int obs_load_module(const char *name);
  Changed to: int obs_open_module(obs_module_t *module,
                                  const char *path,
                                  const char *data_path);
              bool obs_init_module(obs_module_t module);
Change the module loading API so that if the front-end chooses, it can
load modules directly from a specified path, and associate a data
directory with it on the spot.
The module will not be initialized immediately; obs_init_module must
be called on the module pointer in order to fully initialize the
module. This is done so a module can be disabled by the front-end if
it so chooses.
New API:
- void obs_add_module_path(const char *bin, const char *data);
These functions allow you to specify new module search paths to add,
and allow you to search through them, or optionally just load all
modules from them. If the string %module% is included, it will
replace it with the module's name when that string is used as a
lookup. Data paths are now directly added to the module's internal
storage structure, and when obs_find_module_file is used, it will look
up the pointer to the obs_module structure and get its data directory
that way.
Example:
      obs_add_module_path("/opt/obs/my-modules/%module%/bin",
                          "/opt/obs/my-modules/%module%/data");
  This would cause it to additionally look for the binary of a
  hypothetical module named "foo" at /opt/obs/my-modules/foo/bin/foo.so
  (or libfoo.so), and then look for the data in
  /opt/obs/my-modules/foo/data.
This gives the front-end more flexibility for handling third-party
plugin modules, or handling all plugin modules in a custom way.
- void obs_find_modules(obs_find_module_callback_t callback,
                        void *param);
This searches the existing paths for modules and calls the callback
function when any are found. Useful for plugin management and custom
handling of the paths by the front-end if desired.
- void obs_load_all_modules(void);
Search through the paths and both loads and initializes all modules
automatically without custom handling.
- void obs_enum_modules(obs_enum_module_callback_t callback,
                        void *param);
Enumerates currently opened modules.
The version macro that modules use to compile versus the actual core
version that may be in use may be different, so this is a way to compare
them to check for compatibility issues later on.
Changed API functions:
libobs: obs_reset_video
Before, video initialization returned a boolean, but "failed" is too
little information, if it fails due to lack of device capabilities or
bad video device parameters, the front-end needs to know that.
The OBS Basic UI has also been updated to reflect this API change.
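Front-end code can now do something like the following (the return
code names are a sketch, not necessarily the exact enum values):

    int ret = obs_reset_video(&ovi);
    if (ret != OBS_VIDEO_SUCCESS) {
        /* distinguish e.g. bad parameters from missing device
         * capabilities and show an appropriate error message */
    }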
It's a sad day when I realize that I did not add any null pointer
checking to any of the functions in this file. I discovered it while
checking all the different languages; it happened when there was a
missing locale file for a certain module that hadn't had the language
uploaded yet.
These macros are used as easy helper functions to load/unload module
locale that's based upon the text_lookup system. You simply place the
OBS_MODULE_USE_DEFAULT_LOCALE macro once in the module, call
OBS_MODULE_FREE_DEFAULT_LOCALE in obs_module_unload, and then call
obs_module_text anywhere in your module where you need to look up text.
By default, it will look for a locale directory in your module's data
directory, and look for language files within it (INI locale format).
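A module would use them roughly like this (argument details are a
sketch):

    OBS_DECLARE_MODULE()
    OBS_MODULE_USE_DEFAULT_LOCALE("my-module", "en-US")

    /* anywhere in the module: */
    const char *label = obs_module_text("SomeSetting");

    void obs_module_unload(void)
    {
        OBS_MODULE_FREE_DEFAULT_LOCALE();
    }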
This function is used to simplify the process when using the default
locale handling for modules. It will automatically search in the
plugin data directory associated with the specific module specified,
load the default locale text (for example English, if its default
language is English), and then load the set locale on top of the
default locale, which will cause text to use the default locale if the
desired locale text is not found.
Total bytes, total frames, and frames dropped. Total frames is
generated automatically, but total bytes and total dropped frames are
returned via callbacks.
Before it would assign the encoder/media callbacks directly to the
output's callbacks, so instead of doing that, it now goes through
intermediary functions for the sake of counting the frames.
Usually if you are reconnecting after network outage, it will give a
different code (such as OBS_OUTPUT_CONNECT_FAILED). So, if already
reconnecting, ignore the code unless it's OBS_OUTPUT_SUCCESS.
I was implementing a pushing/popping attributes function like with GL,
but I realized that for our particular purposes (and actually for most
purposes) its usage was somewhat... niche. I may still implement
pushing/popping of attributes in the future, though right now I feel
using a function to reset the state is sufficient for our purposes.
There was no need to call the context free function in the
initialization function, and it's safer to just initialize the memory to
0 before using (which also negates the need for da_init)
This just ensures that if an obs object is renamed that the pointer to
older names will still be valid. Prevents renames from causing any
invalid memory access.
When the obs object is destroyed, so are the cached names.
The core itself now provides reconnection options (enabled by default, 2
second timeout between reconnects, 20 retries max until actual
disconnection occurs). This will make things easier for both module
developers and UI developers.
Reconnecting treats the stream as though it were still active, and
signals are sent when reconnecting and upon successful reconnection.
Need to implement user interface information for reconnections.
This implements the 'frame skipping' mechanism to forcibly cause frames
to be duplicated in order to reduce encoder complexity so the encoder
can catch up to the video, otherwise it will continue to be
progressively behind and will cause a desync of the video.
Typically, if a user gets this issue, they should turn down their
settings. For the love of god do not tell them that 'frames are
skipping', just tell them that CPU usage is high, and that they should
consider turning down their settings.
MagickCore is provided here as an alternative to FFmpeg in case FFmpeg
is not easily supported (for example, debian-based linux distros).
NOTE: CMake configuration needs to be changed in order to allow
MagickCore image support.
NOTE: In texture_setimage, I had to move variables to the top of the
scope because Microsoft's C compiler will give the legacy C90 error of:
'illegal use of this type as an expression'.
To sum it up, Microsoft's C compiler is still utter garbage.
Similar to the shader functions, the effect parameter functions take
the effect as a parameter. However, passing the effect is pretty
pointless, because the effect parameter itself already stores the
effect pointer internally.
...I'm actually concerned that I went a bit overkill trying to prevent
backwards compatibility issues with this abstraction design, because
this is a large number of files that have to be modified just to add a
single graphics subsystem export. Someone's going to strangle me, and
when you know that someone might strangle you, that means that you did
something wrong. We'll have to look into simplifying this in the
future without killing backward compatibility safety.
The module callback obs_module_set_locale will be called after loading
the module, and any time the locale is manually changed via core API.
When this function is called, the module is expected to load new text
lookup values for all the text it uses based upon the current locale.
The locale parameter was a mistake, because it puts extra needless
burden upon the module developer to have to handle this variable for
each and every single callback function. The parameter is being removed
in favor of a single centralized module callback function that
specifically updates locale information for a module only when needed.
Having the value stored here is somewhat pointless, so this is one step
in fixing the locale handling. Locale should be handled by the modules
themselves with their own loaded locale lookup information.
This API is used to set the current locale for libobs, which it will set
for all modules when a module is loaded or specifically when the locale
is manually changed.
These functions were mostly related to being able to set true fullscreen
mode -- however, this has no place for our purposes, and these functions
were just sitting empty and unused, so they should be removed.
Besides, fullscreen mode only applies to the windows operating system.
This variable is currently somewhat pointless, I was originally going to
use it to tell the graphics subsystem to completely rebuild the internal
vertex buffers, but it would be bad/inefficient to allow that
functionality.
These are meant to reflect auto-detection configuration changes that
should not be written to the config, for example, frame rate changes
for a camera where the (user-/config-file-)configured frame rate isn't
available but a similar frame rate can be automatically chosen
Default values are now permanently stored in the obs_data_items and
can be accessed via the new get_default functions
Also default values are no longer serialized to JSON to ease transition
to new default values
This allows, for example, disconnected devices for dshow/avcapture to
be listed (and to stay selected in the user config) while making them
unavailable for selection in new sources
The 'initialize' callback is used before the encoders/output start up so
it can adjust encoder settings to required values if needed.
Also added the function 'obs_encoder_active' that returns true or false
depending on whether that encoder is active or not.
Structures with anonymous unions would cause a warning when you do a
brace assignment on them.
Also fixed some unused parameters and removed some unused variables.
So, scene editing was interesting (and by interesting I mean
excruciating). I almost implemented 'manipulator' visuals (a la 3ds
Max, for example), and used 3 modes for controlling
position/rotation/size, but in 2D editing it felt clunky, so I
defaulted back to simple click-and-drag for movement, and then took a
similar though slightly different looking approach for handling
scaling and resizing.
I also added a number of menu item helpers related to positioning,
scaling, rotating, flipping, and resetting the transform back to
default.
There is also a new 'transform' dialog (accessible via menu) which will
allow you to manually edit every single transform variable of a scene
item directly if desired.
If a scene item does not have bounds active, pulling on the sides of a
source will cause it to resize it via base scale rather than by the
bounding box system (if the source resizes that scale will apply). If
bounds are active, it will modify the bounding box only instead.
How a source scales when a bounding box is active depends on the type of
bounds being used. You can set it to scale to the inner bounds, the
outer bounds, scale to bounds width only, scale to bounds height only,
and a setting to stretch to bounds (which forces a source to always draw
at the bounding box size rather than be affected by its internal size).
You can also set it to be used as a 'maximum' size, so that the source
doesn't necessarily get scaled unless it extends beyond the bounds.
Like in OBS1, objects will snap to the edges unless the control key is
pressed. However, this will now happen even if the object is rotated or
oriented in any strange way. Snapping will also occur when stretching
or changing the bounding box size.
There are a ridiculous number of features related to scaling and
positioning due to requests by a number of people who complained that
they hated the way that OBS1 would always resize their sources when the
source's base size changed. There were also people who wanted more
control for how the resizing was handled, or the ability to completely
prevent resizing entirely if desired. So I made it so that you can
optionally use a 'bounds' system, which allows you to specify different
styles of controlling resizing.
If disabled, the source will always automatically resize and only the
base scale is applied. If enabled, you have a variety of different ways
to limit/control how it can resize within the bounds, or make it so it
can't resize at all. You can also control alignment within that
bounding box, so you can make it so that a source always aligns to a
side or corner of the box.
I also added an alignment value which changes how the source is
oriented relative to the position of the scene item. For example,
setting bottom-right alignment will make it so that the position of the
item is the bottom right corner of the source. When the source
resizes, it will resize leftward and upward in that case, which solves
the problem of how a source resizes relative to a desired position.
I encountered a situation where I wanted to delete a callback for a
signal while inside of that signal. However it would hard lock, and
even after that, it would mess up the loop for the callback list.
So, change the mutex of the individual signals to a recursive-style
mutex, and then if a callback of a signal is deleted while currently in
that signal, just mark it for deletion, which will happen after the
signal is complete.
This replaces the older code which simply queried the max volume level
value for any given audio.
I'm still not 100% sure on if this is how I want to approach the
problem, particularly, whether this should be done in obs_source or in
audio_line, but it can always be moved later if needed.
This uses the calculations by the awesome Bill Hamilton that OBS1 used
for its volume levels. It calculates the current max (level),
magnitude, and current peak. This data then can be used to create
awesome volume meter controls later on.
NOTE: Will probably need optimization, does one float at a time right
now.
Also, change some of the naming conventions. I actually need to change
a lot of the naming conventions in general so that all words are
separated by underscores. Kind of a bad practice there on my part.
There was a fundamental flaw with the string type conversion functions
where the sizes were not being properly accounted for. They were using
the 'len' value as a value for the output rather than only for the
input, which was bad because the output could have more or less
characters than the input.
When a source's private data is being created by a module, it wasn't
able to call most source functions because most functions rely on the
obs_source_info part of the context to be set. This fixes that issue.
It was strange that this wasn't already the case because the other
context types already did the same thing.
This uses the reverse planar YUV 4:2:0 conversion shader to output a YUV
texture without having to convert it via CPU. Again, this will reduce
video upload bandwidth usage to 37.5% of the original rate (planar
4:2:0 is 12 bits per pixel versus 32 bits for RGBA). I suspect this
will be particularly useful for when an FFmpeg or libav input plugin
for playing videos is made.
NOTE: There's an issue with certain texture sizes right now that I
haven't been able to identify. If the full size of the texture data
divided by the base texture width is an odd number, the V chroma plane
seems like it can potentially shift, though I've only seen this happen
with a C920 at 160x90 resolution. Almost all resolutions tend to be
even. Needs further testing with more devices that support planar YUV
4:2:0 output.
This adds button support to properties, which will allow a properties
pane to let the user click a button to activate something with a
particular obs context. When pressed, the button will execute the
callback given, with the context's private data as a parameter.
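Usage ends up looking something like this (a sketch; 'my_source' and
the callback body are hypothetical, and the names follow the current
properties API):

static bool on_button_clicked(obs_properties_t *props,
                obs_property_t *property, void *data)
{
        struct my_source *context = data; /* the context's private data */
        /* ... act on the context here ... */
        UNUSED_PARAMETER(props);
        UNUSED_PARAMETER(property);
        return false; /* return true to refresh the properties view */
}

/* in the source's properties callback: */
obs_properties_add_button(props, "activate", "Activate",
                on_button_clicked);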
Character conversion functions did not previously ask for a maximum
buffer size for their 'dst' parameter; it's unsafe to assume that a
given destination buffer has enough size to accommodate a conversion.
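After this change the conversion functions take an explicit destination
size, along these lines (signatures as they appear in the current
util/platform.h; the exact names at the time may have differed, and
'utf8_str' is a hypothetical input):

size_t os_utf8_to_wcs(const char *str, size_t len,
                wchar_t *dst, size_t dst_size);
size_t os_wcs_to_utf8(const wchar_t *str, size_t len,
                char *dst, size_t dst_size);

/* 'len' describes only the input; 'dst_size' bounds the output */
wchar_t wide[512];
size_t written = os_utf8_to_wcs(utf8_str, strlen(utf8_str), wide, 512);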
Added github gist API uploading to the help menu to help make problems a
bit easier to debug in the future. It's somewhat vital that this
functionality be implemented before any release in order to analyze any
given problem a user may be experiencing.
This fixes an issue reported by valgrind where overlapping memory
was copied with memcpy.
This also removes a redundant assignment where the array size was
explicitly set to zero when it was already zero.
This doesn't add FLV file output to the user interface yet, but we'll
get around to that eventually. This just adds an FLV output type.
Also, removed ftello/fseeko because off_t is a really annoying data
type, and I'd rather have a firm int64_t for large sizes, so I renamed
them to os_fseeki64 and os_ftelli64 instead, and changed the file size
function to return an int64_t.
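Typical usage of the renamed functions (a small sketch):

#include <stdio.h>
#include <util/platform.h>

/* sketch: get a file's size using the 64-bit seek/tell wrappers */
static int64_t file_size(FILE *file)
{
        int64_t size;

        if (os_fseeki64(file, 0, SEEK_END) != 0)
                return -1;
        size = os_ftelli64(file);
        os_fseeki64(file, 0, SEEK_SET); /* restore position */
        return size;
}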
Not entirely sure how this happened but I *think* that a null source was
somehow being added to the list of user sources for one particular user,
and then I noticed this code does not check to see whether the source is
null or not.
Just use platform-nix.c code for general stuff that mac is compliant
with, and put a define around everything else. Take that code out of
platform-cocoa.m.
Added os_opendir, os_readdir, and os_closedir to be able to query
available files within a directory.
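Enumeration then works along these lines (a sketch; the os_dirent
member names are taken from the current header):

#include <stdio.h>
#include <util/platform.h>

/* sketch: print the plain files in a directory */
static void list_files(const char *path)
{
        os_dir_t *dir = os_opendir(path);
        struct os_dirent *ent;

        if (!dir)
                return;

        while ((ent = os_readdir(dir)) != NULL) {
                if (!ent->directory)
                        printf("%s\n", ent->d_name);
        }

        os_closedir(dir);
}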
First, if the private data of the source fails to be created, then do
not destroy the source. If the source is destroyed, all the user's data
associated with that source is lost, which could end up being a
potential problem. Instead, let it linger as a 'dead' source until the
user chooses to fix the problem (though this should never really
happen; the source module functions should be programmed to handle this
scenario).
Secondly, rename new_frame_ready to ready_async_frame, and fix a
potential memory leak with it.
obs_source_output_video can cause cached frames to be freed twice if
called with a partially destroyed source, among other undesirable
effects; freeing the source private data right after the destroy signal
has been processed ensures proper behavior.
- Add volume control
These volume controls are basically nothing more than sliders. They
look terrible and hopefully will be as temporary as they are
terrible.
- Allow saving of specific non-user sources via obs_load_source and
obs_save_source functions.
- Save data of desktop/mic audio sources (sync data, volume data, etc),
and load the data on startup.
- Make it so that a scene is created by default if first time using the
application. On certain operating systems where supported, a default
capture will be created. Desktop capture on mac, particularly. Not
sure what to do about windows because monitor capture on windows 7 is
completely terrible and is bad to start users off with.
If a source with async video wasn't currently active, it would endlessly
buffer the video data, causing memory usage to grow until available
memory was exhausted.
This really needs to be replaced with a proper caching mechanism at some
point.
This saves scenes/sources from json on exit, and properly loads it back
up when starting up the program again, as well as the currently active
scene.
I had to add a 'load' and 'save' callback to the source interface
structure because I realized that certain sources (such as scenes)
operate differently with their saved data; scenes, for example, would
have to keep track of their settings information constantly, which was
somewhat unacceptable.
The optional 'load' callback will be called only after having loaded
settings specifically from file/imported data, and the 'save' callback
will be called only when data actually needs to be saved.
I also had to adjust the obs_scene code so that it's a regular input
source type now, and I also modified it so that it doesn't have some
strange custom creation code anymore. The obs_scene_create function is
now simply just a wrapper for obs_source_create. You could even create
a scene with obs_source_create manually as well.
The 'wait' constant was a terrible means of trying to ensure that the
packets were interleaved. Instead, calculate the current highest
timestamps of each encoder that's present in the interleaved buffer, and
use that as a means of detecting whether the current packet should be
sent off. This will guarantee sorting without relying on some
arbitrary constant that 'assumes' that it'll be interleaved. It also
avoids buffering any more than what is needed to interleave.
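In rough pseudo-C, the new send check looks something like this (purely
a sketch of the idea; all of the names here are made up):

/* the packet at the front of the interleaved buffer may be sent once
 * every paired encoder has buffered up to (or past) its timestamp;
 * sorting is then guaranteed without any arbitrary wait constant */
static bool safe_to_send(struct output_ctx *out,
                const struct packet *front)
{
        for (size_t i = 0; i < out->num_encoders; i++) {
                if (highest_buffered_ts_usec(out, i) < front->dts_usec)
                        return false;
        }
        return true;
}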
- Updated the services API so that it links up with an output and
the output gets data from that service rather than via settings.
This allows the service context to have control over how an output is
used, and makes it so that the URL/key/etc isn't necessarily some
static setting.
Also, if the service is attached to an output, it will stick around
until the output is destroyed.
- The settings interface has been updated so that it can allow the
usage of service plugins. What this means is that now you can create
a service plugin that can control aspects of the stream, and it
allows each service to create their own user interface if they create
a service plugin module.
- Testing out saving of current service information. Saves/loads from
JSON in to obs_data_t, seems to be working quite nicely, and the
service object information is saved/preserved on exit, and loaded
again on startup.
- I agonized over the settings user interface for days, and eventually
I just decided that the only way that users weren't going to be
fumbling over options was to split up the settings in to simple/basic
output, pre-configured, and then advanced for advanced use (such as
multiple outputs or services, which I'll implement later).
This was particularly painful to really design right, I wanted more
features and wanted to include everything in one interface but
ultimately just realized from experience that users are just not
technically knowledgeable about it and will end up fumbling with the
settings rather than getting things done.
Basically, what this means is that casual users only have to enter in
about 3 things to configure their stream: Stream key, audio bitrate,
and video bitrate. I am really happy with this interface for those
types of users, but it definitely won't be sufficient for advanced
usage or for custom outputs, so that stuff will have to be separated.
- Improved the JSON usage for the 'common streaming services' context.
I realized that JSON arrays preserve ordering, whereas object items
are optimized for hashed lookup and don't guarantee order. So
basically I'm just using arrays now to keep the items sorted.
Add API for streaming services. The services API simplifies the
creation of custom service features and user interface.
Custom streaming services later on will be able to do things such as:
- Be able to use service-specific APIs via modules, allowing a more
direct means of communicating with the service and requesting or
setting service-specific information
- Get URL/stream key via other means of authentication such as OAuth,
or be able to build custom URLs for services that require that sort
of thing.
- Query information (such as viewer count, chat, follower
notifications, and other information)
- Set channel information (such as current game, current channel title,
activating commercials)
Also, I reduced some repeated code that was used for all libobs objects.
This includes the name of the object, the private data, settings, as
well as the signal and procedure handlers.
I also switched to using linked lists for the global object lists,
rather than using an array of pointers (you could say it was..
pointless.) ..Anyway, the linked list info is also stored in the shared
context data structure.
Just wanted the ability to be able to add private data to the properties
data. Makes it a little easier to manage data if you get updates from
controls.
Before, async video sources would flicker because they were only being
drawn when they were updated. So when updated, they'd draw that frame,
then it would stop drawing it until it updated again. This fixes that
issue and they should now draw properly.
Also, fix a few other minor bugs and issues relating to async video,
and make it so that non-async video filters can be properly applied to
them.
For the purposes of testing, change the 'test-random' source to an async
video source that updates every quarter of a second with a new random
face.
Also fix a bug where non-async video sources wouldn't have filter
effects applied properly.
A little bit of history about frame dropping:
I did a large number of experiments with frame dropping in old versions
of OBS1, and it's not an easy thing to deal with. I tried just about
everything from standard i-frame delay, to large buffers, to dumping
packets, to super-unnecessarily-complex things that just ended up
causing more problems than they were worth.
When I did my experiments, I found that the most ideal frame drop system
(in terms of reducing the amount of total data that needed to be
dropped) was in the 0.4xx days where I had a 3 second frame-drop buffer
where I could calculate the actual buffer size in bytes, and then
intelligently choose packets in that buffer to trim it down to a
specific size while minimizing the number of p-frames and i-frames
dropped, and limiting the actual impact of dropped frames on the
stream. The
downside of it was that it required too much extra latency, and far too
many people complained about it, so it was removed in favor of the
current system.
The current system I refer to simply as 'packet dumping', which when
combined with low keyframe intervals (like most services use these
days), is the next-best method from my experience. Just dump the buffer
when you reach a threshold of buffering (which I prefer to measure with
time rather than in size), then wait for a new i-frame. Simple,
effective, and reduces the risk of consecutive buffering, while still
having fairly low impact on the stream output due to the low keyframe
interval of services.
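Stripped to its essentials, the approach is just this (a sketch; the
names and the threshold are illustrative only):

/* sketch of 'packet dumping': measure buffering by time, not size */
if (buffered_duration_usec(out) > drop_threshold_usec) {
        drop_buffered_video_packets(out); /* audio is never dropped */
        out->waiting_for_keyframe = true; /* resume on the next i-frame */
}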
By the way, audio will not (and should not ever) be dropped, lest you
end up with syncing issues (among other nasty things) specific to server
implementation.
- Fix an issue that could occur when using more than one video encoder.
Audio/video would not sync up correctly because they were expected to
be paired with a particular encoder. This simply adds a little
helper variable to encoder packets that specifies the system time in
microseconds. We then use that system time to sync audio and video.
- Fix an issue with x264 with fractional FPS rates (29.97 and 59.94
particularly) where it would create ridiculously large stream
outputs. The problem was that you shouldn't set the timebase_*
variables in the x264 params manually; let x264 handle the default
values and leave them at 0.
- Make x264 use CFR output, because there's no reason to ever use VFR
in this case.
- Implement the RTMP output module. This time around, we just use a
simple FLV muxer, then just write to the stream with RTMP_Write.
Easy and effective.
- Fix the FLV muxer, the muxer now outputs proper FLV packets.
- Output API:
* When using encoders, automatically interleave encoded packets
before sending it to the output.
* Pair encoders and have them automatically wait for the other to
start to ensure sync.
* Change 'obs_output_signal_start_fail' to 'obs_output_signal_stop'
because it was a bit confusing, and doing this makes a lot more
sense for outputs that need to stop suddenly (disconnections/etc).
- Encoder API:
* Remove some unnecessary encoder functions from the actual API and
make them internal. Most of the encoder functions are handled
automatically by outputs anyway, so there's no real need to expose
them and end up inadvertently confusing plugin writers.
* Have audio encoders wait for the video encoder to get a frame, then
start at the exact data point that the first video frame starts to
ensure the most accurate sync of video/audio possible.
* Add a required 'frame_size' callback for audio encoders that
returns the expected number of frames desired to encode with. This
way, the libobs encoder API can handle the circular buffering
internally automatically for the encoder modules, so encoder
writers don't have to do it themselves.
- Fix a few bugs in the serializer interface. It was passing the wrong
variable for the data in a few cases.
- If a source has video, make obs_source_update defer the actual update
callback until the tick function is called to prevent threading
issues.
I was getting cases where the CPU cache was causing issues with the
allocation counter, for the longest time I thought I was doing something
wrong, but when the allocation counter went below 0, I realized it was
because I didn't use atomics for incrementing/decrementing the
allocation counter variable. The allocation counter now always should
have the correct value.
- Add interleaving of video/audio packets for outputs that are encoded
and expect both video and audio data, sorting the packets and sending
them to the output when both video and audio is received.
- Combine create and initialize callbacks for the encoder API callback
interface.
Improve the properties API so that it can actually respond somewhat to
user input. Maybe later this might be further improved or replaced with
something script-based.
When creating a property, you can now add a callback to that property
that notifies when the property has been changed in the user interface.
Return true if you want the properties to be refreshed, or false if not.
Though now that I think about it I doubt there would ever be a case
where you would have this callback and *not* refresh the properties.
Regardless, this allows functions to change the values of properties or
settings, or enable/disable/hide other property controls from view
dynamically.
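For example, a callback that shows or hides another control based on a
checkbox might look like this (a sketch using current API names; the
property names are hypothetical):

static bool use_custom_modified(obs_properties_t *props,
                obs_property_t *property, obs_data_t *settings)
{
        bool custom = obs_data_get_bool(settings, "use_custom");

        obs_property_set_visible(
                        obs_properties_get(props, "custom_value"),
                        custom);

        UNUSED_PARAMETER(property);
        return true; /* refresh the properties view */
}

/* when building the properties: */
obs_property_set_modified_callback(p, use_custom_modified);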
- Add start/stop code to obs-output module
- Use a circular buffer for the buffered encoder packets instead of a
dynamic array
- Add pthreads.lib as a dependency to obs-output module on windows in
visual studio project files
- Fix a windows export bug for the avc parsing functions.
Also, rename those functions to be more consistent with each other.
- Make outputs use a single function for encoded data rather than
multiple functions
- Add the ability to make 'text' properties be passworded
- obs-outputs module: Add preliminary code to send out data, and add
an FLV muxer. This time we don't really need to build the packets
ourselves, we can just use the FLV muxer and send it directly to
RTMP_Write and it should automatically parse the entire stream for us
without us having to do much manual code at all. We'll see how it
goes.
- libobs: Add AVC NAL packet parsing code
- libobs/media-io: Add quick helper functions for audio/video to get
the width/height/fps/samplerate/etc rather than having to query the
info structures each time.
- libobs (obs-output.c): Change 'connect' signal to 'start' and 'stop'
signals. 'start' now specifies an error code rather than whether it
simply failed, that way the client can actually know *why* a failure
occurred. Added those error codes to obs-defs.h.
- libobs: Add a few functions to duplicate/free encoder packets
The serializer code is meant to be used as a means of reading/writing
data from any arbitrary type of input/output.
The array output serializer makes it so we can stream data to a dynamic
array on the fly.
- Add dummy GL texture support to allow libobs texture references to be
created for GL without
- Add a texture_getobj function to allow the retrieval of the
context-specific object, such as the D3D texture pointer, or the
OpenGL texture object handle.
- Also cleaned up the export stuff. I realized it was all totally
superfluous. Kind of a dumb moment, but nice to clean it up
regardless.
- Make it so that encoders can be assigned to outputs. If an encoder
is destroyed, it will automatically remove itself from that output.
I specifically didn't want to do reference counting because it leaves
too much potential for unchecked references and it just felt like it
would be more trouble than it's worth.
- Add a 'flags' value to the output definition structure. This lets
the output specify if it uses video/audio, and whether the output is
meant to be used with OBS encoders or not.
- Remove boilerplate code for outputs. This makes it easier to program
outputs. The boilerplate code involved before was mostly just
involving connecting to the audio/video data streams directly in each
output plugin.
Instead of doing that, simply add plugin callback functions for
receiving video/audio (either encoded or non-encoded, whichever it's
set to use), and then call obs_output_begin_data_capture and
obs_output_end_data_capture to automatically handle setting up
connections to raw or encoded video/audio streams for the plugin
(a sketch of this follows the list below).
- Remove 'active' function from output callbacks, as it's no longer
really needed now that the libobs output context automatically knows
when the output is active or not.
- Make it so that an encoder cannot be destroyed until all data
connections to the encoder have been removed.
- Change the 'start' and 'stop' functions in the encoder interface to
just an 'initialize' callback, which initializes the encoder.
- Make it so that the encoder must be initialized first before the data
stream can be started. The reason why initialization was separated
from starting the encoder stream was because we need to be able to
check that the settings used with the encoder *can* be used first.
This problem was especially annoying if you had both video/audio
encoding. Before, you'd have to check the return value from
obs_encoder_start, and if that second encoder fails, then you
basically had to stop the first encoder again, making for
unnecessary boilerplate code whenever starting up two encoders.
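To illustrate the boilerplate reduction in the data-capture item above,
an encoded output's start/stop callbacks now reduce to roughly the
following (a sketch using current API names; 'my_output' and its
fields are hypothetical):

static bool my_output_start(void *data)
{
        struct my_output *out = data;

        if (!obs_output_can_begin_data_capture(out->output, 0))
                return false;
        if (!obs_output_initialize_encoders(out->output, 0))
                return false;

        /* libobs connects the encoded A/V streams for us */
        return obs_output_begin_data_capture(out->output, 0);
}

static void my_output_stop(void *data)
{
        struct my_output *out = data;
        obs_output_end_data_capture(out->output);
}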
- Add a properties window for sources so that you can now actually edit
the settings for sources. Also, display the source by itself in the
window (Note: not working on mac, and possibly not working on linux).
When changing the settings for a source, it will call
obs_source_update on that source when you have modified any values
automatically.
- Add a properties 'widget', eventually I want to turn this in to a
regular nice properties view like you'd see in the designer, but
right now it just uses a form layout in a QScrollArea with regular
controls to display the properties. It's clunky but works for the
time being.
- Make it so that swap chains and the main graphics subsystem will
automatically use at least one backbuffer if none was specified
- Fix bug where displays weren't added to the main display array
- Make it so that you can get the properties of a source via the actual
pointer of a source/encoder/output in addition to being able to look
up properties via identifier.
- When registering source types, check for required functions (wasn't
doing it before). Also, getheight/getwidth should not be optional if
it's a video source.
- Add an RAII OBSObj wrapper to obs.hpp for non-reference-counted
libobs pointers
- Add an RAII OBSSignal wrapper to obs.hpp for libobs signals to
automatically disconnect them on destruction
- Move the "scale and center" calculation in window-basic-main.cpp to
its own function and in its own source file
- Add an 'update' callback to WASAPI audio sources
Microsoft's garbage compiler just doesn't even.. read the names of
enums. It sees an enum and goes "durr, that's an int" without even
properly evaluating it. Just total garbage, as per usual.
Also, rename atomic functions to be consistent with the rest of the
platform/threading functions, and move atomic functions to threading*
files rather than platform* files
- Implement OBS encoder interface. It was previously incomplete, but
now is reaching some level of completion, though probably should
still be considered preliminary.
I had originally implemented it so that encoders only have a 'reset'
function to reset their parameters, but I felt that having both a
'start' and 'stop' function would be useful.
Encoders are now each assigned to a specific video/audio media output
rather than implicitly assigned to the main obs video/audio
contexts. This allows separate encoder contexts that aren't
necessarily assigned to the main video/audio context (which is useful
for things such as recording specific sources). Will probably have
to do this for regular obs outputs as well.
When creating an encoder, you must now explicitly state whether that
encoder is an audio or video encoder.
Audio and video can optionally be automatically converted depending
on what the encoder specifies.
When something 'attaches' to an encoder, the first attachment starts
the encoder, and the encoder automatically attaches to the media
output context associated with it. Subsequent attachments won't have
the same effect, they will just start receiving the same encoder data
when the next keyframe plays (along with SEI if any). When detaching
from the encoder, the last detachment will fully stop the encoder and
detach the encoder from the media output context associated with the
encoder.
SEI must actually be exported separately; because new encoder
attachments may not always be at the beginning of the stream, the
first keyframe they get must have that SEI data in it. If the
encoder has SEI data, it needs only add one small function to simply
query that SEI data, and then that data will be handled automatically
by libobs for all subsequent encoder attachments.
- Implement x264 encoder plugin, move x264 files to separate plugin to
separate necessary dependencies.
- Change video/audio frame output structures to not use const
qualifiers to prevent issues with non-const function usage elsewhere.
This was an issue when writing the x264 encoder, as the x264 encoder
expects non-const frame data.
Change stagesurf_map to return a non-const data type to prevent this
as well.
- Change full range parameter of video scaler to be an enum rather than
boolean
...The reason why audio didn't work was that I overwrote the bitrate
values.
As for semaphores, mac doesn't support unnamed semaphores without using
mach semaphores. So, I just implemented a semaphore wrapper for each
OS.
Ensure that a source has a valid name. Duplicates aren't a big deal
internally, but sources without a name are probably something that
should be avoided. Made it so that if a source is programmatically
created without a name, it's assigned an index-based name.
In the main basic-mode window, made it check to make sure the name was
valid as well.
- Add some temporary streaming code using FFmpeg. FFmpeg itself is not
very ideal for streaming; lack of direct control of the sockets and
no framedrop handling means that FFmpeg is definitely not something
you want to use without wrapper code. I'd prefer writing my own
network framework in this particular case just because you give away
so much control of the network interface. Wasted an entire day
trying to go through FFmpeg issues.
There's just no way FFmpeg should be used for real streaming (at
least without being patched or submitting some sort of patch, but I'm
sort of feeling "meh" on that idea)
I had to end up writing multiple threads just to handle both
connecting and writing, because av_interleaved_write_frame blocks
every call, stalling the main encoder thread, and thus also stalling
draw signals.
- Add some temporary user interface for streaming settings. This is
just temporary for the time being. It's in the outputs section of
the basic-mode settings
- Make it so that dynamic arrays do not free all their data when the
size just happens to be reduced to 0. This prevents constant
reallocation when an array keeps going from 1 item to 0 items. Also,
it was bad to become dependent upon that functionality. You must now
always explicitly call "free" on it to ensure the data is freed, and
that's how it should be. Implicit functionality can lead to
confusion and maintainability issues.
- Fix a bug where the initial audio data insertion would cause all
audio data to unintentionally clear (mixed up < and > operators, damn
human error)
- Fixed a potential interdependent lock scenario with channel mutex
locks and graphics mutex locks. The main video thread could lock the
graphics mutex and then while in the graphics mutex could lock the
channels mutex. Meanwhile in another thread, the channel mutex could
get locked, and then the graphics mutex would get locked, causing a
deadlock.
The best way to deal with this is to not let mutexes lock within
other mutexes, but sometimes it's difficult to avoid such as in the
main video thread.
- Audio devices should now be functional, and the devices in the audio
settings can now be changed as desired.
Modify the obs_display API so that it always uses an orthographic
projection that is the size of the display, rather than OBS' base size.
Having it do an orthographic projection to OBS' base size was silly
because it meant that everything would be skewed if you wanted to draw
1:1 in the display. This does mean that the callbacks must handle
resizing the images, but it's worth it to ensure 1:1 draw sizes.
As for the preview widget, instead of making some funky widget within
widget that resizes, it's just going to be a widget within the entire
top layout. Also changed the preview padding color to gray.
- Implement a means of obtaining default settings for an
input/output/encoder. obs_source_defaults for example will return
the default settings for a particular source type.
- Because C++ doesn't have designated initializers, use functions in
the WASAPI plugin to register the sources instead.
- Implement windows monitor capture (code is so much cleaner than in
OBS1). Will implement duplication capture later
- Add GDI texture support to d3d11 graphics library
- Fix precision issue with sleep timing, you have to call
timeBeginPeriod otherwise windows sleep will be totally erratic.
- Add WASAPI audio capture for windows, input and output
- Check for null pointer in os_dlopen
- Add exception-safe 'WinHandle' and 'CoTaskMemPtr' helper classes that
will automatically call CloseHandle on handles and call CoTaskMemFree
on certain types of memory returned from windows functions
- Changed the wide <-> MBS/UTF8 conversion functions so that you use
buffers (like these functions are *supposed* to behave), and changed
the ones that allocate to a different naming scheme to be safe
- Split input and output audio captures so that they're different
sources. This allows easier handling and enumeration of audio
devices without having to do some sort of string processing.
This way the user interface code can handle this a bit more easily,
and so that it doesn't confuse users either. This should be done for
all audio capture sources for all operating systems. You don't have
to duplicate any code, you just need to create input/output wrapper
functions to designate the audio as input or output before creation.
- Make it detect soundflower and wavtap devices as mac "output" devices
(even though they're actually input) for the mac output capture, and
make it so that users can select a default output capture and
automatically use soundflower or wavtap.
I'm not entirely happy about having to do this, but because mac is
designed this way, this is really the only way to handle it that
makes it easier for users and UI code to deal with.
Note that soundflower and wavtap are still also designated as input
devices, so will still show up in input device enumeration.
- Remove pragma messages because they were kind of polluting the other
compiler messages and just getting in the way. In the future we can
just do a grep for TODO to find them.
- Redo list property again, this time using a safer internal array,
rather than requiring sketchy array inputs. Having functions handle
everything behind the scenes is much safer.
- Remove the reference counter debug log code, as it was included
unintentionally in a commit.
Categories added an unnecessary complexity to making properties, and
would very likely almost never be used in most cases, and were more of a
display feature. The main issue is that it made property data more
complex to work with, and I just didn't feel comfortable with that.
Also, added a function to allow you to retrieve a property just by its
name.
When a source/output/etc has a property of a 'list' type, there was no
way to get the names associated with its values. That, and it only
supported lists of either text, or enums (0..[value] only).
Now, you can associate translated names with those values, and use
integer, float, or string values. Put it all in to one function as well
to simplify its usage.
I plan on using this to help get enumerations from devices/etc for
certain types of sources. For example, if I get the properties of an
audio source, I'd like to have a list of available devices with it as
well.
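In its present-day form the resulting API looks roughly like this (a
sketch; the original commit folded this into a single function, and the
device names/values below are made up):

obs_property_t *list = obs_properties_add_list(props, "device_id",
                "Device", OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_STRING);

/* each entry pairs a translated display name with a raw value */
obs_property_list_add_string(list, "Built-in Microphone", "device:0");
obs_property_list_add_string(list, "USB Webcam Mic", "device:1");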
- Signals and dynamic callbacks now require declarations to be made
before being used. What this does is allows us to get information
about the functions dynamically which can be relayed to the user and
plugins for future extended usage (this should have big implications
later for scripting in particular, hopefully).
- Reduced the number of types calldata uses from "everything I could
think of" to simply integer, float, bool, pointer/object, string.
Integer data is now stored as long long. Floats are now stored as
doubles (check em).
- Use a more consistent naming scheme for lexer error/warning macros.
- Fixed a rather nasty bug where switching to an existing scene would
cause it to increment sourceSceneRefs, which meant that it would
never end up properly removing the source when the user clicked
remove (it stayed in limbo; obs_source_remove never got called)
See, it can sometimes be a bit confusing. These functions should
definitely not fail under normal circumstances, and these errors may
affect the user and/or application in some way.
LOG_ERROR should be used in places where the error, though recoverable
(or at least something that can be handled safely), was unexpected, and
may affect the user/application.
LOG_WARNING should be used in places where it's not entirely unexpected,
is recoverable, and doesn't really affect the user/application.
- Add CoreAudio device input capture for mac audio capturing. The code
should cover just about everything for capturing mac input device
audio. Because of the way mac audio is designed, users may have no
choice but to obtain the open source soundflower software to capture
their mac's desktop audio. It may be necessary for us to distribute
it with the program as well.
- Hide event backend
- Use win32 events for windows
- Allow timed waits for events
- Fix a few warnings
The signals for scenes could have potentially conflicted with default
source signals. "remove" should be used for source removal, for
example. Changed the scene signals to "item-add" and "item-remove" for
its items.
Split off activate to activate and show callbacks, and split off
deactivate to deactivate and hide callbacks. Sources didn't previously
have a means of knowing whether they were actually being displayed in
the main view or just happened to be visible somewhere. Now, for things
like
transition sources, they have a means of knowing when they have actually
been "activated" so they can initiate their sequence.
A source is now only considered "active" when it's being displayed by
the main view. When a source is shown in the main view, the activate
callback/signal is triggered. When it's no longer being displayed by
the main view, deactivate callback/signal is triggered.
When a source is just generally visible to see by any view, the show
callback/signal is triggered. If it's no longer visible by any views,
then the hide callback/signal is triggered.
Presentation volume will now only be active when a source is active in
the main view rather than also in auxiliary views.
Also fix a potential bug where parents wouldn't properly increment or
decrement all the activation references of a child source when a child
was added or removed.
Implement a few audio options in to the user interface as well as a few
inline audio functions in audio-io.h.
Make it so ffmpeg plugin automatically converts to the desired format.
Use regular interleaved float internally for audio instead of planar
float.
This allows the changing of video settings without having to completely
reset all graphics data. Will recreate internal output/conversion
buffers and such and reset the main preview.
Make it so obs_data settings passed in to *_update are applied on top
of the existing settings rather than fully replacing them. That way
you can update only certain specific settings, leaving other settings
untouched. Of course, if you pass in the original settings pointer in
the first place, the settings have already been applied, so in that
case it's effectively ignored.
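So a partial update looks like this (a sketch using current obs_data
naming):

/* only 'bitrate' changes; all other existing settings are untouched */
obs_data_t *update = obs_data_create();
obs_data_set_int(update, "bitrate", 2500);
obs_source_update(source, update);
obs_data_release(update);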
- Remove obs_source::type because it became redundant now that the
type is always stored in the obs_source::info variable.
- Apply presentation volumes of 1.0 and 0.0 to sources when they
activate/deactivate, respectively. It also applies that presentation
volume to all sub-sources, with exception of transition sources.
Transition sources must apply presentation volume manually to their
sub-sources with the new transition functions below.
- Add a "transition_volume" variable to obs_source structure, and add
three functions for handling volume for transitions:
* obs_transition_begin_frame
* obs_source_set_transition_vol
* obs_transition_end_frame
Because the to/from targets of a transition source might both contain
some of the same sources, handling the transitioning of volumes for
that specific situation becomes an issue.
So for transitions, instead of modifying the presentation volumes
directly for both sets of sources, we do this:
- First, call obs_transition_begin_frame at the beginning of each
transition frame, which will reset transition volumes for all
sub-sources to 0. Presentation volumes remain unchanged.
- Call obs_source_set_transition_vol on each sub-source, which will
then add the volume to the transition volume for each source in
that source's tree. Presentation volumes still remain unchanged.
- Then you call obs_transition_end_frame when complete, which will
then finally set the presentation volumes to the transition
volumes.
For example, let's say that there's one source that's within both the
"transitioning from" sources and "transition to" sources. It would
add both the fade in and fade out volumes to that source, and then
when the frame is complete, it would set the presentation volume to
the sum of those two values, rather than set the presentation volume
for that same source twice which would cause weird volume jittering
and also set the wrong values.
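Put together, a transition's per-frame volume handling follows this
shape (a sketch; only the function names come from the notes above,
and the argument lists are assumptions):

/* inside the transition's per-frame processing */
obs_transition_begin_frame(transition);  /* transition volumes -> 0 */
obs_source_set_transition_vol(from_source, fade_out_volume);
obs_source_set_transition_vol(to_source, fade_in_volume);
obs_transition_end_frame(transition);    /* commit as presentation */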
Now sources will be properly activated and deactivated when they are in
use or not in use.
Had to figure out a way to handle child sources, and children of
children; I just ended up implementing simple functions that parents use
to signal adding/removal to help with hierarchical activation and
deactivation of child sources.
To prevent the source activate/deactivate callbacks from being called
more than once, added an activation reference counter. The first
increment will call the activate callback, and the last decrement will
call the deactivate callback.
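The counter itself is just an atomic reference count (a sketch of the
internal logic; the 'call_*' helpers are illustrative):

static void activate_source(obs_source_t *source)
{
        /* first reference triggers the activate callback */
        if (os_atomic_inc_long(&source->activate_refs) == 1)
                call_activate_callback(source);
}

static void deactivate_source(obs_source_t *source)
{
        /* last reference triggers the deactivate callback */
        if (os_atomic_dec_long(&source->activate_refs) == 0)
                call_deactivate_callback(source);
}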
Added "source-activate" and "source-deactivate" signals to the main obs
signal handler, and "activate" and "deactivate" to individual source
signal handlers.
Also, fixed the main window so it properly selects a source when the
current active scene has been changed.
Added a "master" volume for the entire audio subsystem.
Also, added a "presentation" volume for both the master volume and for
each individual source. The presentation volume is used to control
things like transitioning volumes, preventing sources from outputting
any audio when they're inactive, as well as some other uses in the
future.
If audio was under, it originally did a full reset of the audio timing.
However, resetting the audio timing when this happens is kind of a bad
thing. It's better just to clamp the value to the expected timestamp to
ensure seamless audio output.
Also, implement audio timestamp smoothing to ensure audio tries to be as
seamless as possible.
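The clamping amounts to something like this (a sketch; the names and
the threshold are illustrative):

/* if the timestamp drifts only slightly from what we expect,
 * snap it to the expected value to keep the output seamless */
uint64_t expected = line->base_timestamp + line->buffered_duration;
uint64_t diff = (ts > expected) ? (ts - expected) : (expected - ts);

if (diff < TS_SMOOTHING_THRESHOLD)
        ts = expected;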
I actually did compile that last commit and misread the number of
failed projects as 0. I'm just going to put the conversion stuff in
video-io.h because it's required there anyway, and video-scaler.h
already depends on video-io.h for the video_format enum.
Add a scaler interface (defaults to swscale), and if a separate output
wants to use a different scale or format than the default output format,
allow a scaler instance to be created automatically for that output,
which will then receive the new scaled output.
If there are, for example, multiple audio outputs and they have
different sample rates or channel counts, this will allow automatic
conversion of that audio to the requested formats/channels/rates (but
only if requested).
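Creating one of these per-output scaler instances looks roughly like
this (a sketch; the names follow the current media-io/video-scaler.h,
and the formats/resolutions are arbitrary):

struct video_scale_info src = {
        .format = VIDEO_FORMAT_NV12, .width = 1920, .height = 1080,
};
struct video_scale_info dst = {
        .format = VIDEO_FORMAT_I420, .width = 1280, .height = 720,
};
video_scaler_t *scaler;

if (video_scaler_create(&scaler, &dst, &src,
                VIDEO_SCALE_FAST_BILINEAR) == VIDEO_SCALER_SUCCESS) {
        /* feed frames through video_scaler_scale(), then clean up */
        video_scaler_destroy(scaler);
}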
- Changed glMapBuffer to glMapBufferRange to allow invalidation. Using
just glMapBuffer alone was causing some unacceptable stalls (a sketch
of the new mapping call follows this list).
- Changed dynamic buffers from GL_DYNAMIC_DRAW to GL_STREAM_DRAW
because I had misunderstood the OpenGL specification
- Added _OPENGL and _D3D11 builtin preprocessor macros to effects to
allow special processing if needed
- Added fmod support to shaders (NOTE: D3D and GL do not function
identically with negative numbers when using this. Positive numbers
however function identically)
- Created a planar conversion shader that converts from packed YUV to
planar 420 right on the GPU without any CPU processing. Reduces
required GPU download size to approximately 37.5% of its normal rate
as well. GPU usage down by 10 entire percentage points despite the
extra required pass.
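The mapping change in the first item above amounts to this (a plain GL
sketch; 'size' and 'vertices' are hypothetical):

/* invalidate the old contents so the driver needn't stall on the GPU */
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
memcpy(ptr, vertices, size);
glUnmapBuffer(GL_ARRAY_BUFFER);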
There were a *lot* of warnings, managed to remove most of them.
Also, put warning flags before C_FLAGS and CXX_FLAGS, rather than after,
as -Wall -Wextra was overwriting flags that came before it.
Originally, the rendering system was designed to only display sources
and such, but I realized there would be a flaw; if you wanted to render
the main viewport in a custom way, or maybe even the entire application
as a graphics-based front end, you wouldn't have been able to do that.
Displays have now been separated in to viewports and displays. A
viewport is used to store and draw sources, a display is used to handle
draw callbacks. You can even use displays without using viewports to
draw custom render displays containing graphics calls if you wish, but
usually they would be used in combination with source viewports at
least.
This requires a tiny bit more work to create simple source displays,
but in the end it's worth it for the added flexibility and options it
brings.
The API used to be designed in such a way to where it would expect
exports for each individual source/output/encoder/etc. You would export
functions for each and it would automatically load those functions based
on a specific naming scheme from the module.
The idea behind this was that I wanted to limit the usage of structures
in the API so only functions could be used. It was an interesting idea
in theory, but this idea turned out to be flawed in a number of ways:
1.) Requiring exports to create sources/outputs/encoders/etc meant that
you could not create them by any other means, which meant that
things like faruton's .net plugin would become difficult.
2.) Export function declarations could not be checked, therefore if you
created a function with the wrong parameters and parameter types,
the compiler wouldn't know how to check for that.
3.) Required overly complex load functions in libobs just to handle it.
It makes much more sense to just have a load function that you call
manually. Complexity is the bane of all good programs.
4.) It required that you have functions of specific names, which looked
and felt somewhat unsightly.
So, to fix these issues, I replaced it with a more commonly used API
scheme, seen commonly in places like kernels and typical C libraries
with abstraction. You simply create a structure that contains the
callback definitions, and you pass it to a function to register that
definition (such as obs_register_source), which you call in the
obs_module_load of the module.
It will also automatically check the structure size and ensure that it
only loads the required values if the structure happened to add new
values in an API change.
The "main" source file for each module must include obs-module.h, and
must use OBS_DECLARE_MODULE() within that source file.
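A minimal module then looks like this (a sketch; the structure field
names and the obs_module_load signature follow the current obs-module
headers, and the 'my_source_*' callbacks are hypothetical):

#include <obs-module.h>

OBS_DECLARE_MODULE()

/* the callback definitions live in a structure... */
static struct obs_source_info my_source = {
        .id           = "my_source",
        .type         = OBS_SOURCE_TYPE_INPUT,
        .output_flags = OBS_SOURCE_VIDEO,
        .get_name     = my_source_name,
        .create       = my_source_create,
        .destroy      = my_source_destroy,
};

/* ...and gets registered when the module loads */
bool obs_module_load(void)
{
        obs_register_source(&my_source);
        return true;
}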
Also, started writing some doxygen documentation in to the main library
headers. Will add more detailed documentation as I go.
The signature detection code when reading UTF-8 files was causing the
UTF-8 strings read from file to allocate more data than they were
supposed to, causing the last 3 bytes to be garbage
- Fill in the rest of the FFmpeg test output code for testing so it
actually properly outputs data.
- Improve the main video subsystem to be a bit more optimal and
automatically output I420 or NV12 if needed.
- Fix audio subsystem insertion and byte calculation. Now it will
seamlessly insert new audio data in to the audio stream based upon
its timestamp value. (Be extremely cautious when using floating
point calculations for important things like this, and always round
and check your values)
- Use 32 byte alignment in case of future optimizations and export a
function to get the current alignment.
- Make os_sleepto_ns return true if slept, false if the time has
already been passed before the call.
- Fix sinewave output so that it actually properly calculates a middle
C sinewave.
- Change the use of row_bytes to linesize (also makes it a bit more
consistent with FFmpeg's naming as well)
- Add planar audio support. FFmpeg and libav use planar audio for many
encoders, so it was somewhat necessary to add support in libobs
itself.
- Improve/adjust FFmpeg test output plugin. The exports were somewhat
messed up (making me rethink how exports should be done). Not yet
functional; it handles video properly, but it still does not handle
audio properly.
- Improve planar video code. The planar video code was not properly
accounting for row sizes for each plane. Specifying row sizes for
each plane has now been added. This will also make it more compatible
with FFmpeg/libav.
- Fixed a bug where callbacks wouldn't create properly in audio-io and
video-io code.
- Implement 'blogva' function to allow for va_list usage with libobs
logging (a sketch appears at the end of these notes).
- Implement texture scaling/conversion/downloading for the main view so
we can finally start getting data to output.
Also, redesign how it works a bit, it will now properly wait one full
frame for each step in the process: rendering the main texture,
scaling the main texture to an output texture, staging/downloading the
output texture, and then outputting that staged data. This way, the
GPU will have more than enough time to fully complete each step.
- Fix a bug with OpenGL plugin's texture staging function. Was using
glBindBuffer instead of what should have been used: glBindTexture.
- Change the naming scheme of the variables in default.effect. It's now
named with the idea of just "color matrix" in mind instead of "yuv
matrix", and instead of DrawRGBToYUV, it's now just DrawMatrix.