Fixes a crash that could happen if any of the mutexes are used in the
create callback, or before the obs_source_init function is called.
I'm not sure how this function ordering slipped by, because it seems
fairly obvious that these mutexes should be created before the create
callback. I had this crash happen to me when creating a WASAPI output
source: the create callback of the WASAPI source creates a thread which
outputs audio, and that thread managed to call obs_source_output_audio
before obs_source_init was called, which in turn caused it to try to
use a null mutex.
When caching a new frame, keep a reference to the frame while copying to
ensure that the frame is not potentially destroyed for whatever reason
while that data is being copied.
The obs_source::async_reset_texture variable can cause a data race
between threads to occur because it could be set to true in one thread
then changed back to false in another thread. This could cause the
async texture to not update its size when it's supposed to, which can
cause a crash or corruption when copying data from a frame of a
differing size.
The solution to this is to:
- Delete the async_reset_texture variable, and make the
set_async_texture_size function change the texture size if the
async_width, async_height, or async_format variables differ from the
frame's width/height/format. Those variables are then only ever set
in the libobs graphics thread.
- Make the cache_video function use separate variables from other
functions to detect a change in size (due to the fact that the texture
size should only be resized in the libobs graphics thread). These
variables are async_cache_width, async_cache_height, and
async_cache_format, which are only ever set in the thread that calls
obs_source_output_video.
How to replicate the data race:
- On OSX, use window capture on a textedit window, then continually
resize the textedit window.
The bilinear lowres scale effect was using 'output' for a variable,
which is apparently a reserved keyword in GLSL on Macs. This slipped
by me because it didn't occur with OpenGL on my Windows machine.
The minimum and maximum color range values were not being set by the
video_format_get_parameters function when full range was in use; I
assume it's because the expected min/max values of full range are
{0.0, 0.0, 0.0} and {1.0, 1.0, 1.0}, so I'm just making it so that it
sets those values if using full range.
Way to replicate the issue (windows):
1.) Create a win-dshow device capture source
2.) Select a device and set it to a YUV color format to enable YUV color
conversion
3.) Select "Full Range" in the color range property.
4.) Restart OBS, the device will then start up, but will display green
due to the fact that the min/max range values were never set, and are
left at their default value of 0.
Due to a bad 'if' expression, when a filter that is not last in the
chain is disabled or being bypassed, it ends up still calling the
filter's video processing function unintentionally.
This fix makes sure that it only calls the appropriate render functions
if the next filter target is the source, otherwise it will just call
obs_source_video_render to process the next filter in the chain.
How to replicate the bug:
1. Create two crop filters on the same source
2. Give each crop filter a different distinct value
3. Disable both crop filters
4. The image would still be cropped
The normal scaling methods cannot sample enough pixels to create an
accurate output image when the output size is under half the base size,
so use the bilinear low resolution scaling effect in that case instead
to ensure a more accurate low resolution image.
If you don't need to see what's displayed, then this is particularly
useful for two reasons:
1. It reduces the number of draw/present calls
2. It can prevent issues with certain hardware setups where rendering on
a monitor hooked up to a separate card can experience slowdowns
This reverts commit 99c674e41f.
These commits were originally added to allow multiple user interfaces to
use the same plugins, but I soon realized that multiple user interfaces
can use multiple libobs versions, so each user interface should have its
own set of plugins to manage. Some user interfaces may not wish to use
certain plugins anyway, so this fixes that issue as well.
This reverts commit 92d800cc18, reversing
changes made to 35a4acede0.
These commits were originally added to allow multiple user interfaces to
use the same plugins, but I soon realized that multiple user interfaces
can use multiple libobs versions, so each user interface should have its
own set of plugins to manage. Some user interfaces may not wish to use
certain plugins anyway, so this fixes that issue as well.
Instead of manually setting the blend state to the desired values, use
gs_reset_blend_state to ensure we have the default blend state (which
for color is the typical srcalpha/invsrcalpha blending operation,
while the alpha channels are added together).
This fixes an issue where filtered scenes would look strange due to the
fact that alpha was not being blended properly.
This allows the ability to separate the blend states of color and alpha.
The default blend state has also changed so that alpha is always added
together to ensure that the destination image always gets an alpha value
that is actually usable after the operation (for render targets).
Old default state:
color source: GS_BLEND_SRCALPHA, color dest: GS_BLEND_INVSRCALPHA
alpha source: GS_BLEND_SRCALPHA, alpha dest: GS_BLEND_INVSRCALPHA
New default state:
color source: GS_BLEND_SRCALPHA, color dest: GS_BLEND_INVSRCALPHA
alpha source: GS_BLEND_ONE, alpha dest: GS_BLEND_ONE
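For illustration, the new default corresponds to setting the blend
functions separately, roughly like this (a sketch; assuming the
separate setter is exposed as gs_blend_function_separate):
	gs_blend_function_separate(
			GS_BLEND_SRCALPHA, GS_BLEND_INVSRCALPHA, /* color */
			GS_BLEND_ONE, GS_BLEND_ONE);             /* alpha */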
This fixes an issue where cache frames would not free at all after
having been allocated with no upper limit on the cached frame size. If
cached frames go unused for a specific period of time, they are
deallocated and removed from the cache.
This is preferable to having an upper cache limit due to the potential
for async delay filtering.
Under certain circumstances the cache could be prone to growing too
large unintentionally. Setting a hard maximum limit should prevent
memory from growing if we suddenly get a lot of frames.
Async frames are only swapped when rendering, or when not visible.
This is a flawed design due to the fact that there are certain
circumstances where the source is neither visible nor currently
rendering.
This is what caused a memory leak when scene items were marked as
invisible, because if a source has an async child source and decides not
to render that source for whatever reason, the child source would not
process the async frames at all, and the cache would just grow.
To fix this, simply moving the async frame cycle to tick fixes the issue
due to the fact that tick is always called regardless of circumstance.
Filtering the video before it's output to the texture means that it
happens after all the processing on the timestamps and such of the video
data. This way, the video filter does not have to worry about what's
currently buffered, and it won't affect timing.
When OBS is shutting down, if for some reason the filter is destroyed
before the parent source is destroyed, it would try to remove itself
from the source, but it would decrement the reference and try to destroy
itself again while already in the process of destroying itself.
So, the solution was simply to make sure that if it's removing itself
from the source that it doesn't decrement its own reference.
The "last" filter that's rendered is technically at filter index 0, so
enumeration needs to be from the last index in the list to the first
index in the list.
When rendering a filter to a texture, the target is empty and unused, so
there's no reason for blending to be on when rendering the filter to a
render target.
obs_source_process_filter tried to do everything in a single function,
but the problem is that effect parameters would not properly be
accounted for due to the way it internally draws, so it was necessary
to split the function in two: you first call
obs_source_process_filter_begin, then you set your effect parameters,
then you finally call obs_source_process_filter_end. This ensures that
the effect parameters are set when the filter is drawn.
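A minimal sketch of the new pattern (the filter struct members and the
'gamma' effect parameter here are hypothetical):
	obs_source_process_filter_begin(filter->context, GS_RGBA,
			OBS_ALLOW_DIRECT_RENDERING);
	gs_effect_set_float(gs_effect_get_param_by_name(filter->effect,
			"gamma"), filter->gamma);
	obs_source_process_filter_end(filter->context, filter->effect,
			filter->width, filter->height);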
When the filter chain finally reaches the source and the last filter in
the chain is set to not render directly (meaning it has to render to
texture), it would not render the source with any effect due to the fact
that it expects a filter to be present.
Adds a source callback function that is used when a filter is removed
from a source. This ensures that any data that might be associated with
that source can be cleaned up if necessary.
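A hypothetical registration might look like this (all names here are
illustrative):
	static void my_filter_remove(void *data, obs_source_t *source)
	{
		/* clean up any data associated with 'source' here */
	}
	struct obs_source_info my_filter = {
		.id            = "my_filter",
		.type          = OBS_SOURCE_TYPE_FILTER,
		.output_flags  = OBS_SOURCE_VIDEO,
		.filter_remove = my_filter_remove,
		/* ... other callbacks ... */
	};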
There are a few more conditions which need to be checked to ensure
whether we can actually bypass or not; in this particular case we are
using the YUV texture shaders, plus the image can also be flipped, so we
can't really use the bypass optimization in those situations.
These functions are primarily for use with filters; filters need to be
able to get the width/height of a target source without necessarily
getting the post-filtered dimensions.
Previously I had it set so that it would set the async_width and
async_height variables to 0,0 if a null frame occurs, but the problem
with this is that it would inadvertently cause the frame cache to clear
when the source starts receiving textures again, because it detects
that as a size change.
So instead of changing async_width and async_height to 0 when a null
frame occurs, change it so that if the source is set to an inactive
state, make obs_source_get_width/_get_height return 0 for their
dimensions instead.
When a frame is processed by a filter, it comes directly from the
source's video frame cache. However, if a filter is using or processing
those frames for whatever reason, there would be no guarantee that the
frames would persist during processing, and frames could eventually be
deallocated unexpectedly, for example when the resolution or format
changes.
So the solution is to implement simple reference counting for the frames
so that the frames will exist until they have been released by any
source or filter that's using them.
Fix a bug where when a source first starts up its async cache, it
unintentionally resets its cache, which means that the first few frames
would be lost.
This optionally specifies that the int/float value should be displayed
with a slider rather than an up/down control. UIs are not necessarily
required to implement it; it's meant to be more of a hint.
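For example, assuming the slider variants take the same arguments as
the regular int/float properties:
	obs_properties_add_int_slider(props, "opacity", "Opacity",
			0, 100, 1);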
If a negative number was specified, it would not parse the negative
sign and thus would not interpret the number as negative, causing
shader/effect parsing to simply fail on the file.
This is particularly important for the filter pipeline in order to
ensure that the original blend state is properly reset when the last
filter is reached.
When render targets are used, they output to the render target inverted
due to the way that OpenGL works. This fixes that issue by inverting
the projection matrix so that it renders the image upside down and
inverting the front face from counterclockwise to clockwise.
The wrong function was being used to recurse through the filter chain in
obs_source_process_filter, obs_source_get_[width/height] would get the
post-effect dimensions rather than the pre-effect dimensions.
The frame is definitely meant to be modifiable if needed, so it
shouldn't be const. I originally meant for these frames to be
duplicated, but with the source video cache that's both unnecessary and
would reduce performance.
If the audio data had the same format/samplerate as the obs audio
subsystem, it would fail to simply destroy the resampler and set it to
NULL, and then any audio data going through would use the resampler that
was being used before that, causing audio to become garbage.
This bug only started appearing when I recently changed the libobs
internal audio subsystem format to non-interleaved floating point, which
is a common format, and thus caused this bug to actually occur more
often.
Previously, when a source had both async audio/video capability, it
would ignore audio until the video started. There's really no need to
do this; when the video starts, it'll just fix up the timing
automatically.
This should fix the case where things like the media source would not be
able to play audio-only files.
Core API functions changed:
-----------------------------
EXPORT bool obs_reset_audio(struct audio_output_info *aoi);
EXPORT bool obs_get_audio_info(struct audio_output_info *aoi);
To:
-----------------------------
EXPORT bool obs_reset_audio(const struct obs_audio_info *oai);
EXPORT bool obs_get_audio_info(struct obs_audio_info *oai);
Core structure added:
-----------------------------
struct obs_audio_info {
	uint32_t samples_per_sec;
	enum speaker_layout speakers;
	uint64_t buffer_ms;
};
Non-interleaved (planar) floating point output is standard with audio
filtering, so to prevent audio filters from having to worry about
different audio format implementations and for the sake of consistency
between user interfaces, make it so that audio is always set to
non-interleaved floating point output.
API changed from:
------------------------
EXPORT void obs_service_apply_encoder_settings(obs_service_t *service,
		obs_encoder_t *video_encoder,
		obs_encoder_t *audio_encoder);
void obs_service_info::apply_encoder_settings(void *data,
		obs_encoder_t *video_encoder,
		obs_encoder_t *audio_encoder);
To:
------------------------
EXPORT void obs_service_apply_encoder_settings(obs_service_t *service,
		obs_data_t *video_encoder_settings,
		obs_data_t *audio_encoder_settings);
void obs_service_info::apply_encoder_settings(void *data,
		obs_data_t *video_encoder_settings,
		obs_data_t *audio_encoder_settings);
These changes make it so that instead of an encoder potentially being
updated more than once with different settings, these functions will be
called with the specific settings being used, and the settings will be
updated according to what's required by the service.
This fixes that design flaw and ensures that there's no case where
obs_encoder_update is called where the settings might not have
service-specific settings applied.
This code was originally meant to skip some checks as an optimization,
but it did not account for async video sources and would call
obs_source_default_render (which is only for synchronous video sources);
this fixes the issue by just calling obs_source_video_render.
The obs_source_get_width and obs_source_get_height functions need to
account for filters that may change their width/height such as a crop
filter or something similar.
As a side effect of this commit, because these functions need to lock
the filter mutex, these functions can no longer be used with a
const-qualified obs_source_t pointer.
This function is somewhat big and is only called conditionally, so
having it be inline is somewhat of a waste if it's not always being
called.
Graphics data has to be freed inside of an active graphics context,
otherwise the resources will not be freed; in the future, we should
probably make sure that the graphics subsystem automatically
asserts/warns about this scenario.
I was iterating through the obs_source::filters array to remove
filters, but every time I removed a filter, it would remove itself from
the array, so the iteration would end up skipping items and leaving
filters unreleased. This just removes the first filter until all
filters have been removed.
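Conceptually, something like this (a sketch; the internal filter array
layout is an assumption):
	while (source->filters.num > 0)
		obs_source_filter_remove(source,
				source->filters.array[0]);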
These signals introduce unnecessary complexity. Instead of emitting a
signal for a specific move direction, just signal that the scene has
been reordered, that way the target just refreshes the list.
For the show/hide and activate/deactivate callbacks, schedule these
callbacks to only be called from within the video thread rather than in
a separate thread. This ensures that any potential graphics activity
that occurs within them is kept in the same thread.
obs_source_inc_showing and obs_source_dec_showing are used to indicate
that a source is showing or no longer being shown in a particular area
of the program outside of the main render view.
One could use an obs_view, but I felt like it was unnecessary because
using an obs_view just to display a single source feels somewhat
superfluous.
The main view does not need to worry about hiding/deactivating sources
when it's being freed here; when the obs context is shutting down in
this section of obs, all the sources are being freed, so there's no
need to worry about deactivating/hiding sources.
When using multiple video encoders together with a single audio encoder,
the audio wouldn't be in sync.
The reason why this occurred is because the dts_usec variable of the
encoder packet (which is based on system time) would always be reset to
a value based upon the dts (which is not guaranteed to be based on
system time) in the apply_interleaved_packet_offset function. This
would then in turn cause it to miscalculate the starting audio/video
offsets, which are required to calculate sync.
So instead of calling that function unnecessarily, separate the check
for whether audio/video has been received in to a new function, and only
start applying the interleaved offsets after audio and video have
actually started up and the starting offsets have been calculated.
Instead of having services automatically apply encoder settings on
initialization (whether the output wants to or not), instead make it
something that must be explicitly called by the developer. There are
cases where the developer may not wish to apply the service-specific
settings, or may wish to override them for whatever reason.
If on Windows, use the Windows UTF conversion functions, because the
existing UTF code is meant for 32-bit wide characters, while the
Windows conversion functions will properly handle 16-bit wide
characters.
Adds an additional search path for UI-independent and
installation-independent plugins for Windows/Mac.
Windows:
%appdata%/obs-plugins/
Mac:
~/Library/Application Support/obs-plugins/
Plugin directory format is [module]/bin and [module]/data.
On Windows, for 32-bit binaries:
[module]/bin/32bit
and for 64-bit binaries:
[module]/bin/64bit
Before:                After:
obs_service_gettype    obs_service_get_type
It seems there was an API function that was missed when we were doing
our big API consistency update. Unsquishes obs_service_gettype to
obs_service_get_type.
I didn't think it would ever need to be exported, but this function is
actually useful for applying settings to properties (to call all of
their update callbacks based upon the settings) without necessarily
having to have an object associated with it.
API changed:
--------------------------
void obs_output_set_audio_encoder(obs_output_t *output,
		obs_encoder_t *encoder);
obs_encoder_t *obs_output_get_audio_encoder(const obs_output_t *output);
obs_encoder_t *obs_audio_encoder_create(const char *id,
		const char *name,
		obs_data_t *settings);
Changed to:
--------------------------
/* 'idx' specifies the track index of the output */
void obs_output_set_audio_encoder(obs_output_t *output,
		obs_encoder_t *encoder,
		size_t idx);
/* 'idx' specifies the track index of the output */
obs_encoder_t *obs_output_get_audio_encoder(const obs_output_t *output,
		size_t idx);
/* 'mixer_idx' specifies the mixer index to capture audio from */
obs_encoder_t *obs_audio_encoder_create(const char *id,
		const char *name,
		obs_data_t *settings,
		size_t mixer_idx);
Overview
--------------------------
This feature allows multiple audio mixers to be used at a time. This
capability was added with surprisingly little extra overhead. Audio
will not be mixed unless it's assigned to a specific
mixer, and mixers will not mix unless they have an active mix
connection.
Mostly this will be useful for being able to separate out specific audio
for recording versus streaming, but will also be useful for certain
streaming services that support multiple audio streams via RTMP.
I didn't want to use a variable number of mixers due to the desire to
reduce heap allocations, so currently I set the limit to 4 simultaneous
mixers; this number can be increased later if needed, but honestly I
feel like it's just the right number to use.
Sources:
Sources can now specify which audio mixers their audio is mixed to; this
can be a single mixer or multiple mixers at a time. The
obs_source_set_audio_mixers function sets the audio mixer which an audio
source applies to. For example, 0xF would mean that the source applies
to all four mixers.
Audio Encoders:
Audio encoders now must specify which specific audio mixer they use when
they encode audio data.
Outputs:
Outputs that use encoders can now support multiple audio tracks at once
if they have the OBS_OUTPUT_MULTI_TRACK capability flag set. This is
mostly only useful for certain types of RTMP transmissions, though may
be useful for file formats that support multiple audio tracks as well
later on.
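A rough sketch of how the pieces fit together (the encoder id and
names are hypothetical):
	/* mix this source into mixers 1 and 2 (bitmask 0x3) */
	obs_source_set_audio_mixers(source, 0x3);
	/* encode audio from mixer index 1... */
	obs_encoder_t *aac = obs_audio_encoder_create("my_aac_encoder",
			"track 2 aac", settings, 1);
	/* ...and assign it to the output's second audio track */
	obs_output_set_audio_encoder(output, aac, 1);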
The comment says "these are different", but doesn't state why.
Actually, I should really rename the output flags so they're not flags,
but instead just "caps", because that's really all that they are.
obs_data_apply is used to apply the changes of a source object to a
destination object. The problem with this, however, is that if sub-objects
are in use, it currently just copies the pointer of the sub-object,
meaning that the source and destination will both share the same
sub-object via reference. If anything modifies that sub-object data,
it'll modify it for both objects, which was not intended.
Instead of copying the object pointer, create a new copy and then
recursively repeat the process to ensure the data is always completely
separate.
Changed:
	char *os_get_config_path(const char *name);
To:
	int os_get_config_path(char *dst, size_t size, const char *name);
Also added:
	char *os_get_config_path_ptr(const char *name);
I don't like this function returning an allocation by default.
Similarly to what was done with the wide character conversion functions,
this function now operates on an array argument, and if you really want
to just get a pointer for convenience, you use the *_ptr version of the
function that clearly indicates that it's returning an allocation.
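A sketch of the two forms (the path and the exact return-value
semantics are assumptions):
	char path[512];
	if (os_get_config_path(path, sizeof(path), "my-app/config") > 0) {
		/* use path */
	}
	char *p = os_get_config_path_ptr("my-app/config");
	/* the *_ptr version returns an allocation the caller must free */
	bfree(p);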
The gs_enum_adapters function is an optional implementation to allow
enumeration of available graphics adapters that can be used with the
program. The ID associated with the adapter can be an index or a hash
depending on the implementation.
Direct3D textures are usually aligned to a specific pitch, so their
internal width is often not equal to the expected output width; this
means that if we want to use it on our texture output, that we must
de-align the texture while copying the texture data.
However, I unintentionally messed up the calculation at some point with
RGBA textures, so the variable size I was supposed to be using was
supposed to be multiplied by 4 (for RGBA), while I was still expecting
single channel data. So, if the texture width was something like 1332,
the source (directx) texture line size would be somewhere at or above
5328 (because it's RGBA), then destination is at 1332 (YUV luma plane),
and it would unintentionally treat 3996 (or 5328 - 1332) bytes as the
unused alignment data. So this fixes that miscalculation.
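The corrected copy is essentially this (sketch):
	uint32_t line_size = width * 4; /* RGBA: 4 bytes per pixel */
	for (uint32_t y = 0; y < height; y++)
		memcpy(dst + y * line_size, src + y * pitch, line_size);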
Refer to https://www.opengl.org/registry/doc/GLSLangSpec.4.10.6.clean.pdf
for a list of current (reserved) keywords.
In the future the shader compiler in libobs-opengl should probably take
care of avoiding those name conflicts (bonus points for transparently
remapping the names of effect parameters).
Instead of returning a valid string value when there are no more strings
available in the list, return NULL to indicate failure. An empty string
should really be allowed to be a valid value for the list.
The return value of os_sleepto_ns is true if it waited to the specified
time, and false if the current time is past the specified time. So it
basically returns true if it successfully waited.
I just didn't check the return value properly here, so it ended up
setting the count of frames to 1 when it overshot, ultimately causing
sync issues.
The temporary unoptimized code we were using before just completely
allocated a new copy of each frame every single time a new async frame
was output by the source plugin. This just creates a cache of frames as
needed for the current format/width/height to minimize the allocation
and deallocation. If new frames come in that are of a different
format/width/height, it'll just clear the cache. This is a fairly
important optimization.
All the async video related stuff usually started with async_*, and
there were two that didn't, so I just renamed them to follow the same
naming convention.
If an async video source stops video for whatever reason, it would get
stuck on the last frame that was played. This was particularly awkward
when I wanted to give the user the ability to deactivate a source such
as a webcam because it would get stuck on the last frame.
This allows us to change the visible UI name of a property after it's
been created (particularly for a case where I want to change an
'Activate' button to 'Deactivate').
A slightly refactored version of R1CH's crash handler, which allows
crash handling for Windows, providing stack traces of all threads and a
list of all loaded modules. Also shows the processor, Windows version,
and current libobs version.
When the encoder is set to scale to a different resolution than the obs
output resolution, make sure it uses the current video colorspace and
range by default.
I actually kind of hate how strstr returns a non-const even though it
takes a const parameter, but I can understand why they made it that way.
They really should have split it into two functions though, one const
and one non-const or something. But alas, ultimately for a C programmer
who knows what they're doing it isn't a huge deal.
This adds support for the Windows 8+ output duplicator feature which
allows the efficient capturing of a specific monitor connected to the
currently used device.
Previously, the design for the interaction between the encoder thread
and the graphics thread was that the encoder thread would signal to the
graphics thread when to start drawing each frame. The original idea
behind this was to prevent mutually cascading stalls of encoding or
graphics rendering (i.e., if rendering took too long, then encoding
would have to catch up, then rendering would have to catch up again, and
so on, cascading upon each other). The ultimate goal was to prevent
encoding from impacting graphics and vice versa.
However, eventually it was realized that there were some fundamental
flaws with this design.
1. Stray frame duplication. You could not guarantee that a frame would
render on time, so sometimes frames would unintentionally be lost if
there was any sort of minor hiccup or, I'm guessing, if the thread
took too long to be scheduled.
2. Frame timing in the rendering thread was less accurate. The only
place where frame timing was accurate was in the encoder thread, and
the graphics thread was at the whim of thread scheduling. On higher
end computers it was typically fine, but it was just generally not
guaranteed that a frame would be rendered when it was supposed to be
rendered.
So the solution (originally proposed by r1ch and paibox) is to instead
keep the encoding and graphics threads separate as usual, but instead of
the encoder thread controlling the graphics thread, the graphics thread
now controls the encoder thread. The encoder thread keeps a limited
cache of frames, then the graphics thread copies frames in to the cache
and increments a semaphore to schedule the encoder thread to encode that
data.
In the cache, each frame has an encode counter. If the frame cache is
full (e.g., the encoder taking too long to return frames), it will not
cache a new frame, but instead will just increment the counter on the
last frame in the cache to schedule that frame to encode again, ensuring
that frames are on time and reducing CPU usage by lowering video
complexity. If the graphics thread takes too long to render a frame,
then it will add that frame with the count value set to the total amount
of frames that were missed (actual legitimately duplicated frames).
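In very rough terms, using libobs's semaphore primitives (the cache
helpers here are hypothetical):
	/* graphics thread: cache the frame, or bump the last frame's
	 * encode counter if the cache is full, then wake the encoder */
	cache_frame_or_increment_counter(&cache, frame);
	os_sem_post(encode_sem);
	/* encoder thread */
	while (encoding_active) {
		os_sem_wait(encode_sem);
		encode_next_cached_frame(&cache);
	}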
Because the cache gives many frames of breathing room for the encoder to
encode frames, this design helps improve results especially when using
encoding presets that have higher complexity and CPU usage, minimizing
the risk of needlessly skipped or duplicated frames.
I also managed to sneak in what should be a bit of an optimization to
reduce copying of frame data, though how much of an optimization it
ultimately ends up being is debatable.
So to sum it up, this commit increases accuracy of frame timing,
completely removes stray frame duplication, gives better results for
higher complexity encoding presets, and potentially optimizes the frame
pipeline a tiny bit.
The boolean variables which stored whether frames have been
rendered/downloaded/converted/etc were not being reset when video
restarted, causing frames to not be sent in the correct order whenever
video was reset. This could lead to minor desync of video/audio.
In certain circumstances where the output was stopping, and where data
took a long enough time to send (such as when using an encoding preset
that causes high CPU usage), the output would sometimes still send data
even after it was stopped, typically causing the output to crash.
I unintentionally made it use obs_source::sample_info instead of using
the actual target channel count, which is designated by the OBS output
sampler info. obs_source::sample_info is actually used to indicate the
currently set sampler information for incoming samples. So if a source
is outputting 5.1 channel 48khz audio, and OBS is running at stereo
44.1khz, then the obs_source::sample_info value would be set to
5.1/48khz, not the other way around. It indicates what the source
itself is running at, not what OBS is running at.
I suppose the variable needs a better name because even I used it
incorrectly despite actually having been the one who wrote it.
The copy_audio_data function really shouldn't be inlined because it's
being called twice. It's somewhat unnecessary; I think I left it inline
by accident.
This changes the way source volume handles transitioning between being
active and inactive states.
The previous way that transitioning handled volume was that it set the
presentation volume of the source and all of its sub-sources to 0.0 if
the source was inactive, and 1.0 if active. Transition sources would
then also set the presentation volume for sub-sources to whatever their
transitioning volume was. However, the problem with this is that the
design didn't take into account whether the source or its sub-sources
were active anywhere else; because of that, it would break if that ever
happened, and I didn't realize that when I was designing it.
So instead, this completely overhauls the design of handling
transitioning volume. Each frame, it'll go through all sources and
check whether they're active or inactive and set the base volume
accordingly. If transitions are currently active, it will actually walk
the active source tree and check whether the source is in a
transitioning state somewhere.
- If the source is a sub-source of a transition, and it's not active
outside of the transition, then the transition will control the
volume of the source.
- If the source is a sub-source of a transition, but it's also active
outside of the transition, it'll defer to whichever is louder.
This also adds a new callback to the obs_source_info structure for
transition sources, get_transition_volume, which is called to get the
transitioning volume of a sub-source.
The reason to keep a reference counter for transitions is due to an
optimization I'm planning on when calculating transition volumes. I'm
planning on walking the source tree to be able to calculate the current
base volume of a source, but *only* if there are transitions active,
because the only time that the volume can be anything other than 1.0
or 0.0 is when there are active transitions, which may change the base
volume of a source.
When the presentation volume is set for a source, it's set for all of
its children and their children. The original intention for doing this
was to be able to use it for transitioning, but honestly it's just bad
design, and I feel there are better ways to handle transitioning volume.
Changed the design from using obs_source::enum_refs to just simply
preventing infinite source recursion in general, rather than allowing it
through the enum_refs variable. obs_source_add_child has been changed
so that it now returns a boolean, and if the function fails, it means
that the child cannot be added due to that potential recursion.
Two integers are needlessly converted to floating points for what should
be an integer operation. One of those floats is then used for another
integer operation later, where the original integer value should have
been used. So essentially there was an int -> float -> int conversion
going on, which could lead to potential loss of data due to floating
point precision.
There were also some general 64bit -> 32bit conversion warnings.
obs_encoder_getdisplayname declaration was not changed to match the
definition (obs_encoder_get_display_name) when the API consistency
update occurred.
If an encoder did not possess any SEI data, it would never send data at
all because the sent_first_packet wasn't set despite the first packet
being sent.
Added obs_avc_keyframe, which returns whether an AVC packet is a
keyframe. This function is particularly useful when writing custom
encoder plugins.
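For example, a custom encoder might fill in its packet's keyframe flag
like this (sketch):
	packet->keyframe = obs_avc_keyframe(packet->data, packet->size);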
I encountered some cases where I needed to use these enumerations
outside of the file, so this allows other modules to use AVC
enumerations without having to redefine them each time. Especially
useful for custom encoder modules.
I neglected to surround some files with extern "C", so if something
written in C++ used those headers, the function declarations would get
C++ name mangling applied, breaking linkage with the C exports.
This adds bicubic and lanczos scaling capability to libobs to improve
scaling quality and sharpness when the output resolution has to be
scaled relative to the base resolution. Bilinear is also available,
although bilinear has rather poor quality and causes scaling to appear
blurry.
If the output resolution is close to the base resolution, then bilinear
is used instead as an optimization, as there's no need to use these
shaders if scaling is not in use.
The Bicubic and Lanczos effects are also exposed via exported functions
to allow the ability to use those shaders in plugin modules if desired.
The API change adds a variable 'scale_type' to the obs_video_info
structure that allows the user interface to choose what type of scaling
filter should be used.
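For example (a sketch; the remaining obs_video_info members still have
to be filled in):
	struct obs_video_info ovi;
	/* ... fill in graphics module, fps, format, etc. ... */
	ovi.base_width    = 1920;
	ovi.base_height   = 1080;
	ovi.output_width  = 1280;
	ovi.output_height = 720;
	ovi.scale_type    = OBS_SCALE_LANCZOS;
	obs_reset_video(&ovi);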
Remove the calculation of volume levels and the corresponding signal
from obs_source since this is now handled in the volume meter.
Code that is interested in the volume levels can either use the
volmeter provided from obs_audio_controls or use the audio_data signal
to gain access to the raw audio data.
Signal updated volume levels when they become available in the volume
meter. The frequency of the updates can be adjusted by setting a
different update interval.
Remove the signal handler for the volume_level signal of audio
sources from the volume meter in anticipation of using the levels
calculated in the volume meter itself.
Add a property to the volume meter that specifies the length of the
interval in which the audio data should be sampled before the
audio_levels signal is emitted.
This adds a new signal to (audio) sources which is emitted whenever new
audio data is received from the source. This enables other code that is
interested in the raw audio data to directly access it when it becomes
available.
This was an important change because we were originally using an
hard-coded 709/partial range color matrix for the output, which was
causing problems for people wanting to use different formats or color
spaces. This will now automatically generate the color matrix depending
on the format, color space, and range, or use an identity matrix if the
video format is RGB instead of YUV.
Just for a quick background: D3D's fmod intrinsic is very imprecise.
Naturally floating points aren't precise at all, and when the numbers
you're dealing with become very large, it can often be off by 0.1 or
more.
However, apparently 0.1 isn't enough of an offset to ensure a proper
value when using the fmod intrinsic and then flooring the value. 0.2
seems to fix the issue and make the image display properly.
On certain GPUs, if you don't flush and the window is minimized it can
endlessly accumulate memory due to what I'm assuming are driver design
flaws (though I can't know for sure). The flush seems to prevent this
from happening, at least from my tests. It would be nice if this
weren't necessary.
This replaces the old code for the audio meter that was using
calculations in two different places with the new audio meter api.
The source signal will now emit simple levels instead of dB values,
in order to avoid dB conversion and calculation in the source.
The GUI on the other hand now expects simple position values from
the volume meter api with no knowledge about dB calculus either.
That way all code that handles those conversions is in one place,
with the option to easily add new mappings that can be used
everywhere.
This adds a volume meter object to libobs that can be used by the GUI
or plugins to convert the raw audio level data from sources to values
that can easily be used to display the audio data.
The volume meter object will use the same mapping functions as the
fader object to map dB levels to a scale.
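Assumed usage from a user interface (a sketch based on the
descriptions above):
	obs_volmeter_t *meter = obs_volmeter_create(OBS_FADER_LOG);
	obs_volmeter_attach_source(meter, source);
	obs_volmeter_set_update_interval(meter, 50); /* in milliseconds */
	/* ... react to the volume meter's audio_levels signal ... */
	obs_volmeter_detach_source(meter);
	obs_volmeter_destroy(meter);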
In older versions of Visual Studio 2013, Microsoft's WORTHLESS C compiler
has a bug where it will, almost at random, not be able to handle having
variables declared in the middle of a function and give the warning:
"illegal use of this type as an expression". It was fixed in recent
VS2013 updates, but I'm not about to force everyone to update to it.
Because a vec3 structure can contain a __m128 variable and not the
expected three floats x, y, and z, you must use vec3_set when
setting a value for a vec3 structure to ensure that it uses the proper
intrinsics internally if necessary.
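In other words (sketch):
	struct vec3 v;
	vec3_set(&v, 1.0f, 2.0f, 3.0f); /* not v.x = 1.0f; v.y = ...; */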
This adds functions for piping a command line program's stdin or stdout.
Note however that this is unidirectional only.
This will be especially useful later on when implementing MP4 output,
because MP4 output has to be piped to prevent unexpected program
termination from corrupting the file.
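A sketch of reading a child process's stdout (the command string is
arbitrary, and the "r"/"w" mode semantics are an assumption):
	os_process_pipe_t *pp = os_process_pipe_create("some-program", "r");
	if (pp) {
		uint8_t buf[4096];
		size_t len = os_process_pipe_read(pp, buf, sizeof(buf));
		/* ... consume len bytes ... */
		os_process_pipe_destroy(pp);
	}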
This adds a new library of audio control functions mainly for use in
GUIs. For now it includes an implementation of a software fader that can
be attached to sources in order to easily control the volume.
The fader can translate between fader-position, volume in dB and
multiplier with a configurable mapping function.
Currently only a cubic mapping (mul = fader_pos ^ 3) is included, but
different mappings can easily be added.
Due to libobs saving/restoring the source volume from the multiplier,
the volume levels for existing source will stay the same, and live
changing of the mapping will work without changing the source volume.
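Assumed usage (a sketch):
	obs_fader_t *fader = obs_fader_create(OBS_FADER_CUBIC);
	obs_fader_attach_source(fader, source);
	obs_fader_set_deflection(fader, 0.5f); /* fader position, 0.0-1.0 */
	float db = obs_fader_get_db(fader);    /* resulting volume in dB */
	obs_fader_destroy(fader);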
This function greatly simplifies the use of effects by making it so you
can call this function in a simple loop. This reduces boilerplate and
makes drawing with effects much easier. The gs_effect_loop function
will now automatically handle all the functions required to do drawing.
---------------------
Before:
gs_technique_t *technique = gs_effect_get_technique(effect, "technique");
size_t passes = gs_technique_begin(technique);
for (size_t pass = 0; pass < passes; pass++) {
	gs_technique_begin_pass(technique, pass);
	[draw]
	gs_technique_end_pass(technique);
}
gs_technique_end(technique);
---------------------
After:
while (gs_effect_loop(effect, "technique")) {
	[draw]
}
If you look at the previous commits, you'll see I had added
obs_source_draw before. For custom drawn sources in particular, each
time obs_source_draw was called, it would restart the effect and its
passes for each draw call, which was not optimal. It should really use
the effect functions for that. I'll have to add a function to simplify
effect usage.
I also realized that including the color matrix parameters in
obs_source_draw made the function kind of messy to use; instead,
separating the color matrix stuff out to
obs_source_draw_set_color_matrix feels a lot more clean.
On top of that, having the ability to set the position would be nice to
have as well, rather than having to mess with the matrix stuff each
time, so I also added that for the sake of convenience.
obs_source_draw will draw a texture sprite, optionally of a specific
size and/or at a specific position, as well as optionally inverted. The
texture used will be set to the 'image' parameter of whatever effect is
currently active.
obs_source_draw_set_color_matrix will set the color matrix value if the
drawing requires color matrices. It will set the 'color_matrix',
'color_range_min', and 'color_range_max' parameters of whatever effect
is currently active.
Overall, these feel much more clean to use than the previous iteration.
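A sketch of the intended call pattern (assuming a size of 0 means "use
the texture's own size"):
	obs_source_draw_set_color_matrix(&color_matrix, &range_min,
			&range_max);
	obs_source_draw(texture, 0, 0, 0, 0, false);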
This function simplifies drawing textures for sources in order to help
reduce boilerplate code. If a source is a custom drawn source, it will
automatically set up the effect to draw the sprite. If it's not a
custom drawn source, it will simply draw the sprite as per normal. If
the source uses a specific color matrix, it will also handle that as
well.
When the image data is copied into a texture with flipping set to true,
each row has to be copied into the (height - row - 1)th row instead of
the row with the same number. Otherwise it will just create an
unflipped copy.
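In other words, the fixed copy looks something like this (sketch):
	for (uint32_t y = 0; y < height; y++)
		memcpy(dst + (height - y - 1) * linesize,
				src + y * linesize, linesize);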
Apparently the audio isn't guaranteed to start up past the first video
frame, so it would trigger that assert (which I'm glad I put in). I
didn't originally have this happen when I was testing because my audio
buffering was not at the default value and didn't trigger it to occur.
A blunder on my part, and once again a fine example of why you should
never make assumptions about possible code paths.
This moves the 'flags' variable from the obs_source_frame structure to
the obs_source structure, and allows user flags to be set for a specific
source. Having it set on the obs_source_frame structure didn't make
much sense.
OBS_SOURCE_UNBUFFERED makes it so that the source does not buffer its
async video output in order to try to play it on time. In other words,
frames are played as soon as possible after being received.
Useful when you want a source to play back as quickly as possible
(webcams, certain types of capture devices).
This reverts commit c3f4b0f018.
The obs_source_frame should not need to take flags to do this. This
shouldn't be a setting associated with the frame, but rather a setting
associated with the source itself. This was the wrong approach to
solving this particular problem.
This bug would happen if audio packets started being received before
video packets. It would erroneously cause audio packets to be
completely thrown away, and in certain cases would cause audio and video
to start way out of sync.
My original intention was "don't accept audio until video has started",
but instead mistakenly had the effect of "don't start audio until a
video packet has been received". This was originally was intended as a
way to handle outputs hooking in to active encoders and compensating
their existing timestamp information.
However, this made me realize that there was a major flaw in the design
for handling this, so I basically rewrote the entire thing.
Now, it does the following steps when inserting packets:
- Insert packets into the interleaved packet array
- When both audio/video packets are received, prune packets up until the
point in which both audio/video start at the same time
- Resort the interleaved packet array
I have tested this code extensively and it appears to be working well,
regardless of whether or not the encoders were already active with
another output.
In video-io.c, video frames could skip, but what would happen is the
frame's timestamp would repeat for the next frame, giving the next frame
a non-monotonic timestamp, and then jump. This could mess up syncing
slightly when the frame is finally given to an output.
Apparently I unintentionally typed received_video = false twice,
instead of setting received_video once and received_audio once.
This fixes a bug where audio would not start up again on an output that
had recently started and then stopped.
When the output sets a new audio/video encoder, it was not properly
removing itself from the previous audio/video encoders it was associated
with. It was erroneously removing itself from the encoder parameter
instead.
At the start of each render loop, it would get the timestamp, and then
it would then assign that timestamp to whatever frame was downloaded.
However, the frame that was downloaded usually occurred a number of
frames ago, so it would assign the wrong timestamp value to that frame.
This fixes that issue by storing the timestamps in a circular buffer.
If audio timestamps are within the operating system timing threshold,
always use those values directly as a timestamp, and do not apply the
regular jump checks and timing adjustments that normally occur.
This potentially fixes an issue with plugins that use OS timestamps
directly as timestamp values for their audio samples, and bypasses the
timing conversions to system time for the audio line and uses it
directly as the timestamp value. It prevents those calculations from
potentially affecting the audio timestamp value when OS timestamps are
used.
For example, if the first set of audio samples from the audio source
came in delayed, while the subsequent samples were not delayed, those
first samples could have potentially inadvertently triggered the timing
adjustments, which would affect all subsequent audio samples.
This combines the 'direct' timestamp variance threshold with the maximum
timestamp jump threshold (or rather, just removes the max timestamp
jump threshold and uses the timestamp variance threshold both for
timestamp jumps and for detecting OS timestamps).
The reason why this was done was because a timestamp jump could occur at
a higher threshold than the threshold used for detecting OS timestamps
within a certain threshold. If timestamps got between those two
thresholds it kind of became a weird situation where timestamps could be
sort of 'stuck' back or forward in time more than intended. Better to
be consistent and use the same threshold for both values.
Add 'flags' member variable to obs_source_frame structure.
The OBS_VIDEO_UNBUFFERED flag causes the video to play back as soon as
it's received (in the next frame playback), causing it to disregard the
timestamp value for the sake of video playback (however, note that the
video timestamp is still used for audio synchronization if audio is
present on the source as well).
This is partly a convenience feature, and partly a necessity for certain
plugins (such as the linux v4l plugin) where timestamp information for
the video frames can sometimes be unreliable.
70 milliseconds is a bit too high for the default audio timestamp
smoothing threshold. The full range of error thus becomes 140
milliseconds, which is a bit more than necessary to worry about. For
the time being, I feel it may be worth it to try 50 milliseconds.
The graphics subsystem was not being freed here, for example if a
required effect failed to compile it would still successfully have the
graphics subsystem sans required effect. The graphics subsystem should
be completely shut down if required libobs effects fail to compile.
Due to a small error in the timestamp smoothing code, the timestamp of
audio packets that arrived too early was always set to the next
expected timestamp, even if the difference was bigger than the
smoothing threshold. This would cause OBS to simply append all audio
data to the buffer even if the real timestamp was way smaller than the
next one that was expected.
This should reduce corruption problems with, for example, the
pulseaudio plugin, which resends data under certain conditions.
As stated in the sysinfo manpage, the totalram field in the sysinfo
structure is in mem_unit sizes since Linux 2.3.23. To get the actual
memory in the system the totalram value has to be multiplied with the
mem_unit size.
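That is (sketch):
	#include <sys/sysinfo.h>
	struct sysinfo info;
	if (sysinfo(&info) == 0) {
		uint64_t total = (uint64_t)info.totalram * info.mem_unit;
	}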
obs_source_update_properties should be called by sources when property
values change, e.g. a capture device source would use this when it
detects a new capture device (in case its properties contain a list of
available capture devices or similar).
This allows for easier comparison between two obs_data_t, and it will
make
	const char *json1 = obs_data_get_json(data);
	obs_data_t *data_ = obs_data_create_from_json(json1);
	const char *json2 = obs_data_get_json(data_);
produce the same string in json1 and json2.
This fixes a minor flaw with the API where data always had to be
mutable to be usable by the API.
Functions that do not modify the fundamental underlying data of a
structure should be marked as constant, both for safety and to signify
that the parameter is input only and will not be modified by the
function using it.
Typedef pointers are unsafe. If you do:
	typedef struct bla *bla_t;
then you cannot use it as a constant, such as const bla_t, because the
constant will apply to the pointer itself rather than to the underlying
data. I admit this was a fundamental mistake that must be corrected.
All typedefs that were pointer types will now have their pointers
removed from the type itself, and the pointers will be used when they
are actually used as variables/parameters/returns instead.
This does not break ABI though, which is pretty nice.
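A quick illustration of the const problem (bla_t2 is a hypothetical
non-pointer typedef for contrast):
	typedef struct bla *bla_t;
	const bla_t a;   /* 'struct bla *const': the pointer is const */
	typedef struct bla bla_t2;
	const bla_t2 *b; /* 'const struct bla *': the data is const */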
This is not a COM pointer; it should not release/close the handle when
the & operator is used, it should only return the handle value. Clearing
is only used on assignment.
This helps ensure that an asynchronous video source is played as close
to its framerate as possible, reduces the risk of duplication as much
as possible, and helps keep playback as smooth as possible.
This prevents multiple needless calls to obs_source_get_frame and other
functions. If the texture has already been processed, then just render
it as-is in any subsequent calls to obs_source_video_render.
This is actually unnecessary now that there's a hard limit on the
maximum offset in which audio can be inserted.
This also assumes too much about the audio; it assumes audio is always
on, whereas with some devices (such as the Elgato) audio is not on
until the stream starts, and when the video has already incremented the
counter.
Audio that goes below the minimum expected timing (current time -
buffering time) is automatically removed; however, delayed audio was
never removed regardless of its delay. This puts a hard cap of 6
seconds from the current time on the maximum delay that audio can
have. This will also
prevent the circular buffer from dynamically growing too large.
Doing timestamp smoothing in obs-source.c is good because timestamps can
typically operate on a different timebase, however, obs-source.c can
also change that time base dynamically (such as with async video and
unexpected timestamp jumps), so in order to ensure that audio is
seamless in the output as well, perform timestamp smoothing in
audio-io.c as well just as an extra precautionary measure.
If the audio didn't start at the 0 timestamp, it would misinterpret it
as a timestamp jump because obs_source::next_audio_ts_min is set to 0 on
creation. Timestamp starting values should be allowed to start at any
arbitrary value.
This makes it easier to do two things:
1.) Get the skipped frames count relative to each specific output
2.) Make it so that getting the 'current' log will always contain
information about skipped frames. Before, you'd have to force the
user to restart the program and get the last log, which was really
annoying when you just wanted to see how the encoders were
performing.
It would try to move data from the old pointer even if the pointer was
changed via realloc, which would cause it to copy data from freed
memory. Instead, just get the position of the data and call memmove to
move it up.
On release of obs_data, if the default/autoselect values pointed toward
a sub-object or a sub-array, it would look up the data for the regular
user value. (Palana must have forgot to change these functions around
when adding the default/autoselect functionality)
Multiplication of the matrices was being done in the wrong direction.
This caused source transformations to come out looking incorrect, for
example the linux-xshm source's cursor would not be drawn correctly or
in the right position if the source was moved/scaled/rotated. The
problem just turned out to be that the gs_matrix_* functions were
multiplying in the wrong direction. Reverse the direction of
multiplication, and the problem is solved.
Adds the following function:
------------------------------
obs_properties_add_font
This function creates a 'font' property to allow selection of a system
font. Implementation by the UI should treat the setting as an obs_data
sub-object with four sub-items:
- face: face name (string)
- style: style name (string)
- size: size (integer)
- flags: font flags (integer)
'flags' can be any combination of the following values:
- OBS_FONT_BOLD
- OBS_FONT_ITALIC
- OBS_FONT_UNDERLINE
- OBS_FONT_STRIKEOUT
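Hypothetical usage (the property name and settings access are
illustrative):
	obs_properties_add_font(props, "font", "Font");
	/* reading the value back as a sub-object */
	obs_data_t *font = obs_data_get_obj(settings, "font");
	const char *face = obs_data_get_string(font, "face");
	long long size   = obs_data_get_int(font, "size");
	obs_data_release(font);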
API functions added:
-----------------------------------------------
obs_output_set_preferred_size
obs_output_get_width
obs_output_get_height
obs_encoder_set_scaled_size
obs_encoder_get_width
obs_encoder_get_height
These functions allow for easier means of setting a custom resolution on
an output or encoder.
If an output uses an encoder and you set the preferred width/height
using the output, then the output will attempt to set the scaled
width/height for the encoder it's currently using.
Outputs and encoders now should use these functions to determine the
width/height of the raw frame data instead of using the video-io
functions.
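For example (sketch):
	obs_encoder_set_scaled_size(encoder, 1280, 720);
	uint32_t width  = obs_encoder_get_width(encoder);
	uint32_t height = obs_encoder_get_height(encoder);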
This is sort of hard to explain: the scale_video_output function was
overwriting the current frame. If scaling was disabled, it would do
nothing, and return success, and all would be well. If it was enabled,
it would then call the scaler, and then replace the contents of the
'data' function parameter with the scaled frame data. The problem with
this is that I was passing video_output::cur_frame directly, which
overwrites its previous value with the scaled frame data. Then if
cur_frame was not updated on time, it would end up trying to scale the
previously scaled image; in other words, it would call the video
scaler with the same frame for both the source and destination.
So the simple fix was to simply use a local variable and pass that in as
a parameter to prevent this bug from occurring.
For the sake of consistency, renamed these functions to include
_value at the end.
Renamed:                        To:
-------------------------------------------------------
obs_data_has_default            obs_data_has_default_value
obs_data_has_autoselect         obs_data_has_autoselect_value
obs_data_item_has_default       obs_data_item_has_default_value
obs_data_item_has_autoselect    obs_data_item_has_autoselect_value
Instead of having functions like obs_signal_handler() that can fail to
properly specify their actual intent in the name (does it signal a
handler, or does it return a signal handler?), always prefix functions
that are meant to get information with 'get' to make its functionality
more explicit.
Previous names:              New names:
-----------------------------------------------------------
obs_audio                    obs_get_audio
obs_video                    obs_get_video
obs_signalhandler            obs_get_signal_handler
obs_prochandler              obs_get_proc_handler
obs_source_signalhandler     obs_source_get_signal_handler
obs_source_prochandler       obs_source_get_proc_handler
obs_output_signalhandler     obs_output_get_signal_handler
obs_output_prochandler       obs_output_get_proc_handler
obs_service_signalhandler    obs_service_get_signal_handler
obs_service_prochandler      obs_service_get_proc_handler
API Removed:
- graphics_t obs_graphics();
Replaced With:
- void obs_enter_graphics();
- void obs_leave_graphics();
Description:
obs_graphics() was somewhat of a pointless function. The only time
that it was ever necessary was to pass it as a parameter to
gs_entercontext() followed by a subsequent gs_leavecontext() call after
that. So, I felt that it made a bit more sense just to implement
obs_enter_graphics() and obs_leave_graphics() functions to do the exact
same thing without having to repeat that code. There's really no need
to ever "hold" the graphics pointer, though I suppose that could change
in the future so having a similar function come back isn't out of the
question.
Still, this at least reduces the amount of unnecessary repeated code for
the time being.
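Typical usage, e.g. creating a texture outside of the libobs graphics
thread (shown with the present-day gs_* function names):

    obs_enter_graphics();
    gs_texture_t *tex = gs_texture_create(width, height, GS_RGBA, 1,
                                          NULL, GS_DYNAMIC);
    obs_leave_graphics();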
Changed:
- obs_source_gettype
To:
- enum obs_source_type obs_source_get_type(obs_source_t source);
- const char *obs_source_get_id(obs_source_t source);
This function was inconsistent for a number of reasons. First, it
returns both the ID and the type of source (input/transition/filter),
which is inconsistent with the name "get type". Secondly, it used the
'squishy' naming convention, which has turned out to be bad practice
and causes inconsistencies. So it's now replaced with two functions
that just return the type and the ID.
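Usage then looks like:

    enum obs_source_type type = obs_source_get_type(source);
    const char *id = obs_source_get_id(source);

    if (type == OBS_SOURCE_TYPE_FILTER)
        blog(LOG_INFO, "filter with id '%s'", id);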
Prefix with obs_ for the sake of consistency
Renamed enums:
- order_movement (now obs_order_movement)
Affected functions:
- obs_source_filter_setorder
- obs_sceneitem_setorder
Renamed functions:
- obs_source_getframe (rename to obs_source_get_frame)
- obs_source_releaseframe (rename to obs_source_release_frame)
For the sake of consistency and helping to get rid of the "squishy
function name" issue
The bug here is that when conversion is active, the source video frame
is initialized with the destination height/width/format instead of the
source height/width/format.
With the recent change to module handling by BtbN, I felt that having
this information might be useful in case someone is actually using make
install to set up their libraries.
This functionality can now be handled automatically because locale can
now be freed separately from obs_module_unload with
obs_module_free_locale, which is called automatically when the module is
being freed.
Changed API:
- char *obs_find_plugin_file(const char *sub_path);
Changed to: char *obs_module_file(const char *file);
Changed it so you no longer need to specify a sub-path such as:
obs_find_plugin_file("module_name/file.ext")
Instead, now automatically handle the module data path so all you need
to do is:
obs_module_file("file.ext")
- int obs_load_module(const char *name);
Changed to: int obs_open_module(obs_module_t *module,
const char *path,
const char *data_path);
bool obs_init_module(obs_module_t module);
Change the module loading API so that if the front-end chooses, it can
load modules directly from a specified path, and associate a data
directory with it on the spot.
The module will not be initialized immediately; obs_init_module must
be called on the module pointer in order to fully initialize the
module. This is done so a module can be disabled by the front-end if
it so chooses.
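A front-end might use the pair roughly like this (MODULE_SUCCESS is the
success code as spelled in the current header, and the disabled-check
helper is hypothetical):

    obs_module_t *module = NULL;

    if (obs_open_module(&module, path, data_path) == MODULE_SUCCESS) {
        /* the front-end decides here whether to enable the module */
        if (!module_disabled_by_user(module))
            obs_init_module(module);
    }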
New API:
- void obs_add_module_path(const char *bin, const char *data);
These functions allow you to add new module search paths, which you can
then search through, or optionally just load all modules from. If the
string %module% is included in a path, it is replaced with the module's
name when the path is used as a lookup. Data paths are now directly
added to the module's internal
storage structure, and when obs_find_module_file is used, it will look
up the pointer to the obs_module structure and get its data directory
that way.
Example:
obs_add_module_path("/opt/obs/my-modules/%module%/bin",
"/opt/obs/my-modules/%module%/data");
This would cause it to additionally look for the binary of a
hypothetical module named "foo" at /opt/obs/my-modules/foo/bin/foo.so
(or libfoo.so), and then look for the data in
/opt/obs/my-modules/foo/data.
This gives the front-end more flexibility for handling third-party
plugin modules, or handling all plugin modules in a custom way.
- void obs_find_modules(obs_find_module_callback_t callback, void
*param);
This searches the existing paths for modules and calls the callback
function when any are found. Useful for plugin management and custom
handling of the paths by the front-end if desired.
- void obs_load_all_modules(void);
Search through the paths and both loads and initializes all modules
automatically without custom handling.
- void obs_enum_modules(obs_enum_module_callback_t callback,
void *param);
Enumerates currently opened modules.
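For example, listing available modules with obs_find_modules (the
obs_module_info fields here are taken from the current header and should
be treated as assumptions):

    static void found_module(void *param,
                             const struct obs_module_info *info)
    {
        blog(LOG_INFO, "module binary: %s, data: %s",
             info->bin_path, info->data_path);
    }

    obs_find_modules(found_module, NULL);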
The version macro that modules are compiled against and the actual core
version in use may differ, so this provides a way to compare them and
check for compatibility issues later on.
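For instance, a rough compatibility check (macro/function names as in
the current headers):

    uint32_t core_ver = obs_get_version(); /* version of the running core */

    /* LIBOBS_API_VER is baked into the module at compile time; the top
     * 8 bits hold the major version */
    if ((core_ver >> 24) != (LIBOBS_API_VER >> 24))
        blog(LOG_WARNING, "module built against a different major API");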
Changed API functions:
libobs: obs_reset_video
Before, video initialization returned a boolean, but "failed" is too
little information; if it fails due to lack of device capabilities or
bad video device parameters, the front-end needs to know that.
The OBS Basic UI has also been updated to reflect this API change.
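UI code now switches on the returned code instead of a bool (the
OBS_VIDEO_* constant names are as spelled in the current obs.h):

    struct obs_video_info ovi;
    /* ... fill out ovi ... */

    int code = obs_reset_video(&ovi);
    if (code != OBS_VIDEO_SUCCESS) {
        /* e.g. OBS_VIDEO_NOT_SUPPORTED vs. OBS_VIDEO_INVALID_PARAM can
         * produce different, more helpful error messages */
    }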
It's a sad day when I realize that I did not add any null pointer
checking to any of the functions in this file. Discovered it while
checking all the different languages; it happened when there was a
missing locale file for a certain module whose language hadn't been
uploaded yet.
These macros are used as easy helper functions to load/unload module
locale that's based upon the text_lookup system. You simply place the
OBS_MODULE_USE_DEFAULT_LOCALE macro once in the module, call
OBS_MODULE_FREE_DEFAULT_LOCALE in obs_module_unload, and then
call obs_module_text anywhere in your module where you need to look up
text.
By default, it will look for a locale directory in your module's data
directory, and look for language files within it (INI locale format).
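In practice a module only needs something like the following (the
module name, default locale, and text key are just examples):

    OBS_DECLARE_MODULE()
    OBS_MODULE_USE_DEFAULT_LOCALE("my-module", "en-US")

    void obs_module_unload(void)
    {
        OBS_MODULE_FREE_DEFAULT_LOCALE();
    }

    /* anywhere in the module: */
    const char *label = obs_module_text("SomeTextKey");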
This function is used to simplify the process when using the default
locale handling for modules. It will automatically search in the plugin
data directory associated with the specified module, load the default
locale text (for example English, if the default language is English),
and then load the set locale on top of the default locale, causing text
to fall back to the default locale if the desired locale text is not
found.
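Roughly, this is what the default-locale macros above expand to; a
module handling locale by hand might do (names per the current
obs-module.h, with "en-US" as an example default):

    static lookup_t *lookup = NULL;

    void obs_module_set_locale(const char *locale)
    {
        if (lookup)
            text_lookup_destroy(lookup);
        lookup = obs_module_load_locale(obs_current_module(),
                                        "en-US", locale);
    }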
Total bytes, total frames, and frames dropped. Total frames is
generated automatically, but total bytes and total dropped frames are
returned via callbacks.
Before, it would assign the encoder/media callbacks directly to the
output's callbacks; now it goes through intermediary functions for the
sake of counting the frames.
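Front-ends can then poll the totals (function names as in the current
obs.h):

    uint64_t total_bytes  = obs_output_get_total_bytes(output);
    int      total_frames = obs_output_get_total_frames(output);
    int      dropped      = obs_output_get_frames_dropped(output);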
Usually if you are reconnecting after network outage, it will give a
different code (such as OBS_OUTPUT_CONNECT_FAILED). So, if already
reconnecting, ignore the code unless it's OBS_OUTPUT_SUCCESS.
I was implementing a pushing/popping attributes function like with GL,
but I realized that for our particular purposes (and actually for most
purposes) its usage was somewhat... niche. I may still implement
pushing/popping of attributes in the future, though right now I feel
using a function to reset the state is sufficient for our purposes.
There was no need to call the context free function in the
initialization function, and it's safer to just initialize the memory to
0 before using (which also negates the need for da_init)
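In other words, with the darray helpers from util/darray.h,
zero-initialization is sufficient:

    DARRAY(int) values;
    memset(&values, 0, sizeof(values)); /* zeroed memory; no da_init */

    int x = 5;
    da_push_back(values, &x);
    da_free(values);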
This just ensures that if an obs object is renamed that the pointer to
older names will still be valid. Prevents renames from causing any
invalid memory access.
When the obs object is destroyed, so are the cached names.
The core itself now provides reconnection options (enabled by default, 2
second timeout between reconnects, 20 retries max until actual
disconnection occurs). This will make things easier for both module
developers and UI developers.
Reconnecting treats the stream as though it were still active, and
signals are sent when reconnecting and upon successful reconnection.
Need to implement user interface information for reconnections.
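A UI would hook the new signals roughly like this (the signal names
match what libobs currently emits, but treat them as assumptions here):

    static void on_reconnect(void *data, calldata_t *cd)
    {
        /* stream is still considered active; show 'reconnecting...' */
    }

    signal_handler_t *sh = obs_output_get_signal_handler(output);
    signal_handler_connect(sh, "reconnect", on_reconnect, NULL);
    signal_handler_connect(sh, "reconnect_success", on_reconnect, NULL);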
This implements the 'frame skipping' mechanism to forcibly cause frames
to be duplicated in order to reduce encoder complexity so the encoder
can catch up to the video, otherwise it will continue to be
progressively behind and will cause a desync of the video.
Typically, if a user gets this issue, they should turn down their
settings. For the love of god do not tell them that 'frames are
skipping', just tell them that CPU usage is high, and that they should
consider turning down their settings.
MagickCore is provided here as an alternative to FFmpeg in case FFmpeg
is not easily supported (for example, on Debian-based Linux distros).
NOTE: Cmake configuration needs to be changed in order to allow
MagickCore image support.
NOTE: In texture_setimage, I had to move variables to the top of the
scope because Microsoft's C compiler will give the legacy C90 error of:
'illegal use of this type as an expression'.
To sum it up, Microsoft's C compiler is still utter garbage.
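In other words, under MSVC's C90 rules the code has to be shaped like
this (the helper names are hypothetical):

    void copy_image_rows(void)
    {
        uint8_t *row;            /* C90: all declarations come first */
        uint32_t linesize;

        prepare_copy();          /* statements only after declarations */
        row = get_row_pointer();
        linesize = get_linesize();
        /* ... */
    }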
Similar to the shader functions, the effect parameter functions take
the effect as a parameter. However, that argument is pretty pointless,
because the effect parameter... parameter already stores the effect
pointer internally.
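With the present-day gs_ names, usage therefore looks like:

    /* the param handle already knows which effect it belongs to, so no
     * effect argument is needed when setting its value */
    gs_eparam_t *color = gs_effect_get_param_by_name(effect, "color");

    struct vec4 val;
    vec4_set(&val, 1.0f, 0.0f, 0.0f, 1.0f);
    gs_effect_set_vec4(color, &val);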
...I'm actually concerned that I went a bit overkill trying to prevent
backwards compatibility issues with this abstraction design, because
this is a large number of files that have to be modified just to add a
single graphics subsystem export. Someone's going to strangle me, and
when you know that someone might strangle you, that means that you did
something wrong. We'll have to look into simplifying this in the
future without killing backward compatibility safety.
The module callback obs_module_set_locale will be called after loading
the module, and any time the locale is manually changed via core API.
When this function is called, the module is expected to load new text
lookup values for all the text it uses based upon the current locale.
The locale parameter was a mistake, because it puts extra needless
burden upon the module developer to have to handle this variable for
each and every single callback function. The parameter is being removed
in favor of a single centralized module callback function that
specifically updates locale information for a module only when needed.
Having the value stored here is somewhat pointless, so this is one step
in fixing the locale handling. Locale should be handled by the modules
themselves with their own loaded locale lookup information.
This API is used to set the current locale for libobs, which is then
passed on to all modules when a module is loaded or when the locale is
manually changed.
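For example:

    obs_set_locale("de-DE"); /* every loaded module is told to reload
                                its text lookup for the new locale */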
These functions were mostly related to being able to set true fullscreen
mode -- however, this has no place for our purposes, and these functions
were just sitting empty and unused, so they should be removed.
Besides, fullscreen mode only applies to the Windows operating system.
This variable is currently somewhat pointless, I was originally going to
use it to tell the graphics subsystem to completely rebuild the internal
vertex buffers, but it would be bad/inefficient to allow that
functionality.
These are meant to reflect auto-detection configuration changes that
should not be written to the config; for example, frame rate changes
for a camera where the (user-/config-file-)configured frame rate isn't
available but a similar frame rate can be automatically chosen.
Default values are now permanently stored in the obs_data item
structures and can be accessed via the new get_default functions.
Also, default values are no longer serialized to JSON, to ease the
transition to new default values.
This allows, for example, disconnected devices for dshow/avcapture to
be listed (and to stay selected in the user config) while making them
unavailable for selection in new sources.
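A sketch of both mechanisms together (function names as in the current
obs-data API; detected_id is a hypothetical value from device
detection):

    obs_data_set_default_int(settings, "bitrate", 2500);
    long long val = obs_data_get_int(settings, "bitrate"); /* 2500 if unset */
    long long def = obs_data_get_default_int(settings, "bitrate");

    /* autoselect overrides what is shown/used without touching the
     * value stored in the user's config */
    obs_data_set_autoselect_string(settings, "device_id", detected_id);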
The 'initialize' callback is used before the encoders/output start up so
it can adjust encoder settings to required values if needed.
Also added the function 'obs_encoder_active', which returns whether the
encoder is currently active.
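For example, a service's initialize callback might do (the bitrate cap
here is hypothetical):

    if (!obs_encoder_active(encoder)) {
        obs_data_t *settings = obs_data_create();
        obs_data_set_int(settings, "bitrate", 3500);
        obs_encoder_update(encoder, settings);
        obs_data_release(settings);
    }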
Structures with anonymous unions would cause a warning when you do a
brace assignment on them.
Also fixed some unused parameters and removed some unused variables.