This fixes a design flaw where a delayed output would schedule a stop
even while in the process of reconnecting, instead of just shutting
down right away.
When obs_output_actual_stop is called on shutdown, it should wait for
the output to fully stop before doing anything, and then it should wait
for the data capture to end. The service should not be removed until
after the output has stopped; otherwise it could cause a memory leak
on stop. Packets should be freed last.
(Note: This commit also modifies obs-ffmpeg and obs-outputs)
API Changed:
obs_output_info::void (*stop)(void *data);
To:
obs_output_info::void (*stop)(void *data, uint64_t ts);
This fixes a long-standing design flaw where obs_output_stop and the
output 'stop' callback would just shut down the output without
considering the time at which obs_output_stop was called, discarding
any possible buffering and cutting off the output at an unexpected
point.
The 'stop' callback of obs_output_info now takes a timestamp with the
expectation that the output will use that timestamp to stop output data
in accordance with that timing. obs_output_stop now records the timestamp
at the time that the function is called and calls the 'stop' callback
with that timestamp. If needed, obs_output_force_stop will still stop
the output immediately without buffering.
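For illustration, a minimal sketch of what an output's 'stop' callback
might look like under the new signature (the my_output names are
hypothetical, and treating ts == 0 as "stop immediately" is an
assumption based on the force-stop behavior described above):

static void my_output_stop(void *data, uint64_t ts)
{
	struct my_output *out = data;

	if (ts == 0) {
		/* assumed force-stop case: shut down right away */
		my_output_full_stop(out); /* hypothetical helper */
		return;
	}

	/* remember the stop point; the send loop keeps writing
	 * packets until their timestamps reach 'ts' */
	out->stop_ts = ts;
	os_atomic_set_bool(&out->stopping, true);
}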
Fixes an issue where the audio meter/fader would call an obs function
and lock another mutex, potentially causing a deadlock via inverted
lock ordering in another thread.
When using GPU conversion for 4:2:0 frames on async video sources, it
would create a texture bigger than necessary and try to copy too much
data from the frame, resulting in a crash.
(Note: This commit also modifies the UI)
The editable list only had two types: A type that allows both files and
URLs, and a type that only allows strings.
This changes it so the editable list can have a "files only" type, a
"files and URLs" type, and a "strings only" type.
When a transition is a sub-source of another source, it would not call
the transition's active source enum function, meaning that any sources
the transition had would not increment their active/showing refs
(previously it would only be called when activating the transition
directly).
That would result in negative/invalid active/showing refs on its
sub-sources, causing them to become permanently active/inactive and/or
permanently showing/hidden.
Under certain circumstances it's necessary to seek, but if the frame
isn't loaded for the position that's being seeked to, it won't update
the texture. This just ensures the texture will update when seeking.
If custom transforms were used, the very first frame after starting
would always render with the previous transform before calculating the
new transform.
If a transition had a fixed size, it would not render itself or its
sub-sources according to that fixed size. The fixed size value was
essentially being ignored.
This fixes a bug where the sub-sources of a transition wouldn't render
with the expected size when the transition had a different size from
its sub-sources.
Transition audio was programmed to stop if there was no queued audio
from either source. Because of that, when a source's audio started
after the transition started, audio from that source would be excluded
from the transition until the transition had completed, because the
audio had already been marked as stopped.
Instead, if there's no audio from the transition sources, the audio
should only be marked as stopped when video has stopped. This allows
the to/from sources to have an opportunity to start/restart audio during
the transition safely.
The field orders of retro 2x and linear 2x deinterlace shaders were
inverted. Note that yadif 2x does not act the same in this regard, its
field ordering is correct due to how it operates.
(Note: this commit also modifies the obs-filters and test-input modules)
Changes the obs_source_process_filter_begin return type so that it
returns true/false to indicate whether filter processing should
continue (for example, if the filter is bypassed or some other issue
causes the filtering to fail).
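A sketch of a filter's video_render callback honoring the new return
value (my_filter and its fields are illustrative):

static void my_filter_video_render(void *data, gs_effect_t *effect)
{
	struct my_filter *filter = data;

	if (!obs_source_process_filter_begin(filter->context, GS_RGBA,
			OBS_ALLOW_DIRECT_RENDERING))
		return; /* bypassed or failed; skip this filter */

	obs_source_process_filter_end(filter->context, filter->effect,
			0, 0);

	UNUSED_PARAMETER(effect);
}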
On outputs that use already-active video/audio encoders, the audio
pruning to sync up audio packets with video packets doesn't always get
called (for example, if the video pruning function was called). Always
prune excess starting audio packets.
From MSDN: "The behavior of the least significant bit of the return value
is retained strictly for compatibility with 16-bit Windows applications
(which are non-preemptive) and should not be relied upon."
This caused problems with hotkeys firing if the user pressed a hotkey key
in another application, followed by the modifier keys at any other time.
OBS would then think the hotkey key was just pressed based on the was_down
behavior and incorrectly trigger a hotkey event.
Fixes 0000443.
If audio buffering is very high, the audio packets built up in the
interleaved buffer would be significantly before the first video packet,
causing the offset between the starting video/audio packet pairs to be
significantly off, leading to desync.
This issue was not spotted until recently because it only happens when
streaming/recording with the same encoders while audio buffering is
very high.
The source shouldn't be inserted into obs->data.first_audio_source until it's
fully initialized, or other threads will access source->control and
dereference an uninitialized pointer.
This is a band-aid solution to be able to create temporary services
without logging them and keep them out of enumeration functions.
This is a band-aid solution -- 'master obs context lists' should not be
kept by the core. Logging of object creation/destruction should also be
controlled by the front-end instead of the core.
This patch fixes a specific crash where if the user named a filter the
same name as an input source that already existed in the system, scene
item loading code could find the filter with the same name instead of
the source, and mistakenly use it as the scene item's source directly.
This would cause a crash when trying to render that filter as a regular
source.
Marking filters as private is a temporary and simple workaround for
the problem. Filters are currently not meant to be found via the main
enumeration/search functions, which is a design flaw (lack of
consistency). In future major API revisions of libobs, filters should
be reworked to act as sources, ideally with the sources they filter as
sub-sources.
Additionally, the concept of "private context objects" and "primary
lists of context objects" in the back-end should probably also be
removed, allowing the front-end (or optional separate API layers) to
control all primary lists of obs context objects. These minor issues
ultimately stem from API design flaws which need to be corrected.
This crash happened when a filter was mistakenly used as a regular
source due to an unrelated bug in filter code and scene loading code.
The filter and the source it belongs to both had the same names, and the
source loading code found the filter and mistakenly used it as the
source instead of the actual source with the same name.
Determines whether an obs object was created successfully. If a plugin
that's used for a saved object is removed (third-party plugins), its
data will become invalid, but the objects can often still be created
for the sake of preserving user settings. However, these objects can
sometimes cause problems if they're actually used (such as using them
for transitions).
(Note: Also modified the obs-ffmpeg plugin module)
Allows the ability for frame data to pass 8-bit grayscale images (Y800
color format).
Closes jp9000/obs-studio#515
Adds deinterlacing API functions. Both standard and 2x variants are
supported. Deinterlacing is set via obs_source_set_deinterlace_mode and
obs_source_set_deinterlace_field_order.
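For example, enabling 2x yadif deinterlacing on a source might look
like this (a sketch; the enum value names are assumed to match the API
naming above):

obs_source_set_deinterlace_mode(source, OBS_DEINTERLACE_MODE_YADIF_2X);
obs_source_set_deinterlace_field_order(source,
		OBS_DEINTERLACE_FIELD_ORDER_TOP);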
This was implemented into the core itself because deinterlacing should
happen before effect filters are processed, but after async filters
are processed. If this were added as a filter, there is the
possibility that a different filter is processed before deinterlacing,
which could mess with the result. It was also a bit easier to
implement this way due to the fact that deinterlacing may need to have
access to the previous async frame.
Effects were split into separate files to reduce load time (especially
for the yadif shaders, which take a significant amount of time to
compile).
Instead of just updating the async texture variables directly in the
source, allow the async texture variables to be passed via function
parameters, making it possible to pass more than one frame to more
than one texture.
This code is primarily intended to be used to upload/convert the
"previous" async frame for the deinterlacer (if necessary).
Creates an effect and assigns it to the target variable only if its
current value is null. This will be used for deinterlacing effects to
prevent having to compile the shaders unless they're actually being
used.
(Note: This commit also modifies obs-filters and text-freetype2)
This simplifies writing of effects. DrawMatrix is no longer necessary
because there are no sources that require drawing with a color matrix
other than async sources, and async sources are automatically processed
and don't defer their initial render stage to filters.
When the #include directive is encountered in the C lexer
preprocessor, the files being included need to be relative to the
directory of the file in which the include was used.
(Note: This commit also changes the UI)
Changed:
-------------------
void obs_load_sources(obs_data_array_t *sources_list);
To:
-------------------
void obs_load_sources(obs_data_array_t *sources_list,
obs_source_load_cb callback, void *private_data);
Signals should never be required for a function to work properly. The
"source_load" signal was required for the obs_load_sources function,
but it's meant more for loading private data in the settings, not for
general loading of sources.
This changes it so that a callback is explicitly required to load the
sources.
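A sketch of the intended usage (the callback's parameter order is an
assumption, and the front_end names are hypothetical):

static void source_loaded(void *private_data, obs_source_t *source)
{
	struct front_end *fe = private_data;
	front_end_add_source(fe, source); /* hypothetical helper */
}

obs_load_sources(sources_array, source_loaded, fe);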
The default buffering time for audio was always 1 second before the
audio subsystem was changed, and it was always more than sufficient as
a maximum audio buffering time.
Under certain circumstances, the timing_adjust variable would cause
line 1161 to continually trigger over and over again. The "loop
detection" code incorrectly treated any timestamp that was simply
below the expected value as a jump. After that, the timing_adjust
variable would be set for the frame again, then the audio would see it
as a jump again, and those two things would continue endlessly. This
would cause stuttering particularly with certain devices (particularly
elgato/lgp/hdpvr) where the audio/video data are decoded and sent at
varying/different/unpredictable times.
To fix this issue, it should not detect values below as jumps, but
instead should only do it for values that exceed the MAX_TS_VAR (maximum
timestamp variance) value.
If obs_source::audio_ts is set to 0 (such as by discard_if_stopped in
obs-audio.c), but the push_back variable in the source_output_audio_data
function in obs-source.c was being set to true (meaning it's within the
seamless audio smoothing threshold), it would cause it to never reset
the obs_source::audio_ts value, and thus all audio data from the source
would become perpetually ignored by the audio subsystem until there was
finally some sort of timestamp jump that caused it to call
source_output_audio_place, and thus reset obs_source::audio_ts.
obs_source::audio_ts is only reset in source_output_audio_place, not
in source_output_audio_push_back, so the simplest solution is to just
call source_output_audio_place if obs_source::audio_ts is 0.
This code causes audio data in general to be reset (and subsequently
deleted). It should just be marked as pending and ignored until the
data is ready. The discard_if_stopped function will serve the same
purpose if the source's audio has actually stopped.
There's technically no need to clear the audio data here, nor is there
any need to try to trick the timestamp into a different position. It
can simply just reset the audio timing.
Prevents a possible case where audio data might be deleted when it's not
necessary to delete any.
This variable is used to detect whether audio has stopped -- if audio
stops, it detects that no new data is coming in, and resets the audio
position so that it eliminates the chance of causing the audio buffering
to go haywire if audio starts up again. However, this variable was not
being reset every time the value changed, which it should have been.
Sometimes the A and B sources of a transition would have a large
difference in their timestamps, and the calculation of where to start
the audio data for one of the sources could be above the tick size,
which could cause a crash.
If the circular audio buffer of the source has data remaining that's
less than the audio frame tick count (1024 frames), it would just leave
that audio data on the source without discarding it. However, this
could cause audio buffering to increase unnecessarily under certain
circumstances (when the next audio timestamp is within the timestamp
jump window), because it would append data to that circular buffer
despite the audio having stopped that long ago, forcing audio
buffering to increase to compensate.
Instead, just discard pending audio if it hasn't been written to. In
other words, if the audio has stopped and there's insufficient audio
left to continue processing.
With the new audio subsystem, audio buffering is minimal at all times.
However, when the audio buffering is too small or non-existent, it would
cause the audio encoders to start with a timestamp that was actually
higher than the first video frame timestamp. Video would have some
inherent buffering/delay, but then audio could return and encode almost
immediately. This created a possible window of empty time between the
first encoded video packet and the first encoded audio packet, whereas
audio buffering would cause the first audio packet's timestamp to
always be way before the first video packet's timestamp. It would then
incorrectly assume the two starting points were in sync.
So instead of assuming the audio data is always first, this patch
makes video wait until audio data comes in, and conversely buffers
audio data until video comes in, then tries to find a starting point
within that video data instead, ensuring a synced starting point
whether audio buffering is active or not.
When starting a multi-track output, attempt to pair the video encoder
with one of the audio encoders to ensure that the video and audio
encoders start as close together in time as possible. This ensures the
best possible audio/video syncing point when using multi-track audio
output.
When using multi-track audio, encoders cannot be paired like they can
when only using a single audio track with video, so it has to choose the
best point in the interleaved buffer as the "starting point", and if the
encoders start up at different times, it has to prune that data and wait
to start the output on the next video keyframe. When the audio encoders
started up, there was the case where the encoders would take some time
to load, and it would cause the pruning code to wait for the next
keyframe to ensure startup syncing.
Starting the audio encoders before starting the video encoder should
reduce the possibility of that happening in a multi-track scenario.
In a multi-track scenario, it was not taking into consideration the
possibility of secondary audio tracks, which could have caused desync
on some of the audio tracks.
The seamless audio looping code would erroneously trigger for things
that weren't loops, causing the audio data to continually push back and
ignore timestamps, thus going out of sync.
There does need to be loop handling code, but due to the fact that
other things may need to trigger this code, it's best just to clear
the audio data and start from a fresh sync point. Unfortunately, for
the case of loops, this means the window between when the audio data
loops and when the video frames loop needs to be muted.
Fixes an issue where audio data would not be popped if the sources
were not active/presenting. This would cause the audio subsystem to
needlessly buffer when the sources were reactivated again. Rendering
all audio sources (excluding composite/filter sources) helps ensure
that audio data is always popped and not left to pile up.
A comment that serves as a reminder to anyone who might need to edit the
scene code. If the graphics mutex must be locked, it must be locked
first before entering the scene mutexes, or outside of the scene
mutexes.
This fixes an age-old issue where audio samples could be lost or audio
could temporarily go out of sync in the case of looping videos. When
audio/video data is looping, there's a window between when the audio
data resets its timestamp value and when the video data resets its
timestamp value. This method simply pushes back the audio data while
in that window and does not modify sync, and when it detects that it's
out of the loop window, it simply forces a resync of the audio data in
the circular buffer.
This ensures that minimal audio data is lost in the loop process, and
minimizes the likelihood of any sort of sync issues associated with
looping.
Instead of applying the resampler offset right away (to each audio
packet), apply the resampler offset when the timestamps are converted to
system timestamps. This fixes an issue where, if audio timestamps
reset to 0 (for whatever reason), the offset would cause the timestamp
to go into the negative.
(Note: This commit also modifies the UI)
Allows the ability to fully duplicate sources, and/or have the scene
and its duplicates be private sources.
Certain types of sources (display captures, game captures, audio
device captures, video device captures) should not be duplicated. This
capability flag hints that the source prefers references over full
duplication.
Mostly used for transitions, with the intention of automatically
creating transitions which don't require configuration: returns
whether the source has any properties or not (whether it's
configurable).
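A sketch of the intended use (assuming the function added is
obs_source_configurable):

/* only show a properties dialog for configurable transitions */
if (obs_source_configurable(transition))
	open_properties_dialog(transition); /* hypothetical UI call */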
(Note: This commit also modifies UI)
Instead of using signals, use designated callback lists for audio
capture and audio control helpers. Signals aren't suitable here due to
the fact that signals aren't meant for things that happen every frame or
things that happen every time audio/video is received. This also
prevents allocations that would otherwise occur every time these
functions are called, due to the calldata structure.
Transition sources are implemented by registering a source type as
OBS_SOURCE_TYPE_TRANSITION. They're automatically marked as video
composite sources, and video_render/audio_render callbacks must be set
when registering the source. get_width and get_height callbacks are
unused for these types of sources, as transitions automatically handle
width/height behind the scenes with the transition settings.
In the video_render callback, the helper function
obs_transition_video_render is used to assist in automatically
processing and rendering the video. A render callback is passed to the
function, which in turn passes to/from textures that are automatically
rendered in the back-end.
Similarly, in the audio_render callback, the helper function
obs_transition_audio_render is used to assist in automatically
processing and rendering the audio. Two mix callbacks are used to
handle how the source/destination sources are mixed together. To ensure
the best possible quality, audio processing is per-sample.
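A sketch of what a transition's video_render callback might look like
using the helper (the fade names and blending are illustrative, not
the actual fade transition source):

static void fade_draw(void *data, gs_texture_t *a, gs_texture_t *b,
		float t, uint32_t cx, uint32_t cy)
{
	/* blend the 'from' texture a with the 'to' texture b here,
	 * e.g. draw both with an effect that mixes them by t */
}

static void fade_video_render(void *data, gs_effect_t *effect)
{
	struct fade_data *fade = data;
	obs_transition_video_render(fade->source, fade_draw);
	UNUSED_PARAMETER(effect);
}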
Transitions can be set to automatically resize, or they can be set to
have a fixed size. Sources within transitions can be made to scale to
the transition size (with or without aspect ratio), or to not scale
unless they're bigger than the transition. They can have a specific
alignment within the transition, or they just default to top-left.
These features are implemented for the purpose of extending transitions
to also act as "switch" sources later, where you can switch to/from two
different sources using the transition animation.
Planned (but not yet implemented and lower priority) features:
- "Switch" transitions which allow the ability to switch back and forth
between two sources with a transitioning animation without discarding
the references
- Easing options to allow the option to transition with a bezier or
custom curve
- Manual transitioning to allow the front-end/user to manually control
the transition offset
Prevents a deadlock between the scene mutex and graphics mutex. In
libobs/obs-video.c, the graphics mutex could be locked first, then the
scene mutexes second, while in the UI thread the scene mutexes could
be locked first; then, when a scene item is being destroyed, a source
could be destroyed, and sometimes sources would lock the graphics
mutex second.
A possible additional solution is to defer source destroys to the video
thread.
(Note: test and UI are also modified by this commit)
API Changed (removed "enum obs_source_type type" parameter):
-------------------------
obs_source_get_display_name
obs_source_create
obs_get_source_output_flags
obs_get_source_defaults
obs_get_source_properties
Removes the "type" parameter from these functions. The "type" parameter
really doesn't serve much of a purpose being a parameter in any of these
cases, the type is just to indicate what it's used for.
This buffers scene item visibility actions so that when
obs_sceneitem_set_visible is called with true or false, the action is
mapped to the exact audio sample point at which
obs_sceneitem_set_visible was called. Mapping to the exact sample
point isn't necessary, but it's a nice thing to have.
The new audio subsystem fixes two issues:
- The primary issue it fixes is the ability for parent sources to
intercept the audio of child sources, and do custom processing on
them. The main reason for this was the ability to do custom
cross-fading in transitions, but it's also useful for things such as
side-chain effects, applying audio effects to entire scenes, applying
scene-specific audio filters on sub-sources, and other such
possibilities.
- The secondary issue that needed fixing was audio buffering.
Previously, audio buffering was always a fixed buffer size, so it
would always have exactly a certain number of milliseconds of audio
buffering (and thus output delay). Instead, it now dynamically
increases audio buffering only as necessary, minimizing output delay,
and removing the need for users to have to worry about an audio
buffering setting.
The new design makes it so that audio from the leaves of the scene graph
flow to the root nodes, and can be intercepted by parent sources. Each
audio source handles its own buffering, and each audio tick a specific
number of audio frames are popped from the front of the circular buffer
on each audio source. Composite sources (such as scenes) can access the
audio for child sources and do custom processing or mixing on that
audio. Composite sources use the audio_render callback of sources to do
synchronous or deferred audio processing per audio tick. Things like
scenes now mix audio from their sub-sources.
(Note: This commit breaks libobs compilation. Skip if bisecting)
Adds a "composite" source type which is used for sources that composite
one or more sub-sources. The audio_render callback is called for
composite sources to allow those types of sources to do custom
processing of the audio of its sub-sources.
(Note: This commit breaks libobs compilation. Skip if bisecting)
This variable is somewhat redundant. Volume is already known/accessible
to front-ends.
(Note: This commit breaks libobs compilation. Skip if bisecting)
Removes audio lines and stores the circular buffer for the audio on the
source itself.
(Note: This commit breaks libobs compilation. Skip if bisecting)
The mixers that a source was assigned to were originally stored in the
audio line. This will store it in the sources themselves instead.
(Note: This commit breaks libobs compilation. Skip if bisecting)
Uses a callback and allows the caller to mix audio. Additionally,
allows the callback to return audio later, allowing it to buffer as much
as it needs.
The skipped frame count (dropped frames due to encoding being
overloaded) would erroneously include lagged frames (dropped frames due
to render stalls). This will make diagnosing actual issues a user might
be having a bit easier.
Problem:
When an output is started with encoders that have already been started
by another output, and it starts within the window between when the
first audio packets start coming in and when the first video packet
comes in, the output would get audio packets with timestamps
potentially much later than the first video frame when the first video
frame finally comes in. The audio/video encoders will almost always
have a differing delay.
Solution:
Detect that starting window, and if within that starting window, wait
for a new keyframe from video instead of trying to sync up video.
Additional Notes:
In these cases where an output starts with already-active encoders, this
patch also reduces the potential sync offset between the first video
frame and the first audio frame. Before, it would sync the first video
frame with the first audio frames right after that, but now it syncs
with the closest audio frame in the interleaved array, which halves the
potential sync difference between the first video frame and the first
audio frame of already-active encoders. (So now the potential sync
difference is roughly 11.6 milliseconds at 44.1kHz audio, where it was
23.2 milliseconds before.)
Ensures that the packet dts_usec vals which are generated for
syncing/interleaving use the proper offset relative to where they're
supposed to be starting from. The negative DTS of a first video packet
could potentially have been applied twice due to this.
Allows the ability to use fixed stack memory to construct a calldata_t
structure rather than having to allocate each time. This is fine to do
for certain signals where the calldata never goes above a specific size.
If by some chance the size is insufficient, it will output a log
message.
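A sketch of the intended usage (assuming the added function is
calldata_init_fixed):

uint8_t stack[128];
calldata_t data;

calldata_init_fixed(&data, stack, sizeof(stack));
calldata_set_ptr(&data, "source", source);
signal_handler_signal(handler, "my_signal", &data);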
Originally this was programmed to call the recursive height/width
functions if the source type was an input, with the intention of not
calling them on filters. Instead of doing that, program it to do
exactly that: only call the recursive height/width functions if the
source is not a filter.
This was originally used for calculating audio volume if transitions
were active, but transitions won't work that way so tracking the active
transitions is no longer needed.
Before if a source was set to invisible it would still be considered
active. This changes it so that the source is deactivated when the
source is invisible to reduce needless resource usage or capturing.
Renames:
----------------------------------------
obs_source_add_child
obs_source_remove_child
obs_source_enum_sources
obs_source_enum_tree
obs_source_info::enum_sources
To:
----------------------------------------
obs_source_add_active_child
obs_source_remove_active_child
obs_source_enum_active_sources
obs_source_enum_active_tree
obs_source_info::enum_active_sources
These functions/callbacks had misleading names: they originally implied
any child sources, when they actually meant active child sources that
are being used to render video or audio. It's important that the
function names represent their actual purpose.
(Note: This commit breaks UI compilation. Skip if bisecting)
Adds a means of saving specific sources that the front-end chooses,
rather than being forced to use the now-removed "user list".
(Note: This commit breaks UI compilation. Skip if bisecting)
API Removed:
------------------------
obs_add_source
API Changed:
------------------------
obs_source_remove: Now just marks/signals a source for removal
The concept of "user sources" is flawed: it was something that the
front-end was forced to deal with if it wanted to automate source
saving/loading, and often it had to code around it. That's not how
saving/loading should work, a front-end should be allowed to manage
lists of sources in the way it explicitly chooses, and it should be able
to choose which sources it wants to save/load.
These functions created stack variables but never actually initialized
them. If the calldata variable is invalid, the return values will be
the uninitialized stack value.
This function was removed even though the browser plugin was still
using it on Mac, so it is being put back in temporarily while the
browser plugin is modified to remove its use.
With certain devices (AVerMedia C985 and LGP), audio timestamps are
bad, and a 50ms threshold of audio data "smoothing" (making consecutive
audio packets seamless with one another) isn't enough to handle bad
consecutive timestamp values. After testing, 70ms sufficiently solves
the issue.
This shouldn't happen anymore because crop was fixed, but if a filter
returns 0x0 size and is invalid it shouldn't stop the filter chain.
Instead, it should just be skipped.
Fixes warning introduced by d7848f3cb7 that pops up in GCC:
warning: comparison of unsigned expression < 0 is always false
[-Wtype-limits]
The proper solution is to use 64-bit integer values with the clamp,
and then convert to a 32-bit unsigned integer.
Accessing objects inside an obs_data after obs_data_clear was called
on the parent obs_data causes a NULL dereference.
Reproduce with:
obs_data_t *data = obs_data_create();
obs_data_set_obj(data, "foo", NULL);
obs_data_clear(data);
obs_data_get_obj(data, "foo");
SetThreadExecutionState with ES_DISPLAY_REQUIRED has the same effect of preventing the screensaver from activating. Using SPI_SETSCREENSAVEACTIVE leaves the screensaver disabled system-wide if OBS crashes before it can re-enable it.
When an async video source is about to be rendered, the async texture
should be updated before any effect filtering occurs, rather than right
when it's about to render.
Fixes a few bugs:
- If the async texture hadn't been drawn for the first time, and the
source has an effect filter, it would never end up rendering the first
frame, due to the fact that it would fail on obs-source.c:2434 for the
first filter, causing it to never actually render the source, and thus
never reach a point at which it could call set_async_texture_size to
establish the async texture width/height for the first time.
- Any time the async texture size changed, it would only update the
async texture size at the end of the filter loop, which means that the
first frame after a size change would use the old size for the filters
rather than update to the new size right away.
Passing a 0x0 texture size should be considered legal, and should
safely return false to indicate that rendering can't begin. Also,
there's no need to try to use the current swap chain's width/height if
either of the sizes is 0; there's no need to try to "force" success
here anymore.
Adds warnings to graphics functions to ensure that they are called
within an active graphics context, and adds warnings if required
pointer parameters are null.
To prevent confusion with the new obs_source_valid function which
displays a warning, rename the "source_valid" function to "data_valid"
to emphasize that it's checking for the validity of the internal data as
well as the source itself.
API removed:
--------------------
gs_effect_t *obs_get_default_effect(void);
gs_effect_t *obs_get_default_rect_effect(void);
gs_effect_t *obs_get_opaque_effect(void);
gs_effect_t *obs_get_solid_effect(void);
gs_effect_t *obs_get_bicubic_effect(void);
gs_effect_t *obs_get_lanczos_effect(void);
gs_effect_t *obs_get_bilinear_lowres_effect(void);
API added:
--------------------
gs_effect_t *obs_get_base_effect(enum obs_base_effect effect);
Summary:
--------------------
Combines multiple near-identical functions into a single function with
an enum parameter.
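For example, a call site would change roughly like this (a sketch; the
enum value name is assumed to mirror the removed function name):

/* before: */
gs_effect_t *effect = obs_get_opaque_effect();

/* after: */
gs_effect_t *effect = obs_get_base_effect(OBS_EFFECT_OPAQUE);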
Use explicit UTF-8 byte sequence for the "no-break space" character.
Prevents issues with certain editors, and fixes the following compiler
warning on Visual C++:
warning C4819: The file contains a character that cannot be represented
in the current code page (X). Save the file in Unicode format to prevent
data loss
It's possible for one signal handler to disconnect another during signal_handler_signal, which could result in crashes when the disconnected signal handler is called with a potentially freed data pointer due to other cleanup.
In case a system authentication prompt is open the actual error code
(via err_get_code) is kIOReturnExclusiveAccess on OSX 10.9; I'm not
100% sure if this makes the "if (!value)" part of the code obsolete,
but that code path shouldn't be triggered under most circumstances
anyway
Fixes https://obsproject.com/mantis/view.php?id=346
When the crash text is generated, it's generated with LF line endings
only. Pasting the crash text into Notepad will result in garbled text
on Windows (due to the fact that Notepad, to this day, has not been
programmed to understand anything other than CRLF). Instead of using
LF, just use CRLF.
There's no need to refresh the actual module list for the crash handler
until a crash has occurred. Reduces startup time for this function call
from 400ms to 40ms.
On the first call to update the symbol paths, pass the path string to
the SymInitializeW function first if it hasn't been called yet. If it
has been called, then defer to SymSetSearchPathW and then
SymRefreshModuleList. This is meant to reduce a needless extra call to
the latter two functions on the first use of the function.
Using gzdopen will not work properly if the C standard library used to
get the descriptor is not the same library as the one that was compiled
with zlib. When this is the case, it causes zlib to fail to write a
file, and would report a C standard library error. Use gzopen and
gzopen_w instead to ensure that the file is opened and closed by zlib
itself, ensuring that zlib and the libobs do not have to share the C
standard library between each other.
API changed from:
obs_source_info::get_name(void)
obs_output_info::get_name(void)
obs_encoder_info::get_name(void)
obs_service_info::get_name(void)
API changed to:
obs_source_info::get_name(void *type_data)
obs_output_info::get_name(void *type_data)
obs_encoder_info::get_name(void *type_data)
obs_service_info::get_name(void *type_data)
This allows the type data to be used when getting the name of the
object (useful for plugin wrappers primarily).
NOTE: Though a parameter was added, this is backward-compatible with
older plugins due to calling convention. The new parameter will simply
be ignored by older plugins, and the stack (if used) will be cleaned up
by the caller.
This is useful for allowing the ability to have private data associated
with the object type definition structures. This private data can be
useful for things like plugin wrappers for other languages, or providing
dynamically generated object types.
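A sketch of how a plugin wrapper might combine type_data with the new
get_name signature (wrapper_info is illustrative):

static const char *wrapper_get_name(void *type_data)
{
	struct wrapper_info *info = type_data;
	return info->display_name; /* dynamically generated name */
}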
This prevents encoders (hardware encoders in particular) from
remaining continually active when all outputs disconnect from an
encoder. This is mostly just a temporary measure; the encoding
interface may need a bit of a redesign. It will also definitely need
to be able to flush at some point. Currently when an output is
stopped, the pending data is discarded, which needs to be fixed.
Allows objects to be created regardless of whether the actual id
exists or not. This is a precaution that preserves objects/settings if
the id was removed for whatever reason (plugin removed, or a hardware
encoder that disappeared). This was already added for sources, but
really needs to be added for other libobs objects as well: outputs,
encoders, services.
If the FFMPEG_AVCODEC_LIBRARIES variable does not exist, it will
generate a cmake error, so check to make sure the variable exists before
executing this code.
These functions prevent the computer from going to sleep, hibernating,
or starting up a screen saver.
On Linux, it will also attempt to use DBus to prevent gnome/kde/etc.
sleep, but DBus is not required in order to compile the library.
Otherwise, it will simply call "xdg-screensaver reset" once every 30
seconds to reset the screensaver timer.
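A sketch of the intended usage (assuming the functions added are the
os_inhibit_sleep_* family in util/platform.h):

os_inhibit_t *inhibit = os_inhibit_sleep_create("OBS output active");
os_inhibit_sleep_set(inhibit, true);   /* begin inhibiting sleep */
/* ... streaming/recording ... */
os_inhibit_sleep_set(inhibit, false);  /* allow sleep again */
os_inhibit_sleep_destroy(inhibit);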
This fixes an issue where, when an output cancels reconnecting, the
reconnect flag was left at true, causing obs_output_active to always
return true even though reconnecting had actually been canceled.
This feature allows a user to delay an output (as long as the output
itself supports it). Needless to say, this is intended for live
streams, where users may want to delay their streams to prevent stream
sniping, cheating, and other such things.
The design this time was a bit more elaborate, but still simple in
concept: the user can now schedule stops/starts without having to wait
for the stream itself to stop before being able to take any action.
Optionally, they can also forcibly stop the stream (and delay) in case
something happens which they might not want to be streamed.
Additionally, a new option was added to preserve stream cutoff point on
disconnections/reconnections, so that if you get disconnected while
streaming, when it reconnects, it will reconnect right at the point
where it left off. This will probably be quite useful for a number of
applications in addition to regular delay, such as setting the delay
to 1 second and then using this feature to prevent, for example, a
critical stream such as a tournament stream from getting any of its
stream data cut off. However, using this feature will of course cause
the stream data to buffer and increase delay (and memory usage) while
it's in the process of reconnecting.
This was broken in cd222f8ce0, which had a horrible commit message
that would have made replicating the issue impossible had others not
reported similar Visual Studio issues when using a Japanese locale.
This improves logging for when audio data insertion is way out of bounds
or is getting cut off in the front due to a bad negative sync offset.
Instead of throwing out a log message for every time this happens with
each piece of data, it now states when the out of bounds or cutoff has
started and stopped only.
This fixes a case where an insertion of audio data would pass
valid_timestamp_range yet the insert position would cause a negative
integer position and thus an unsigned integer overflow.
obs_data_create_from_json_file_safe: Attempts to create an obs_data
object from a file, and if that fails and a backup file exists, deletes
the old file and tries to open it again.
obs_data_save_json_safe: Saves json data to a temporary file first,
optionally backs up the target file if the file exists and backup_ext is
valid (otherwise deletes it), and then renames the temporary file to the
target file. This helps reduce the chance of json corruption on save.
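A sketch of the two functions together (the file name and extension
strings are illustrative):

obs_data_t *data =
	obs_data_create_from_json_file_safe("config.json", "bak");

/* ... modify data ... */

obs_data_save_json_safe(data, "config.json", "tmp", "bak");
obs_data_release(data);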
This helper function saves to a temporary file first, (optionally) backs
up the original file, then renames the temporary file to the actual file
name. This helps reduce the chance of file corruption under various
circumstances (such as shutdown or crash while the file is being written
to disk).
API Changed:
---------------------------
From:
- bool obs_startup(const char *locale, profiler_name_store_t *store);
To:
- bool obs_startup(const char *locale, const char *module_config_path,
profiler_name_store_t *store);
Summary:
---------------------------
This allows plugin modules to store plugin-specific configuration data
(rather than only allowing objects to store configuration data). This
will be useful for things like caching data, for example looking up and
storing ingests from remote (rather than storing locally), or caching
font data (so it doesn't have to build a font cache each time), among
other things.
Also adds a module-specific directory for the UI
Manually specifies the UTF-8 character codes used by the file rather
than relying on the compiler to determine what the codes are. I was
getting compile errors due to the fact that my current code page is
not the default code page; Visual C++ tried to use my current code
page rather than UTF-8, causing an error on the file.
When getting a blank module data file (indicating you want the path to
the module data directory itself), certain operating systems (Windows)
will fail if the path ends with '/' or '\'. So simply make sure that
if a blank module sub-path is used, a slash is not appended.
Just a little helper function that allows you to create an obs_data_t
object from a JSON file (rather than having to manually open it each
time and then call obs_data_create_from_json on the file data).
The rationale for rejecting these register calls is that these functions
may be required for the plugin to work properly, which can't be
guaranteed when libobs doesn't know about them.
This behavior may be revisited once the plugin manager is implemented,
to e.g. make it configurable (potentially per plugin) to allow loading
newer plugins in case they are known to work with the older libobs
The id and parent_id fields should now allow better recovery of the
actual call trees, though they aren't compatible between different
data dumps in a single profiler session anymore; for that reason, the
new fields name_id and parent_name_id are introduced, which hold the
old id and parent_id values, respectively.
Due to all the threads in libobs it wouldn't be safe to make that
parameter reconfigurable after libobs is initialized without adding
even more synchronization. On the other hand, adding a function to set
the name store before calling obs_startup would solve the problem of
passing a name store into libobs, but it can lead to more complicated
semantics for obs_get_profiler_name_store (e.g., should it always
return the current name store even if libobs isn't initialized until
someone calls set_name_store(NULL)? should obs_shutdown call
set_name_store(NULL)?). Passing it as an obs_startup parameter avoids
these (and hopefully other) potential misunderstandings.
(Non-compiling commit: windowless-context branch)
API Changed:
---------------------
Removed functions:
- obs_add_draw_callback
- obs_remove_draw_callback
- obs_resize
- obs_preview_set_enabled
- obs_preview_enabled
Removed member variables from struct obs_video_info:
- window_width
- window_height
- window
Summary:
---------------------
Changes the core libobs API to not be dependent upon a main window/view.
If you wish to draw to a window/view, use an obs_display object to
handle it.
This allows the use of libobs without requiring a window to be present
on the system. This also prunes code that had to be needlessly
duplicated to handle the "main" window.
(Non-compiling commit: windowless-context branch)
Changes API from:
---------------------
EXPORT int gs_create(graphics_t **graphics, const char *module,
const struct gs_init_data *data);
To:
---------------------
EXPORT int gs_create(graphics_t **graphics, const char *module,
uint32_t adapter);
Summary:
---------------------
Changes the gs_create function to use an adapter parameter instead of
requiring a gs_init_data with window/color/etc information.
Intentionally breaks compilation when trying to compile the specific
merged commits within the windowless-context branch. This is meant to
be used in conjunction with a merge commit so that bisecting will never
see any non-compiling commits.
Microsoft basically deprecated GetVersion/GetVersionEx, so now you
have to query the file version of kernel32.dll in order to get the
actual Windows version. Because the steps involved in getting the
Windows version are fairly complicated, this is an exported libobs
helper function that can be used to get the Windows version if needed.
(Microsoft does have its own set of helper functions for this, but the
API feels a bit awkward, and you can't actually get the real Windows
version with them.)
A minor optimization: in copy_rgbx_frame (used when libobs is set to
output RGBA frames instead of YUV frames), if the line sizes for the
source and destination match, just use a single memcpy call for all of
the data instead of multiple memcpy calls.
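Roughly, the change amounts to this (a simplified sketch of the copy
logic, not the actual copy_rgbx_frame code):

static void copy_lines(uint8_t *dst, uint32_t dst_linesize,
		const uint8_t *src, uint32_t src_linesize,
		uint32_t height)
{
	if (dst_linesize == src_linesize) {
		/* matching line sizes: one contiguous copy */
		memcpy(dst, src, (size_t)src_linesize * height);
	} else {
		uint32_t size = dst_linesize < src_linesize
				? dst_linesize : src_linesize;
		for (uint32_t y = 0; y < height; y++)
			memcpy(dst + (size_t)y * dst_linesize,
			       src + (size_t)y * src_linesize, size);
	}
}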
Added a cast to unsigned and the assert because Microsoft's compiler
doesn't support "%zu".
Actual warning:
libobs/graphics/effect-parser.c:1387:4: warning: format specifies type
'unsigned int' but the argument has type 'size_t' (aka 'unsigned long')
[-Wformat]
Certain plugins/modules that are loaded on Windows may be dependent on
libraries that are located in the same directory as the module in
question. This makes it so that LoadLibrary will also search for
dependent libraries in the module's directory.
When globbing for modules, the intent was to ignore the . and ..
directories that might also be included in the glob search. It was
mistakenly doing a string compare on the 'path' variable which contains
the full path, rather than the 'file' variable which just contains the
found file itself, so the string compare always failed.
In case the encoder has to use a different sample rate (due to the
sample rate being unsupported), we need an API function for the encoder
to get the sample rate that the encoder is actually running at.
When an audio filter is applied to a video source that also has
accompanying audio, it would cause the video from the source to stop
rendering.
The original code this was from was to prevent audio-only sources from
rendering video, but I neglected to make sure that this would not apply
to filters, and thus when an audio filter is on a source with video, the
code would kill the video.
The size parameter is the size of each element, not the size of the
data. The size parameter should be 1, and the count should be the
number of bytes. The reason for this change is that fread/fwrite would
fail when the parameters were swapped.
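In other words (buf, num_bytes, and file are illustrative):

/* correct: element size 1, count = number of bytes, so the
 * return value is the number of bytes actually written */
size_t written = fwrite(buf, 1, num_bytes, file);
if (written != num_bytes)
	/* handle the short write */;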
In the (unlikely) event of multiple concurrent calls to
input_method_changed, it was possible that the log messages would
appear out of order with respect to which layout would actually be
active after the last log message.
Due to the fact that async timestamps themselves can be susceptible to
minor jitter from certain types of inputs, increase the allowable jitter
compensation value to ensure that the rendered frame timing from async
video sources is always as close as possible to the compositor.
When the framerate of the source is the same as the framerate of the
compositor, this (combined with the fact that clamped video timing is
now used with async video frames) helps ensure that buffered async
video sources will sync up their rendering to the compositor as
accurately as possible despite jitter from the source's timestamps.
If there is no jitter in the source's timestamps then it'll always sync
up perfectly with the compositor, thanks to clamped video timing.
When playing back buffered async frames, this reduces the probability
that new frames will be missed/skipped due to jitter in the system
timestamps.
If a buffered async source is playing at the same framerate as the
compositor and there is no jitter in the async source's timestamps, then
the async source will play back perfectly in sync with the compositor
thanks to this change, ensuring that there's no skipped or missed frames
in video playback.
The "clamped" video time is the system time per video frame that is
closest to the current system time, but always divisible by the frame
interval. For example, if the last frame system timestamp was 1600 and
the new frame is 2500, but the frame interval is 800, then the
"clamped" video time is 2400.
This clamped value is useful to get the relative system time without any
jitter.
When buffering is enabled for an async video source, sometimes minor
drift in timestamps or unexpected delays to frames can cause frames to
slowly buffer more and more in memory, in some cases eventually causing
the system to run out of memory.
The circumstances in which this can happen seem to depend on both the
computer and the devices in use. So far, the only known circumstances
in which this happens are with heavily buffered devices, such as
hauppauge, where decoding can sometimes take too long, causing
continual frame playback delay and thus continual buffering until
memory runs out. I've never been able to replicate it on any of my
machines, however, even after hours of testing.
This patch is a precautionary measure that puts a hard limit on the
number of async frames that can be currently queued to prevent any case
where memory might continually build for whatever reason. If it goes
over the limit, it clears the cache to reset the buffering.
I had a user with this problem test this patch with success and positive
feedback, and the intervals between buffering resets were long to where
it wasn't even noticeable while streaming/recording.
Ideally when decoding frames (such as from those devices), frame
dropping should be used to ensure playback doesn't incur extra delay,
although this sort of hard limit on the frame cache should still be
implemented regardless, just as a safety precaution. For DirectShow
encoded devices, I should just switch to faruton's libff for decoding
and enable the frame-dropping options. That would probably explain why
no one's ever reported it for the media source, and pretty much only
for DirectShow device usage.
Found via UBSan, actual errors (addresses not pruned for illustrative purposes):
"runtime error: store to misaligned address 0x7f9a9178e84c for type
'size_t' (aka 'unsigned long'), which requires 8 byte alignment"
"runtime error: load of misaligned address 0x7f9a9140f2cf for type
'size_t' (aka 'unsigned long'), which requires 8 byte alignment"
The screen index returned from XDefaultScreen is 0-based, and we were
decrementing it before the check to see if it had reached 0 rather than
after, so in the default_screen function it would always end up getting
either the wrong screen or no screen.
When xcb_query_pointer and xcb_query_pointer_reply was called with no
valid screen, it would fail with an error, thus making it so that the
mouse buttons could not be properly captured as hotkeys.
Implements exponential backoff for consecutive reconnects, which is
useful to prevent too many connections from trying to reconnect back to
a service at once over a short period of time in the case of potential
service downtime. Exponential backoff causes each subsequent reconnect
attempt to double its timeout duration.
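A sketch of the backoff calculation (names illustrative):

static uint32_t reconnect_timeout(uint32_t base_sec, uint32_t retries,
		uint32_t max_sec)
{
	/* base * 2^retries, capped to a maximum */
	uint64_t timeout =
		(uint64_t)base_sec << (retries > 32 ? 32 : retries);
	return timeout > max_sec ? max_sec : (uint32_t)timeout;
}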
This optionally allows the front-end to know what the current timeout
value in seconds is set to for the reconnection without having to call
an extra API function to find that out.
Implement the log_processor_info function on BSD and add ifdefs to
only build the implementation specific to the platform. Also add an
ifdef around the call to that function to make sure it will only be
called on platforms where it is actually implemented.
Split the function logging the processor information on *nix into two
parts. The part logging the number of logical cores is portable and
works on all systems that support POSIX.1, while the other part is
specific to Linux.
Add the relevant header file needed on FreeBSD and utilize yet another
ifdef to call pthread_set_name_np as the function name differs from
those on the other platforms.
Add definition on FreeBSD to enable getline in stdio, as it is not
(yet ?) available by default. According to the manpage getline was a
GNU extension but was standardized in POSIX.1-2008.
API Changed (in struct obs_encoder_info):
----------------------------------------
bool (*get_audio_info)(void *data, struct audio_convert_info *info);
bool (*get_video_info)(void *data, struct video_scale_info *info);
To:
----------------------------------------
void (*get_audio_info)(void *data, struct audio_convert_info *info);
void (*get_video_info)(void *data, struct video_scale_info *info);
The encoder video/audio information callbacks no longer need to manually
query the libobs video/audio information, that information is now passed
via the parameter, which the callbacks can modify.
The refactor that reduces boilerplate in the encoder video/audio
information callbacks also removes the need for their return values, so
change the return types to void.
I realized that the get_video_info and get_audio_info encoder callbacks
always have to manually query the libobs audio/video information.
This fixes that problem by passing the libobs video/audio information in
the structures passed to those callbacks so they don't have to query it
each time, reducing needless boilerplate code for encoders.
Allows the ability to hint at encoders what format should be used.
This is particularly useful if libobs is currently operating in planar
4:4:4, but you want to force an encoder used for streaming to convert to
NV12 to prevent streaming issues.
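For example, the streaming case described above might look like this
(a sketch; assuming obs_encoder_set_preferred_video_format is the
hinting function):

obs_encoder_set_preferred_video_format(stream_encoder,
		VIDEO_FORMAT_NV12);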
Fixes a crash that could happen if any of the mutexes are used in the
create callback, or before the obs_source_init function is called.
I'm not sure how this function ordering slipped by, because it seems
fairly obvious that these mutexes should be created before the create
callback.
Had this crash happen to me when creating a WASAPI output source, the
create callback of the WASAPI source creates a thread which outputs
audio, and that thread managed to call obs_source_output_audio before
the obs_source_init function was called, which in turn caused it to try
to use a null mutex.