Commit Graph

169 Commits (a4a8b385c56ae132fa9a32b2881d09cae5c32a0a)

Author SHA1 Message Date
jp9000 b54f70ef8d libobs: Add async video/audio decoupling functions
Decoupling the audio from the video causes the audio to be played as
soon as it's received rather than attempting to sync up to the video
frames.  This is useful with certain async sources/devices whose
audio/video timestamps are not reliable.

Naturally, because audio is played as soon as it's received, this
should only be used when the async source is operating in unbuffered
mode; otherwise the video frame timing will be out of sync by the
amount of buffering the video currently has.
2017-10-10 06:45:34 -07:00
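
A minimal sketch of how the decoupling function added here might be
used from a front end, paired with unbuffered mode as the message above
recommends (exact signatures assumed):

    #include <obs.h>

    /* Decouple audio from video for an async capture source whose
     * audio/video timestamps are unreliable.  Per the commit message,
     * this should only be combined with unbuffered mode. */
    void decouple_device_audio(obs_source_t *capture_source)
    {
        obs_source_set_async_unbuffered(capture_source, true);
        obs_source_set_async_decoupled(capture_source, true);
    }
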
jp9000 2ef00ecec4 libobs: Add private settings to scene items/sources
Allows setting and sharing of user data for sources and scene items.
2017-09-13 21:17:44 -07:00
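
A short sketch of the private settings added here, as a front end might
use them to stash per-source or per-item data (key names are only
examples; the returned obs_data_t is assumed to need a matching
release):

    #include <obs.h>

    void tag_source(obs_source_t *source)
    {
        obs_data_t *priv = obs_source_get_private_settings(source);
        obs_data_set_string(priv, "frontend_tag", "preview-only");
        obs_data_release(priv);
    }

    void tag_item(obs_sceneitem_t *item)
    {
        obs_data_t *priv = obs_sceneitem_get_private_settings(item);
        obs_data_set_bool(priv, "locked_by_frontend", true);
        obs_data_release(priv);
    }
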
SammyJames 4fd66d4d1e libobs: Add post-load module callback
This allows certain types of modules (particularly scripting-related
modules) to initialize extra data once all other modules have loaded.
Because front-ends may wish to have custom handling for loading
modules, the front-end must manually call obs_post_load_modules after
it has finished loading all plug-in modules.

Closes jp9000/obs-studio#965
2017-07-21 08:27:31 -07:00
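
A sketch of the front-end side of this change:

    #include <obs.h>

    /* Front end: load every plug-in module, then explicitly run the
     * post-load callbacks so scripting-style modules can see them all. */
    void frontend_load_plugins(void)
    {
        obs_load_all_modules();
        obs_post_load_modules();
    }
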
Richard Stanway 50f8a066b9 libobs: Add support for output error messages 2017-05-15 12:04:11 +02:00
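
A sketch of the error-message support, assuming the pair
obs_output_set_last_error / obs_output_get_last_error added for this
purpose:

    #include <obs.h>

    /* Output plugin side: record a human-readable reason for the failure. */
    void report_connect_failure(obs_output_t *output)
    {
        obs_output_set_last_error(output, "Could not connect to server");
    }

    /* Front-end side: fetch the message after the output stops with an error. */
    const char *describe_failure(obs_output_t *output)
    {
        return obs_output_get_last_error(output);
    }
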
jp9000 d13fa96851 libobs: Don't use source flags for async buffering
(This commit also modifies the decklink, linux-v4l2, mac-avcapture,
obs-ffmpeg, and win-dshow modules)

Originally, async buffering for sources was supposed to be a
user-controllable flag.  However, that turned out to be less than ideal
because sources (such as the win-dshow plugin) were programmed with
automatic control over their buffering (such as automatically detecting
USB 2.0 capture devices and then enabling buffering in those cases).

The fact that it was a flag caused a design flaw where buffering
values would be overwritten when a source is loaded from save data.

Because of that, this flag is being deprecated and replaced with a
specific function to enable unbuffered mode instead.
2017-05-13 23:32:40 -07:00
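
A sketch of the replacement for the deprecated flag, using the setter
this commit describes (signature assumed):

    #include <obs.h>

    void disable_async_buffering(obs_source_t *async_source)
    {
        obs_source_set_async_unbuffered(async_source, true);
    }
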
jp9000 cb9a478821 libobs: Add function to get average render time
Useful for real-time rendering statistics
2017-05-13 01:21:16 -07:00
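
A sketch of reading the new statistic, assuming
obs_get_average_frame_time_ns returning nanoseconds:

    #include <stdio.h>
    #include <obs.h>

    void print_average_render_time(void)
    {
        uint64_t ns = obs_get_average_frame_time_ns();
        printf("average render time: %.2f ms\n", (double)ns / 1000000.0);
    }
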
Richard Stanway 9e95b2eb6f Various: Don't use boolean bitfields
Using bitfields causes less optimized code generation and the memory
savings are minimal as none of the objects are instantiated enough
times to be worth it.

See https://blogs.msdn.microsoft.com/oldnewthing/20081126-00/?p=20073
2017-05-10 23:28:46 +02:00
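
The pattern of the change, illustrated with a hypothetical struct:

    #include <stdbool.h>

    /* Before: bit-packed flags force read-modify-write sequences. */
    struct output_flags_old {
        bool active : 1;
        bool paused : 1;
    };

    /* After: plain booleans; the extra bytes per object are negligible. */
    struct output_flags_new {
        bool active;
        bool paused;
    };
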
jp9000 ad57aa1520 libobs: Add function to allow custom output drawing
Optionally allows drawing directly to the primary output instead of
having to use a source to draw.
2017-05-06 11:29:29 -07:00
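
A sketch of drawing directly onto the main output, assuming the
obs_add_main_render_callback / obs_remove_main_render_callback pair
with a (param, width, height) callback:

    #include <obs.h>

    static void draw_overlay(void *param, uint32_t cx, uint32_t cy)
    {
        /* issue gs_* draw calls here; cx/cy are the output size */
        UNUSED_PARAMETER(param);
        UNUSED_PARAMETER(cx);
        UNUSED_PARAMETER(cy);
    }

    void install_overlay(void)
    {
        obs_add_main_render_callback(draw_overlay, NULL);
    }

    void remove_overlay(void)
    {
        obs_remove_main_render_callback(draw_overlay, NULL);
    }
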
jp9000 ffef736640 libobs: Pass exact data when calling obs_get_video_info
Originally, obs_get_video_info would recreate the obs_video_info
structure that had been passed to obs_reset_video.  This changes it to
simply store a copy of the obs_video_info when obs_reset_video is
called, and then copy that into the parameter of obs_get_video_info
when it is called.
2017-05-06 11:29:28 -07:00
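
A sketch of the caller's view after this change; the returned struct is
now an exact copy of what was passed to obs_reset_video:

    #include <stdio.h>
    #include <obs.h>

    void log_canvas_size(void)
    {
        struct obs_video_info ovi;
        if (obs_get_video_info(&ovi))
            printf("canvas %ux%u, output %ux%u\n",
                   ovi.base_width, ovi.base_height,
                   ovi.output_width, ovi.output_height);
    }
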
jp9000 829ec5be2d libobs: Fix skipped frames reporting
When frames were skipped, the skipped frame count would increment, but
the total frame count would not, causing the percentage calculation to
fail.

Additionally, the skipped frames log reporting has been moved to
media-io/video-io.c instead of each output.
2017-05-06 11:29:25 -07:00
jp9000 0941a9c13d libobs: Add ability to preload async frames
Prevents any delay when starting up certain async video sources.  It
also prevents old frames from being presented.
2017-03-28 09:39:47 -07:00
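
A sketch of the preload flow, assuming the function pair
obs_source_preload_video / obs_source_show_preloaded_video added for
this:

    #include <obs.h>

    void preload_first_frame(obs_source_t *source,
                             const struct obs_source_frame *frame)
    {
        /* hand the source its first frame before it is shown... */
        obs_source_preload_video(source, frame);

        /* ...then display it the moment the source becomes visible */
        obs_source_show_preloaded_video(source);
    }
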
jp9000 dc4e205008 libobs: Delay stop detection of audio source
Delay the "audio stopped" detection by one audio tick to ensure that
audio doesn't get intentionally cut out in some situations
2017-03-28 09:19:00 -07:00
jp9000 d2934eca7e libobs: Implement audio monitoring
Adds functions to turn on audio monitoring, allowing the user to hear
playback of an audio source over their speakers.  A source can be set
to turn off monitoring and only output to the stream, to output only to
monitoring, or to output to both.

On Windows, audio monitoring uses WASAPI.  Windows is also capable of
syncing the audio to the video according to when the video frame itself
was played.

On Mac, it uses AudioQueue.

On Linux, it's not currently implemented and won't do anything (to be
implemented).
2017-02-06 11:44:02 -08:00
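
A sketch of the three monitoring modes described above (enum value
names assumed):

    #include <obs.h>

    void monitor_and_stream(obs_source_t *source)
    {
        /* hear it locally and keep sending it to the stream */
        obs_source_set_monitoring_type(source,
                OBS_MONITORING_TYPE_MONITOR_AND_OUTPUT);

        /* alternatives: OBS_MONITORING_TYPE_MONITOR_ONLY,
         *               OBS_MONITORING_TYPE_NONE */
    }
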
jp9000 04ae015a60 libobs: Add ability to insert captions into frames
Uses the libcaption library to allow insertion of caption data directly
into H.264 frame data.
2016-12-23 10:37:09 -08:00
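
A sketch of injecting a caption into a running output; the function
name obs_output_output_caption_text1 is assumed here:

    #include <obs.h>

    void send_caption(obs_output_t *streaming_output, const char *text)
    {
        obs_output_output_caption_text1(streaming_output, text);
    }
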
jp9000 85269fb5b2 libobs: Convert Y800 to RGBX manually
Because D3D11 specifically does not support an L8 texture format (a
shader swizzle has to be used instead), manually convert Y800 signals
to RGBX.  This also fixes a bug where Y800 signals would render red.

Closes jp9000/obs-studio#718
2016-12-17 19:58:01 -08:00
jp9000 e541e170a3 libobs: Fix possible reverse order mutex hard lock
For displays, instead of using the draw_callbacks_mutex and risk a
reverse mutual lock scenario, use a separate mutex to lock display size
data.

This bug was exposed when trying to reorder filters in the UI module.
The UI thread would try to reorder the filters, locking the filter mutex
of the source, and then the reorder would signal the UI to resize the
display, so the display would lock its draw_callbacks_mutex.  Then, in
the graphics thread, it would lock the display's draw_callbacks_mutex,
try to draw the source, and then the source would try to lock that same
filter mutex.

A mutex trace:
UI thread -> lock source filter mutex -> waiting on display mutex
graphics thread -> lock display mutex -> waiting on source filter mutex

Closes jp9000/obs-studio#714
2016-12-17 15:21:00 -08:00
jp9000 7d6e6eee79 libobs: Use reference counting for encoder packets
Prevents reallocation of encoded packet data.

Deprecates:
obs_duplicate_encoder_packet
obs_free_encoder_packet

Replaces those functions with:
obs_encoder_packet_ref
obs_encoder_packet_release
2016-12-08 03:27:39 -08:00
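
A sketch of holding onto a packet past its callback using the new
reference-counting pair instead of the deprecated duplicate/free
functions:

    #include <obs.h>

    void hold_packet(struct encoder_packet *incoming)
    {
        struct encoder_packet stored;

        /* take a reference instead of copying the packet data */
        obs_encoder_packet_ref(&stored, incoming);

        /* ... use stored.data / stored.size ... */

        obs_encoder_packet_release(&stored);
    }
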
Jim 9982581a4f libobs: Ensure async source sizes are always reset
If an async source is cropped on one side, then when the program is
restarted and the source is loaded from file, the async source will
start out with a width/height of zero.  This will cause the async source
to not be drawn if cropping or scale filtering is added to the scene
item, because it has to be rendered to a texture first.  However, the
source cannot reset its size until it's drawn, so it's left in a
perpetual state of having a 0x0 size.

This fixes that problem by ensuring that the async source size is always
reset even when not being rendered.

Closes jp9000/obs-studio#686
2016-11-10 00:13:06 -08:00
jp9000 95ce556051 libobs: Add obs_get_active_fps function
Allows getting the current active framerate that the core is rendering
with.  This takes into account any rendering lag or stalls that may be
occurring.
2016-08-22 12:05:57 -07:00
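
A sketch of reading the value for a stats display:

    #include <stdio.h>
    #include <obs.h>

    void print_active_fps(void)
    {
        printf("active fps: %.2f\n", obs_get_active_fps());
    }
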
jp9000 d49833830c libobs: Add ability to use scale filters on scene items
Allows the use of scale filters such as point, bicubic, and lanczos on
specific scene items (disabled by default).  When using one of the
latter two options, if the item's scale is under half of the source's
original size, the bilinear low-resolution downscale shader is used
instead.
2016-06-29 08:00:54 -07:00
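
A sketch of enabling per-item scale filtering (enum value names
assumed; OBS_SCALE_DISABLE is the default):

    #include <obs.h>

    void use_lanczos_scaling(obs_sceneitem_t *item)
    {
        obs_sceneitem_set_scale_filter(item, OBS_SCALE_LANCZOS);
    }
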
jp9000 d7db0b8b01 (API Change) libobs: Fix output data cutoff on stop
(Note: This commit also modifies obs-ffmpeg and obs-outputs)

API Changed:
obs_output_info::void (*stop)(void *data);

To:
obs_output_info::void (*stop)(void *data, uint64_t ts);

This fixes the long-standing design flaw where obs_output_stop and the
output 'stop' callback would just shut down the output without
considering when obs_output_stop was called, discarding any possible
buffering and cutting off the output at an unexpected point.

The 'stop' callback of obs_output_info now takes a timestamp with the
expectation that the output will use that timestamp to stop output data
in accordance with that timing.  obs_output_stop now records the timestamp
at the time that the function is called and calls the 'stop' callback
with that timestamp.  If needed, obs_output_force_stop will still stop
the output immediately without buffering.
2016-06-22 14:10:39 -07:00
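
A sketch of an output implementing the new callback; the struct is
abridged and the id is only an example:

    #include <obs.h>

    static void my_output_stop(void *data, uint64_t ts)
    {
        /* flush queued packets whose timestamps precede ts, then end
         * data capture; obs_output_force_stop still bypasses this */
        UNUSED_PARAMETER(data);
        UNUSED_PARAMETER(ts);
    }

    struct obs_output_info my_output_info = {
        .id   = "my_output",
        .stop = my_output_stop,
        /* get_name, create, destroy, start, etc. omitted */
    };
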
jp9000 29e849e355 libobs: Move output/encoder shutdown to independent thread
obs_output_end_data_capture could cause a hard lock due to mutex lock
ordering, depending on what thread it was called in.
2016-06-22 03:08:44 -07:00
jp9000 7018c6039d libobs: Do not allow output stop calls more than once 2016-06-22 03:08:43 -07:00
jp9000 ceb675f6df libobs: Use atomics for certain output boolean variables 2016-06-22 03:08:32 -07:00
jp9000 96d848f3d2 libobs: Add premultiplied alpha base effect 2016-03-26 21:41:49 -07:00
jpk c3629eacb5 libobs: Add Y800 color format support
(Note: Also modified the obs-ffmpeg plugin module)

Allows frame data to pass 8-bit grayscale images (the Y800 color
format).

Closes jp9000/obs-studio#515
2016-03-24 03:33:35 -07:00
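
A sketch of pushing a grayscale frame with the new format;
obs_source_output_video and the obs_source_frame fields are the
existing async-source API:

    #include <obs.h>

    void output_gray_frame(obs_source_t *source, uint8_t *gray,
                           uint32_t width, uint32_t height, uint64_t ts)
    {
        struct obs_source_frame frame = {0};

        frame.format      = VIDEO_FORMAT_Y800; /* single plane, 1 byte/pixel */
        frame.width       = width;
        frame.height      = height;
        frame.data[0]     = gray;
        frame.linesize[0] = width;
        frame.timestamp   = ts;

        obs_source_output_video(source, &frame);
    }
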
jp9000 07c644c581 libobs: Add deinterlacing API functions
Adds deinterlacing API functions.  Both standard and 2x variants are
supported.  Deinterlacing is set via obs_source_set_deinterlace_mode and
obs_source_set_deinterlace_field_order.

This was implemented in the core itself because deinterlacing should
happen before effect filters are processed, but after async filters are
processed.  If this were added as a filter, there is the possibility
that a different filter would be processed before deinterlacing, which
could mess with the result.  It was also a bit easier to implement this
way because deinterlacing may need access to the previous async frame.

Effects were split into separate files to reduce load time (especially
for the yadif shaders, which take a significant amount of time to
compile).
2016-03-21 21:22:32 -07:00
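
A sketch of the new API on an async source (enum value names assumed):

    #include <obs.h>

    void enable_deinterlacing(obs_source_t *source)
    {
        obs_source_set_deinterlace_mode(source,
                OBS_DEINTERLACE_MODE_YADIF_2X);
        obs_source_set_deinterlace_field_order(source,
                OBS_DEINTERLACE_FIELD_ORDER_TOP);
    }
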
jp9000 11d9a8f3e4 libobs: Rename async_convert_texrender to async_texrender 2016-03-21 21:22:31 -07:00
jp9000 9a86a10f91 libobs: Update async textures via function parameters
Instead of updating the async texture variables directly in the
source, pass them via function parameters, allowing more than one frame
to be uploaded to more than one texture.

This code is primarily intended to be used to upload/convert the
"previous" async frame for the deinterlacer (if necessary).
2016-03-21 21:22:31 -07:00
jp9000 12cdaec1a7 libobs: Move frame-related functions to obs-internal.h
Allows access in other source files (particularly the deinterlacer)
2016-03-21 21:22:30 -07:00
jp9000 3019dccf87 libobs: Add obs_load_effect function
Creates an effect and assigns it to the target variable only if its
current value is null.  This will be used for the deinterlacing effects
to avoid compiling the shaders unless they're actually being used.
2016-03-21 21:22:30 -07:00
jp9000 d069302b2e libobs: Add function to get obs object type 2016-02-27 02:49:03 -08:00
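
A sketch of the new query, assuming obs_obj_get_type with
OBS_OBJ_TYPE_* values:

    #include <obs.h>

    bool is_source_object(void *obj)
    {
        return obs_obj_get_type(obj) == OBS_OBJ_TYPE_SOURCE;
    }
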
jp9000 4b15880231 libobs: Discard remainder audio if source audio stopped
If the circular audio buffer of the source had data remaining that was
less than the audio frame tick count (1024 frames), it would just leave
that audio data on the source without discarding it.  However, this
could cause audio buffering to increase unnecessarily under certain
circumstances (when the next audio timestamp is within the timestamp
jump window): data would be appended to that circular buffer despite
the audio having stopped that long ago, forcing audio buffering to
increase to compensate.

Instead, just discard pending audio if it hasn't been written to; in
other words, if the audio has stopped and there's insufficient audio
left to continue processing, discard it.
2016-01-31 00:55:02 -08:00
jp9000 9aa18d3de5 libobs: Ensure paired encoders start up at the same time
With the new audio subsystem, audio buffering is minimal at all times.
However, when the audio buffering is too small or non-existent, it would
cause the audio encoders to start with a timestamp that was actually
higher than the first video frame timestamp.  Video would have some
inherent buffering/delay, but then audio could return and encode almost
immediately.  This created a possible window of empty time between the
first encoded video packet and the first encoded audio packet, whereas
audio buffering would cause the first audio packet's timestamp to always
be way before the first video packet's timestamp.  It would then
incorrectly assume the two starting points were in sync.

So instead of assuming the audio data always comes first, this patch
makes video wait until audio data comes in, and conversely buffers audio data
until video comes in, and tries to find a starting point within that
video data instead, ensuring a synced starting point whether audio
buffering is active or not.
2016-01-31 00:55:01 -08:00
jp9000 d43d59ca8a libobs: Remove seamless audio loop handling
The seamless audio looping code would erroneously trigger for things
that weren't loops, causing the audio data to continually be pushed back
and ignore timestamps, thus going out of sync.

There does need to be loop handling code, but due to the fact that other
things may need to trigger this code, it's best just to clear the audio
data and start from a fresh sync point.  Unfortunately for the case of
loops, this means the window in which audio data loops and video frames
loop needs to be muted.
2016-01-31 00:54:54 -08:00
jp9000 6f98bd9fed libobs: Use calldata with stack for simple signals
Makes signals use stack memory rather than allocate memory each time.
Most likely a completely insignificant and pointless optimization.
2016-01-26 11:49:56 -08:00
jp9000 ce0a189228 libobs: Fix audio issues with async video/audio looping
This fixes an age-old issue where audio samples could be lost or audio
could temporarily go out of sync in the case of looping videos.  When
audio/video data is looping, there's a window between when the audio
data resets its timestamp value and when the video data resets its
timestamp value.  This method simply pushes back the audio data while in
that window and does not modify sync, and when it detects that it's out
of the loop window it simply forces a resync of the audio data in the
circular buffer.

This ensures that minimal audio data is lost in the loop process, and
minimizes the likelihood of any sort of sync issues associated with
looping.
2016-01-26 11:49:54 -08:00
jp9000 41fa9c1bdb libobs: Don't include sync offsets in TS smoothing
Apply user sync offset *after* timestamp smoothing, not before.
Prevents small or gradual sync offsets from not being properly applied.
2016-01-26 11:49:54 -08:00
jp9000 1089564b57 libobs: Apply resampler offset to system audio TS
Instead of applying the resampler offset right away (to each audio
packet), apply the resampler offset when the timestamps are converted to
system timestamps.  This fixes an issue where if audio timestamps reset
to 0 (for whatever reason), the offset would cause the timestamp to go
into the negative.
2016-01-26 11:49:53 -08:00
jp9000 3371ff59c9 libobs: Add *_create_private functions
Allows creation of private/unlisted sources/outputs/services/encoders
2016-01-26 11:49:48 -08:00
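
A sketch of creating a private source that never appears in the UI or
in save data; the source id is only an example:

    #include <obs.h>

    obs_source_t *make_hidden_helper(obs_data_t *settings)
    {
        return obs_source_create_private("color_source", "hidden helper",
                                         settings);
    }
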
jp9000 bccd3b0b0a libobs: Allow "private" contexts
The intention of this is to allow sources/outputs/etc to be created
without being visible to the UI or save/load functions.
2016-01-26 11:49:47 -08:00
jp9000 669da7ba36 libobs: Do not use signals with audio capture/controls
(Note: This commit also modifies UI)

Instead of using signals, use designated callback lists for audio
capture and audio control helpers.  Signals aren't suitable here due to
the fact that signals aren't meant for things that happen every frame or
things that happen every time audio/video is received.  It also avoids
the allocations the calldata structure would otherwise incur every time
these functions are called.
2016-01-26 11:49:47 -08:00
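
A sketch of the callback-list approach, assuming the audio capture
callback signature (param, source, audio data, muted):

    #include <obs.h>

    static void on_audio(void *param, obs_source_t *source,
                         const struct audio_data *audio, bool muted)
    {
        /* runs for every audio packet the source produces */
        UNUSED_PARAMETER(param);
        UNUSED_PARAMETER(source);
        UNUSED_PARAMETER(audio);
        UNUSED_PARAMETER(muted);
    }

    void watch_audio(obs_source_t *source)
    {
        obs_source_add_audio_capture_callback(source, on_audio, NULL);
    }
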
jp9000 6839ff7686 libobs: Implement transition sources
Transition sources are implemented by registering a source type as
OBS_SOURCE_TYPE_TRANSITION.  They're automatically marked as video
composite sources, and video_render/audio_render callbacks must be set
when registering the source.  get_width and get_height callbacks are
unused for these types of sources, as transitions automatically handle
width/height behind the scenes with the transition settings.

In the video_render callback, the helper function
obs_transition_video_render is used to assist in automatically
processing and rendering the video.  A render callback is passed to the
function, which in turn receives the to/from textures that are
automatically rendered in the back-end.

Similarly, in the audio_render callback, the helper function
obs_transition_audio_render is used to assist in automatically
processing and rendering the audio.  Two mix callbacks are used to
handle how the source/destination sources are mixed together.  To ensure
the best possible quality, audio processing is per-sample.

Transitions can be set to automatically resize, or they can be set to
have a fixed size.  Sources within transitions can be made to scale to
the transition size (with or without aspect ratio), or to not scale
unless they're bigger than the transition.  They can have a specific
alignment within the transition, or they just default to top-left.
These features are implemented for the purpose of extending transitions
to also act as "switch" sources later, where you can switch to/from two
different sources using the transition animation.

Planned (but not yet implemented and lower priority) features:

- "Switch" transitions which allow the ability to switch back and forth
  between two sources with a transitioning animation without discarding
  the references

- Easing options to allow the option to transition with a bezier or
  custom curve

- Manual transitioning to allow the front-end/user to manually control
  the transition offset
2016-01-26 11:49:45 -08:00
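
A sketch of the video side of a transition using the helper described
above; the render-callback signature (to/from textures, position t,
size) is assumed:

    #include <obs.h>

    struct my_transition {
        obs_source_t *source; /* the transition source, saved in create() */
    };

    static void crossfade(void *data, gs_texture_t *a, gs_texture_t *b,
                          float t, uint32_t cx, uint32_t cy)
    {
        /* draw texture a at (1 - t) opacity and texture b at t opacity */
        UNUSED_PARAMETER(data);
        UNUSED_PARAMETER(a);
        UNUSED_PARAMETER(b);
        UNUSED_PARAMETER(t);
        UNUSED_PARAMETER(cx);
        UNUSED_PARAMETER(cy);
    }

    static void my_transition_video_render(void *data, gs_effect_t *effect)
    {
        struct my_transition *tr = data;
        obs_transition_video_render(tr->source, crossfade);
        UNUSED_PARAMETER(effect);
    }
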
jp9000 ed10b1ab57 libobs: Allow add_alignment to be used by other source files 2016-01-26 11:49:40 -08:00
jp9000 5098f68db0 libobs: Move obs_source_dosignal to obs-internal.h
Allows using it in multiple source files
2016-01-26 11:49:40 -08:00
jp9000 b0104fcee0 (API Change) libobs: Remove source_type param from functions
(Note: test and UI are also modified by this commit)

API Changed (removed "enum obs_source_type type" parameter):
-------------------------
obs_source_get_display_name
obs_source_create
obs_get_source_output_flags
obs_get_source_defaults
obs_get_source_properties

Removes the "type" parameter from these functions.  The "type" parameter
really doesn't serve much of a purpose being a parameter in any of these
cases, the type is just to indicate what it's used for.
2016-01-26 11:49:37 -08:00
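
A sketch of a call after the change; the id string is only an example
and the final argument is hotkey data:

    #include <obs.h>

    obs_source_t *make_media_source(obs_data_t *settings)
    {
        return obs_source_create("ffmpeg_source", "my media source",
                                 settings, NULL);
    }
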
jp9000 c1dd156db8 libobs: Implement new audio subsystem
The new audio subsystem fixes two issues:

- The primary issue it fixes is the ability for parent sources to
  intercept the audio of child sources, and do custom processing on
  them.  The main reason for this was the ability to do custom
  cross-fading in transitions, but it's also useful for things such as
  side-chain effects, applying audio effects to entire scenes, applying
  scene-specific audio filters on sub-sources, and other such
  possibilities.

- The secondary issue that needed fixing was audio buffering.
  Previously, audio buffering was always a fixed buffer size, so it
  would always have exactly a certain number of milliseconds of audio
  buffering (and thus output delay).  Instead, it now dynamically
  increases audio buffering only as necessary, minimizing output delay,
  and removing the need for users to have to worry about an audio
  buffering setting.

The new design makes it so that audio from the leaves of the scene graph
flow to the root nodes, and can be intercepted by parent sources.  Each
audio source handles its own buffering, and each audio tick a specific
number of audio frames are popped from the front of the circular buffer
on each audio source.  Composite sources (such as scenes) can access the
audio for child sources and do custom processing or mixing on that
audio.  Composite sources use the audio_render callback of sources to do
synchronous or deferred audio processing per audio tick.  Things like
scenes now mix audio from their sub-sources.
2016-01-26 11:49:34 -08:00
jp9000 ddfd89a673 libobs: Implement composite sources (skip)
(Note: This commit breaks libobs compilation.  Skip if bisecting)

Adds a "composite" source type which is used for sources that composite
one or more sub-sources.  The audio_render callback is called for
composite sources to allow those types of sources to do custom
processing of the audio of its sub-sources.
2016-01-26 11:49:33 -08:00
jp9000 a5c9350be5 libobs: Remove "presentation volume" and "base volume" (skip)
(Note: This commit breaks libobs compilation.  Skip if bisecting)

These variables are considered obsolete and will no longer be needed.
2016-01-26 11:49:32 -08:00
jp9000 bc0b85cb79 libobs: Store circular audio buffer on source itself (skip)
(Note: This commit breaks libobs compilation.  Skip if bisecting)

Removes audio lines and stores the circular buffer for the audio on the
source itself.
2016-01-26 11:49:30 -08:00