The minimum and maximum color range values were not being set by the
video_format_get_parameters function when full range was in use; I
assume that's because the expected min/max values for full range are
{0.0, 0.0, 0.0} and {1.0, 1.0, 1.0}, so I'm simply making it set those
values explicitly when full range is in use.
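Roughly, the fix amounts to something like this inside
video_format_get_parameters (the parameter and enum names here are
illustrative, not necessarily the exact ones in the header):
if (range == VIDEO_RANGE_FULL) {
        if (min_range)
                min_range[0] = min_range[1] = min_range[2] = 0.0f;
        if (max_range)
                max_range[0] = max_range[1] = max_range[2] = 1.0f;
}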
Way to replicate the issue (Windows):
1.) Create a win-dshow device capture source
2.) Select a device and set it to a YUV color format to enable YUV color
conversion
3.) Select "Full Range" in the color range property.
4.) Restart OBS; the device will then start up but display green,
because the min/max range values were never set and are left at their
default value of 0.
API changed:
--------------------------
void obs_output_set_audio_encoder(
obs_output_t *output,
obs_encoder_t *encoder);
obs_encoder_t *obs_output_get_audio_encoder(
const obs_output_t *output);
obs_encoder_t *obs_audio_encoder_create(
const char *id,
const char *name,
obs_data_t *settings);
Changed to:
--------------------------
/* 'idx' specifies the track index of the output */
void obs_output_set_audio_encoder(
obs_output_t *output,
obs_encoder_t *encoder,
size_t idx);
/* 'idx' specifies the track index of the output */
obs_encoder_t *obs_output_get_audio_encoder(
const obs_output_t *output,
size_t idx);
/* 'mixer_idx' specifies the mixer index to capture audio from */
obs_encoder_t *obs_audio_encoder_create(
const char *id,
const char *name,
obs_data_t *settings,
size_t mixer_idx);
Overview
--------------------------
This feature allows multiple audio mixers to be used at a time. This
capability was added with surprisingly little extra overhead. Audio
will not be mixed unless it's assigned to a specific mixer, and mixers
will not mix unless they have an active mix connection.
Mostly this will be useful for separating out specific audio for
recording versus streaming, but it will also be useful for certain
streaming services that support multiple audio streams via RTMP.
I didn't want to use a variable number of mixers due to the desire to
reduce heap allocations, so currently I set the limit to 4 simultaneous
mixers; this number can be increased later if needed, but honestly I
feel like it's just the right number to use.
Sources:
Sources can now specify which audio mixers their audio is mixed to;
this can be a single mixer or multiple mixers at a time. The
obs_source_set_audio_mixers function sets the mixers (as a bitmask)
that an audio source applies to. For example, 0xF would mean that the
source applies to all four mixers.
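For illustration, a hedged usage sketch ('mic_source' here is just a
hypothetical source variable):
obs_source_set_audio_mixers(mic_source, 0x1 | 0x2); /* mixers 1 and 2 */
obs_source_set_audio_mixers(mic_source, 0xF);       /* all four mixers */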
Audio Encoders:
Audio encoders now must specify which audio mixer they use when they
encode audio data.
Outputs:
Outputs that use encoders can now support multiple audio tracks at once
if they have the OBS_OUTPUT_MULTI_TRACK capability flag set. This is
mostly useful for certain types of RTMP transmissions, though it may
also be useful later on for file formats that support multiple audio
tracks.
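As a rough sketch of how the pieces fit together using the changed API
above (the encoder id and variable names below are illustrative, not a
prescribed setup):
/* one AAC track per mixer; settings omitted (NULL) for brevity */
obs_encoder_t *track1 = obs_audio_encoder_create("ffmpeg_aac",
                "track 1 aac", NULL, 0 /* mixer index 0 */);
obs_encoder_t *track2 = obs_audio_encoder_create("ffmpeg_aac",
                "track 2 aac", NULL, 1 /* mixer index 1 */);

obs_output_set_audio_encoder(output, track1, 0 /* track index 0 */);
obs_output_set_audio_encoder(output, track2, 1 /* track index 1 */);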
Previously, the design for the interaction between the encoder thread
and the graphics thread was that the encoder thread would signal to the
graphics thread when to start drawing each frame. The original idea
behind this was to prevent mutually cascading stalls of encoding or
graphics rendering (i.e., if rendering took too long, then encoding
would have to catch up, then rendering would have to catch up again, and
so on, cascading upon each other). The ultimate goal was to prevent
encoding from impacting graphics and vice versa.
However, eventually it was realized that there were some fundamental
flaws with this design.
1. Stray frame duplication. You could not guarantee that a frame would
render on time, so sometimes frames would unintentionally be lost if
there was any sort of minor hiccup or, I'm guessing, if the thread took
too long to be scheduled.
2. Frame timing in the rendering thread was less accurate. The only
place where frame timing was accurate was in the encoder thread, and
the graphics thread was at the whim of thread scheduling. On higher
end computers it was typically fine, but it was just generally not
guaranteed that a frame would be rendered when it was supposed to be
rendered.
So the solution (originally proposed by r1ch and paibox) is to instead
keep the encoding and graphics threads separate as usual, but instead of
the encoder thread controlling the graphics thread, the graphics thread
now controls the encoder thread. The encoder thread keeps a limited
cache of frames, then the graphics thread copies frames in to the cache
and increments a semaphore to schedule the encoder thread to encode that
data.
In the cache, each frame has an encode counter. If the frame cache is
full (e.g., the encoder taking too long to return frames), it will not
cache a new frame, but instead will just increment the counter on the
last frame in the cache to schedule that frame to encode again, ensuring
that frames are on time and reducing CPU usage by lowering video
complexity. If the graphics thread takes too long to render a frame,
then it will add that frame with the count value set to the total
number of frames that were missed (actual legitimately duplicated
frames).
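A rough sketch of the idea (the names are illustrative rather than the
actual libobs internals; it assumes the struct video_data type from
media-io, pthreads, an os_sem semaphore wrapper, and a hypothetical
encode_one_frame helper):
#define CACHE_SIZE 4

struct cached_frame {
        struct video_data frame;
        int count;                     /* times this frame gets encoded */
};

static struct cached_frame cache[CACHE_SIZE];
static int first, num;
static pthread_mutex_t cache_mutex = PTHREAD_MUTEX_INITIALIZER;
static os_sem_t *encode_sem;

/* graphics thread: called once per rendered frame */
static void cache_frame(const struct video_data *frame)
{
        pthread_mutex_lock(&cache_mutex);
        if (num == CACHE_SIZE) {
                /* encoder is behind: schedule the newest cached frame
                 * to be encoded again instead of copying a new one */
                cache[(first + num - 1) % CACHE_SIZE].count++;
        } else {
                struct cached_frame *cf =
                        &cache[(first + num) % CACHE_SIZE];
                cf->frame = *frame;
                cf->count = 1;
                num++;
        }
        pthread_mutex_unlock(&cache_mutex);

        os_sem_post(encode_sem);       /* one post per frame to encode */
}

/* encoder thread: encodes one frame per semaphore post */
static void *encode_thread(void *unused)
{
        (void)unused;
        while (os_sem_wait(encode_sem) == 0) {
                pthread_mutex_lock(&cache_mutex);
                struct cached_frame *cf = &cache[first];
                pthread_mutex_unlock(&cache_mutex);

                encode_one_frame(&cf->frame);  /* hypothetical helper */

                pthread_mutex_lock(&cache_mutex);
                if (--cf->count == 0) {
                        first = (first + 1) % CACHE_SIZE;
                        num--;
                }
                pthread_mutex_unlock(&cache_mutex);
        }
        return NULL;
}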
Because the cache gives many frames of breathing room for the encoder to
encode frames, this design helps improve results especially when using
encoding presets that have higher complexity and CPU usage, minimizing
the risk of needlessly skipped or duplicated frames.
I also managed to sneak in what should be a bit of an optimization to
reduce copying of frame data, though how much of an optimization it
ultimately ends up being is debatable.
So to sum it up, this commit increases accuracy of frame timing,
completely removes stray frame duplication, gives better results for
higher complexity encoding presets, and potentially optimizes the frame
pipeline a tiny bit.
In video-io.c, video frames could skip, but when that happened the
frame's timestamp would repeat for the next frame, giving the next
frame a non-monotonic timestamp, and then jump. This could slightly
mess up syncing when the frame is finally given to an output.
70 milliseconds is a bit too high for the default audio timestamp
smoothing threshold. The full range of error thus becomes 140
milliseconds (plus or minus 70), which is a bit more leeway than
necessary. For the time being, I feel it may be worth trying 50
milliseconds.
This fixes a minor flaw with the API where data always had to be
mutable to be usable by the API.
Functions that do not modify the fundamental underlying data of a
structure should be marked as constant, both for safety and to signify
that the parameter is input only and will not be modified by the
function using it.
Typedef pointers are unsafe. If you do:
typedef struct bla *bla_t;
then you cannot use it as a constant, such as 'const bla_t', because
the const applies to the pointer itself rather than to the underlying
data. I admit this was a fundamental mistake that must be corrected.
All typedefs that were pointer types will now have their pointers
removed from the type itself, and the pointers will be used when they
are actually used as variables/parameters/returns instead.
This does not break ABI though, which is pretty nice.
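To illustrate with a hypothetical function:
/* before: the const binds to the pointer, not the data it points to */
typedef struct bla *bla_t;
void func1(const bla_t data);   /* i.e. 'struct bla *const data' */

/* after: the typedef names the struct; pointers are written explicitly */
typedef struct bla bla_t;
void func2(const bla_t *data);  /* genuinely a pointer to const data */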
Audio that goes below the minimum expected timing (current time -
buffering time) is automatically removed. However, delayed audio is not
removed regardless of its delay. This puts a hard cap of 6 seconds from
the current time on the maximum delay audio can have. This will also
prevent the circular buffer from dynamically growing too large.
Doing timestamp smoothing in obs-source.c is good because timestamps
can typically operate on a different timebase; however, obs-source.c
can also change that timebase dynamically (such as with async video and
unexpected timestamp jumps), so to ensure that audio is seamless in the
output as well, timestamp smoothing is also performed in audio-io.c as
an extra precaution.
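In rough terms, the smoothing boils down to something like this (the
function name and the TS_SMOOTHING_THRESHOLD constant are hypothetical,
just to show the idea):
/* if the incoming timestamp is within the threshold of where we
 * expected it to be, treat the audio as continuous and keep the
 * expected timestamp; otherwise accept the jump as a new timebase */
static uint64_t smooth_ts(uint64_t expected_ts, uint64_t ts)
{
        uint64_t diff = (ts > expected_ts) ? (ts - expected_ts)
                                           : (expected_ts - ts);

        return (diff < TS_SMOOTHING_THRESHOLD) ? expected_ts : ts;
}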
This is sort of hard to explain: the scale_video_output function was
overwriting the current frame. If scaling was disabled, it would do
nothing and return success, and all would be well. If it was enabled,
it would call the scaler and then replace the contents of the 'data'
function parameter with the scaled frame data. The problem is that I
was passing video_output::cur_frame directly, which overwrites its
previous value with the scaled frame data. Then, if cur_frame was not
updated on time, it would end up trying to scale the previously scaled
image; in other words, it would call the video scaler with the same
frame for both the source and destination.
So the fix was simply to use a local variable and pass that in as the
parameter to prevent this bug from occurring.
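A sketch of what the fix looks like (the surrounding names -- input,
callback, param -- are illustrative, not the exact code):
/* use a local copy rather than passing video->cur_frame directly */
struct video_data frame = video->cur_frame;

if (!scale_video_output(input, &frame))
        return;

/* 'frame' holds the (possibly scaled) data for this pass;
 * video->cur_frame is left untouched for the next one */
input->callback(input->param, &frame);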
The bug here is that when conversion is active, the source video frame
is initialized with the destination height/width/format instead of the
source height/width/format.
This implements the 'frame skipping' mechanism, which forcibly causes
frames to be duplicated in order to reduce encoder complexity so the
encoder can catch up to the video; otherwise it would fall
progressively further behind and cause the video to desync.
Typically, if a user gets this issue, they should turn down their
settings. For the love of god do not tell them that 'frames are
skipping'; just tell them that CPU usage is high and that they should
consider turning down their settings.
- Implement the RTMP output module. This time around, we just use a
simple FLV muxer, then just write to the stream with RTMP_Write.
Easy and effective.
- Fix the FLV muxer, the muxer now outputs proper FLV packets.
- Output API:
* When using encoders, automatically interleave encoded packets
before sending them to the output.
* Pair encoders and have them automatically wait for the other to
start to ensure sync.
* Change 'obs_output_signal_start_fail' to 'obs_output_signal_stop'
because it was a bit confusing, and doing this makes a lot more
sense for outputs that need to stop suddenly (disconnections/etc).
- Encoder API:
* Remove some unnecessary encoder functions from the actual API and
make them internal. Most of the encoder functions are handled
automatically by outputs anyway, so there's no real need to expose
them and end up inadvertently confusing plugin writers.
* Have audio encoders wait for the video encoder to get a frame, then
start at the exact data point where the first video frame starts to
ensure the most accurate sync of video/audio possible.
* Add a required 'frame_size' callback for audio encoders that
returns the number of audio frames the encoder expects to encode
with (a sketch appears after these notes). This way, the libobs
encoder API can handle the circular buffering internally and
automatically for encoder modules, so encoder writers don't have to
do it themselves.
- Fix a few bugs in the serializer interface. It was passing the wrong
variable for the data in a few cases.
- If a source has video, make obs_source_update defer the actual update
callback until the tick function is called to prevent threading
issues.
- obs-outputs module: Add preliminary code to send out data, and add
an FLV muxer. This time we don't really need to build the packets
ourselves; we can just use the FLV muxer and send its output directly
to RTMP_Write, and it should automatically parse the entire stream for
us without our having to do much manual work at all. We'll see how it
goes.
- libobs: Add AVC NAL packet parsing code
- libobs/media-io: Add quick helper functions for audio/video to get
the width/height/fps/samplerate/etc rather than having to query the
info structures each time.
- libobs (obs-output.c): Change 'connect' signal to 'start' and 'stop'
signals. 'start' now specifies an error code rather than just whether
it failed, so that the client can actually know *why* a failure
occurred. Added those error codes to obs-defs.h.
- libobs: Add a few functions to duplicate/free encoder packets
- Implement OBS encoder interface. It was previously incomplete, but
now is reaching some level of completion, though probably should
still be considered preliminary.
I had originally implemented it so that encoders only have a 'reset'
function to reset their parameters, but I felt that having both a
'start' and 'stop' function would be useful.
Encoders are now each assigned to a specific video/audio media output
rather than implicitly assigned to the main obs video/audio
contexts. This allows separate encoder contexts that aren't
necessarily assigned to the main video/audio context (which is useful
for things such as recording specific sources). Will probably have
to do this for regular obs outputs as well.
When creating an encoder, you must now explicitly state whether that
encoder is an audio or video encoder.
Audio and video can optionally be automatically converted depending
on what the encoder specifies.
When something 'attaches' to an encoder, the first attachment starts
the encoder, and the encoder automatically attaches to the media
output context associated with it. Subsequent attachments won't have
the same effect; they will just start receiving the same encoder data
at the next keyframe (along with SEI, if any). When detaching
from the encoder, the last detachment will fully stop the encoder and
detach the encoder from the media output context associated with the
encoder.
SEI must actually be exported separately; because new encoder
attachments may not always be at the beginning of the stream, the
first keyframe they get must have that SEI data in it. If the
encoder has SEI data, it need only add one small function to query
that SEI data, and then that data will be handled automatically
by libobs for all subsequent encoder attachments.
- Implement x264 encoder plugin, move x264 files to separate plugin to
separate necessary dependencies.
- Change video/audio frame output structures to not use const
qualifiers to prevent issues with non-const function usage elsewhere.
This was an issue when writing the x264 encoder, as the x264 encoder
expects non-const frame data.
Change stagesurf_map to return a non-const data type to prevent this
as well.
- Change full range parameter of video scaler to be an enum rather than
boolean
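To make the 'frame_size' callback mentioned in the encoder API notes
above a little more concrete, a hypothetical registration might look
like this (the obs_encoder_info field names and callback signature here
are illustrative rather than the exact API):
static size_t my_aac_frame_size(void *data)
{
        (void)data;
        return 1024;   /* e.g. AAC consumes 1024 audio frames per packet */
}

static struct obs_encoder_info my_aac_encoder = {
        .id         = "my_aac",
        .type       = OBS_ENCODER_AUDIO,
        .frame_size = my_aac_frame_size,
        /* getname/create/destroy/encode callbacks omitted */
};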
...The reason why audio didn't work was that I overwrote the bitrate
values.
As for semaphores, Mac doesn't support unnamed semaphores without using
Mach semaphores. So, I just implemented a semaphore wrapper for each
OS.
- Fix a bug where the initial audio data insertion would cause all
audio data to unintentionally clear (mixed up < and > operators, damn
human error)
- Fixed a potential interdependent lock scenario with channel mutex
locks and graphics mutex locks. The main video thread could lock the
graphics mutex and then while in the graphics mutex could lock the
channels mutex. Meanwhile in another thread, the channel mutex could
get locked, and then the graphics mutex would get locked, causing a
deadlock.
The best way to deal with this is to not let mutexes lock within
other mutexes, but sometimes it's difficult to avoid such as in the
main video thread.
- Audio devices should now be functional, and the devices in the audio
settings can now be changed as desired.
- Implement windows monitor capture (code is so much cleaner than in
OBS1). Will implement duplication capture later
- Add GDI texture support to d3d11 graphics library
- Fix precision issue with sleep timing: you have to call
timeBeginPeriod, otherwise Windows sleep timing will be totally
erratic.
LOG_ERROR should be used in places where the error, though recoverable
(or at least safely handled), was unexpected and may affect the
user/application.
LOG_WARNING should be used in places where it's not entirely unexpected,
is recoverable, and doesn't really affect the user/application.
- Add CoreAudio device input capture for Mac audio capturing. The code
should cover just about everything for capturing Mac input device
audio. Because of the way Mac audio is designed, users may have no
choice but to obtain the open source Soundflower software to capture
their Mac's desktop audio. It may be necessary for us to distribute
it with the program as well.
- Hide event backend
- Use win32 events for windows
- Allow timed waits for events
- Fix a few warnings
Implement a few audio options into the user interface as well as a few
inline audio functions in audio-io.h.
Make it so the FFmpeg plugin automatically converts to the desired
format.
Use regular interleaved float internally for audio instead of planar
float.
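For clarity, the difference in layout for two channels (L/R) and four
samples per channel:
/*
 * planar:       L0 L1 L2 L3 | R0 R1 R2 R3    (one buffer per channel)
 * interleaved:  L0 R0 L1 R1 L2 R2 L3 R3      (a single buffer)
 */
float planar_left[4], planar_right[4];   /* planar: two separate buffers */
float interleaved[4 * 2];                /* interleaved: one buffer      */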
This allows changing video settings without having to completely reset
all graphics data. It will recreate internal output/conversion buffers
and such and reset the main preview.
Make it so obs_data settings passed in to *_update are applied on top
of the existing settings rather than fully replacing them. That way
you can update only certain specific settings, leaving the others
untouched. Of course, if you're already using the original settings
pointer in the first place then you've already done that, so in that
case it's effectively ignored because the settings are already applied.
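A hedged example of what this enables, assuming obs_data setter names
along these lines ('my_source' is a hypothetical source variable):
obs_data_t *settings = obs_data_create();
obs_data_set_int(settings, "bitrate", 2500);
obs_source_update(my_source, settings); /* applied on top, not a replace */
obs_data_release(settings);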
I actually did compile that last commit and misread the failed projects
as 0. I'm just going to put the conversion stuff in video-io.h because
it requires it anyway, and video-scaler.h already depends on video-io.h
for the video_format enum.
Add a scaler interface (defaults to swscale), and if a separate output
wants to use a different scale or format than the default output format,
allow a scaler instance to be created automatically for that output,
which will then receive the new scaled output.
If there is, for example, more than one audio output and they have
different sample rates or channel counts, this will allow automatic
conversion of that audio to the requested formats/channels/rates (but
only if requested).
There were a *lot* of warnings, managed to remove most of them.
Also, put warning flags before C_FLAGS and CXX_FLAGS, rather than after,
as -Wall -Wextra was overwriting flags that came before it.
The API used to be designed in such a way that it would expect exports
for each individual source/output/encoder/etc. You would export
functions for each and it would automatically load those functions
based on a specific naming scheme from the module.
The idea behind this was that I wanted to limit the usage of structures
in the API so only functions could be used. It was an interesting idea
in theory, but this idea turned out to be flawed in a number of ways:
1.) Requiring exports to create sources/outputs/encoders/etc meant that
you could not create them by any other means, which meant that
things like faruton's .net plugin would become difficult.
2.) Export function declarations could not be checked, so if you
created a function with the wrong parameters or parameter types,
the compiler couldn't catch it.
3.) Required overly complex load functions in libobs just to handle it.
It makes much more sense to just have a load function that you call
manually. Complexity is the bane of all good programs.
4.) It required that you have functions of specific names, which looked
and felt somewhat unsightly.
So, to fix these issues, I replaced it with a more commonly used API
scheme, seen commonly in places like kernels and typical C libraries
with abstraction. You simply create a structure that contains the
callback definitions, and you pass it to a function to register that
definition (such as obs_register_source), which you call in the
obs_module_load of the module.
It will also automatically check the structure size and ensure that it
only loads the required values if the structure happened to add new
values in an API change.
The "main" source file for each module must include obs-module.h, and
must use OBS_DECLARE_MODULE() within that source file.
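A minimal module skeleton following this scheme might look like the
following; the obs_source_info fields, the callback signature, and the
obs_module_load signature shown here are illustrative:
#include <obs-module.h>

OBS_DECLARE_MODULE()

static const char *my_source_getname(void)
{
        return "My Source";
}

static struct obs_source_info my_source_info = {
        .id      = "my_source",
        .getname = my_source_getname,
        /* .create, .destroy, .video_render, etc. omitted for brevity */
};

bool obs_module_load(void)
{
        obs_register_source(&my_source_info);
        return true;
}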
Also, started writing some doxygen documentation in to the main library
headers. Will add more detailed documentation as I go.
- Fill in the rest of the FFmpeg test output code for testing so it
actually properly outputs data.
- Improve the main video subsystem to be a bit more optimal and
automatically output I420 or NV12 if needed.
- Fix audio subsystem insertion and byte calculation. Now it will
seamlessly insert new audio data into the audio stream based upon
its timestamp value. (Be extremely cautious when using floating
point calculations for important things like this; always round and
check your values)
- Use 32 byte alignment in case of future optimizations and export a
function to get the current alignment.
- Make os_sleepto_ns return true if slept, false if the time has
already been passed before the call.
- Fix sinewave output so that it actually properly calculates a middle
C sinewave.
- Change the use of row_bytes to linesize (also makes it a bit more
consistent with FFmpeg's naming as well)
- Add planar audio support. FFmpeg and libav use planar audio for many
encoders, so it was somewhat necessary to add support in libobs
itself.
- Improve/adjust FFmpeg test output plugin. The exports were somewhat
messed up (making me rethink how exports should be done). Not yet
functional; it handles video properly, but it still does not handle
audio properly.
- Improve planar video code. The planar video code was not properly
accounting for row sizes for each plane. Specifying row sizes for
each plane has now been added. This will also make it more compatible
with FFmpeg/libav.
- Fixed a bug where callbacks wouldn't create properly in audio-io and
video-io code.
- Implement 'blogva' function to allow for va_list usage with libobs
logging.
- Added some code for FFmpeg output that I'm still playing around with.
Right now I'm just trying to get it to output to file and try to
understand the FFmpeg/libav APIs. Hopefully in the future this plugin
can be used for any sort of output to FFmpeg.
- Fixed a cast warning in audio-io.c with size_t -> uint32_t
- Renamed the 'video_info' and 'audio_info' structures to
'video_convert_info' and 'audio_convert_info' to better represent
their actual purpose, and to avoid confusion with the
'audio_output_info' and 'video_output_info' structures.
- Removed a few macros from obs-defs.h that were at one point going to
be used but are no longer needed (at least for now)
Completely revamped the entire media i/o data and handlers. The
original idea was to have a system that would have connecting media
inputs and outputs, but at a certain point I realized that this was an
unnecessary complexity for what we wanted to do. (Also, it reminded me
of DirectShow filters, and I HATE DirectShow with a passion, and
wouldn't wish it upon my greatest enemy)
Now, audio/video outputs are connected to directly, with better callback
handlers, and will eventually have the ability to automatically handle
conversions such as 4:4:4 to 4:2:0 when connecting to an input that uses
them. Doing this will allow the video/audio i/o handlers to also
prevent duplicate conversion, as well as make them easier and simpler
to use.
My true goal for this is to make output and encoder plugins as simple
to create as possible. I want to be able to create an output plugin
with almost no real hassle of having to worry about image conversions,
media inputs/outputs, etc. A plugin developer shouldn't have to handle
that sort of stuff when he/she doesn't really need to.
Plugins will be able to simply create a callback via obs_video() and/or
obs_audio(), and they will automatically receive the audio/video data in
the formats requested via a simple callback, without needing to do
almost anything else at all.
- Add preliminary (yet to be tested) handling of timestamp invalidation
issues that can happen with specific devices, where timestamps can
reset or go backward/forward in time with no rhyme or reason. Spent
the entire day just trying to figure out the best way to handle this.
If both audio and video are present, it will increment a reference
counter if video timestamps invalidate, and decrement the reference
counter when the audio timestamps invalidate. When the reference
counter is not 0, it will not send audio as the audio will have
invalid timing. What this does is it ensures audio data will never go
out of bounds in relation to the video, and waits for both audio and
video timestamps to "jump" together before resuming audio.
- Moved async video frame timing adjustment code into
obs_source_getframe instead so it's automatically handled whenever
called.
- Removed the 'audio wait buffer' as it was an unnecessary complexity
that could have had problems in the future. Instead, audio will not
be added until video starts for sources that have both async
audio/video. Audio could have buffered for too long of a time anyway;
who knows what devices are going to do.
- Fixed a minor conversion warning in audio-io.c
- In the audio I/O code, if there's a pause in the program or its
threads (especially the audio thread), it'll cause it to sample too
much data, and increase line->base_timestamp to a potentially higher
value than the next audio timestamp that may be added to the line.
This would cause it to crash originally, because it expects audio
data that is within the designated buffering limit.
Because that gap cannot be filled by that audio data anyway, just
ignore the audio data until it goes back to the right timing (which
it will, as long as the code that is using the line accounts for its
current system time)
- Audio data was just being popped to the "front" of the mix buffer, so
instead it now properly pops into the correct position in the mix
buffer (proper mixing still needs to be implemented)
- Added a test audio sinewave test source that should just play a sine
wave of the middle C note. Using unsigned 8 bit mono to test
FFmpeg's audio resampler; it seems to work pretty well.
- Fixed a boolean trap in threading.h for the event_init function, it
now uses enum event_type, which can be EVENT_TYPE_MANUAL or
EVENT_TYPE_AUTO, to specify whether the event is automatically reset
or not.
- Changed display names of test sources to something a little less
vague.
- Removed the whole "if timestamp is 0 just use current system time"
when outputting source audio, if you want to use system time you
should just use system time yourself. Using 0 as some sort of
"indicator" like that just makes things confusing, and prevents you
from legitimately using 0 as a timestamp for your audio data.
- Mixing still isn't implemented, but the audio system should be able
to start up and mix at least one audio line for the time being.
Will have to write some test audio sources to verify things are
working properly, and build the rest of the output functionality.
- Apply the volume specified with the audio data packet before
inserting the audio data into the circular buffer. Added functions
for multiplying the volume with all the different audio bit depths;
a sketch for the float case follows at the end of these notes.
(Could probably be greatly optimized later)
- Added a volume variable to the obs_source structure and implemented
functions for manipulating source volume.
- Added a volume variable to the audio_data structure so that the
volume will be applied when mixing.
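The float case mentioned above boils down to something like this
(an illustrative sketch; the other bit depths get equivalent loops):
/* multiply each float sample by the source volume before it is
 * inserted into the circular buffer */
static void mul_volume_float(float *samples, float volume, size_t count)
{
        for (size_t i = 0; i < count; i++)
                samples[i] *= volume;
}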