Removed code that forced a PTS diff to the previous PTS diff
whenever it exceeded a certain threshold. That behavior breaks
media with variable frame lengths, such as GIF.
Always use -fPIC when not on WIN32 or APPLE, not just with gcc.
This allows building obs with clang on Linux and FreeBSD without
explicitly passing -fPIC as a compiler flag to cmake.
This adds utility functions for determining which
codecs and formats are supported by the loaded FFmpeg
libraries, including validating which codecs a
particular format supports.
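For illustration, helpers along these lines could be built on
FFmpeg's query functions (the function names below are made up for
the example, not the actual additions):

    #include <stdbool.h>
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Hypothetical helpers; names are illustrative only. */
    static bool codec_is_available(enum AVCodecID id)
    {
            return avcodec_find_decoder(id) != NULL ||
                   avcodec_find_encoder(id) != NULL;
    }

    static bool format_supports_codec(const AVOutputFormat *format,
                    enum AVCodecID id)
    {
            /* avformat_query_codec() returns > 0 when the muxer
             * can store this codec at normal compliance. */
            return avformat_query_codec(format, id,
                            FF_COMPLIANCE_NORMAL) > 0;
    }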
Skip decode refresh scheduling if the abort flag is
set when the timer fails to start. This avoids extraneous
refresh scheduling when tearing down the decoders.
Fixes a bug where get_format was overridden by our own
version when forcing the codec to load with a hardware
decoder, but was not reset to the original get_format if
the hardware decoder failed to load.
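A sketch of the idea (function names are hypothetical, not the
actual code): save the context's original get_format before
overriding it, and restore it if opening the hardware decoder
fails.

    #include <libavcodec/avcodec.h>

    static enum AVPixelFormat hw_get_format(AVCodecContext *ctx,
                    const enum AVPixelFormat *fmts)
    {
            /* ... choose the hardware pixel format here ... */
            return fmts[0];
    }

    static int open_codec_with_hw(AVCodecContext *ctx,
                    const AVCodec *codec)
    {
            /* remember the original callback */
            enum AVPixelFormat (*orig_get_format)(
                            AVCodecContext *,
                            const enum AVPixelFormat *) =
                    ctx->get_format;

            ctx->get_format = hw_get_format;

            int ret = avcodec_open2(ctx, codec, NULL);
            if (ret < 0)
                    ctx->get_format = orig_get_format;

            return ret;
    }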
In the same manner that PNG doesn't appear to work properly
with multiple threads, TIFF, JPEG2000 and WEBP also appear
to not render correctly (use of FFmpeg's ff_thread_* routines)
if decode is called before the automatic thread detection
has returned a suggested thread count for the decoder.
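One way to sidestep this (an assumption about the approach, not a
quote of the change) is to force single-threaded decoding for the
affected image codecs so decoding never races the automatic
thread-count detection:

    #include <libavcodec/avcodec.h>

    static void disable_threading_if_needed(AVCodecContext *context)
    {
            switch (context->codec_id) {
            case AV_CODEC_ID_PNG:
            case AV_CODEC_ID_TIFF:
            case AV_CODEC_ID_JPEG2000:
            case AV_CODEC_ID_WEBP:
                    /* decode on the calling thread only */
                    context->thread_count = 1;
                    break;
            default:
                    break;
            }
    }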
If this is omitted and you use an input that requires the network,
you get a warning message about future versions not automatically
doing this for you.
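Presumably the call in question is avformat_network_init(); a
minimal sketch of that initialization (the surrounding function
name is hypothetical):

    #include <libavformat/avformat.h>

    static void media_global_init(void)
    {
            /* Initialize FFmpeg's network layer once, before any
             * network-backed input is opened, to silence the
             * deprecation warning described above. */
            avformat_network_init();
    }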
This attaches clocks to packets and frames and defers
the start time until that particular frame is presented.
Any packets/frames in the future with the same clock
will reference that start time.
This fixes issues when there are multiple start times
in a large buffer (looped video/images/audio) and different
frames need different reference clocks to present correctly.
Enables clocks to wait if the main sync clock has not been started yet. An example of this is a video stream (video/audio) being synced to the video clock. If an audio frame gets produced before the video clock has started it will wait.
Add reference counting to determine when to release a clock, since there is no fixed ownership of clocks (packets, frames, decoders and the refresh thread may all own a clock at some point).
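A minimal sketch of such a shared, reference-counted clock (the
names and fields are illustrative, not the actual implementation):

    #include <stdint.h>
    #include <stdlib.h>

    struct mp_clock {
            volatile long refs;
            int64_t start_time; /* deferred until first presentation */
    };

    static struct mp_clock *mp_clock_create(void)
    {
            struct mp_clock *clock = calloc(1, sizeof(*clock));
            clock->refs = 1;
            clock->start_time = -1; /* not started yet */
            return clock;
    }

    static void mp_clock_addref(struct mp_clock *clock)
    {
            __sync_add_and_fetch(&clock->refs, 1);
    }

    static void mp_clock_release(struct mp_clock *clock)
    {
            /* Packets, frames, decoders and the refresh thread can
             * all hold a reference; free on the last release. */
            if (__sync_sub_and_fetch(&clock->refs, 1) == 0)
                    free(clock);
    }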
The bug was undetected because it accidentally fell into an error case that slept the correct amount of time. pthread_cond_timedwait takes an absolute time in the future to wait until. The value we were passing was always in the past, so it was immediately failing with an ETIMEDOUT error code.
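For reference, a sketch of the correct usage (helper name is made
up): add the desired delay to the current time to build the
absolute deadline that pthread_cond_timedwait expects.

    #include <stdint.h>
    #include <pthread.h>
    #include <time.h>

    static int wait_for_ns(pthread_cond_t *cond,
                    pthread_mutex_t *mutex, uint64_t delay_ns)
    {
            struct timespec ts;
            clock_gettime(CLOCK_REALTIME, &ts);

            /* absolute deadline = now + delay */
            ts.tv_sec += (time_t)(delay_ns / 1000000000ULL);
            ts.tv_nsec += (long)(delay_ns % 1000000000ULL);
            if (ts.tv_nsec >= 1000000000L) {
                    ts.tv_sec++;
                    ts.tv_nsec -= 1000000000L;
            }

            return pthread_cond_timedwait(cond, mutex, &ts);
    }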
This lets the decoder make decisions based on whether it is a hardware decoder or not. Specifically, hardware decoders are more strict as to which frames can be dropped in an h264 stream.
FFmpeg has an issue where PNG decoding will not work
correctly until its optimal thread detection finishes in
multi-threaded mode. Unfortunately, that detection only
finishes after decoding has already begun.
Originally I made the "win_pipe" stuff for named pipes on Windows,
but it was argued that it should be available to all modules and
programs/libraries that the modules might communicate with.
It can't really be put into libobs because there would
hypothetically be things unrelated to libobs that might want to use
it, so I felt the best option was just to create a simple static
library specific to interprocess communication.
Non-Windows versions of these functions have yet to be implemented.
- Implement the RTMP output module. This time around, we just use a
simple FLV muxer, then just write to the stream with RTMP_Write.
Easy and effective.
- Fix the FLV muxer; it now outputs proper FLV packets.
- Output API:
* When using encoders, automatically interleave encoded packets
before sending them to the output.
* Pair encoders and have them automatically wait for the other to
start to ensure sync.
* Change 'obs_output_signal_start_fail' to 'obs_output_signal_stop'
because it was a bit confusing, and doing this makes a lot more
sense for outputs that need to stop suddenly (disconnections/etc).
- Encoder API:
* Remove some unnecessary encoder functions from the actual API and
make them internal. Most of the encoder functions are handled
automatically by outputs anyway, so there's no real need to expose
them and end up inadvertently confusing plugin writers.
* Have audio encoders wait for the video encoder to get a frame, then
start at the exact data point that the first video frame starts to
ensure the most accurate sync of video/audio possible.
* Add a required 'frame_size' callback for audio encoders that
returns the number of frames the encoder expects to encode with at
a time (see the sketch after this list). This way, the libobs
encoder API can handle the circular buffering internally and
automatically for the encoder modules, so encoder writers don't
have to do it themselves.
- Fix a few bugs in the serializer interface. It was passing the wrong
variable for the data in a few cases.
- If a source has video, make obs_source_update defer the actual update
callback until the tick function is called to prevent threading
issues.
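As a rough illustration of the 'frame_size' callback mentioned
above (the struct fields and callback signature are approximations
of the libobs encoder API, not a verbatim copy), an AAC-style audio
encoder might report its fixed frame size like this:

    #include <obs-module.h>

    static size_t my_aac_frame_size(void *data)
    {
            (void)data;
            /* AAC consumes 1024 audio frames per packet, so libobs
             * buffers input until this many frames are available
             * before calling the encode callback. */
            return 1024;
    }

    struct obs_encoder_info my_aac_encoder = {
            .id         = "my_aac",
            .type       = OBS_ENCODER_AUDIO,
            /* ... get_name/create/destroy/encode callbacks ... */
            .frame_size = my_aac_frame_size,
    };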