Code submissions regularly suffer from formatting inconsistencies that
have to be addressed by hand. Using clang-format makes code formatting
more consistent and allows it to be automated, so maintainers can focus
on the code itself instead of code formatting.
In some cases the result of the compatibility check is wrong.
For example, the format "mpegts" only shows "mpeg2video" as an
encoder, even though other codecs such as H.264 are supported by
FFmpeg's muxer for that container and are used within that container
in some applications.
Closes jp9000/obs-studio#804
FFmpeg by default decodes VP8/VP9 via its internal decoders; however,
those internal decoders do not support alpha. Encoded alpha is stored
via meta/side data in the container, so the only way to decode it
properly is to force FFmpeg to use libvpx for decoding.
(Also modifies obs-ffmpeg to handle empty frames on EOF)
Previously the demuxer could hit EOF before the decoder threads
finished, resulting in truncated output. In the worst-case scenario the
demuxer could consume a small file before ff_decoder_refresh even had a
chance to start the clocks, resulting in no output at all.
How to crash:
1. Use recent ffmpeg shared libraries.
2. Add an ffmpeg_source pointing at a small static picture (e.g. a JPEG)
   with looping enabled.
3. After a while of high CPU usage, it crashes. This seems easier to
   reproduce on a faster computer.
Closes #533
There's no need to duplicate the packet as the reference count will be 1
after the av_read_frame call. Duplicating causes heap corruption when a
synthetic clock packet is duplicated and assigned the buffer from the
stack-based temporary packet which is then double-freed by the decoder
thread.
avformat_free_context() only frees the memory used by an AVFormatContext
but it does not close the opened media file. This causes a leaked file
descriptor every time a media source frees a demuxer. Using
avformat_close_input() instead frees the context and closes the media
file.
Fixes warnings about deprecated packet functions (av_free_packet and
av_dup_packet, which were replaced by av_packet_unref and av_packet_ref
respectively).
If the first guessed pts is less than the start_pts, it could
lead to a negative PTS being returned.
Change the behavior so that the first frame's pts, if zero, is
set to the start_pts. If more than one frame's pts is less than the
start_pts, the start_pts is determined to be invalid and set to 0.
Valid start_pts example:
start_pts = 500
first frame (pts = 0)
pts = 500 (< start_pts)
pts -= 500 (offset by start_pts)
ret 0
second frame (pts = 700)
pts = 700 (no change, > start_pts)
pts -= 500 (offset by start_pts)
ret 200
Invalid start_pts example:
start_pts = 500
first frame (pts = 0)
pts = 500 (< start_pts)
pts -= 500 (offset by start_pts)
ret 0
second frame (pts = 300)
pts = 300 (< start_pts, start_pts set to 0)
pts -= 0 (start_pts is now 0)
ret 300
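The behavior traced above can be sketched as a small helper (names are ours, not libff's):

```c
#include <stdint.h>

/* Sketch of the start_pts handling described above. below_count tracks
   how many frames have arrived with pts < start_pts. */
static int64_t offset_pts(int64_t pts, int64_t *start_pts, int *below_count)
{
    if (pts < *start_pts) {
        ++*below_count;
        if (*below_count == 1 && pts == 0)
            pts = *start_pts;  /* first frame at zero: assume start_pts */
        else if (*below_count > 1)
            *start_pts = 0;    /* second low frame: start_pts is invalid */
    }
    return pts - *start_pts;
}
```

Running the two examples through this helper reproduces the traces: 0 then 200 for the valid case, 0 then 300 for the invalid case.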
ff_clock_init expects a pointer parameter in which it stores the
address of the newly allocated ff_clock, but ff_demuxer_reset does not
provide this parameter. That ends up writing the ff_clock pointer
into the packet->base->buf field on the stack of the ff_demuxer_reset
function. This later causes a segmentation fault when the packet is freed.
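The out-parameter contract can be sketched with hypothetical names (libff's actual signatures may differ):

```c
#include <stdlib.h>

struct ff_clock_sketch { int started; };

/* The init function stores the new clock through the out pointer the
   caller provides; omitting a valid destination means the stray write
   corrupts whatever happens to be at that stack location. */
static int clock_init_sketch(struct ff_clock_sketch **out)
{
    *out = calloc(1, sizeof(**out));
    return *out != NULL;
}
```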
Closes jp9000/obs-studio#448
Certain input streams (such as remote streams that are already active)
can start up mid-stream with very high initial timestamp values.
This caused the libff timer to delay for that initial timestamp,
which often prevented rendering entirely because playback was stuck
waiting.
To fix the problem, we should ignore the timestamp difference of the
first frame when it's above a certain threshold.
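A sketch of that threshold check (the names and the threshold value are assumptions, not libff's actual constants):

```c
#include <stdint.h>

/* Assumed threshold, in stream time units: an initial gap larger than
   this is treated as joining an already-active stream mid-stream. */
#define MAX_FIRST_TS_DIFF 5000000

static int64_t first_frame_delay(int64_t diff, int is_first_frame)
{
    if (is_first_frame && diff > MAX_FIRST_TS_DIFF)
        return 0; /* present immediately instead of waiting */
    return diff;
}
```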
Now that we're using the timestamps from the stream for playback,
certain types of streams and certain file formats will not start from a
pts of 0. This causes the start of the playback to be delayed. This
code simply ensures that there's no delay on startup. This is basically
the same code as used in FFmpeg itself for handling this situation.
Removed code that forced a PTS diff greater than a certain threshold to
the previous PTS diff. This breaks variable-frame-length media such as
GIF.
This adds utility functions for determining which codecs and formats
are supported by the loaded FFmpeg libraries, including validating the
codecs that a particular format supports.
Skip decode refresh scheduling if the abort flag is
set when the timer fails to start. This avoids extraneous
refresh scheduling when tearing down the decoders.
Fixes a bug where get_format was overridden by our own version when
forcing the codec to load with a HW decoder, and was not reset to the
original get_format if the HW decoder failed to load.
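The save-and-restore pattern can be sketched with hypothetical types (the real code works with AVCodecContext's get_format callback):

```c
/* Hypothetical sketch: remember the original get_format and restore it
   when the hardware decoder fails to initialize. */
typedef int (*get_format_fn)(void *avctx);

struct codec_ctx_sketch {
    get_format_fn get_format;
};

static int hw_get_format(void *avctx) { (void)avctx; return -1; }

static int try_hw_decoder(struct codec_ctx_sketch *ctx, int hw_ok)
{
    get_format_fn orig = ctx->get_format;

    ctx->get_format = hw_get_format; /* our override for HW negotiation */
    if (!hw_ok) {
        ctx->get_format = orig; /* the missing reset this commit adds */
        return -1;
    }
    return 0;
}
```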
In the same manner that PNG doesn't appear to work properly with
multiple threads, TIFF, JPEG2000, and WEBP also appear to render
incorrectly (they use FFmpeg's ff_thread_* routines) if decode is
called before the automatic thread detection has returned a suggested
thread count for the decoder.
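A sketch of the affected-codec check (the helper is ours; "png", "tiff", "jpeg2000", and "webp" are FFmpeg's real codec short names):

```c
#include <string.h>

/* Codecs whose ff_thread_*-based decode misbehaves when called before
   automatic thread detection finishes; force a single thread for them. */
static int needs_single_thread(const char *codec_name)
{
    static const char *names[] = { "png", "tiff", "jpeg2000", "webp" };

    for (size_t i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
        if (strcmp(codec_name, names[i]) == 0)
            return 1;
    }
    return 0;
}
```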
If this call is omitted and you use an input that requires the network,
you get a warning message that future versions will not automatically
do this for you.
This attaches clocks to packets and frames and defers
the start time until that particular frame is presented.
Any packets/frames in the future with the same clock
will reference that start time.
This fixes issues when there are multiple start times
in a large buffer (looped video/images/audio) and different
frames need different reference clocks to present correctly.
Enables clocks to wait if the main sync clock has not been started yet. An example of this is a video stream (video/audio) being synced to the video clock: if an audio frame gets produced before the video clock has started, it will wait.
Add reference counting to determine when to release a clock, since clocks have no fixed owner (packets, frames, decoders, and the refresh thread may all own a clock at some point).
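The ownership scheme can be sketched as a simple atomic reference count (hypothetical names, not libff's actual types):

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Each holder — packet, frame, decoder, refresh thread — retains the
   clock; the last release frees it. */
struct clock_sketch {
    atomic_long refs;
};

static struct clock_sketch *clock_create(void)
{
    struct clock_sketch *c = malloc(sizeof(*c));
    if (c)
        atomic_init(&c->refs, 1); /* creator holds the first reference */
    return c;
}

static void clock_retain(struct clock_sketch *c)
{
    atomic_fetch_add(&c->refs, 1);
}

static void clock_release(struct clock_sketch *c)
{
    if (atomic_fetch_sub(&c->refs, 1) == 1)
        free(c); /* last reference gone */
}
```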