avformat_free_context() only frees the memory used by an AVFormatContext,
but it does not close the opened media file. This causes a leaked file
descriptor every time a media source frees a demuxer. Using
avformat_close_input() instead frees the context and closes the media
file.
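For illustration, a minimal sketch of the change (the wrapper name is
illustrative):

    #include <libavformat/avformat.h>

    static void free_demuxer_context(AVFormatContext **fmt)
    {
        /* Old: avformat_free_context(*fmt) released the memory but left
         * the file opened by avformat_open_input() dangling. */

        /* New: closes the underlying I/O and frees the context, then
         * sets *fmt to NULL. */
        avformat_close_input(fmt);
    }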
Fixes warnings with deprecated packet functions (av_free_packet and
av_dup_packet, which were replaced by av_packet_unref and av_packet_ref
respectively).
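Roughly, the replacements look like this (a sketch, not the exact
diff):

    #include <libavcodec/avcodec.h>

    /* av_free_packet(&pkt)  ->  av_packet_unref(&pkt)      */
    /* av_dup_packet(&pkt)   ->  av_packet_ref(&dst, &src)  */

    static void release_packet(AVPacket *pkt)
    {
        av_packet_unref(pkt); /* drops the reference and resets the fields */
    }

    static int copy_packet(AVPacket *dst, const AVPacket *src)
    {
        return av_packet_ref(dst, src); /* adds a new reference to src's data */
    }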
Just in case glXSwapIntervalEXT and glXSwapIntervalSGI aren't available
for whatever reason. This entire patch is most likely completely
redundant on modern mesa drivers.
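A sketch of the kind of fallback involved, assuming a GLX context (the
extension checks and typedefs here are illustrative, not the project's
exact code):

    #include <GL/glx.h>
    #include <string.h>

    typedef void (*swap_interval_ext_t)(Display *, GLXDrawable, int);
    typedef int  (*swap_interval_sgi_t)(int);
    typedef int  (*swap_interval_mesa_t)(unsigned int);

    /* Prefer EXT, then SGI, then fall back to the MESA variant. */
    static void set_swap_interval(Display *dpy, GLXDrawable draw, int interval)
    {
        const char *ext = glXQueryExtensionsString(dpy, DefaultScreen(dpy));

        if (strstr(ext, "GLX_EXT_swap_control")) {
            swap_interval_ext_t f = (swap_interval_ext_t)
                glXGetProcAddress((const GLubyte *)"glXSwapIntervalEXT");
            if (f) f(dpy, draw, interval);
        } else if (strstr(ext, "GLX_SGI_swap_control")) {
            swap_interval_sgi_t f = (swap_interval_sgi_t)
                glXGetProcAddress((const GLubyte *)"glXSwapIntervalSGI");
            if (f) f(interval);
        } else if (strstr(ext, "GLX_MESA_swap_control")) {
            swap_interval_mesa_t f = (swap_interval_mesa_t)
                glXGetProcAddress((const GLubyte *)"glXSwapIntervalMESA");
            if (f) f((unsigned int)interval);
        }
    }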
This allows plugins to update and cache data files from a remote source.
Here are the steps that occur when the API initiates an update check:
1.) It checks whether the local file versions are greater than the
cached file versions. If a local version is newer (for whatever
reason), it replaces the cached version(s) with the local version.
2.) A packages.json file is downloaded from the specified URL. That
packages.json file contains a version number and a list of files to
be updated.
3.) If the downloaded package version is greater than the cached
version, steps 4-5 are executed for each file.
4.) Checks the version of the file to update in packages.json, and if
the version is greater than the cached version, proceeds to step 5,
otherwise repeats steps 4-5 for the remaining files.
5.) Calls the callback given to the update function (if any) with the
file information (file name, buffer, etc.), and if the callback
returns true, allows the cached file to be updated and replaced,
otherwise goes back to steps 4-5 for the rest of the files (a sketch
of such a callback appears after the note below).
NOTE: Files are never modified directly. All file saving/modification
is performed in a temporary directory, and then files are moved to their
destination. This should eliminate any possibility of file corruption
(or at least dramatically reduce the possibility).
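A hypothetical sketch of the confirmation callback described in step 5
(the structure and field names are illustrative, not the actual API):

    #include <stdbool.h>
    #include <string.h>

    /* Illustrative only: passed to the callback for each downloadable file. */
    struct update_file_info {
        const char *name;   /* file name listed in packages.json */
        const void *buffer; /* downloaded file contents          */
        size_t size;        /* size of the downloaded contents   */
        int version;        /* version listed in packages.json   */
    };

    /* Return true to allow the cached copy to be replaced, false to skip. */
    static bool confirm_update(void *param, struct update_file_info *info)
    {
        (void)param;

        /* Example policy: only accept files this plugin actually uses. */
        return strcmp(info->name, "services.json") == 0;
    }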
If the first guessed pts is less than the start_pts, it could
lead to a negative PTS being returned.
Change the behavior so that the first frame's pts, if zero, is
set to the start_pts. If more than one frame is less than the
start_pts, the start_pts is determined invalid and set to 0.
Valid start_pts example:
start_pts = 500
first frame (pts = 0)
pts = 500 (< start_pts)
pts -= 500 (offset by start_pts)
ret 0
second frame (pts = 700)
pts = 700 (no change, > start_pts)
pts -= 500 (offset by start_pts)
ret 200
Invalid start_pts example:
start_pts = 500
first frame (pts = 0)
pts = 500 (< start_pts)
pts -= 500 (offset by start_pts)
ret 0
second frame (pts = 300)
pts = 300 (< start_pts, start_pts set to 0)
pts -= 0 (start_pts is now 0)
ret 300
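A sketch of the adjustment logic the examples above describe (the names
and exact conditions are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    static int64_t adjust_pts(int64_t pts, int64_t *start_pts, bool *first_frame)
    {
        if (*first_frame) {
            *first_frame = false;
            if (pts < *start_pts)
                pts = *start_pts;   /* first frame: clamp up to start_pts */
        } else if (pts < *start_pts) {
            *start_pts = 0;         /* a later low pts invalidates start_pts */
        }

        return pts - *start_pts;    /* offset by the (possibly zeroed) start_pts */
    }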
ff_clock_init expects a parameter with a pointer in which it stores the
address of the newly allocated ff_clock, but ff_demuxer_reset does not
provide this parameter. This ends up writing the pointer to the
ff_clock into the packet->base->buf field on the stack of the
ff_demuxer_reset function, which later causes a segmentation fault when
the packet is freed.
Closes jp9000/obs-studio#448
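A sketch of the fix's shape, assuming an out-parameter signature along
the lines described above (the real libff signature may differ):

    #include <stdbool.h>

    struct ff_clock;

    /* Assumed for illustration: stores the address of the newly
     * allocated clock through its out-parameter. */
    extern bool ff_clock_init(struct ff_clock **out_clock);

    static bool demuxer_reset_clock(struct ff_clock **demuxer_clock)
    {
        /* Before the fix, the out-parameter was missing, so the new
         * clock's address was written over unrelated stack memory
         * (here, packet->base->buf). */
        return ff_clock_init(demuxer_clock);
    }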
This was the reason why game capture could not hook when the hook was
run at administrator level and the game/target was below administrator
level: the plugin created a pipe, and the hook tried to connect to that
pipe, but because the pipe was created as administrator with default
access rights, the pipe did not allow write access for anything below
administrator level. The hook therefore could not connect to the
plugin, and hooking would always fail as a result.
This fixes the issue by creating the pipe with full access rights to
everyone instead of default access rights.
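One way to do this on Win32 is to create the pipe with an explicit
security descriptor whose DACL is NULL, which grants access to everyone
(pipe name and buffer sizes here are illustrative):

    #include <windows.h>

    static HANDLE create_full_access_pipe(const wchar_t *name)
    {
        SECURITY_DESCRIPTOR sd;
        SECURITY_ATTRIBUTES sa = {0};

        InitializeSecurityDescriptor(&sd, SECURITY_DESCRIPTOR_REVISION);
        SetSecurityDescriptorDacl(&sd, TRUE, NULL, FALSE); /* NULL DACL: allow everyone */

        sa.nLength = sizeof(sa);
        sa.lpSecurityDescriptor = &sd;
        sa.bInheritHandle = FALSE;

        /* The hook running at a lower privilege level can now connect. */
        return CreateNamedPipeW(name, PIPE_ACCESS_INBOUND,
                                PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE |
                                        PIPE_WAIT,
                                1, 4096, 4096, 0, &sa);
    }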
Certain input streams (such as remote streams that are already active)
can start up mid-stream with very high initial timestamp values.
This would cause the libff timer to delay for that initial timestamp,
which often meant nothing was rendered at all because the timer was
stuck waiting.
To fix the problem, we should ignore the timestamp difference of the
first frame when it's above a certain threshold.
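A sketch of that check (the threshold value and names are
illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_FIRST_TS_DIFF 5000000LL /* illustrative threshold */

    /* Don't let a huge initial timestamp stall the render timer. */
    static int64_t frame_delay(int64_t ts_diff, bool *first_frame)
    {
        if (*first_frame) {
            *first_frame = false;
            if (ts_diff < 0 || ts_diff > MAX_FIRST_TS_DIFF)
                return 0; /* ignore the bogus initial gap */
        }

        return ts_diff;
    }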
Now that we're using the timestamps from the stream for playback,
certain types of streams and certain file formats will not start from a
pts of 0. This causes the start of the playback to be delayed. This
code simply ensures that there's no delay on startup. This is basically
the same code as used in FFmpeg itself for handling this situation.
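The idea, roughly (an illustrative sketch rather than the actual code):

    #include <libavformat/avformat.h>

    /* Offset packet timestamps by the stream's start time so playback
     * begins immediately even when the first pts is far from zero. */
    static void offset_packet_ts(const AVStream *stream, AVPacket *pkt)
    {
        int64_t start = stream->start_time;

        if (start == AV_NOPTS_VALUE)
            return;

        if (pkt->pts != AV_NOPTS_VALUE)
            pkt->pts -= start;
        if (pkt->dts != AV_NOPTS_VALUE)
            pkt->dts -= start;
    }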
Removed code that forced a PTS diff to the previous PTS diff whenever
it exceeded a certain threshold. This broke variable frame length
media like GIF.
Always use -fPIC when not on WIN32 or APPLE, and not just with gcc.
This allows building obs with clang on Linux and FreeBSD without
explicitly specifying -fPIC as a compiler flag to cmake.
This adds utility functions for determining which
codecs and formats are supported by the loaded FFmpeg
libraries. This includes validating the codecs that
a particular format supports.
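A sketch of the kind of checks involved, using FFmpeg's own query
functions (not necessarily the exact helpers added here):

    #include <stdbool.h>
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Is the codec available in the loaded libavcodec? */
    static bool codec_supported(enum AVCodecID id)
    {
        return avcodec_find_encoder(id) != NULL ||
               avcodec_find_decoder(id) != NULL;
    }

    /* Can this output format carry the given codec? */
    static bool format_supports_codec(AVOutputFormat *format, enum AVCodecID id)
    {
        return avformat_query_codec(format, id, FF_COMPLIANCE_NORMAL) == 1;
    }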
Skip decode refresh scheduling if the abort flag is
set when the timer fails to start. This avoids extraneous
refresh scheduling when tearing down the decoders.
Fixes a bug where get_format was overridden by our own version
when forcing the codec to load with a HW decoder, but was not
reset to the original get_format if the HW decoder failed to load.
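A sketch of the save-and-restore pattern (names are illustrative):

    #include <stdbool.h>
    #include <libavcodec/avcodec.h>

    /* Keep the codec's original get_format so it can be restored if the
     * hardware decoder fails to initialize. */
    static enum AVPixelFormat (*original_get_format)(AVCodecContext *,
                                                     const enum AVPixelFormat *);

    static enum AVPixelFormat hw_get_format(AVCodecContext *ctx,
                                            const enum AVPixelFormat *fmts)
    {
        /* ... try to pick a hardware pixel format here ... */
        return original_get_format(ctx, fmts); /* fall back to the default */
    }

    static bool open_codec_with_hw(AVCodecContext *ctx, const AVCodec *codec)
    {
        original_get_format = ctx->get_format;
        ctx->get_format = hw_get_format;

        if (avcodec_open2(ctx, codec, NULL) < 0) {
            ctx->get_format = original_get_format; /* restore on failure */
            return false;
        }

        return true;
    }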
In the same manner that PNG doesn't appear to work properly
with multiple threads, TIFF, JPEG2000 and WEBP also appear
to not render correctly (they use FFmpeg's ff_thread_* routines)
if decode is called before the automatic thread detection
has returned a suggested thread count for the decoder.
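A sketch of the resulting workaround (illustrative):

    #include <libavcodec/avcodec.h>

    /* These image codecs misbehave if decoding starts before FFmpeg's
     * automatic thread detection has settled, so decode them on a
     * single thread. */
    static void set_decoder_thread_count(AVCodecContext *ctx)
    {
        switch (ctx->codec_id) {
        case AV_CODEC_ID_PNG:
        case AV_CODEC_ID_TIFF:
        case AV_CODEC_ID_JPEG2000:
        case AV_CODEC_ID_WEBP:
            ctx->thread_count = 1;
            break;
        default:
            ctx->thread_count = 0; /* 0 = let FFmpeg auto-detect */
            break;
        }
    }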
If this is omitted and you use an input that requires the network,
you get a warning message about future versions not automatically
doing this for you.
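The call in question is FFmpeg's global network initialization; a
minimal sketch of the init/deinit pair:

    #include <libavformat/avformat.h>

    /* Call once at startup; without it, opening a network input logs a
     * warning that future FFmpeg versions will not initialize the
     * network implicitly. */
    void media_global_init(void)
    {
        avformat_network_init();
    }

    /* Pair it with a deinit at shutdown. */
    void media_global_shutdown(void)
    {
        avformat_network_deinit();
    }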
This attaches clocks to packets and frames and defers
the start time until that particular frame is presented.
Any packets/frames in the future with the same clock
will reference that start time.
This fixes issues when there are multiple start times
in a large buffer (looped video/images/audio) and different
frames need different reference clocks to present correctly.
Enables clocks to wait if the main sync clock has not been started yet. An example of this is a stream with both video and audio being synced to the video clock: if an audio frame gets produced before the video clock has started, it will wait.
Add reference counting to determine when to release a clock, since clocks have no fixed ownership (packets, frames, decoders and the refresh thread may all own a clock at some point).
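A minimal sketch of the refcounting pattern (not the actual libff structures):

    #include <stdatomic.h>
    #include <stdlib.h>

    struct clock {
        atomic_long refs; /* starts at 1 for the creator */
        /* ... timing state ... */
    };

    static struct clock *clock_retain(struct clock *c)
    {
        atomic_fetch_add(&c->refs, 1);
        return c;
    }

    static void clock_release(struct clock *c)
    {
        if (atomic_fetch_sub(&c->refs, 1) == 1)
            free(c); /* the last owner (packet, frame, decoder or
                      * refresh thread) releases it */
    }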
The bug was undetected because it accidentally fell into an error case that slept the correct amount of time. pthread_cond_timedwait takes an absolute time in the future to wait until; the value we were passing was always in the past, so it was immediately failing with an ETIMEDOUT error code.
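For reference, the deadline has to be built from the current time rather than passed as a relative duration (a sketch):

    #include <pthread.h>
    #include <time.h>

    static int wait_with_timeout(pthread_cond_t *cond, pthread_mutex_t *mutex,
                                 long timeout_ns)
    {
        struct timespec deadline;

        clock_gettime(CLOCK_REALTIME, &deadline); /* now ...           */
        deadline.tv_nsec += timeout_ns;           /* ... plus the wait */
        deadline.tv_sec  += deadline.tv_nsec / 1000000000L;
        deadline.tv_nsec %= 1000000000L;

        /* Returns ETIMEDOUT only if the deadline passes before a signal. */
        return pthread_cond_timedwait(cond, mutex, &deadline);
    }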
This lets the decoder make decisions based on whether it is a hardware decoder or not. Specifically, hardware decoders are more strict as to which frames can be dropped in an h264 stream.