This also adds the ability to detect whether the output stopped due to
lack of disk space, which is particularly useful for the FFmpeg output
given its support for lossless file formats.
For the FFmpeg output, the encoder ids are superfluous and really
should be optional. If they're not set, the encoder name string should
be used to determine the ids automatically.
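A minimal sketch of the idea, assuming the name strings match FFmpeg's
own encoder names such as "libx264" (the helper name is made up):

    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    /* Resolve a codec id from an encoder name string. */
    static enum AVCodecID id_from_encoder_name(const char *name)
    {
        const AVCodec *codec = avcodec_find_encoder_by_name(name);
        return codec ? codec->id : AV_CODEC_ID_NONE;
    }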
It seems that certain encoders (Quicksync) do not have proper back-end
support in the Windows Media Foundation libraries for certain CPUs.
Quicksync doesn't appear to be supported on CPUs below Haswell (4xxx).
It's really annoying, but there's not much we can do about it until we
write our own Quicksync implementation.
This check simply attempts to spawn a test encoder to verify that the
encoder can actually be created before registering it, as sketched
below.
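Roughly like this (TryCreateEncoder() stands in for the real Media
Foundation probe; the actual plugin internals may differ):

    #include <obs-module.h>

    extern bool TryCreateEncoder(const char *id); /* hypothetical probe */

    static void RegisterEncoderIfUsable(struct obs_encoder_info *info)
    {
        /* Spawn a throwaway instance; only register on success. */
        if (TryCreateEncoder(info->id))
            obs_register_encoder(info);
        else
            blog(LOG_INFO, "Encoder '%s' not supported on this system",
                    info->id);
    }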
The previous commit (672378d20) was supposed to fix issues with the
encoder releasing while data was still being processed, but did not
account for the case where the encoder had never started up. That was
my fault.
Furthermore, the way in which it was waiting to drain events was
incorrect. The encoder may still be active even though there aren't any
events queued. The proper way to wait for an async encoder to finish up
is to process output samples until it requests more input samples.
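A rough sketch of that wait for an asynchronous MFT (error handling
omitted; ProcessOutput() is a hypothetical helper that pulls one
encoded sample):

    #include <mfapi.h>
    #include <mftransform.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    extern void ProcessOutput(); /* hypothetical helper */

    void DrainUntilIdle(ComPtr<IMFMediaEventGenerator> events)
    {
        while (true) {
            ComPtr<IMFMediaEvent> event;
            if (FAILED(events->GetEvent(0, &event)))
                break;

            MediaEventType type;
            event->GetType(&type);

            if (type == METransformNeedInput)
                break; /* encoder is idle again */
            else if (type == METransformHaveOutput)
                ProcessOutput();
        }
    }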
After I made it so that the encoder's internal data gets destroyed when
all outputs stop using it (fa7286f8), the Media Foundation H264 encoder
started crashing on shutdown. After a lot of testing, I realized that
it almost assuredly started happening because active encoding events
had not yet completed.
After making it wait on those events by calling DrainEvents(true), the
crashes stopped. So asynchronous actions were clearly still occurring
and it was shutting down while data was still being processed, thus
leading to a crash.
Adds a VideoToolbox-based H264 encoder for OS X, which most notably
allows the use of hardware encoding (Quicksync).
NOTES:
- Hardware encoding is handled internally by Apple itself. The plugin
has little control over many details because of the way Apple designed
the VideoToolbox interface. Generally, however, Quicksync is used if
available on the CPU, and Quicksync is almost always available because
Macs are exclusively Intel.
- The VideoToolbox does not seem to implement CBR, so it won't be
available. These encoders are generally not recommended for
streaming.
Implements hardware encoders through the Media Foundation interface
provided by Microsoft.
Supports:
- Quicksync (Intel)
- VCE (AMD)
- NVENC (NVIDIA, might only be supported through MF on Windows 10)
Notes:
- NVENC and VCE do not appear to have proper CBR implementations. This
isn't a fault of our code, but of the Media Foundation libraries.
Quicksync, however, appears to be fine.
API changed from:
obs_source_info::get_name(void)
obs_output_info::get_name(void)
obs_encoder_info::get_name(void)
obs_service_info::get_name(void)
API changed to:
obs_source_info::get_name(void *type_data)
obs_output_info::get_name(void *type_data)
obs_encoder_info::get_name(void *type_data)
obs_service_info::get_name(void *type_data)
This allows the type data to be used when getting the name of the
object (useful for plugin wrappers primarily).
NOTE: Though a parameter was added, this is backward-compatible with
older plugins due to the calling convention: the new parameter will
simply be ignored by older plugins, and the stack (if used) is cleaned
up by the caller.
This allows private data to be associated with the object type
definition structures. That private data can be useful for things like
plugin wrappers for other languages, or for providing dynamically
generated object types.
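A minimal sketch of both changes together (names hypothetical): a
wrapper plugin stores its own data in type_data and reads it back in
the new get_name callback.

    #include <obs-module.h>

    struct wrapper_type {
        const char *display_name;
        /* ...language-binding state, etc... */
    };

    static const char *wrapper_get_name(void *type_data)
    {
        return static_cast<wrapper_type *>(type_data)->display_name;
    }

    static void register_wrapped_source()
    {
        static struct obs_source_info info = {};
        info.id        = "my_wrapped_source";
        info.type      = OBS_SOURCE_TYPE_INPUT;
        info.get_name  = wrapper_get_name;
        info.type_data = new wrapper_type{"My Wrapped Source"};
        obs_register_source(&info);
    }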
Selecting any supported FFmpeg format for which
ff_format_desc_extensions returns NULL would crash in the std::string
constructor, so we pass an empty extension string instead (rtsp is one
format that triggers the crash, on my machine at least).
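The guard amounts to something like this (sketch):

    const char *ext = ff_format_desc_extensions(desc);
    std::string extensions = ext ? ext : ""; /* avoid std::string(nullptr) */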
This is used by some muxers that set AVFMT_NOFILE and doesn't seem to
hurt muxers that don't set it; notably, this makes the hls muxer output
its m3u8 playlist with the proper filename in the proper directory.
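For illustration (an assumption about the mechanism, not a quote from
the change): supplying the target path when allocating the output
context is what lets NOFILE muxers derive their own output names.

    extern "C" {
    #include <libavformat/avformat.h>
    }

    AVFormatContext *open_hls_context(const char *path)
    {
        AVFormatContext *context = nullptr;
        /* e.g. path = "/recordings/stream.m3u8" */
        avformat_alloc_output_context2(&context, nullptr, "hls", path);
        return context;
    }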
Cleaning up my previous commit a bit: we can just keep the appropriate
BMDPixelFormat as a data member and keep StartCapture() a bit clearer.
This might also be helpful if (when?) the detection code needs to be
more robust or configurable.
Detect the device type when initializing the device instance and
determine whether to capture YUV or RGB. Tested with a Blackmagic
Intensity Pro and a Blackmagic Intensity Pro 4K in the same machine,
capturing at the same time, on Linux.
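A hedged sketch of the shape of it (DetectsRGB() is a stand-in for the
real device-type check, which isn't shown here):

    #include <DeckLinkAPI.h>

    extern bool DetectsRGB(IDeckLink *device); /* hypothetical probe */

    class DeckLinkDeviceInstance {
        BMDPixelFormat pixelFormat = bmdFormat8BitYUV;

    public:
        void Init(IDeckLink *device)
        {
            if (DetectsRGB(device))
                pixelFormat = bmdFormat8BitBGRA;
        }

        bool StartCapture(IDeckLinkInput *input, BMDDisplayMode mode)
        {
            return SUCCEEDED(input->EnableVideoInput(mode,
                    pixelFormat, bmdVideoInputFlagDefault));
        }
    };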
This prevents encoders (hardware encoders in particular) from remaining
continually active when all outputs disconnect from an encoder. This is
mostly just a temporary measure; the encoding interface may need a bit
of a redesign. It will also definitely need to be able to flush at some
point. Currently, when an output is stopped, the pending data is
discarded, which needs to be fixed.
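Conceptually (a sketch; the field and helper names are hypothetical,
not the actual libobs internals):

    #include <util/threading.h>

    struct obs_encoder {
        volatile long active_refs; /* hypothetical field */
        /* ... */
    };

    extern void shutdown_encoder_internals(struct obs_encoder *encoder);

    /* Called whenever an output detaches from the encoder. */
    static void remove_connection(struct obs_encoder *encoder)
    {
        /* Tear the encoder down once the last output detaches. */
        if (os_atomic_dec_long(&encoder->active_refs) == 0)
            shutdown_encoder_internals(encoder);
    }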
Allows objects to be created regardless of whether the actual id
exists or not. This is a precaution that preserves objects/settings if
the id was removed for whatever reason (plugin removed, or hardware
encoder that disappeared). This was already added for sources, but
really needs to be added for other libobs objects as well: outputs,
encoders, and services.
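Following the pattern used for sources, creation ends up looking
roughly like this (simplified; find_encoder() and create_encoder()
stand in for the internal helpers):

    #include <obs.h>

    obs_encoder_t *obs_video_encoder_create(const char *id,
            const char *name, obs_data_t *settings,
            obs_data_t *hotkey_data)
    {
        const struct obs_encoder_info *info = find_encoder(id);

        /* A NULL info no longer aborts creation: the object keeps the
         * original id and settings so they can be saved back out
         * intact if the id becomes available again. */
        return create_encoder(id, OBS_ENCODER_VIDEO, name, settings,
                hotkey_data, info);
    }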
If the FFMPEG_AVCODEC_LIBRARIES variable does not exist, it will
generate a CMake error, so check that the variable exists before
executing this code.
These functions prevent the computer from going to sleep, hibernating,
or starting up a screen saver.
On Linux, it will also attempt to use DBus to prevent GNOME/KDE/etc.
sleep, but DBus is not required in order to compile the library.
Without it, the code will simply call "xdg-screensaver reset" once
every 30 seconds to reset the screensaver timer.
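Usage looks like this (assuming the interface added here follows
libobs' usual os_* naming):

    #include <util/platform.h>

    void inhibit_during_stream(void)
    {
        os_inhibit_t *inhibitor =
                os_inhibit_sleep_create("Streaming active");

        os_inhibit_sleep_set(inhibitor, true);  /* block sleep */
        /* ... streaming/recording ... */
        os_inhibit_sleep_set(inhibitor, false); /* allow sleep again */

        os_inhibit_sleep_destroy(inhibitor);
    }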
The DBus library is a message bus system used to make applications
communicate with each other. The primary reason for adding it is to
access certain service features to prevent computer sleep/hibernate/etc.
This will also create a HAVE_DBUS variable (set to 1 if found, or 0 if
not found).
The user may not want their audio or their display to be captured when
creating a new scene collection. Make new scene collections default to
fully empty.
For the sake of consistency, always create a display capture source on
the very first run of the program, just to have something displayed.
(NOTE: The only exception here is on Windows 7/Vista, which isn't ideal
for display capture, so it'll continue to be left blank.)
CBR is now always on by default for streaming, so there's no reason to
have a setting for this in particular. Still available in advanced
output settings of course, but simple output mode really should be kept
as simple as possible.
This fixes an issue where, if an output canceled reconnecting, the
reconnect flag was left at true, causing obs_output_active to always
return true even though reconnecting had actually been canceled.
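In other words (a sketch of the logic, not the literal code):

    /* Why a stale flag matters: an output counts as active while it
     * is either running or reconnecting. */
    bool obs_output_active(const obs_output_t *output)
    {
        return output->active || output->reconnecting;
    }

    /* The fix: clear the flag when the reconnect is canceled. */
    static void cancel_reconnect(struct obs_output *output)
    {
        output->reconnecting = false; /* previously left true */
    }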
This is mostly just to remove the unnecessary clutter from the output
sections. The reconnect settings are rarely modified by users as it
is.
When stream delay is active, the "Start/Stop Streaming" button is
changed into a menu button, which allows the user to select either the
option to stop the stream (which causes it to count down), or to
forcibly stop the stream (which immediately stops the stream and cuts
off all delayed data).
If the user decides they want to start the stream again while it's in
the process of counting down, they can safely do so without having to
wait for it to stop, and the stream will be scheduled to start again
with the same delay after the stop completes.
On the status bar, it will now show whether delay is active, and its
duration. If the stream is in the process of stopping/starting, it will
count down to the stop/start.
If the option to preserve stream cutoff point on unexpected
disconnections/reconnections is enabled, it will update the current
delay duration accordingly.
I added stream delay options to advanced settings not just because I
feel it's an advanced option, but also to reduce clutter in the outputs
section and its sub-sections, which already have far too many options as
it is.
This feature allows a user to delay an output (as long as the output
itself supports it). Needless to say, this is intended for live
streams, where users may want to delay their streams to prevent stream
sniping, cheating, and other such things.
The design this time is a bit more elaborate, but still simple: the
user can now schedule stops/starts without having to wait for the
stream itself to stop before being able to take any action. Optionally,
they can also forcibly stop the stream (and delay) in case something
happens which they might not want to be streamed.
Additionally, a new option was added to preserve the stream cutoff
point on disconnections/reconnections, so that if you get disconnected
while streaming, the stream will reconnect right at the point where it
left off. This will probably be quite useful for a number of
applications in addition to regular delay, such as setting the delay to
1 second and then using this feature to keep a critical stream (a
tournament stream, for example) from getting any of its data cut off.
However, using this feature will of course cause the stream data to
buffer and increase delay (and memory usage) while it's in the process
of reconnecting.
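From a frontend's point of view, driving the feature looks roughly
like this (assuming the libobs delay API added alongside this work):

    #include <obs.h>

    static void start_stream_with_delay(obs_output_t *output)
    {
        /* 20-second delay; preserve the cutoff point on reconnect */
        obs_output_set_delay(output, 20, OBS_OUTPUT_DELAY_PRESERVE);
        obs_output_start(output);

        /* obs_output_get_active_delay() reports how much data is
         * currently buffered, which drives the UI countdown. */
    }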