This also adds the ability to detect whether it stopped due to lack of
space or not -- particularly useful for the FFmpeg output because of
its lossless file format support.
For the FFmpeg output, the encoder IDs are somewhat superfluous; they
really should be optional. If they're not set, the output should use
the encoder name string instead to determine the IDs automatically.
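Something along these lines would do it (untested sketch; get_codec_id
and its parameters are made up for illustration, while
avcodec_find_encoder_by_name is the actual FFmpeg lookup):

    /* Sketch: fall back to resolving the codec id from the encoder name
     * string when no explicit id was configured. */
    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    static AVCodecID get_codec_id(const char *enc_name, AVCodecID set_id)
    {
        if (set_id != AV_CODEC_ID_NONE)
            return set_id;

        const AVCodec *codec = avcodec_find_encoder_by_name(enc_name);
        return codec ? codec->id : AV_CODEC_ID_NONE;
    }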
It seems that certain encoders (Quicksync) do not have proper back-end
support in the Windows Media Foundation libraries for certain CPUs.
Quicksync doesn't appear to support CPUs below Haswell (4xxx). It's
really annoying, but there's not much we can do about it until we
implement our own custom Quicksync implementation.
This check simply attempts to spawn an encoder to see whether the
encoder can actually be created before registering it.
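In other words, something like this (sketch; create_test_encoder and
destroy_test_encoder are placeholders for whatever actually
instantiates the MF transform, while obs_register_encoder is the real
libobs call):

    /* Sketch: only register the encoder type if a test instance can
     * actually be created on this machine first. */
    #include <obs-module.h>

    extern void *create_test_encoder(const char *id);  /* placeholder */
    extern void destroy_test_encoder(void *encoder);   /* placeholder */

    static bool can_create_encoder(const char *id)
    {
        void *test = create_test_encoder(id);
        if (!test)
            return false;

        destroy_test_encoder(test);
        return true;
    }

    static void register_if_available(struct obs_encoder_info *info)
    {
        if (can_create_encoder(info->id))
            obs_register_encoder(info);
    }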
The previous commit (672378d20) was supposed to fix issues with the
encoder releasing while data was still being processed, but did not
account for when the encoder has never started up. That was my fault.
Furthermore, the way in which it was waiting to drain events was
incorrect. The encoder may still be active even though there aren't any
events queued. The proper way to wait for an async encoder to finish up
is to process output samples until it requests more input samples.
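Roughly, that wait looks like this (simplified sketch of the async MFT
event loop; error handling is omitted and the real code keeps the
returned samples rather than just releasing them):

    /* Sketch: keep servicing transform events and pulling encoded
     * output until the encoder signals that it wants more input, i.e.
     * it is idle. */
    #include <windows.h>
    #include <mfapi.h>
    #include <mftransform.h>

    static void wait_until_idle(IMFTransform *transform,
                                IMFMediaEventGenerator *events)
    {
        for (;;) {
            IMFMediaEvent *event = nullptr;
            if (FAILED(events->GetEvent(0, &event)))
                break;

            MediaEventType type;
            event->GetType(&type);
            event->Release();

            if (type == METransformNeedInput)
                break;

            if (type == METransformHaveOutput) {
                MFT_OUTPUT_DATA_BUFFER output = {};
                DWORD status = 0;
                transform->ProcessOutput(0, 1, &output, &status);
                if (output.pSample)
                    output.pSample->Release();
                if (output.pEvents)
                    output.pEvents->Release();
            }
        }
    }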
After I made it so that the encoder internal data gets destroyed when
all outputs stop using it (fa7286f8), the Media Foundation H264 encoder
started having crashes on shutdown. After a lot of testing, I realized
that the reason it started happening is almost assuredly because active
encoding events had not yet been completed.
After making it wait on those events by calling DrainEvents(true), the
crashes stopped. So asynchronous actions were clearly still occurring
and it was shutting down while data was still being processed, thus
leading to a crash.
Adds a VideoToolbox-based H264 encoder for OS X, which most notably
allows the use of hardware encoding (Quicksync).
NOTES:
- Hardware encoding is handled internally by Apple. The plugin has
little control over many details due to the way that Apple designed
the VideoToolbox interface. Generally, however, Quicksync is used if
available on the CPU, and it is almost always available because Macs
are exclusively Intel.
- The VideoToolbox does not seem to implement CBR, so it won't be
available. These encoders are generally not recommended for
streaming.
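For reference, the core of the session setup looks roughly like this
(sketch; only the hardware-acceleration request is shown, and the real
encoder configures many more properties such as bitrate and profile):

    /* Sketch: create an H264 compression session and ask VideoToolbox
     * to use a hardware encoder if one is available; Apple decides
     * internally whether that ends up being Quicksync. */
    #include <VideoToolbox/VideoToolbox.h>

    static VTCompressionSessionRef create_session(int32_t width,
            int32_t height, VTCompressionOutputCallback callback,
            void *opaque)
    {
        const void *keys[] = {
            kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder,
        };
        const void *values[] = {kCFBooleanTrue};
        CFDictionaryRef spec = CFDictionaryCreate(NULL, keys, values, 1,
                &kCFTypeDictionaryKeyCallBacks,
                &kCFTypeDictionaryValueCallBacks);

        VTCompressionSessionRef session = NULL;
        OSStatus stat = VTCompressionSessionCreate(NULL, width, height,
                kCMVideoCodecType_H264, spec, NULL, NULL, callback,
                opaque, &session);

        CFRelease(spec);
        return stat == noErr ? session : NULL;
    }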
Implements hardware encoders through the Media Foundation interface
provided by Microsoft.
Supports:
- Quicksync (Intel)
- VCE (AMD)
- NVENC (NVIDIA, might only be supported through MF on Windows 10)
Notes:
- NVENC and VCE do not appear to have proper CBR implementations. This
isn't a fault of our code, but the Media Foundation libraries.
Quicksync however appears to be fine.
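The hardware encoders are found through the normal MFT enumeration;
conceptually the lookup is something like this (sketch, error handling
omitted):

    /* Sketch: enumerate hardware H264 encoder MFTs; Quicksync, VCE and
     * NVENC show up here when the drivers register them. */
    #include <windows.h>
    #include <mfapi.h>

    static void list_hardware_h264_encoders(void)
    {
        MFT_REGISTER_TYPE_INFO output = {MFMediaType_Video,
                                         MFVideoFormat_H264};
        IMFActivate **activate = NULL;
        UINT32 count = 0;

        MFTEnumEx(MFT_CATEGORY_VIDEO_ENCODER,
                  MFT_ENUM_FLAG_HARDWARE | MFT_ENUM_FLAG_SORTANDFILTER,
                  NULL, &output, &activate, &count);

        for (UINT32 i = 0; i < count; i++)
            activate[i]->Release();
        CoTaskMemFree(activate);
    }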
API changed from:
obs_source_info::get_name(void)
obs_output_info::get_name(void)
obs_encoder_info::get_name(void)
obs_service_info::get_name(void)
API changed to:
obs_source_info::get_name(void *type_data)
obs_output_info::get_name(void *type_data)
obs_encoder_info::get_name(void *type_data)
obs_service_info::get_name(void *type_data)
This allows the type data to be used when getting the name of the
object (useful for plugin wrappers primarily).
NOTE: Though a parameter was added, this is backward-compatible with
older plugins due to calling convention. The new parameter will simply
be ignored by older plugins, and the stack (if used) will be cleaned up
by the caller.
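For example, a wrapper plugin that registers several source types from
one set of callbacks can now pull the display name out of type_data
(sketch; struct wrapper_info is made up, and type_data is whatever was
set on the obs_source_info at registration):

    /* Sketch: derive the display name from per-type data instead of
     * hard-coding one name per callback set. */
    struct wrapper_info {
        const char *display_name; /* hypothetical per-type data */
    };

    static const char *wrapper_get_name(void *type_data)
    {
        struct wrapper_info *wrapper = (struct wrapper_info *)type_data;
        return wrapper ? wrapper->display_name : "Unknown";
    }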
This is used by some muxers that set AVFMT_NOFILE and doesn't seem to
hurt muxers that don't set it; notably, this makes the hls muxer output
its m3u8 playlist with the proper filename in the proper directory.
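The relevant pattern is roughly this (sketch against the
AVFormatContext::filename field; the point is that the path is always
handed to the context, while avio_open is only called for muxers that
don't do their own I/O):

    /* Sketch: give the muxer the output path regardless, but only open
     * our own AVIO context when AVFMT_NOFILE is not set. */
    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavutil/avstring.h>
    }

    static bool open_output(AVFormatContext *oc, const char *path)
    {
        av_strlcpy(oc->filename, path, sizeof(oc->filename));

        if (!(oc->oformat->flags & AVFMT_NOFILE)) {
            if (avio_open(&oc->pb, path, AVIO_FLAG_WRITE) < 0)
                return false;
        }

        return true;
    }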
Cleaning up my previous commit a bit. We can just keep the
appropriate BMDPixelFormat as a data member and keep StartCapture() a
bit clearer.
This might also be helpful if (when?) the detection code needs to be
more robust or configurable.
Detect the device type when initializing the device instance and
determine whether to capture YUV or RGB. Tested with a Blackmagic
Intensity Pro and a Blackmagic Intensity Pro 4K in the same machine,
capturing at the same time, on Linux.
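Conceptually it boils down to something like this (sketch; the class,
member and helper names are placeholders for the real device-instance
code, and bmdFormat8BitBGRA stands in for whatever RGB format the
detection settles on):

    /* Sketch: decide the pixel format once at init time, keep it as a
     * data member, and let StartCapture() just pass it along. */
    #include <DeckLinkAPI.h>

    class DeviceInstance {                       /* placeholder class */
        BMDPixelFormat pixelFormat = bmdFormat8BitYUV;

    public:
        void DetectFormat(bool deviceIsRGB)      /* placeholder check */
        {
            pixelFormat = deviceIsRGB ? bmdFormat8BitBGRA
                                      : bmdFormat8BitYUV;
        }

        void StartCapture(IDeckLinkInput *input, BMDDisplayMode mode)
        {
            input->EnableVideoInput(mode, pixelFormat,
                                    bmdVideoInputFlagDefault);
            input->StartStreams();
        }
    };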
In both cases the cur_level calculations were "wrong". For the
one-channel case, I assume that was only an oversight; for the
two-channel case, getting the level by downmixing to mono results in a
more attenuated level than expected. One solution is to use the
highest level of the two channels to drive the gate.
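In other words, roughly (sketch; this only shows the level
calculation, the attack/release handling of the gate stays as it is):

    /* Sketch: drive the gate with the louder of the two channels
     * rather than with a mono downmix, which comes out attenuated. */
    #include <math.h>
    #include <stddef.h>

    static float gate_level_stereo(const float *left, const float *right,
                                   size_t frames)
    {
        float cur_level = 0.0f;

        for (size_t i = 0; i < frames; i++) {
            float sample = fmaxf(fabsf(left[i]), fabsf(right[i]));
            if (sample > cur_level)
                cur_level = sample;
        }

        return cur_level;
    }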
..This is rather embarrassing. I used the parameter variable, and the
actual variable that I wanted to use went completely unused. Would
static analysis catch something like this, I wonder? Would probably
have to be really good static analysis.
YouTube Gaming went live today (26 August 2015) and people will ask
for it.
This makes it a bit clearer that YouTube and YouTube Gaming
(which share the same ingestion system) work with OBS MP.
This will use the services.json file present in the cache; if it has
the wrong format version or is corrupted for whatever reason, it uses
the local version instead.
Also a minor refactor: it makes it so that you call the
open_services_file function to get the services array, rather than
having to get the file name each time.
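Roughly the idea, using the libobs obs_data JSON helpers (sketch; the
format-version constant is a placeholder, and returning the whole data
object rather than just the services array is a simplification):

    /* Sketch: prefer the cached services.json; fall back to the copy
     * shipped with the plugin if the cache is missing, unreadable, or
     * has the wrong format version. */
    #include <obs-module.h>
    #include <util/bmem.h>

    #define SERVICES_FORMAT_VERSION 1 /* placeholder value */

    static obs_data_t *open_services_file(void)
    {
        char *cache_path = obs_module_config_path("services.json");
        obs_data_t *data = obs_data_create_from_json_file(cache_path);
        bfree(cache_path);

        bool bad_format = data &&
                obs_data_get_int(data, "format_version") !=
                        SERVICES_FORMAT_VERSION;

        if (!data || bad_format) {
            if (data)
                obs_data_release(data);

            char *local_path = obs_module_file("services.json");
            data = obs_data_create_from_json_file(local_path);
            bfree(local_path);
        }

        return data;
    }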
This reverts commit 74354dc4cf. I really
shouldn't have modified this, especially not in this way. Was the wrong
approach. The thing I was trying to fix was very rare as well.
When a window being captured is closed, it never tries to reacquire.
This just searches for the window in video_tick and reacquires if the
currently set window is found again.
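Something along these lines (sketch; the struct, the check_interval
value and the find_window() lookup are placeholders for the existing
window-capture internals):

    /* Sketch: once the captured window handle goes away, periodically
     * look for the same window again from video_tick and reacquire. */
    #include <windows.h>

    struct win_capture {              /* placeholder capture data */
        HWND window;
        float check_interval;
        float time_since_check;
    };

    extern HWND find_window(struct win_capture *wc); /* placeholder */

    static void video_tick(struct win_capture *wc, float seconds)
    {
        if (wc->window && !IsWindow(wc->window))
            wc->window = NULL;

        if (wc->window)
            return;

        wc->time_since_check += seconds;
        if (wc->time_since_check < wc->check_interval)
            return;

        wc->time_since_check = 0.0f;
        wc->window = find_window(wc);
        /* if found, capture of the reacquired window is restarted */
    }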
Closes jp9000/obs-studio#465
I made the rather tough call of not showing all services by default; I
didn't want to have to do this, but too many services are asking to be
put in to the program, and any time I add a service in to the list, I
feel uncomfortable because I feel like I'm potentially advertising them,
and/or they're using our program to advertise as well. Some of these
services are particularly bad at policing illegal/copyrighted content,
host content that I personally find distasteful or incredibly stupid
(what the heck is up with these "vaping" streams?), or are just fairly
terrible websites in general that I just feel uncomfortable with showing
by default.
However, I do not really want to reject anyone either, I want to let
their users be able to use our program with relative ease, but more than
anything I just simply don't want to be seen as "endorsing" some of
these websites (more than others in particular). I know that a "show
all services" checkbox is probably a pretty pointless/superfluous thing to
do, but I feel like it's at the very least a means of saying "hey, I
don't really endorse these guys," or "use at your own risk," or
"warning: this website is incredibly terrible."
Honestly, I couldn't really think of any better solution that would
a.) still list all services without outright censoring them, and
b.) prevent us from being seen as "endorsing" all services.
(Although maybe this whole thing feels a bit.. passive aggressive. I
feel like I'm tipping over someone's garden gnome in the middle of the
night while they're sleeping. Still, it's something.)
NOTE: This code is backward compatible; i.e., if you previously had a
service selected that's not common but don't have the "show all"
checkbox checked, it'll still show that service for convenience.
Services almost always recommend this be enabled, and I generally want
to make configuration easier for users; with CBR they don't have to set
things like the CRF value.