This adds the ability to output the device's audio as desktop audio
(via the WaveOut or DirectSound audio renderers) instead of only
capturing it.
In the future, we'll implement audio monitoring, which will make this
feature obsolete, but for the time being I decided to add this option
as a temporary measure so that users can play the audio from their
devices via the DirectShow output.
API Changed (in struct obs_encoder_info):
----------------------------------------
bool (*get_audio_info)(void *data, struct audio_convert_info *info);
bool (*get_video_info)(void *data, struct video_scale_info *info);
To:
----------------------------------------
void (*get_audio_info)(void *data, struct audio_convert_info *info);
void (*get_video_info)(void *data, struct video_scale_info *info);
The encoder video/audio information callbacks no longer need to
manually query the libobs video/audio information; that information is
now passed in via the parameter, which the callbacks can modify.
Because this refactor removes the boilerplate from those callbacks,
their return values are no longer needed, so the return types have
been changed to void.
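As a minimal sketch, a callback under the new signature might look
like this, assuming a hypothetical encoder that only accepts NV12
input (the function name and the NV12 requirement are invented for
illustration):

#include <obs-module.h>

static void my_encoder_get_video_info(void *data,
                struct video_scale_info *info)
{
        /* 'info' arrives pre-filled with the libobs video
         * information; the callback only overrides what the encoder
         * requires and returns nothing */
        UNUSED_PARAMETER(data);
        info->format = VIDEO_FORMAT_NV12;
}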
I'm adding this option because there are legitimate cases where a
device may flip the output unexpectedly (such as the Datapath
VisionDVI-DL running in RGB video format), and a user may want to be
able to view the source in a projector or in the source properties
without the image being inverted.
My original line of thinking was that users could just apply a
transform to flip the image, but this problem impacts rendering
everywhere, such as in the projector and in the source properties, so
having it as an option in the source itself feels like the best way to
ensure that a user can get it to render properly everywhere.
Changes:
- Prevent concurrent calls to EnumDevices, which resolves a crash with
  some device filters (such as the XCAPTURE-1) when multiple dshow
  sources are active (see the sketch below)
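A minimal sketch of that serialization, assuming a hypothetical
wrapper around the enumeration (the actual plugin code differs; only
the locking pattern is the point):

#include <mutex>

static std::mutex enumDevicesMutex;

static void EnumDevices(/* ... */)
{
        /* Only one thread may enumerate at a time; some device
         * filters (like the XCAPTURE-1) crash when enumerated from
         * multiple dshow sources concurrently */
        std::lock_guard<std::mutex> lock(enumDevicesMutex);
        /* ... enumerate the DirectShow device filters ... */
}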
Adds the following changes:
- Prioritize YUV formats over non-YUV formats for performance and to
  prevent intermediary filters (sketched below)
- Directly connect filters when possible to avoid intermediary filters
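A sketch of the first change, using invented types (libdshowcapture's
actual structures differ):

#include <algorithm>
#include <vector>

enum class VideoFormat { RGB24, ARGB, YUY2, NV12, I420 };

static bool IsYUV(VideoFormat f)
{
        return f == VideoFormat::YUY2 || f == VideoFormat::NV12 ||
               f == VideoFormat::I420;
}

/* Stable-partition the device's capabilities so YUV formats are tried
 * first, avoiding the insertion of conversion filters */
static void PrioritizeYUV(std::vector<VideoFormat> &caps)
{
        std::stable_sort(caps.begin(), caps.end(),
                        [](VideoFormat a, VideoFormat b) {
                                return IsYUV(a) && !IsYUV(b);
                        });
}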
If the settings are reset to defaults, or if the settings are just
bad, the video would get stuck on the last frame that was displayed,
which feels a bit awkward. It's better to stop video output entirely
than to freeze on the last video frame.
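A sketch of one way to do that, assuming libobs' handling of a NULL
frame as "this source currently has no video" (names and placement are
illustrative, not the actual change):

#include <obs.h>

static void clear_video(obs_source_t *source)
{
        /* a NULL frame clears the last displayed frame instead of
         * leaving it frozen on screen */
        obs_source_output_video(source, NULL);
}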
Martell changed this function without realizing that it was calling a
function below it, not recursively calling itself. The reason he got
the warning was that there was no forward declaration of the function
being called; I think he's used to C, where only one function
definition can exist with a given name. In this case it was another
function with the same name but different parameters, which is
permitted in C++. I wish I had realized this sooner.
This fixes the crashes people have been having with devices.
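A hypothetical illustration of the situation (names invented): without
the forward declaration, the call in the double overload binds to
itself through an implicit int-to-double conversion, producing exactly
this kind of recursion warning:

static void Clamp(int v);       /* the missing forward declaration */

static void Clamp(double v)
{
        Clamp((int)v);          /* calls the overload below, not itself */
}

static void Clamp(int v)
{
        /* ... */
}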
Remove the .lib postfix from strmiids.
ksuser provides KSCATEGORY_ENCODER and similar GUIDs that are used.
wmcodecdspuuid provides MEDIASUBTYPE_H264, MEDIASUBTYPE_RAW_AAC1, and
MEDIASUBTYPE_I420, so there is no need to define them in dshow-formats.
The submodule will have to be updated to support this change.
Previously, a DirectShow hardware encoder would get 'stuck' and
couldn't be recreated due to a strange issue with the filter graph not
properly shutting down the encoder. This meant a user could only use
the encoder once, and it wouldn't work any time it was initialized
again. dshowcapture version 0.4.2 ensures that the encoder can restart
properly by manually shutting down the filter graph.
The DirectShow input source would always turn on at first use, whether
the user wanted it to or not. Having an activate/deactivate option is
a really nice thing to have, and makes configuration feel a little bit
less awkward.
Allow the user to select whether to buffer the source or not. The
settings are Auto-Detect, On, and Off. Auto-Detect turns buffering off
for non-encoded devices and on for encoded devices. Webcams, internal
devices, and other such devices on Windows do not really need to be
buffered, and buffering incurs a small delay, so turning buffering off
is actually a little better for non-encoded devices.
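A minimal sketch of that Auto-Detect rule (the enum and helper are
hypothetical):

enum class BufferingType { Auto, On, Off };

static bool UseBuffering(BufferingType setting, bool encodedDevice)
{
        switch (setting) {
        case BufferingType::On:
                return true;
        case BufferingType::Off:
                return false;
        default:
                /* Auto-Detect: buffer encoded devices, leave webcams
                 * and other raw devices unbuffered */
                return encodedDevice;
        }
}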
This adds support for the AverMedia C985 encoder (which is available on
C985 capture cards) as well as the C353 hardware encoder (which is
currently available on the X99S Gaming 9 motherboards).
These encoders have some limitations: the supported resolutions are
limited (1280x720 and 1024x768), the max GOP size is 30, and the
encoder only accepts YV12 input, which requires conversion if the
current output format isn't the same. The C985 and C353 encoders seem
to be pretty much identical, although the C353 appears to encode a bit
more efficiently.
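A sketch of checking those limits (a hypothetical helper, not the
actual SDK validation):

#include <cstdint>

static bool C985SettingsValid(uint32_t width, uint32_t height,
                uint32_t gopSize)
{
        /* only 1280x720 and 1024x768 are accepted */
        bool validRes = (width == 1280 && height == 720) ||
                        (width == 1024 && height == 768);

        /* GOP size is capped at 30; input must also be YV12,
         * converted beforehand if the output format differs */
        return validRes && gopSize <= 30;
}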
I don't believe these are really suitable for streaming, as they lack
the encoding efficiency needed to stream at lower bitrates and seem to
only support variable bitrate. For recording, however, these encoders
work quite well and are nice to have available.
The main module code was originally all packed into the win-dshow.cpp
file, which isn't exactly ideal or clean if one wants to add other
things to the module as a whole.
Previously, due to a bug in libdshowcapture, the NV12 format was
actually being used for YV12 erroneously, and no actual support for YV12
existed. This fixes the bug with NV12 and adds support for YV12.
This reverts commit c3f4b0f018.
The obs_source_frame should not need to take flags to do this. This
shouldn't be a setting associated with the frame, but rather a setting
associated with the source itself. This was the wrong approach to
solving this particular problem.
Add a 'flags' member variable to the obs_source_frame structure.
The OBS_VIDEO_UNBUFFERED flag causes the video to play back as soon as
it's received (in the next frame playback), disregarding the frame's
timestamp value for the sake of video playback (note, however, that
the video timestamp is still used for audio synchronization if audio
is present on the source as well).
This is partly a convenience feature, and partly a necessity for certain
plugins (such as the linux v4l plugin) where timestamp information for
the video frames can sometimes be unreliable.
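As a sketch of how a plugin would have used this (since reverted)
mechanism, assuming the 'flags' member described above;
obs_source_output_video is the usual async video output call:

#include <obs.h>

static void output_unbuffered(obs_source_t *source,
                struct obs_source_frame *frame)
{
        /* play back as soon as received; the timestamp is ignored
         * for video playback but still used for audio sync */
        frame->flags |= OBS_VIDEO_UNBUFFERED;
        obs_source_output_video(source, frame);
}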