The LGP issue is caused by the device drivers returning two or more
packets in a single segment of audio data. This fixes it by detecting
that and decoding subsequent packets.
When the FFmpeg audio decoder returns, it reports how many bytes of
data were decoded. To decode multiple packets in a single segment,
subtract the return value from the remaining segment size; if the
remainder is still larger than zero, there are more packets in the
segment to decode. Otherwise, stop.
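A minimal sketch of that loop, assuming the avcodec_decode_audio4()
API that FFmpeg provided at the time (the wrapper function and its
name are hypothetical):

    #include <libavcodec/avcodec.h>

    /* Decode every packet packed into a single segment of audio data.
     * A sketch only: context/frame lifetime handling is simplified. */
    static void decode_audio_segment(AVCodecContext *ctx, AVFrame *frame,
                                     uint8_t *data, int size)
    {
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = data;
        pkt.size = size;

        while (pkt.size > 0) {
            int got_frame = 0;

            /* the return value is the number of bytes consumed */
            int len = avcodec_decode_audio4(ctx, frame, &got_frame, &pkt);
            if (len < 0)
                break;

            if (got_frame) {
                /* ... output the decoded audio frame ... */
            }

            /* subtract consumed bytes from the expected size; anything
             * left means another packet sits in the same segment */
            pkt.data += len;
            pkt.size -= len;
        }
    }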
LGP devices induce anger in any sane developer because they're prone
to bad audio timestamps when their decoded data is used directly. For
that reason, add a hack that smooths the timestamps within a large
threshold to prevent audio skipping.
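A sketch of what such a smoothing hack can look like; the threshold
value, struct, and names here are hypothetical, not the plugin's
actual code:

    #include <stdint.h>

    /* hypothetical threshold (70ms in nanoseconds); 'large' on purpose */
    #define TS_SMOOTHING_THRESHOLD 70000000ULL

    struct audio_sync {
        uint64_t next_ts; /* where the next segment is expected to start */
    };

    static uint64_t smooth_ts(struct audio_sync *sync, uint64_t ts,
                              uint64_t duration)
    {
        uint64_t expected = sync->next_ts;
        uint64_t diff = ts > expected ? ts - expected : expected - ts;

        /* if the device timestamp is within the threshold of where the
         * audio should continue, ignore it and keep the smooth value */
        if (expected && diff < TS_SMOOTHING_THRESHOLD)
            ts = expected;

        sync->next_ts = ts + duration;
        return ts;
    }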
Useful for two purposes:
1.) When many devices are hooked up to the system and used in separate
scenes, but the user only wants one device active at a time
2.) Allows users who depend on outputting audio to desktop to disable
that audio (by deactivating the device) when the device isn't being
displayed
Certain types of sources (display captures, game captures, audio
device captures, video device captures) should not be duplicated. This
capability flag hints that the source prefers references over full
duplication.
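In practice a source declares this in its registration structure; the
sketch below assumes the flag is named OBS_SOURCE_DO_NOT_DUPLICATE,
and the source itself is made up:

    #include <obs-module.h>

    /* a hypothetical capture source opting out of duplication */
    struct obs_source_info my_display_capture = {
        .id           = "my_display_capture",
        .type         = OBS_SOURCE_TYPE_INPUT,
        .output_flags = OBS_SOURCE_VIDEO | OBS_SOURCE_DO_NOT_DUPLICATE,
        /* ... name/create/destroy/render callbacks ... */
    };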
API changed from:
obs_source_info::get_name(void)
obs_output_info::get_name(void)
obs_encoder_info::get_name(void)
obs_service_info::get_name(void)
API changed to:
obs_source_info::get_name(void *type_data)
obs_output_info::get_name(void *type_data)
obs_encoder_info::get_name(void *type_data)
obs_service_info::get_name(void *type_data)
This allows the type data to be used when getting the name of the
object (primarily useful for plugin wrappers).
NOTE: Though a parameter was added, this is backward-compatible with
older plugins due to the calling convention. The new parameter will
simply be ignored by older plugins, and the stack (if used) will be
cleaned up by the caller.
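A sketch of a callback written against the new signature (the source
name and localization string are hypothetical):

    #include <obs-module.h>

    static const char *my_source_get_name(void *type_data)
    {
        /* an ordinary plugin can simply ignore type_data; a plugin
         * wrapper would use it to look up the wrapped object's name */
        UNUSED_PARAMETER(type_data);
        return obs_module_text("MySource");
    }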
This adds the ability to output the device's audio as desktop audio
(via the WaveOut or DirectSound audio renderers) instead of only
capturing the audio.
In the future, we'll implement audio monitoring, which will make this
feature obsolete, but for the time being I decided to add this option
as a temporary measure so users can play the audio from their devices
via the DirectShow output.
API Changed (in struct obs_encoder_info):
----------------------------------------
bool (*get_audio_info)(void *data, struct audio_convert_info *info);
bool (*get_video_info)(void *data, struct video_scale_info *info);
To:
----------------------------------------
void (*get_audio_info)(void *data, struct audio_convert_info *info);
void (*get_video_info)(void *data, struct video_scale_info *info);
The encoder video/audio information callbacks no longer need to
manually query the libobs video/audio information; that information is
now passed in via the parameter, which the callbacks can modify.
The refactor that reduces boilerplate in these callbacks also removes
the need for their return values, so the return types change to void.
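A sketch of a video info callback under the new signature; the NV12
override is just an example of the kind of change an encoder might
make:

    #include <obs-module.h>

    static void my_encoder_get_video_info(void *data,
                                          struct video_scale_info *info)
    {
        /* 'info' arrives pre-filled with the current libobs video
         * information; override only what this encoder requires */
        UNUSED_PARAMETER(data);
        info->format = VIDEO_FORMAT_NV12;
    }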
I'm adding this option because there are legitimate cases where a
device may flip the output unexpectedly (such as the Datapath
VisionDVI-DL running in RGB video format), and a user may want to view
the source in a projector or in the source properties without the
image being inverted.
My original line of thinking was that users could just apply a
transform to flip the image, but this problem impacts rendering
everywhere, such as in the projector and in the source properties, so
having the option in the source itself feels like the best way to
ensure it renders properly everywhere.
Changes:
- Prevent concurrent calls to EnumDevices (resolves a crash with some
device filters, such as the XCAPTURE-1, when multiple dshow sources
are active; see the sketch below)
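A sketch of that serialization with a global lock (pthread is used
here for illustration; the wrapper and list type are hypothetical):

    #include <pthread.h>
    #include <stdbool.h>

    struct device_list; /* hypothetical */
    extern bool enum_devices(struct device_list *list);

    static pthread_mutex_t enum_mutex = PTHREAD_MUTEX_INITIALIZER;

    static bool enum_devices_safe(struct device_list *list)
    {
        bool success;

        /* some device filters crash when two sources enumerate at the
         * same time, so never allow concurrent EnumDevices calls */
        pthread_mutex_lock(&enum_mutex);
        success = enum_devices(list);
        pthread_mutex_unlock(&enum_mutex);

        return success;
    }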
Adds the following changes:
- Prioritize YUV formats over non-YUV formats for performance and to
prevent intermediary filters
- Directly connect filters when possible to avoid intermediary filters
If the settings are reset to defaults or are simply invalid, the video
gets stuck on the last frame that was displayed, which feels a bit
awkward. It's best to stop video output entirely rather than stay
stuck on the last video frame.
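Assuming the libobs behavior where passing NULL to
obs_source_output_video() clears the output, stopping can be a
one-liner; the surrounding check is sketched:

    #include <obs.h>

    static void reset_video_output(obs_source_t *source, bool settings_valid)
    {
        /* rather than freezing on the last displayed frame, clear the
         * video output entirely when the settings can't produce video */
        if (!settings_valid)
            obs_source_output_video(source, NULL);
    }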
Martell changed this function without realizing that it was calling a
function below it, not recursively calling itself. The reason he got
the warning was that there was no forward declaration of the function
being called; I think he's used to C, where only one function
definition can exist with a given name. In this case, it was another
function with the same name but different parameters, something that's
permitted in C++. I wish I had realized this sooner.
This fixes the crashes people have been having with devices.
Remove the .lib postfix from strmiids.
ksuser provides KSCATEGORY_ENCODER and similar GUIDs that are used.
wmcodecdspuuid provides MEDIASUBTYPE_H264, MEDIASUBTYPE_RAW_AAC1, and
MEDIASUBTYPE_I420, so there's no need to define them in dshow-formats.
The submodule will have to be updated to support this change.
Previously a DirectShow hardware encoder would get 'stuck' and
couldn't be recreated due to a strange issue with the filter graph not
properly shutting down the encoder. The user could use the encoder
once, and it would then fail any time it was initialized again.
dshowcapture version 0.4.2 ensures that the encoder can restart
properly by manually shutting down the filter graph.
The DirectShow input source would always activate on first use,
whether the user wanted it to or not. I feel that having an
activate/deactivate option is a really nice thing to have, and it
makes configuration feel a little less awkward.
Allow the user to select whether to buffer the source or not. The
settings are Auto-Detect, On, and Off. Auto-Detect turns buffering off
for non-encoded devices and on for encoded devices.
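The auto-detect rule reduces to a tiny decision function; this sketch
uses made-up names:

    #include <stdbool.h>

    enum buffering_mode {
        BUFFERING_AUTO,
        BUFFERING_ON,
        BUFFERING_OFF,
    };

    static bool use_buffering(enum buffering_mode mode, bool encoded)
    {
        /* auto-detect: buffer encoded devices only */
        if (mode == BUFFERING_AUTO)
            return encoded;
        return mode == BUFFERING_ON;
    }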
Webcams, internal devices, and other such things on Windows do not
really need to be buffered, and buffering incurs a small delay, so
turning buffering off is actually a little better for non-encoded
devices.
This adds support for the AverMedia C985 encoder (which is available on
C985 capture cards) as well as the C353 hardware encoder (which is
currently available on the X99S Gaming 9 motherboards).
These encoders have some limitations: limited resolutions (1280x720
and 1024x768), a max GOP size of 30, and support for only the YV12
format, which requires conversion if the current output format is
anything else. The C985 and C353 encoders seem to be pretty much
identical, although the C353 appears to encode a bit more efficiently.
I don't believe these are really suitable for streaming, as they don't
have the encoding efficiency needed to stream at lower bitrates, and
they seem to support only variable bitrate. For recording, however,
these encoders are quite nice to have available and work quite well.
The main module code was originally all packed into the win-dshow.cpp
file, which isn't exactly ideal or clean if one wants to add other
things to the module as a whole.
Previously, due to a bug in libdshowcapture, the NV12 format was
erroneously being used for YV12, and no actual support for YV12
existed. This fixes the NV12 bug and adds support for YV12.
This reverts commit c3f4b0f018.
The obs_source_frame should not need to take flags to do this. This
shouldn't be a setting associated with the frame, but rather a setting
associated with the source itself. This was the wrong approach to
solving this particular problem.
Add 'flags' member variable to obs_source_frame structure.
The OBS_VIDEO_UNBUFFERED flag causes the video to play back as soon as
it's received (on the next frame playback), disregarding the timestamp
value for the sake of video playback (note, however, that the video
timestamp is still used for audio synchronization if the source also
has audio).
This is partly a convenience feature, and partly a necessity for certain
plugins (such as the linux v4l plugin) where timestamp information for
the video frames can sometimes be unreliable.
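Usage would have looked roughly like this (a sketch written against
the flag as described above, before the revert; the helper is
hypothetical):

    #include <obs.h>

    static void output_unbuffered(obs_source_t *source,
                                  struct obs_source_frame *frame)
    {
        /* play back on the next frame regardless of the timestamp;
         * the timestamp still drives audio synchronization */
        frame->flags |= OBS_VIDEO_UNBUFFERED;
        obs_source_output_video(source, frame);
    }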
Typedef pointers are unsafe. If you do:
typedef struct bla *bla_t;
then you cannot use it with const, such as: const bla_t, because the
const will apply to the pointer itself rather than to the underlying
data. I admit this was a fundamental mistake that must be corrected.
All typedefs that were pointer types now have the pointer removed from
the type itself; the pointer is applied where the type is actually
used as a variable/parameter/return value instead.
This does not break ABI though, which is pretty nice.
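To make the mistake concrete (a minimal sketch; only the 'bla' name
comes from the text above):

    struct bla { int x; };

    typedef struct bla *bla_ptr_t; /* old style: pointer inside the typedef */
    typedef struct bla bla_t;      /* new style: pointer left out */

    void old_api(const bla_ptr_t b)
    {
        b->x = 1; /* compiles: const binds to the pointer, not the data */
    }

    void new_api(const bla_t *b)
    {
        /* b->x = 1; now correctly fails to compile */
        (void)b;
    }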
This prevents certain issues I've encountered with devices where they
expect to be shut down in the same thread they started up in, as well
as a number of other issues, such as with the configuration dialogs.
The configuration dialogs require that a message loop be present, and
this was not the case previously because everything ran in the video
thread, which has no Windows-specific code.
Configuration/crossbar/etc. dialogs will now execute correctly.
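A sketch of the dedicated thread, using the standard Win32 message
pump (the device setup and teardown details are elided):

    #include <windows.h>

    static DWORD WINAPI device_thread(LPVOID param)
    {
        MSG msg;
        (void)param;

        /* create and start the device here so it is shut down in the
         * same thread it started up in */

        /* the message pump the configuration/crossbar dialogs need */
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

        /* shut the device down in this same thread before exiting */
        return 0;
    }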
This adds support for dynamic format changes. Format, resolution, and
sample rate can all now be changed on the fly by the current DirectShow
device.
On an asynchronous video source, the source resolution is
automatically handled by the core and set to the resolution of the
last video data that was sent. There is no need to manually specify a
resolution.
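In other words, the resolution simply rides along with each frame; a
sketch (the dimension variables stand in for whatever the incoming
data reports):

    #include <obs.h>

    static void output_frame(obs_source_t *source, uint32_t data_width,
                             uint32_t data_height)
    {
        struct obs_source_frame frame = {0};

        /* the core adopts the resolution of the last frame output, so
         * just pass along the dimensions of the current video data */
        frame.width  = data_width;
        frame.height = data_height;
        /* ... fill format, data pointers, linesizes, timestamp ... */

        obs_source_output_video(source, &frame);
    }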