If the settings are reset to defaults or are otherwise invalid, the
video would get stuck on the last frame that was displayed, which feels
awkward. It's better to stop video output entirely than to leave the
last video frame frozen on screen.
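A minimal sketch of the approach, assuming the capture source is an
asynchronous video source: outputting a null frame asks libobs to clear
the currently displayed frame instead of keeping it up.

#include <obs-module.h>

/* Sketch: when the new settings turn out to be invalid, clear the
 * video output rather than leave the last frame on screen. */
static void stop_video_output(obs_source_t *source)
{
        obs_source_output_video(source, nullptr);
}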
Certain devices (particularly some mixers and Soundflower's 64-channel
device) report an arbitrary number of channels that can't really be
mapped to any specific speaker layout supported by libobs.
To fix this issue, if the channel count is above 8, force the data to
stereo so that playback can still occur rather than simply failing.
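A sketch of that fallback logic (the helper name is hypothetical; the
real code lives in the device's format negotiation):

#include <obs-module.h>

/* Map the device's channel count to a libobs speaker layout, forcing
 * anything unmappable (including counts above 8) down to stereo so
 * playback still works. */
static enum speaker_layout layout_from_channels(uint32_t channels)
{
        switch (channels) {
        case 1: return SPEAKERS_MONO;
        case 2: return SPEAKERS_STEREO;
        case 3: return SPEAKERS_2POINT1;
        case 4: return SPEAKERS_4POINT0;
        case 5: return SPEAKERS_4POINT1;
        case 6: return SPEAKERS_5POINT1;
        case 8: return SPEAKERS_7POINT1;
        default:
                /* e.g. mixers or Soundflower 64ch: fall back to
                 * stereo instead of failing outright */
                return SPEAKERS_STEREO;
        }
}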
This fixes an issue primarily with filter rendering: when capturing
windows and displays, their alpha channel is almost always 0,
unintentionally making the image completely invisible. The original fix
for many sources was simply to turn off blending, which is fine if
you're not rendering any filters, but filters render to render targets
first, and that lack of alpha ends up carrying over into the final
image.
This doesn't apply to any Mac captures because macOS actually seems to
set the alpha channel to 1.
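For context, the old per-source workaround looked roughly like this
sketch: blending is disabled while the capture texture is drawn so its
zeroed alpha channel is ignored.

#include <obs-module.h>

/* Old workaround (sketch): ignore the capture's zeroed alpha by
 * disabling blending during the draw. This works for direct
 * rendering, but a filter that first renders the source into a
 * render target still inherits alpha == 0. */
static void draw_capture_texture(gs_texture_t *tex)
{
        gs_blend_state_push();
        gs_blend_function(GS_BLEND_ONE, GS_BLEND_ZERO);

        obs_source_draw(tex, 0, 0, 0, 0, false);

        gs_blend_state_pop();
}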
The code specific to Windows: helps convert `BSTR` instances to
`std::string`s; provides a Windows COM-specific implementation of
`CreateDeckLinkDiscoveryInstance`; and aliases `CFUUIDGetUUIDBytes`,
`CFUUIDBytes`, and `IUnknownUUID` (the Linux SDK does this, but for
some reason the Windows SDK does not).
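A minimal sketch of such a `BSTR`-to-`std::string` conversion on
Windows (the helper name is illustrative, not necessarily what the
module calls it):

#include <windows.h>
#include <oleauto.h>
#include <string>

/* Convert a COM BSTR (UTF-16) to a UTF-8 std::string */
static std::string BStrToStdString(BSTR bstr)
{
        if (!bstr)
                return std::string();

        const int wlen = (int)SysStringLen(bstr);
        const int len = WideCharToMultiByte(CP_UTF8, 0, bstr, wlen,
                        nullptr, 0, nullptr, nullptr);

        std::string out((size_t)len, '\0');
        WideCharToMultiByte(CP_UTF8, 0, bstr, wlen, &out[0], len,
                        nullptr, nullptr);
        return out;
}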
Some changes were made to the stock DeckLink SDK: removed all references
to legacy API headers in DeckLinkAPI.idl; removed all instances of
`importlib("stdole2.tlb");`.
Core API functions changed:
-----------------------------
EXPORT bool obs_reset_audio(struct audio_output_info *aoi);
EXPORT bool obs_get_audio_info(struct audio_output_info *aoi);
To:
-----------------------------
EXPORT bool obs_reset_audio(const struct obs_audio_info *oai);
EXPORT bool obs_get_audio_info(struct obs_audio_info *oai);
Core structure added:
-----------------------------
struct obs_audio_info {
        uint32_t samples_per_sec;
        enum speaker_layout speakers;
        uint64_t buffer_ms;
};
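Callers now fill out the structure and pass it by const pointer. A
sketch with illustrative values:

#include <obs.h>

static bool reset_audio_example(void)
{
        struct obs_audio_info oai = {};
        oai.samples_per_sec = 44100;
        oai.speakers = SPEAKERS_STEREO;
        oai.buffer_ms = 1000; /* audio buffering in milliseconds */

        return obs_reset_audio(&oai);
}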
Non-interleaved (planar) floating point output is standard with audio
filtering, so to prevent audio filters from having to worry about
different audio format implementations, and for the sake of consistency
between user interfaces, make it so that audio is always set to
non-interleaved floating point output.
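Since filters can now assume planar float data, a gain-style
filter_audio callback reduces to a loop over per-channel sample arrays.
A sketch (the gain value is illustrative):

#include <obs-module.h>

/* Planar float: audio->data[c] is a separate float array per channel */
static struct obs_audio_data *gain_filter_audio(void *data,
                struct obs_audio_data *audio)
{
        const size_t channels =
                audio_output_get_channels(obs_get_audio());

        for (size_t c = 0; c < channels; c++) {
                float *samples = (float *)audio->data[c];
                if (!samples)
                        continue;

                for (uint32_t i = 0; i < audio->frames; i++)
                        samples[i] *= 0.5f; /* illustrative gain */
        }

        UNUSED_PARAMETER(data);
        return audio;
}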
API changed from:
------------------------
EXPORT void obs_service_apply_encoder_settings(obs_service_t *service,
                obs_encoder_t *video_encoder,
                obs_encoder_t *audio_encoder);
void obs_service_info::apply_encoder_settings(void *data,
                obs_encoder_t *video_encoder,
                obs_encoder_t *audio_encoder);
To:
------------------------
EXPORT void obs_service_apply_encoder_settings(obs_service_t *service,
                obs_data_t *video_encoder_settings,
                obs_data_t *audio_encoder_settings);
void obs_service_info::apply_encoder_settings(void *data,
                obs_data_t *video_encoder_settings,
                obs_data_t *audio_encoder_settings);
These changes make it so that instead of an encoder potentially being
updated more than once with different settings, these functions are
called with the specific settings objects in use, and those settings
are updated according to what the service requires.
This fixes that design flaw and ensures there is no case where
obs_encoder_update is called with settings that don't have the
service-specific settings applied.
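Usage now follows the pattern of letting the service adjust the
settings objects before each encoder is updated once. A sketch (the
"bitrate" values are illustrative):

#include <obs.h>

static void apply_service_settings(obs_service_t *service,
                obs_encoder_t *video_encoder,
                obs_encoder_t *audio_encoder)
{
        obs_data_t *video_settings = obs_data_create();
        obs_data_t *audio_settings = obs_data_create();

        obs_data_set_int(video_settings, "bitrate", 2500);
        obs_data_set_int(audio_settings, "bitrate", 160);

        /* let the service clamp/override whatever it requires */
        obs_service_apply_encoder_settings(service, video_settings,
                        audio_settings);

        /* each encoder is updated exactly once, with final settings */
        obs_encoder_update(video_encoder, video_settings);
        obs_encoder_update(audio_encoder, audio_settings);

        obs_data_release(video_settings);
        obs_data_release(audio_settings);
}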
If this option is on, the image is unloaded when it isn't displayed
anywhere and reloaded when it's displayed again, reducing the amount of
VRAM images take up at any given time.
If this option is off, the image always stays loaded in VRAM regardless
of whether it's displayed.
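A sketch of how this can be implemented with the show/hide callbacks of
obs_source_info (the struct layout and field names are assumptions):

#include <obs-module.h>

struct image_source {
        const char *file;
        gs_texture_t *texture;
        bool unload_when_hidden;
};

/* called when the image becomes visible somewhere */
static void image_source_show(void *data)
{
        struct image_source *ctx = (struct image_source *)data;
        if (!ctx->unload_when_hidden || ctx->texture)
                return;

        obs_enter_graphics();
        ctx->texture = gs_texture_create_from_file(ctx->file);
        obs_leave_graphics();
}

/* called when the image is no longer displayed anywhere */
static void image_source_hide(void *data)
{
        struct image_source *ctx = (struct image_source *)data;
        if (!ctx->unload_when_hidden)
                return;

        obs_enter_graphics();
        gs_texture_destroy(ctx->texture);
        ctx->texture = nullptr;
        obs_leave_graphics();
}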
Closes Pull Request #380
(edited by Jim)
Add a source property to enable buffering of frames, which is enabled
by default. This replaces the old "Use system timing" option by setting
the unbuffered source flag instead of using different timestamps.
While similar in intent to the old option, this method should reduce
latency even more.
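In terms of the libobs API, the property simply toggles
OBS_SOURCE_FLAG_UNBUFFERED on the source. A sketch of the update
handler (the "buffering" setting name is an assumption):

#include <obs-module.h>

/* Buffering on (the default) clears the unbuffered flag;
 * buffering off sets it for lower latency. */
static void update_buffering(obs_source_t *source, obs_data_t *settings)
{
        uint32_t flags = obs_source_get_flags(source);

        if (obs_data_get_bool(settings, "buffering"))
                flags &= ~OBS_SOURCE_FLAG_UNBUFFERED;
        else
                flags |= OBS_SOURCE_FLAG_UNBUFFERED;

        obs_source_set_flags(source, flags);
}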