Filtering the video before it's output to the texture means that
filtering happens after all the timestamp processing on the video data.
This way, the video filter does not have to worry about what's
currently buffered, and it won't affect timing.
When OBS is shutting down, if for some reason the filter is destroyed
before the parent source is destroyed, it would try to remove itself
from the source; in doing so it would decrement the reference and try
to destroy itself again while already in the process of destroying
itself.
So, the solution was simply to make sure that if a filter is removing
itself from the source, it doesn't decrement its own reference.
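A minimal sketch of the guard (the helper and flag names here are
hypothetical, not the actual libobs internals):

static void filter_detach(obs_source_t *filter, obs_source_t *parent,
                bool self_destroying)
{
        remove_filter_from_parent(parent, filter); /* hypothetical helper */

        /* Releasing here while already destroying would decrement the
         * reference a second time and re-enter the destroy path. */
        if (!self_destroying)
                obs_source_release(filter);
}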
The "last" filter that's rendered is technically at filter index 0, so
enumeration needs to be from the last index in the list to the first
index in the list.
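A sketch of the corresponding loop (the array access follows libobs'
darray conventions, but the surrounding details are illustrative):

for (size_t i = source->filters.num; i > 0; i--) {
        obs_source_t *filter = source->filters.array[i - 1];

        /* index 0 holds the last filter rendered, so this visits the
         * chain from the last index down to the first */
        enum_callback(source, filter, param);
}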
When rendering a filter to a texture, the target is empty and unused,
so there's no reason for blending to be on while rendering to the
render target.
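A sketch of the pattern, assuming the gs_enable_blending and
gs_texrender calls from the libobs graphics API (the texrender object
and dimensions are assumed to exist):

if (gs_texrender_begin(texrender, width, height)) {
        gs_enable_blending(false); /* target is empty; nothing to blend with */

        /* ... render the filter ... */

        gs_enable_blending(true);
        gs_texrender_end(texrender);
}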
obs_source_process_filter tried to do everything in a single function,
but the problem is that effect parameters would not be properly
accounted for due to the way it internally draws. It was therefore
necessary to split the function in two: you first call
obs_source_process_filter_begin, then you set your effect parameters,
then you finally call obs_source_process_filter_end. This ensures that
the effect parameters are set when the filter is drawn.
When the filter chain finally reaches the source and the last filter in
the chain is set to not render directly (meaning it has to render to
texture), the source would not be rendered with any effect, because the
code expects a filter to be present.
Adds a source callback function that is used when a filter is removed
from a source. This ensures that any data that might be associated with
that source can be cleaned up if necessary.
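Assuming this is the filter_remove member of obs_source_info, a filter
might hook it like this (the cleanup helper is hypothetical):

static void my_filter_remove(void *data, obs_source_t *source)
{
        struct my_filter *filter = data;

        /* free anything this filter associated with the parent source */
        release_parent_data(filter, source); /* hypothetical helper */
}

struct obs_source_info my_filter_info = {
        .id            = "my_filter",
        .type          = OBS_SOURCE_TYPE_FILTER,
        .filter_remove = my_filter_remove,
        /* ... other callbacks ... */
};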
There are a few more conditions which need to be checked to determine
whether we can actually bypass or not; in this particular case we are
using the YUV texture shaders, and the image can also be flipped, so we
can't use the bypass optimization in those situations.
These functions are primarily for use with filters: a filter needs to
be able to get the width/height of a target source without getting the
post-filtered dimensions.
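Assuming the functions in question are obs_source_get_base_width and
obs_source_get_base_height, a filter would use them like this:

obs_source_t *target = obs_source_get_target(filter_source);
uint32_t width  = obs_source_get_base_width(target);  /* pre-filter size */
uint32_t height = obs_source_get_base_height(target);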
Previously I had it set so that it would set the async_width and
async_height variables to 0,0 if a null frame occurred, but the problem
with this is that it would inadvertently cause the frame cache to clear
when the source started receiving textures again, because the change
was detected as a size change.
So instead of changing async_width and async_height to 0 when a null
frame occurs, make obs_source_get_width/_get_height return 0 for their
dimensions when the source is set to an inactive state.
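A rough sketch of the resulting behavior (the member names beyond
async_width are illustrative, and real getters also handle non-async
sources):

uint32_t obs_source_get_width(obs_source_t *source)
{
        if (!source->active)        /* inactive sources report 0 */
                return 0;

        /* async_width keeps its last value across null frames, so the
         * frame cache no longer sees a spurious size change */
        return source->async_width;
}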
When a frame is processed by a filter, it comes directly from the
source's video frame cache. However, if a filter is using or processing
those frames for whatever reason, there is no guarantee that the
frames will persist during processing, and frames could be deallocated
unexpectedly, for example when the resolution or format changes.
So the solution is to implement simple reference counting for the
frames so that the frames will exist until every source or filter
using them has released them.
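With the reference-counted cache, the usage pattern from a consumer
looks like this (assuming the obs_source_get_frame /
obs_source_release_frame pair; the processing call is hypothetical):

struct obs_source_frame *frame = obs_source_get_frame(source);
if (frame) {
        process_frame_data(frame); /* frame stays valid until released */
        obs_source_release_frame(source, frame);
}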
Fix a bug where, when a source first starts up its async frame cache,
it unintentionally resets that cache, causing the first few frames to
be lost.
This optionally specifies that the int/float value should be displayed
with a slider rather than an up/down control. UIs are not necessarily
required to implement it; it's meant to be more of a hint.
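Assuming the slider variants follow the existing property-creation
pattern (obs_properties_add_int_slider /
obs_properties_add_float_slider), usage would look like:

obs_properties_t *props = obs_properties_create();

/* displayed with a slider instead of an up/down control, if the UI
 * chooses to honor the hint */
obs_properties_add_int_slider(props, "opacity", "Opacity", 0, 100, 1);
obs_properties_add_float_slider(props, "gamma", "Gamma", 0.5, 3.0, 0.01);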
If a negative number was specified, it would not parse the negative
sign and thus would not interpret the number as negative, which would
cause shader/effect parsing to simply fail on the file.
This is particularly important for the filter pipeline, to ensure that
the original blend state is properly reset when the last filter is
reached.
When render targets are used, they output to the render target inverted
due to the way OpenGL works. This fixes that issue by inverting the
projection matrix so that it renders the image upside down, and by
inverting the front face from counterclockwise to clockwise.
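A sketch of the projection flip using gs_ortho (the condition flag and
dimensions are illustrative):

/* gs_ortho(left, right, top, bottom, znear, zfar): swapping top and
 * bottom renders the image upside down, canceling GL's inversion */
if (rendering_to_render_target)
        gs_ortho(0.0f, (float)cx, (float)cy, 0.0f, -100.0f, 100.0f);
else
        gs_ortho(0.0f, (float)cx, 0.0f, (float)cy, -100.0f, 100.0f);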
The wrong function was being used to recurse through the filter chain
in obs_source_process_filter: obs_source_get_[width/height] would get
the post-effect dimensions rather than the pre-effect dimensions.
The frame is definitely meant to be modifiable if needed, so it
shouldn't be const. I originally meant for these frames to be
duplicated, but with the source video cache that's unnecessary and
would reduce performance.
If the audio data had the same format/sample rate as the obs audio
subsystem, the code would fail to destroy the resampler and set it to
NULL, and any audio data going through afterward would use the
resampler that was set up before, causing the audio to become garbage.
This bug only started appearing when I recently changed the libobs
internal audio subsystem format to non-interleaved floating point,
which is a common format, and thus caused this bug to occur more often.
Previously, when a source had both async audio/video capability, it
would ignore audio until the video had started. There's really no need
to do this; when the video starts, it'll just fix up the timing
automatically.
This should fix the case where things like the media source would not be
able to play audio-only files.
Core API functions changed:
-----------------------------
EXPORT bool obs_reset_audio(struct audio_output_info *aoi);
EXPORT bool obs_get_audio_info(struct audio_output_info *aoi);
To:
-----------------------------
EXPORT bool obs_reset_audio(const struct obs_audio_info *oai);
EXPORT bool obs_get_audio_info(struct obs_audio_info *oai);
Core structure added:
-----------------------------
struct obs_audio_info {
        uint32_t            samples_per_sec;
        enum speaker_layout speakers;
        uint64_t            buffer_ms;
};
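A sketch of the call under the new API (the values are illustrative):

struct obs_audio_info oai = {
        .samples_per_sec = 44100,
        .speakers        = SPEAKERS_STEREO,
        .buffer_ms       = 1000,
};

if (!obs_reset_audio(&oai))
        blog(LOG_ERROR, "Audio reset failed");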
Non-interleaved (planar) floating point output is standard with audio
filtering, so to prevent audio filters from having to worry about
different audio format implementations, and for the sake of consistency
between user interfaces, make it so that audio is always set to
non-interleaved floating point output.
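Under this change an audio filter can always assume planar float data;
a minimal sketch of a filter_audio callback (the volume scaling is just
an example):

static struct obs_audio_data *my_filter_audio(void *data,
                struct obs_audio_data *audio)
{
        /* audio->data[plane] is one float array per channel
         * (non-interleaved floating point) */
        for (size_t plane = 0; plane < MAX_AV_PLANES; plane++) {
                float *samples = (float *)audio->data[plane];
                if (!samples)
                        break;

                for (uint32_t i = 0; i < audio->frames; i++)
                        samples[i] *= 0.5f; /* example: halve the volume */
        }

        UNUSED_PARAMETER(data);
        return audio;
}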
API changed from:
------------------------
EXPORT void obs_service_apply_encoder_settings(obs_service_t *service,
                obs_encoder_t *video_encoder,
                obs_encoder_t *audio_encoder);
void obs_service_info::apply_encoder_settings(void *data,
                obs_encoder_t *video_encoder,
                obs_encoder_t *audio_encoder);
To:
------------------------
EXPORT void obs_service_apply_encoder_settings(obs_service_t *service,
                obs_data_t *video_encoder_settings,
                obs_data_t *audio_encoder_settings);
void obs_service_info::apply_encoder_settings(void *data,
                obs_data_t *video_encoder_settings,
                obs_data_t *audio_encoder_settings);
These changes make it so that instead of an encoder potentially being
updated more than once with different settings, these functions are
called on the specific settings being used, and the settings are
updated according to what's required by the service.
This fixes that design flaw and ensures that there's no case where
obs_encoder_update is called with settings that don't have the
service-specific settings applied.
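A sketch of the resulting flow (the service and encoder variables are
assumed to exist; the bitrate values are illustrative):

obs_data_t *video_settings = obs_data_create();
obs_data_t *audio_settings = obs_data_create();
obs_data_set_int(video_settings, "bitrate", 6000);
obs_data_set_int(audio_settings, "bitrate", 160);

/* the service clamps/overrides the settings before any encoder
 * update takes place */
obs_service_apply_encoder_settings(service, video_settings, audio_settings);

obs_encoder_update(video_encoder, video_settings);
obs_encoder_update(audio_encoder, audio_settings);

obs_data_release(video_settings);
obs_data_release(audio_settings);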