This is my fault; I made an idiotic assumption about the data and it
ended up causing the plugin to crash. This is definitely one of my more
embarrassing moments.
This just changes the x264 encoder settings; it doesn't actually change
the framerate of OBS. OBS will always output at a constant framerate
regardless of whether this option is on or off; this just changes how
the encoder encodes the data.
With no stream key, no streams were actually being created.
This is a crazy configuration anyway, but it resulted in OBS getting
stuck in the "Connecting" state with no way to cancel.
We now just use the blank key and hope for the best.
Reinstate flag checks in RTMP_Close that were erroneously removed.
Clear out the Link state before we establish a new connection. Too
much state is carried around during authentication, and librtmp has no
good place to clear it, since it assumes a clean structure when the
connection is initially established.
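A minimal sketch of that cleanup, assuming librtmp's RTMP structure
with its Link member (real code may need to preserve caller-owned
fields):

#include <string.h>
#include <librtmp/rtmp.h>

static void clear_link_state(RTMP *r)
{
        /* wipe auth/stream state left over from a previous session */
        memset(&r->Link, 0, sizeof(r->Link));
}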
Adds a Microsoft Media Foundation AAC encoder that supports
96k to 192k bitrates. This plugin is only enabled on Microsoft
Windows 8+ due to performance issues found on Windows 7.
Authentication code has been updated as per the changes to support
multiple streams.
Authentication is now also enabled by default, and should be a no-op
if the server does not request authentication or if username and
password details are not provided.
I've come to realize that it's probably not wise to deviate from the
original version's functionality, given that the original version
works without issues. I'm wondering if some of the capture problems
have been due to the removal of the direct hook method (via
CreateRemoteThread), so I put it back in, made it the default, and
added an option to use anti-cheat compatibility just like in the
original version.
This particularly affected audio encoding: previously, audio encoding
would count samples and use them to create an encoding timestamp, but
because I was using a standard integer (which is 32bit by default on
x86), it would max out at about 0x7FFFFFFF samples, which is about 12
hours of samples at a 48000 sample rate. After that, it would start
going into negative territory (overflowing). By changing it to
int64_t, audio at 48000 samples per second can only overflow after
about.. 6.09 million years. In other words, this should fix the issue
for good.
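The arithmetic, as a sketch (counter name hypothetical):

#include <stdint.h>

/* Old counter:
 *     int total_samples;          32-bit on x86
 * overflows at 0x7FFFFFFF samples:
 *     2147483647 / 48000 ~= 44739 s ~= 12.4 hours
 * New counter:
 *     int64_t total_samples;
 * overflows at 0x7FFFFFFFFFFFFFFF samples:
 *     9223372036854775807 / 48000 / 31557600 ~= 6.09 million years
 */
static int64_t total_samples = 0;

static void count_samples(int frames)
{
        total_samples += frames; /* safe in 64-bit arithmetic */
}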
Livecoding.tv (coding), gaminglive.tv (gaming), and beam.pro
(gaming/music)
I really don't see any problems with adding these particular services
to the local list while the actual remote ingest lookup code has yet
to even be started (as of this writing). They seem to be harmless
services that are dedicated to specific types of content (stated
above).
When hooking 64bit functions, sometimes the offset between the
function being hooked and the hook itself can be large enough that it
requires a 64bit offset to be used. However, because a 64bit jump
requires overwriting so many code instructions in the function, it can
sometimes overwrite code belonging to an adjacent function, thereby
causing a crash.
The 64bit hook bounce (created by R1CH) is designed to prevent using
very long jumps in the target by creating executable memory within a
32bit offset of that target and writing the 64bit long jump
instruction there. The target function then jumps to that memory
instead, forcing the actual hooked function to use a 32bit hook rather
than a 64bit hook, and using at most 5 bytes for the actual hook,
preventing any likelihood of it overwriting an adjacent function.
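A rough sketch of the allocation strategy (Windows-specific and
simplified; the real code also scans downward from the target and
handles instruction encoding more carefully):

#include <windows.h>
#include <stdint.h>
#include <string.h>

/* find executable memory within a 32bit (+/-2GB) offset of target */
static void *alloc_bounce_near(void *target)
{
        SYSTEM_INFO si;
        GetSystemInfo(&si);

        uint8_t *base = (uint8_t *)target;
        for (int64_t off = si.dwAllocationGranularity;
             off < 0x7FFF0000; off += si.dwAllocationGranularity) {
                void *mem = VirtualAlloc(base + off, si.dwPageSize,
                                         MEM_COMMIT | MEM_RESERVE,
                                         PAGE_EXECUTE_READWRITE);
                if (mem)
                        return mem;
        }
        return NULL;
}

/* write the 64bit absolute jump into the bounce; the target then
 * only needs a 5-byte rel32 jmp to reach it */
static void write_abs_jmp(uint8_t *bounce, void *hook)
{
        bounce[0] = 0xFF; /* jmp qword ptr [rip+0] */
        bounce[1] = 0x25;
        memset(&bounce[2], 0, 4);
        memcpy(&bounce[6], &hook, sizeof(hook));
}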
System timestamps were being used instead of timestamps from the
audio/video input. This would cause potential desync as well as
incremental buffering when using devices with the blackmagic video
source. Using the timestamps direct from the SDK itself fixes those
issues, and causes audio/video to play back properly and in sync.
This filter simply modifies the volume of the signal, providing a
convenient way to adjust the volume before other filters or to amplify
the volume without having to mess with advanced audio properties.
In addition to the flv file format, this adds the ability to save to
container formats such as mp4, ts, mkv, and any other containers that
support the current codecs being used.
It pipes the encoded data to the ffmpeg-mux process, which then safely
muxes the file from the encoded data. If the main program unexpectedly
terminates, the ffmpeg-mux piped program will safely close the file and
write trailer data, preventing file corruption.
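The idea in miniature (POSIX popen; muxer arguments hypothetical):

#include <stdio.h>

int main(void)
{
        /* the muxer is a separate process; we only feed it encoded
         * data.  if this process dies, the muxer sees EOF on stdin,
         * writes the trailer, and closes the file cleanly. */
        FILE *mux = popen("ffmpeg-mux output.mp4", "w");
        if (!mux)
                return 1;

        unsigned char packet[] = {0x17, 0x00}; /* placeholder bytes */
        fwrite(packet, 1, sizeof(packet), mux);

        pclose(mux);
        return 0;
}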
Instead of using system timestamps for playback, use the timestamps
directly from the video/audio data to ensure all the data is synced up
using the obs_source back-end.
I think the original misconception when this was written was that OBS
would not handle timestamp resets/loops, which isn't the case; it will
actually handle all timestamp resets and abnormalities. It's always
best to use the obs_source back-end for all playback and syncing.
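In libobs terms the change is roughly this (local names hypothetical):

struct obs_source_frame frame = {0};

/* use the timestamp carried by the media data itself, not the
 * system clock; the obs_source back-end handles resets/loops */
frame.timestamp = media_pts_ns; /* hypothetical: pts from the data */
obs_source_output_video(source, &frame);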
When the bitrate was set to 64, CoreAudio would call
complex_input_data_proc more than once, which in turn would cause
consumed bytes in the input buffer to be "freed" more than once (once
for every additional call of complex_input_data_proc and once in
aac_encode).
This adds the ability to output the audio of the device as desktop
audio (via the WaveOut or DirectSound audio renderers) instead of
capturing the audio only.
In the future, we'll implement audio monitoring which will make this
feature obsolete, but for the time being I decided to add this option as
a temporary measure to allow users to play the audio from their devices
via the DirectShow output.
Found via UBSan, actual (sample) error:
"plugins/text-freetype2/text-functionality.c:284:26: runtime error: left
shift of 194 by 24 places cannot be represented in type 'int'"
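The class of fix this implies, illustratively: cast to an unsigned
type before shifting, since shifting a value like 194 into the sign
bit of an int is undefined behavior.

#include <stdint.h>

uint32_t pack_high_byte(uint8_t b)
{
        /* was: b << 24 -- promoted to int, UB when b > 127 */
        return (uint32_t)b << 24;
}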
Add the include directories found by cmake to the jack plugin.
This allows the plugin to compile when the jack headers are found in a
directory that is not normally in the search path of the compiler
(e.g. /usr/local/include)
Add the include directories found by cmake to the v4l2 plugin.
This allows the plugin to compile when the v4l2 headers are found in a
directory that is not normally in the search path of the compiler
(e.g. /usr/local/include)
Add the include directories found by cmake to the pulseaudio plugin.
This allows the plugin to compile when the pulseaudio headers are
found in a directory that is not normally in the search path of the
compiler (e.g. /usr/local/include)
Fix build errors for older versions of the api where
VIDIOC_ENUM_DV_TIMINGS was defined but V4L2_IN_CAP_DV_TIMINGS was not.
I was under the impression that they were added at the same time, but
apparently I was wrong there.
Thanks to kmoore@FreeBSD.org for spotting this on FreeBSD.
In the settings, if you select the default container, the format
becomes null. If it is null, the audio/video codec ids should not be
set on the output format, as they will both be AV_CODEC_ID_NONE,
causing a context with no streams specified to be created (an error).
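A sketch of the guard, assuming the non-const AVOutputFormat fields of
FFmpeg from that era:

#include <libavformat/avformat.h>

static void set_codec_ids(AVFormatContext *ctx,
                          const char *format_name,
                          enum AVCodecID vcodec,
                          enum AVCodecID acodec)
{
        /* a null format means "default container": leave the codec
         * ids alone so the muxer's own defaults are used */
        if (!format_name)
                return;

        ctx->oformat->video_codec = vcodec;
        ctx->oformat->audio_codec = acodec;
}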
Add compatibility with older versions of the api by not failing to
build when VIDIOC_ENUM_DV_TIMINGS is missing. In older versions of the
api there was a different system to get dv-timing presets, which was
replaced by the current enumeration system with Linux 3.4.
This will allow for the plugin to be built against older versions of the
api by disabling the enumeration support, thus reducing the
functionality for some devices.
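Together with the earlier VIDIOC_ENUM_DV_TIMINGS /
V4L2_IN_CAP_DV_TIMINGS fix, the guard amounts to something like this
(macro name hypothetical):

#include <linux/videodev2.h>

/* dv-timing enumeration arrived with Linux 3.4, and the capability
 * flag appeared separately, so check both */
#if defined(VIDIOC_ENUM_DV_TIMINGS) && defined(V4L2_IN_CAP_DV_TIMINGS)
#define HAVE_DV_TIMINGS 1
#else
#define HAVE_DV_TIMINGS 0
#endif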
Improve compatibility with older versions of the api by not requiring
V4L2_CAP_DEVICE_CAPS. If we don't have this, we fall back to using the
capabilities member for the whole device instead of the device_caps
member for the currently selected subdevice, just as we would if the
device did not support it.
The new device_caps field was introduced with Linux 3.3.
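A sketch of that fallback:

#include <linux/videodev2.h>
#include <stdint.h>

static uint32_t get_caps(const struct v4l2_capability *vcap)
{
#ifdef V4L2_CAP_DEVICE_CAPS
        /* Linux 3.3+: capabilities of the selected subdevice */
        if (vcap->capabilities & V4L2_CAP_DEVICE_CAPS)
                return vcap->device_caps;
#endif
        /* older api or no support: whole-device capabilities */
        return vcap->capabilities;
}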
Add BGRX and BGRA as supported video formats, since obs can handle
them directly. I unfortunately missed those when I initially wrote
this mapping, due to my webcam not offering those formats.
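The addition looks roughly like this (treat the exact fourcc pairing
as illustrative):

#include <linux/videodev2.h>

/* assuming the video_format constants from libobs */
static enum video_format v4l2_to_obs_format(uint32_t fourcc)
{
        switch (fourcc) {
        case V4L2_PIX_FMT_YUYV:   return VIDEO_FORMAT_YUY2;
        case V4L2_PIX_FMT_BGR32:  return VIDEO_FORMAT_BGRX; /* new */
        case V4L2_PIX_FMT_ABGR32: return VIDEO_FORMAT_BGRA; /* new */
        default:                  return VIDEO_FORMAT_NONE;
        }
}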
API Changed (in struct obs_encoder_info):
----------------------------------------
bool (*get_audio_info)(void *data, struct audio_convert_info *info);
bool (*get_video_info)(void *data, struct video_scale_info *info);
To:
----------------------------------------
void (*get_audio_info)(void *data, struct audio_convert_info *info);
void (*get_video_info)(void *data, struct video_scale_info *info);
The encoder video/audio information callbacks no longer need to
manually query the libobs video/audio information; that information is
now passed via the parameter, which the callbacks can modify.
The refactor that reduces boilerplate in the encoder video/audio
information callbacks also removes the need for their return values, so
change the return types to void.
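Under the new signature, a callback might look like this (encoder and
override hypothetical):

static void my_encoder_audio_info(void *data,
                struct audio_convert_info *info)
{
        /* libobs fills *info with its current audio information;
         * the encoder only overrides what it actually requires */
        info->format = AUDIO_FORMAT_FLOAT_PLANAR;
}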
I'm putting this option in because there are legitimate cases where a
device may flip the output unexpectedly (such as the Datapath
VisionDVI-DL running in RGB video format), and a user may want to be
able to view the source in a projector or source properties without
the image being inverted.
My original line of thinking was that they can just use a transform to
flip the image, but I felt this problem impacts rendering everywhere,
such as in the projector and in the source properties, so having it as
an option in the source itself feels like the best way to ensure that a
user can get it to render everywhere properly.
Check the actual name of the codec before applying an x264-specific
preset so we don't encounter an "Invalid argument" error when using
other h264 encoders in FFmpeg (such as NVEnc).
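For example (FFmpeg API; local names hypothetical):

#include <string.h>
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>

static void apply_preset(AVCodecContext *context, const AVCodec *codec,
                         const char *preset)
{
        /* only x264 understands the "preset" private option */
        if (strcmp(codec->name, "libx264") == 0)
                av_opt_set(context->priv_data, "preset", preset, 0);
}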
Closes jp9000/obs-studio#412
Changes:
- Prevent concurrent calls to EnumDevices (resolves a crash with
some device filters (like the XCAPTURE-1) with multiple active
dshow sources)
Adds the following changes:
- Prioritize YUV formats over non-YUV formats for performance and to
prevent intermediary filters
- Directly connect filters when possible to avoid intermediary filters
When frames were dropped, it would also drop I-frames, which can mess
with the keyframe calculation of certain services that depend on
I-frames in their output protocol (such as HLS).
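i.e., the drop logic gains a keyframe exception, roughly:

#include <stdbool.h>

static bool should_send_packet(bool need_drop, bool is_keyframe)
{
        /* when dropping frames, never drop I-frames; services like
         * HLS depend on them */
        return !need_drop || is_keyframe;
}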
The kCGDisplayStreamShowCursor option used with the dictionary does not
work if you assign @true or @false to it. After some testing, it needs
to point to the id cast of either kCFBooleanTrue or kCFBooleanFalse in
order for it to work properly.
If it doesn't use either of those values, the display stream seems to
use its internal default, which on 10.8 and 10.9 is visible, and 10.10+
is invisible, which would explain why people on 10.10 couldn't get the
cursor to capture.
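In code, the working form is something like:

#include <stdbool.h>
#include <CoreFoundation/CoreFoundation.h>
#include <CoreGraphics/CoreGraphics.h>

static CFDictionaryRef make_stream_opts(bool show_cursor)
{
        const void *keys[] = {kCGDisplayStreamShowCursor};
        const void *values[] = {show_cursor ? kCFBooleanTrue
                                            : kCFBooleanFalse};

        /* boxed @true/@false NSNumbers do not work here; it must be
         * the kCFBoolean constants themselves */
        return CFDictionaryCreate(NULL, keys, values, 1,
                                  &kCFTypeDictionaryKeyCallBacks,
                                  &kCFTypeDictionaryValueCallBacks);
}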
Some formats (like WMV) would send out audio packets that had channels
set but did not specify a channel layout. The solution is to no longer
rely on the channel layout to get the channels, and to instead get the
channel count directly off the FFmpeg audio frame.
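i.e., roughly (field name per FFmpeg of that era):

#include <libavutil/frame.h>

static int get_audio_channels(const AVFrame *frame)
{
        /* channel_layout may be 0 for formats like WMV; the count
         * itself is still set on the frame */
        return frame->channels;
}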
If capture starts too quickly, the file mapping will return 2, which
means file not found, and it would then reset the capture and try again.
Sometimes this would result in long intervals where it wouldn't capture.
This fixes the issue by simply making game capture retry if file mapping
returns error number 2.
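A sketch of the retry condition (helper and retry signaling
hypothetical):

#include <windows.h>
#include <stdbool.h>

static HANDLE open_hook_map(const char *name, bool *retry)
{
        HANDLE map = OpenFileMappingA(FILE_MAP_ALL_ACCESS, FALSE,
                                      name);

        /* error 2 (ERROR_FILE_NOT_FOUND) just means the hook hasn't
         * created the mapping yet, so retry instead of resetting */
        *retry = !map && GetLastError() == ERROR_FILE_NOT_FOUND;
        return map;
}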
This allows applying a mask based upon the chroma value (U/V) of a
specific color in YUV color space. Commonly used to mask out
greenscreens and bluescreens in live video.
Allows any source to be cropped, though note that this renders to
texture first, so for best results, cropping values should probably be
put into capture sources that can be cropped as they're actually
rendered by the source.
This filter allows a texture to be used to modify the video output of
a source, with the ability to:
- apply a color-based mask (dark = transparent, light = opaque)
- apply a mask based upon the alpha of an image
- blend an image via subtraction, addition, or multiplication
Adds a filter for delaying asynchronous video/audio data, for example
from webcams, video devices, or media playback sources. Mainly intended
to allow users to sync up their webcams to other devices.
If the settings are reset to defaults or if the settings are just bad,
the video would get stuck on the last frame that was displayed, which
feels a bit awkward. Best to make it stop video output entirely rather
than get stuck on the last video frame.
Certain devices (particularly certain mixers and soundflower 64ch) would
have an arbitrary number of channels, and wouldn't really be mappable to
a specific speaker layout supported by libobs.
So to fix this issue, if the channel count is above 8, force the data to
stereo to ensure playback can still occur, rather than cause it to just
fail.
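Roughly (libobs layout names; the full mapping is abbreviated here):

static enum speaker_layout layout_from_channels(int channels)
{
        switch (channels) {
        case 1: return SPEAKERS_MONO;
        case 2: return SPEAKERS_STEREO;
        case 6: return SPEAKERS_5POINT1;
        case 8: return SPEAKERS_7POINT1;
        default:
                /* arbitrary counts (e.g. soundflower 64ch): force
                 * stereo so playback still works */
                return SPEAKERS_STEREO;
        }
}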
This fixes an issue primarily with filter rendering: when capturing
windows and displays, their alpha channel is almost always 0, causing
the image to be unintentionally invisible. The original fix for this
for many sources was just to turn off the blending, which would be
fine if you're not rendering any filters, but filters will render to
render targets first, and that lack of alpha will end up carrying over
into the final image.
This doesn't apply to any mac captures because mac actually seems to set
the alpha channel to 1.
The code specific to Windows: helps convert `BSTR` instances to
`std::string`s; provides a Windows COM-specific implementation of
`CreateDeckLinkDiscoveryInstance`; aliases CFUUIDGetUUIDBytes,
CFUUIDBytes, and IUnknownUUID (the Linux SDK does this, but for some
reason the Windows SDK does not).
Some changes were made to the stock DeckLink SDK: removed all references
to legacy API headers in DeckLinkAPI.idl; removed all instances of
`importlib("stdole2.tlb");`.
Core API functions changed:
-----------------------------
EXPORT bool obs_reset_audio(struct audio_output_info *aoi);
EXPORT bool obs_get_audio_info(struct audio_output_info *aoi);
To:
-----------------------------
EXPORT bool obs_reset_audio(const struct obs_audio_info *oai);
EXPORT bool obs_get_audio_info(struct obs_audio_info *oai);
Core structure added:
-----------------------------
struct obs_audio_info {
        uint32_t samples_per_sec;
        enum speaker_layout speakers;
        uint64_t buffer_ms;
};
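Usage with the new structure looks like:

struct obs_audio_info oai = {
        .samples_per_sec = 48000,
        .speakers = SPEAKERS_STEREO,
        .buffer_ms = 1000,
};

obs_reset_audio(&oai);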
Non-interleaved (planar) floating point output is standard with audio
filtering, so to prevent audio filters from having to worry about
different audio format implementations, and for the sake of
consistency between user interfaces, make it so that audio is always
set to non-interleaved floating point output.
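In libobs terms, the conversion information is pinned to planar float,
e.g.:

struct audio_convert_info info = {
        .samples_per_sec = 48000,            /* illustrative */
        .format = AUDIO_FORMAT_FLOAT_PLANAR, /* always planar float */
        .speakers = SPEAKERS_STEREO,         /* illustrative */
};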
API changed from:
------------------------
EXPORT void obs_service_apply_encoder_settings(obs_service_t *service,
        obs_encoder_t *video_encoder,
        obs_encoder_t *audio_encoder);
void obs_service_info::apply_encoder_settings(void *data,
        obs_encoder_t *video_encoder,
        obs_encoder_t *audio_encoder);
To:
------------------------
EXPORT void obs_service_apply_encoder_settings(obs_service_t *service,
        obs_data_t *video_encoder_settings,
        obs_data_t *audio_encoder_settings);
void obs_service_info::apply_encoder_settings(void *data,
        obs_data_t *video_encoder_settings,
        obs_data_t *audio_encoder_settings);
These changes make it so that instead of an encoder potentially being
updated more than once with different settings, these functions will
be called for the specific settings being used, and the settings will
be updated according to what's required by the service.
This fixes that design flaw and ensures that there's no case where
obs_encoder_update is called where the settings might not have
service-specific settings applied.
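Typical use under the new API (surrounding variables assumed in
scope):

obs_data_t *video_settings = obs_data_create();
obs_data_t *audio_settings = obs_data_create();

/* fill in user settings here, then let the service override what
 * it needs before the encoders are updated */
obs_service_apply_encoder_settings(service, video_settings,
                                   audio_settings);
obs_encoder_update(video_encoder, video_settings);
obs_encoder_update(audio_encoder, audio_settings);

obs_data_release(video_settings);
obs_data_release(audio_settings);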
If this option is on, the image will unload when the image isn't
displayed anywhere, and then reload when it's displayed again, to
reduce the amount of memory images take up in VRAM at any given time.
If this option is off, then it's always loaded in VRAM regardless of
whether it's displayed or not.
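A sketch of how this maps onto the libobs source callbacks (the
load/unload helpers are hypothetical):

/* obs_source_info callbacks: fired when the source becomes visible
 * anywhere, and when it stops being visible everywhere */
static void image_source_show(void *data)
{
        image_source_load(data);   /* hypothetical reload helper */
}

static void image_source_hide(void *data)
{
        image_source_unload(data); /* hypothetical unload helper */
}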
Closes Pull Request #380
(edited by Jim)
Add a source property to enable buffering of frames, which is enabled by
default. This replaces the old "Use system timing" option by setting the
unbuffered source flag instead of using different timestamps.
While similar in intentions to the old option, this method should reduce
latency even more.
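The property maps onto the source flag roughly like so:

uint32_t flags = obs_source_get_flags(source);

if (buffering_enabled)
        flags &= ~OBS_SOURCE_FLAG_UNBUFFERED;
else
        flags |= OBS_SOURCE_FLAG_UNBUFFERED;

obs_source_set_flags(source, flags);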
Fix bug 0000151: File loading not properly handled.
Link to bug: https://obsproject.com/mantis/view.php?id=151
A newly selected font is not loaded properly if "read from file" is
active without a valid file. The old error handling led to random
memory being displayed.
Closes Pull Request #390
(message edited by Jim)
By default, video plays back based upon the timestamp for each frame,
and buffers the frames as needed to ensure that they play back at the
expected timing.
However, this can add some minor additional delay to the video, and
may not be ideal for certain devices such as webcams and generally any
device that has minimal latency. Because those are the only types of
devices that typically have drivers, there's no real need to have it
on by default.
This adds an option to use buffering, and leaves it off by default.
Closes pull request #384
(message added by jim)
Use the macro from the mac capture plugin to convert the fourcc integer
value to a string. This makes the debug statement for the pixel format
slightly more readable for the casual developer.
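The macro does essentially this (name approximate):

/* unpack a fourcc integer into a readable 4-character string */
#define FOURCC_STR(code) \
        (char[5]){(code) & 0xFF, ((code) >> 8) & 0xFF, \
                  ((code) >> 16) & 0xFF, ((code) >> 24) & 0xFF, 0}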
Remove the "Leave Unchanged" option for the input and video format
select.
This option was primarily added for cases in which the resolution and
framerate are set by another program or the capture device itself, and
the values are not directly supported by the plugin. One major use
case here would be capture devices for tv signals which might be set
to a specific resolution and refresh rate, and might fail to
initialize if any other combination of those settings is forced.
In the case of the input this option did not make much sense, as the
first input is probably the best default option in most cases.
For the video format this default is even bad in some cases. If a
format emulated by libv4l2 is selected, for example, this will usually
configure the device to use mjpeg, with libv4l2 converting it to some
format obs can use. When obs or the source is then restarted and the
"Leave Unchanged" option is enabled, the plugin will fail, because it
will only notice that the device is set to mjpeg, without any
knowledge about the possibility of letting libv4l2 handle the
conversion.
Using the first available and supported format is not necessarily the
best choice, but it is still preferable to some random format that
will cause the plugin to not capture at all. Forcing a choice here
will hopefully also make the plugin behaviour more predictable for the
user.
Remove the constraint for device inputs to be of the type "CAMERA".
This was added under the false assumption that inputs of the type
"TUNER" are only used for control purposes.
This was caused by the new RTMP code that added support for multiple
streams; the stream index needs to be reset on RTMP_Close, otherwise
it will keep using the wrong stream information.
I had this issue where IDXGISwapChain::ResizeBuffers would fail in the
hooks, causing games to crash when they resized their backbuffers,
because ResizeBuffers would return an 'invalid call' HRESULT value.
The ResizeBuffers documentation says this will only happen if a
backbuffer currently has any outstanding references, but there's no
way this would happen unless ResizeBuffers internally calls Present or
vice versa.
After ResizeBuffers has been called, the very first call to Present will
somehow seemingly invalidate and/or destroy the current backbuffer.
It's very strange, but that seems to be what's going on, at least for
the game I was testing. So if you are performing a post-overlay
capture, then you must ignore the capture on the very first call to
Present.
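The workaround boils down to a sketch like this (names hypothetical):

#include <stdbool.h>

extern void capture_frame(void); /* hypothetical capture call */

static bool resize_just_happened = false;

static void on_resize_buffers(void)
{
        resize_just_happened = true;
}

static void on_present(void)
{
        /* the first Present after ResizeBuffers invalidates the
         * backbuffer; skip capturing exactly once */
        if (resize_just_happened) {
                resize_just_happened = false;
                return;
        }

        capture_frame();
}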
It's Microsoft's code, so you can't really know what's going on; you
just have to work around these strange issues, seemingly in the dark.
Martell changed this function without realizing that it was calling a
function below it, not recursively calling itself. The reason he got
the warning was that there was no forward declaration of the function
being called; I think he's used to C, where only one function
definition can exist with a given name. In this case, it was another
function with the same name but different parameters, something that's
permitted in C++. I wish I had realized this sooner.
This fixes the crashes people have been having with devices.