Some games don't check for GL errors via glGetError, so a stale error
raised by the game can pass through to the capture calls, causing a
false failure.
The simplest solution is to clear the error flag on each capture call.
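A minimal sketch of such a flush, assuming direct access to
glGetError:

    /* Clear any stale GL error left by the game so it isn't misread
     * as a failure of the capture calls. */
    static inline void gl_clear_errors(void)
    {
        while (glGetError() != GL_NO_ERROR)
            ;
    }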
Before the filter can be rendered, its width/height values must be
set, but currently they're only set in the render function, which
means the crop filter can never be rendered when the program first
starts up; under those circumstances it fails to render at all.
This patch moves the calculations from render to tick to ensure that
they're always called and the values are always set.
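A rough sketch of the idea, with hypothetical names for the filter
data struct and the size calculation:

    /* Compute the crop dimensions on every tick so width/height are
     * valid before the first render. */
    static void crop_filter_tick(void *data, float seconds)
    {
        struct crop_filter_data *filter = data; /* hypothetical */
        calc_crop_dimensions(filter); /* hypothetical; sets w/h */
        UNUSED_PARAMETER(seconds);
    }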
The virtual address table values for Reset/ResetEx can sometimes point
to functions in libraries outside of D3D8.dll and D3D9.dll, and will
cause a crash if used. Instead, just hook Reset/ResetEx when one of
the Present* functions is called.
Not calling recv when data is received causes data to accumulate in
the internal receive buffer until it is full, at which point the
connection stops acknowledging. This can lead to unjustified
disconnections.
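A sketch of the fix, assuming a non-blocking socket (so recv returns
once the buffer is drained):

    /* Drain all pending data so the TCP receive window never fills
     * up and stalls the peer's sends. */
    char buf[4096];
    int len;
    while ((len = recv(sock, buf, sizeof(buf), 0)) > 0) {
        /* process or discard the received data */
    }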
If there was an attempt to destroy the rtmp-stream output while it was
already connecting and stopping at the same time, it would try to join
with the stop thread rather than with the connect thread. The connect
thread would then continue past destruction.
Sometimes stopping a connection can lock up due to data that still
remains to be sent, and this would lock up the thread requesting the
stop (typically the UI thread). So instead of locking up the calling
thread, spawn a new thread specifically for stopping so the calling
thread can continue uninterrupted. If the user attempts to reconnect,
it will wait for the stop thread to complete in the connect thread
before attempting to connect.
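A rough sketch of the threading pattern, with assumed struct and
function names:

    /* Stop asynchronously so the calling (UI) thread never blocks. */
    static void rtmp_stream_stop(void *data)
    {
        struct rtmp_stream *stream = data; /* hypothetical */
        pthread_create(&stream->stop_thread, NULL, stop_thread_func,
                       stream);
    }

    /* On reconnect, wait for any in-progress stop to finish first. */
    static void *connect_thread_func(void *data)
    {
        struct rtmp_stream *stream = data;
        pthread_join(stream->stop_thread, NULL);
        /* ... proceed with connecting ... */
        return NULL;
    }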
API removed:
--------------------
gs_effect_t *obs_get_default_effect(void);
gs_effect_t *obs_get_default_rect_effect(void);
gs_effect_t *obs_get_opaque_effect(void);
gs_effect_t *obs_get_solid_effect(void);
gs_effect_t *obs_get_bicubic_effect(void);
gs_effect_t *obs_get_lanczos_effect(void);
gs_effect_t *obs_get_bilinear_lowres_effect(void);
API added:
--------------------
gs_effect_t *obs_get_base_effect(enum obs_base_effect effect);
Summary:
--------------------
Combines multiple near-identical functions into a single function with
an enum parameter.
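For example (enum value name assumed to mirror the removed function):

    gs_effect_t *effect = obs_get_base_effect(OBS_EFFECT_DEFAULT);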
Use explicit UTF-8 byte sequence for the "no-break space" character.
Prevents issues with certain editors, and fixes the following compiler
warning on Visual C++:
warning C4819: The file contains a character that cannot be represented
in the current code page (X). Save the file in Unicode format to prevent
data loss
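For reference, U+00A0 (no-break space) encodes to the two bytes
0xC2 0xA0 in UTF-8, so the character can be spelled out while keeping
the source file pure ASCII:

    static const char *nbsp = "\xc2\xa0"; /* UTF-8 no-break space */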
This replaces the name-based detection of the 4K intensity pro, and
allows other devices to be able to use the BGRA pixel format, if the
user so chooses.
Another thread could be manipulating the active_log_contexts array
while the current thread is trying to read it, resulting in an
uninitialized memory crash, as the da_push_back call was not protected
by the mutex.
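A sketch of the fix, with an assumed mutex name:

    /* Guard the darray with the same mutex used on the read path. */
    pthread_mutex_lock(&log_contexts_mutex);
    da_push_back(active_log_contexts, &context);
    pthread_mutex_unlock(&log_contexts_mutex);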
Polls for file changes like the text plugin does. This is an interim
solution; both the text plugin and image source should use a file
monitoring API, preferably implemented through libobs.
Closes jp9000/obs-studio#482
When using a text file with the source, changing the font face would
cause it to fail to update the glyphs and text accordingly. It would
trigger an error jump at line 392 of text-freetype2.c, ultimately
resulting in the text rendering garbled after that.
How to reproduce:
Set the source to get text from a file, then just change the font face
(but not the size or anything else).
When updating text from file periodically, newer glyphs that weren't
already cached would not end up being rendered. This fixes the issue by
calling cache_glyphs after the file has been updated.
How to reproduce the original issue:
Set a text-freetype2 source to load an english-only text file. Then
overwrite the text in the file with non-english characters. The
non-english characters will then fail to render.
Reported at https://obsproject.com/mantis/view.php?id=336
Closes jp9000/obs-studio#481
Instead of using shell functions to get the windows system directory,
use the kernel32 functions (GetSystemDirectory and
GetSystemWow64Directory). Reduces a bit of unnecessary overhead.
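For example:

    #include <windows.h>

    /* Query the system directories straight from kernel32. */
    wchar_t sysdir[MAX_PATH], syswow[MAX_PATH];
    GetSystemDirectoryW(sysdir, MAX_PATH);
    GetSystemWow64DirectoryW(syswow, MAX_PATH);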
This caches the font list data to a file to minimize load times. Font
data will be refreshed when any font files are added/removed, based upon
a checksum of the font file names and dates (if available).
Microphones and other input devices can often have bad or erroneous
timestamps. Although we handle bad timestamps much better in
obs-studio, there are still lingering issues that can crop up from
time to time with device QPC timestamps that lead to mic data not
playing back properly. It's best for this to be off by default, which
will now cause system timestamps to be used for input devices by
default. This is the same handling as OBS1 for this case.
The new 'offset' value was not being passed back to the caller, which
caused the caller to continue to use the old value and thus would cause
an invalid hook and crash.
The call to CoInitializeEx in the win-mf module caused some sort of
conflict with the decklink module, causing the decklink module to crash
on exit. Instead, let libobs handle COM initialization.
If the GL capture part of the game capture hook fails to initialize
for whatever reason, it will go into an infinite reacquire loop. If
it fails to initialize shared texture capture, try shared memory
capture instead.
For game capture, if a game is running at, for example, 800 FPS and
limit capture framerate is off, it would try to capture all 800 of
those frames, dramatically reducing performance far more than would
ever be necessary.
When limit capture framerate is off, instead of capturing all frames,
capture frames at an interval of twice the OBS FPS, identical to how
OBS1 works by default. This should greatly increase performance under
that circumstance.
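A sketch of the interval logic (variable names assumed):

    /* With the framerate limit off, capture at twice the OBS FPS
     * rather than at the game's (possibly 800+) FPS. */
    uint64_t capture_interval_ns = limit_framerate
        ? frame_interval_ns      /* once per OBS frame */
        : frame_interval_ns / 2; /* twice the OBS framerate */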
This also adds the ability to detect whether it stopped due to lack of
space or not -- particularly useful for the FFmpeg output due to
lossless file format support.
For the FFmpeg output, the encoder ids are sort of superfluous. They
really should be optional. If they're not set, it should use the
encoder name string instead to determine the ids automatically.
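A sketch of that fallback using FFmpeg's lookup functions:

    /* Use the explicit codec id when set; otherwise resolve the
     * encoder from its name string. */
    AVCodec *codec = (codec_id != AV_CODEC_ID_NONE)
        ? avcodec_find_encoder(codec_id)
        : avcodec_find_encoder_by_name(encoder_name);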
It seems that certain encoders (Quicksync) do not have proper back-end
support in the Windows Media Foundation libraries for certain CPUs.
Quicksync doesn't appear to support CPUs below Haswell (4xxx). It's
really annoying, but there's not much we can do about it until we
implement our own custom Quicksync implementation.
This check simply makes it attempt to spawn an encoder to check to see
whether the encoder can actually be created before registering an
encoder.
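A sketch of the check, with hypothetical helper names:

    /* Only register the encoder if one can actually be created. */
    static bool can_spawn_encoder(void)
    {
        void *test = try_create_mf_encoder(); /* hypothetical */
        if (!test)
            return false;
        destroy_mf_encoder(test);             /* hypothetical */
        return true;
    }

    /* ... then, at module load: */
    if (can_spawn_encoder())
        obs_register_encoder(&mf_h264_encoder_info);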
The previous commit (672378d20) was supposed to fix issues with the
encoder releasing while data was still being processed, but did not
account for the case where the encoder had never started up. That was
my fault.
Furthermore, the way in which it was waiting to drain events was
incorrect. The encoder may still be active even though there aren't any
events queued. The proper way to wait for an async encoder to finish up
is to process output samples until it requests more input samples.
After I made it so that the encoder internal data gets destroyed when
all outputs stop using it (fa7286f8), the media foundation h264 encoder
started having crashes on shutdown. After a lot of testing, I realized
that the reason it started happening is almost assuredly because active
encoding events had not yet been completed.
After making it wait on those events by calling DrainEvents(true), the
crashes stopped. So asynchronous actions were clearly still occurring
and it was shutting down while data was still being processed, thus
leading to a crash.
Adds a VideoToolbox based H264 encoder for OSX, which most notably
allows the use of hardware encoding (Quicksync).
NOTES:
- Hardware encoding is handled by Apple itself internally. The plugin
itself has little control over many details due to the way that Apple
designed the VideoToolbox interface. Generally, however, Quicksync is
used if available on the CPU, and Quicksync is almost always available
due to the fact that Macs are exclusively Intel.
- The VideoToolbox does not seem to implement CBR, so it won't be
available. These encoders are generally not recommended for
streaming.
Implements hardware encoders through the Media Foundation interface
provided by Microsoft.
Supports:
- Quicksync (Intel)
- VCE (AMD)
- NVENC (NVIDIA, might only be supported through MF on Windows 10)
Notes:
- NVENC and VCE do not appear to have proper CBR implementations. This
isn't a fault of our code, but the Media Foundation libraries.
Quicksync however appears to be fine.
API changed from:
obs_source_info::get_name(void)
obs_output_info::get_name(void)
obs_encoder_info::get_name(void)
obs_service_info::get_name(void)
API changed to:
obs_source_info::get_name(void *type_data)
obs_output_info::get_name(void *type_data)
obs_encoder_info::get_name(void *type_data)
obs_service_info::get_name(void *type_data)
This allows the type data to be used when getting the name of the
object (useful for plugin wrappers primarily).
NOTE: Though a parameter was added, this is backward-compatible with
older plugins due to calling convention. The new parameter will simply
be ignored by older plugins, and the stack (if used) will be cleaned up
by the caller.
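For example, a plugin wrapper might use the parameter like this (the
wrapper struct is hypothetical):

    static const char *wrapped_source_get_name(void *type_data)
    {
        struct wrapped_type *type = type_data; /* hypothetical */
        return type ? type->display_name : "Unknown Source";
    }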
This is used by some muxers that set AVFMT_NOFILE and doesn't seem to
hurt muxers that don't set it; notably, this makes the hls muxer
output its m3u8 playlist with the proper filename in the proper
directory.
Cleaning up my previous commit a bit. We can just keep the
appropriate BMDPixelFormat as a data member and keep StartCapture() a
bit clearer.
This might also be helpful if (when?) the detection code needs to be
more robust or configurable.
Detect the device type when initializing the device instance and
determine whether to capture YUV or RGB. Tested with a Blackmagic
Intensity Pro and a Blackmagic Intensity Pro 4K in the same machine,
capturing at the same time, on Linux.
For both cases the cur_level calculations were wrong. For the
one-channel case, I assume that was simply an oversight; for the
two-channel case, deriving the level by downmixing to mono results in
a more attenuated level than expected. One solution is to use the
highest level of both channels to drive the gate.
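A sketch of that solution (helper and variable names assumed):

    #include <math.h>

    /* Drive the gate with the louder channel instead of a mono
     * downmix, which attenuates the level. */
    float cur_level = fmaxf(get_level(samples[0], num_samples),
                            get_level(samples[1], num_samples));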
This is rather embarrassing. I used the parameter variable, and the
actual variable that I wanted to use went completely unused. Would
static analysis catch something like this, I wonder? It would
probably have to be really good static analysis.
YouTube Gaming went live today (26 August 2015) and people will ask
for it.
This makes it a bit clearer that YouTube and YouTube Gaming
(which share the same ingestion system) work with OBS MP.
This will use the services.json file present in the cache, or if it has
the wrong format version or is corrupted for whatever reason, uses the
local version instead.
Also a minor refactor, makes it so that you call the open_services_file
function to get the services array, rather than having to get the file
name each time.
This reverts commit 74354dc4cf. I really
shouldn't have modified this, especially not in this way. Was the wrong
approach. The thing I was trying to fix was very rare as well.
When a window being captured is closed, it never tries to reacquire.
This just searches for the window in video_tick and reacquires if the
currently set window is found again.
Closes jp9000/obs-studio#465
I made the rather tough call of not showing all services by default; I
didn't want to have to do this, but too many services are asking to be
put in to the program, and any time I add a service in to the list, I
feel uncomfortable because I feel like I'm potentially advertising them,
and/or they're using our program to advertise as well. Some of these
services are particularly bad at policing illegal/copyrighted content,
host content that I personally find distasteful or incredibly stupid
(what the heck is up with these "vaping" streams?), or are just fairly
terrible websites in general that I just feel uncomfortable with showing
by default.
However, I do not really want to reject anyone either; I want to let
their users be able to use our program with relative ease, but more
than anything I just simply don't want to be seen as "endorsing" some
of these websites (more than others in particular). I know that a
"show all services" checkbox is probably a pretty pointless/superfluous
thing to do, but I feel like it's at the very least a means of saying
"hey, I don't really endorse these guys," or "use at your own risk,"
or "warning: this website is incredibly terrible."
Honestly, I couldn't really think of any better solution that would
a.) still list all services without outright censoring them, and
b.) prevent us from being seen as "endorsing" all services.
(Although maybe this whole thing feels a bit.. passive aggressive. I
feel like I'm tipping over someone's garden gnome in the middle of the
night while they're sleeping. Still, it's something.)
NOTE: This code is backward compatible; i.e., if you previously had a
service selected that's not common but don't have the "show all"
checkbox checked, it'll still show that service for convenience.
Services almost always recommend this be enabled, and I generally want
to make configuration easier for users; with CBR they don't have to set
things like the CRF value.
The single-darray solution was potentially unsafe, since you're not
allowed to modify the (encode) buffer between calls to
complex_input_data_proc; that constraint is potentially violated if
the darray had to be resized due to its capacity being
< 2 * in_bytes_required.
This is my fault; I made an idiotic assumption about the data and it
ended up causing the plugin to crash. This is definitely one of my more
embarrassing moments.
This just changes the x264 encoder settings; it doesn't actually change
the framerate of OBS. OBS will always output at a constant framerate
regardless of whether this option is on or off; this just changes how
the encoder encodes the data.
With no stream key, no streams were actually being created.
This is a crazy configuration anyway, but it resulted in OBS getting
stuck in the "Connecting" state with no way to cancel.
We now just use the blank key and hope for the best.
Reinstate flag checks in RTMP_Close that were erroneously removed.
Clear out the Link state before we establish a new connection. There is
too much state carried around during authentication that has no good
place to clear it in librtmp, which assumes a clean structure when the
connection is initially established.
Adds Microsoft Media Foundation AAC Encoder that supports
96k to 192k bitrates. This plugin is only enabled on Microsoft
Windows 8+ due to performance issues found on Windows 7.
Authentication code has been updated as per the changes to support
multiple streams.
Authentication is now also enabled by default, and should be a no-op
if the server does not request authentication or username and password
details are not provided.
I've come to realize that it's probably not wise to deviate from the
original version's functionality due to the fact that the original
version works without issues. I'm wondering if some of the capture
problems have been due to the fact that the direct hook method (via
CreateRemoteThread) was removed, so I put it back in, made it default,
and added an option to use anti-cheat compatibility just like in the
original version.
This particularly affected audio encoding: audio encoding previously
would count samples and use that count to create an encoding
timestamp, but because a standard integer was used (which is 32-bit by
default on x86), it would max out at about 0x7FFFFFFF samples, which
is about 12 hours of samples at a 48000 sample rate. After that, it
would start going into negative territory (overflowing). By changing
it to int64_t, audio at 48000 samples per second can only overflow
after about 6.09 million years. In other words, this should fix the
issue for good.
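A sketch of an overflow-safe conversion with the 64-bit counter:

    #include <stdint.h>

    /* With int64_t, a 48 kHz sample count overflows only after
     * millions of years.  Split the division so the multiply itself
     * can't overflow either. */
    static inline int64_t samples_to_ns(int64_t samples, uint32_t rate)
    {
        int64_t sec = samples / rate;
        int64_t rem = samples % rate;
        return sec * 1000000000LL + rem * 1000000000LL / rate;
    }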
Livecoding.tv (coding), gaminglive.tv (gaming), and beam.pro
(gaming/music)
I really don't see any problems with adding these particular services
to the local list while the actual remote ingest lookup code has yet
to be started (as of this writing). They seem to be harmless services
that are dedicated to specific types of content (stated above).
When hooking 64bit functions, sometimes the offset between the function
being hooked and the hook itself can be large enough to where it
requires a 64bit offset to be used. However, because a 64bit jump
requires overwriting so many code instructions in the function, it can
sometimes overwrite code in to an adjacent function, thereby causing a
crash.
The 64bit hook bounce (created by R1CH) is designed to prevent using
very long jumps in the target by creating executable memory within a
32bit offset of that target, and then writing it with the 64bit long
jump instruction instead. Then in the target function, it will jump to
that memory instead, thus forcing the actual hooked function to use a
32bit hook instead of a 64bit hook, and using at most 5 bytes for the
actual hook, preventing any likelihood of it overwriting an adjacent
function.
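A rough sketch of the allocation step (not the actual implementation;
error and address wrap-around handling omitted):

    #include <stdint.h>
    #include <windows.h>

    /* Try to allocate executable memory within +/-2GB of the target
     * so the hook written into the target can stay a 5-byte rel32
     * jmp instead of a long 64-bit jump. */
    static void *alloc_hook_bounce(void *target, size_t size)
    {
        uintptr_t base = (uintptr_t)target;
        for (uintptr_t p = base - 0x7FFF0000; p < base + 0x7FFF0000;
             p += 0x10000) {
            void *mem = VirtualAlloc((void *)p, size,
                                     MEM_RESERVE | MEM_COMMIT,
                                     PAGE_EXECUTE_READWRITE);
            if (mem)
                return mem;
        }
        return NULL;
    }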
System timestamps were being used instead of timestamps from the
audio/video input. This would cause potential desync as well as
incremental buffering when using devices with the blackmagic video
source. Using the timestamps direct from the SDK itself fixes those
issues, and causes audio/video to play back properly and in sync.
This filter simply modifies the volume of the signal as a convenient
way of modifying the volume before other filters, or amplifying the
volume without having to mess with advanced audio properties.
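The core of such a filter is just a per-sample multiply, e.g.:

    /* Apply a linear gain multiplier to a block of float samples. */
    static void apply_gain(float *samples, size_t count, float mul)
    {
        for (size_t i = 0; i < count; i++)
            samples[i] *= mul;
    }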
In addition to the flv file format, this allows the ability to save to
container formats such as mp4, ts, mkv, and any other containers that
support the current codecs being used.
It pipes the encoded data to the ffmpeg-mux process, which then safely
muxes the file from the encoded data. If the main program unexpectedly
terminates, the ffmpeg-mux piped program will safely close the file and
write trailer data, preventing file corruption.
Instead of using system timestamps for playback, use the timestamps
directly from the video/audio data to ensure all the data is synced up
using the obs_source back-end.
I think the original misconception when this was written was that OBS
would not handle timestamp resets/loops, which isn't the case; it will
actually handle all timestamp resets and abnormalities. It's always
best to use the obs_source back-end for all playback and syncing.
When the bitrate was set to 64, CoreAudio would call
complex_input_data_proc more than once, which in turn would cause
consumed bytes in the input buffer to be "freed" more than once (once
for every additional call of complex_input_data_proc and once in
aac_encode).
This allows the ability to output the audio of the device as desktop
audio (via the WaveOut or DirectSound audio renderers) instead of
capturing the audio only.
In the future, we'll implement audio monitoring which will make this
feature obsolete, but for the time being I decided to add this option as
a temporary measure to allow users to play the audio from their devices
via the DirectShow output.
Found via UBSan, actual (sample) error:
"plugins/text-freetype2/text-functionality.c:284:26: runtime error: left
shift of 194 by 24 places cannot be represented in type 'int'"
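The usual fix is to perform the shift in an unsigned type, e.g.:

    /* (int)194 << 24 is undefined behavior; promote to unsigned
     * before shifting into the sign bit. */
    uint32_t val = (uint32_t)bytes[0] << 24;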
Add the include directories found by cmake to the jack plugin.
This allows for the plugin to compile when the jack headers were
found in a directory that is not normally in the search path of the
compiler (e.g. /usr/local/include)
Add the include directories found by cmake to the v4l2 plugin.
This allows for the plugin to compile when the v4l2 headers were
found in a directory that is not normally in the search path of the
compiler (e.g. /usr/local/include)
Add the include directories found by cmake to the pulseaudio plugin.
This allows for the plugin to compile when the pulseaudio headers were
found in a directory that is not normally in the search path of the
compiler (e.g. /usr/local/include)
Fix build errors for older versions of the api where
VIDIOC_ENUM_DV_TIMINGS was defined but V4L2_IN_CAP_DV_TIMINGS was not.
I was under the impression that they were added at the same time, but
apparently I was wrong there.
Thanks to kmoore@FreeBSD.org for spotting this on FreeBSD.
In the settings, if you select the default container, the format
becomes null. If it is null, the audio/video codec ids should not be
set on the output format, as they will both be AV_CODEC_ID_NONE,
causing a context with no streams specified to be created (an error).
Add compatibility with older versions of the api by not failing to
build when VIDIOC_ENUM_DV_TIMINGS is missing. In older versions
of the api there was a different system to get dv-timing presets, which
was replaced by the current enumeration system with Linux 3.4.
This will allow for the plugin to be built against older versions of the
api by disabling the enumeration support, thus reducing the
functionality for some devices.
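A sketch of the guard:

    /* Compile out DV-timings enumeration on pre-3.4 headers. */
    #ifdef VIDIOC_ENUM_DV_TIMINGS
        struct v4l2_enum_dv_timings timings;
        /* ... enumerate presets via the VIDIOC_ENUM_DV_TIMINGS
         * ioctl ... */
    #endif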
Improve compatibility with older versions of the api by not requiring
V4L2_CAP_DEVICE_CAPS. If we don't have this, we fall back to using the
capabilities member for the whole device instead of the device_caps
member for the currently selected subdevice. Just like we would do if
the device would not support this.
The new device_caps field was introduced with Linux 3.3.
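A sketch of the fallback:

    /* Prefer the per-subdevice caps (Linux 3.3+) when the driver
     * provides them; otherwise use the whole-device capabilities. */
    struct v4l2_capability cap;
    ioctl(fd, VIDIOC_QUERYCAP, &cap);
    uint32_t caps = (cap.capabilities & V4L2_CAP_DEVICE_CAPS)
                        ? cap.device_caps
                        : cap.capabilities;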
Add BGRX and BGRA as supported video formats, since obs can handle
them directly. I unfortunately missed those when I initially wrote
this mapping, due to my webcam not offering those formats.