This reverts commit 8d520b970d.
This can actually cause a hard lock due to the Windows API when
destroying window capture. When the graphics thread locks the source
list for doing tick or render, and then the UI thread tries to destroy a
source, the UI thread will wait for the graphics thread to complete
rendering/ticking of sources. The video_tick of window capture would
then check windows in the same process and try to query the window's
name via GetWindowText. However, GetWindowText is synchronous, and will
not return until the window event has been processed by the UI thread,
so it will perpetually lock because the two threads are waiting for each
other to finish.
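To illustrate the shape of the deadlock (a sketch only; the names here
are hypothetical, not the actual libobs code):

#include <windows.h>

static CRITICAL_SECTION source_list_lock;

static DWORD WINAPI graphics_thread(LPVOID param)
{
	HWND window = (HWND)param;
	wchar_t name[256];

	EnterCriticalSection(&source_list_lock); /* held for tick/render */

	/* GetWindowText sends a message to the window's owning (UI)
	 * thread and does not return until that thread processes it --
	 * but the UI thread is itself blocked on source_list_lock while
	 * trying to destroy a source, so both threads wait forever */
	GetWindowTextW(window, name, 256);

	LeaveCriticalSection(&source_list_lock);
	return 0;
}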
On Windows, if you were saving a file name or directory with characters
outside the current Windows character set, it could cause the file
saving process to fail. This fixes it so that on Windows it uses wmain
and converts the Unicode command line to a UTF-8 command line, which
works with FFmpeg.
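A minimal sketch of the wmain approach (simplified; main_utf8 is a
hypothetical entry point, and error handling/frees are omitted):

#include <windows.h>
#include <stdlib.h>

extern int main_utf8(int argc, char *argv[]); /* hypothetical */

int wmain(int argc, wchar_t *argv_w[])
{
	char **argv = malloc(argc * sizeof(char *));

	for (int i = 0; i < argc; i++) {
		/* query required size, then convert UTF-16 -> UTF-8 */
		int len = WideCharToMultiByte(CP_UTF8, 0, argv_w[i], -1,
					      NULL, 0, NULL, NULL);
		argv[i] = malloc(len);
		WideCharToMultiByte(CP_UTF8, 0, argv_w[i], -1, argv[i],
				    len, NULL, NULL);
	}

	return main_utf8(argc, argv);
}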
Prevents game capture from acting as a global source. This fixes an
issue where a game capture in another scene could capture a window and
prevent a separate game capture in the current scene from being able to
capture that same window.
Completely shut down monitor capture when it's not being shown in the
program (for example in a different scene). This fixes an issue where
it would cause lag when a game enters fullscreen mode.
Instead of having a "cbr" setting that turns CBR on and off, adds a
"rate_control" parameter that sets the rate control method, which can be
one of the following: CBR, ABR, VBR, CRF.
If the "cbr" setting is used, it will throw a deprecation warning to the
log.
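A sketch of how the deprecation path might look (the libobs calls are
real; the helper itself is an assumption):

#include <obs-module.h>

static const char *get_rate_control(obs_data_t *settings)
{
	if (obs_data_has_user_value(settings, "cbr")) {
		blog(LOG_WARNING, "\"cbr\" setting is deprecated, "
				  "use \"rate_control\" instead");
		return obs_data_get_bool(settings, "cbr") ? "CBR" : "ABR";
	}

	return obs_data_get_string(settings, "rate_control");
}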
Instead of using an option that turns CBR on/off, adds rate control
methods: VBR, CBR, CQP, Lossless.
This moves lossless from being a preset to being a rate control method.
When using per-encoder rescaling, QSV would overwrite the current
encoder scale value in the get_video_info callback with the base video
width/height instead of using the current encoder width/height.
(Also modifies obs-ffmpeg to handle empty frames on EOF)
Previously the demuxer could hit EOF before the decoder threads are
finished, resulting in truncated output. In the worst-case scenario, the
demuxer could read small files before ff_decoder_refresh even has a chance
to start the clocks, resulting in no output at all.
There was no error checking when sending headers/metadata, so what would
happen is that if a header/metadata send failed (meaning the socket was
disconnected), it would continue to act as if it was still connected,
and it would block and lock up on the next send/recv call.
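Roughly the shape of the fix (function names are assumptions, not the
exact rtmp-stream code): every send result is checked so a dead socket
fails fast instead of blocking on a later call.

static bool send_headers(struct rtmp_stream *stream)
{
	if (!send_meta_data(stream))
		return false;
	if (!send_audio_header(stream))
		return false;
	if (!send_video_header(stream))
		return false;

	return true;
}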
In aa4e18740a I mistakenly thought that I could add the variables
back in and that it would automatically cull variables that aren't used,
but that wasn't the case -- the shader parser always checks to see
whether parameters were set, and if they're not, it'll fail. This fixes
an issue where the shader would try to access parameters that are no
longer needed and fail due to the shader parameter check.
YUV-based shader support has been removed (due to the fact that no
sources ever use YUV shading) so there's no reason to keep around the
YUV processing code.
Currently, multiple QSV encoders cannot be active at the same time
(otherwise it will crash). This is a temporary solution to prevent
crashes from occurring when more than one QSV encoder tries to start up
at the same time.
Additionally, in the future there should be a way for encoders to be
able to communicate with the front-end when an error such as this
occurs.
If the parent source of a scroll filter has a 0 width or 0 height, the
scroll filter would do a division by zero on the size_i variable, which
would then cause the offset variable to perpetually have a non-finite
value, thus preventing the scroll filter from rendering properly after
that due to the non-finite offset value being uploaded to the shader.
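The guard amounts to something like this (a fragment; variable and
function usage assumed, though the libobs calls themselves are real):

uint32_t width = obs_source_get_base_width(target);
uint32_t height = obs_source_get_base_height(target);

/* bail out of the tick when the parent has no size, so size_i never
 * divides by zero and offset stays finite */
if (!width || !height)
	return;

struct vec2 size_i;
vec2_set(&size_i, 1.0f / (float)width, 1.0f / (float)height);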
(Note: this commit also modifies the obs-filters and test-input modules)
Changes the obs_source_process_filter_begin return type so that it
returns true/false to indicate that filter processing should or should
not continue (for example if the filter is bypassed or if there's some
other sort of issue that causes the filtering to fail)
When QSV is used on a Windows 7 machine with a dedicated card, you
have to fake a monitor connection to your Intel graphics to be able to
use QSV. If you do not, the initialization will fail with an error.
The error for that situation is not handled properly, and a variable
will be used while null. Instead, the function should safely return
after that error is received.
Also, do not call ClearData in the destructor unless QSV has been
properly initialized (i.e. skip it when m_pmfxENC is null).
The if statement erroneously ended with a ';', which means that the code
is always executed, but there's no reason to even have these if checks
in the first place as the functions themselves return safely with null
pointers.
When using a chain hook method (forward or reverse), it was unwisely
assumed that the previous hook in the chain would not overwrite new
hooks when it's called. When the game capture hook calls the previous
hook in the chain, certain other programs that hook (in this case,
rivatuner on-screen display) would overwrite the hook with older data
and erase the game capture hook, causing it to only capture the first
frame and then never capture again.
This patch ensures that the hook is always saved before calling the next
hook in the chain and then restored after the call returns. It also
preserves any new hooks that may be added on top of it at any point.
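Conceptually the save/restore looks like this (a sketch only; every
name is hypothetical, and the real code hooks D3D/GL entry points):

static void hooked_present(void)
{
	struct hook_state ours = g_hook; /* save our hook first */

	call_previous_hook_in_chain();   /* may clobber g_hook with
					  * stale data (e.g. rivatuner) */

	if (memcmp(&g_hook, &ours, sizeof(ours)) != 0)
		remember_new_top_hook(&g_hook); /* preserve newer hooks */

	g_hook = ours;                   /* restore ours afterward */
}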
This option is an x264-specific feature that may generate additional
keyframes when a major visual change in the output is detected. This
functionality is undesirable for streaming because it can cause
keyframes to become inconsistent and unpredictable, which can negatively
affect viewer buffering.
This is to work around a Mesa issue that prevents copying
between RGB and RGBA textures. Other drivers seem to allow
this, even though it's technically not allowed by the GL
spec.
Closes jp9000/obs-studio#514
(Note: This commit also modified text-freetype2)
Prevents issues with updating files that may not be using the current
system encoding on Windows.
Useful for two purposes:
1.) When many devices are hooked up to the system and used in separate
scenes, but only one device active at once is desired
2.) Allows users who are dependent on outputting audio to desktop to
disable that audio (via disabling that device) when the device isn't
being displayed
(Note: Also modified the obs-ffmpeg plugin module)
Allows the ability for frame data to pass 8-bit grayscale images (Y800
color format).
Closes jp9000/obs-studio#515
(Note: This commit also modified text-freetype2)
The implementation of get_modified_timestamp() did not check the return
value of stat(), so if the call fails, the values in the stats
structure can be any garbage from the stack, and the value of
stats.st_mtime can change on every call. This can trigger a reload of
the image every second, even if the file was not actually modified.
Closes jp9000/obs-studio#520
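A sketch of the fixed shape (simplified from the actual image source
code):

#include <sys/stat.h>
#include <time.h>

static time_t get_modified_timestamp(const char *filename)
{
	struct stat stats;

	/* only trust st_mtime if stat() actually succeeded */
	if (stat(filename, &stats) != 0)
		return -1;

	return stats.st_mtime;
}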
(Note: This commit also modifies obs-filters and text-freetype2)
This simplifies writing of effects. DrawMatrix is no longer necessary
because there are no sources that require drawing with a color matrix
other than async sources, and async sources are automatically processed
and don't defer their initial render stage to filters.
Originally this was on by default, but it was changed to off by
default because it was thought that there were permission issues. It
turned out that the permission issues were a separate bug, so it's safe
to have this default to on again.
This is a fast/immediate solution to a possible bug with caching the DLL
versions for game capture hook addresses -- may as well just reload game
capture hook addresses each time the program is run for the time being,
just to be safe. Load time will increase a little for now, but it's
worth it to prevent any issues with game capture.
When a source window was not available, a red background was shown
instead. This was undesirable, and expected behavior would be for the
background to be transparent, enabling what exists behind the source
to be shown.
These functions were mistakenly not marked as static. They are not used
outside of their compiled object module files, therefore there's no
reason for them not to be static.
Apple wants to get people to move over to their own crypto APIs instead
of using OpenSSL, so disable the warning in the files where OpenSSL is
used for the time being.
If the media source is set to restart on activation, it also shuts down
when not active. However, it would *always* start regardless of
active/inactive when the source is first created. It shouldn't do
that; it should start up only when it becomes active.
Certain types of sources (display captures, game captures, audio
device captures, video device captures) should not be duplicated. This
capability flag hints that the source prefers references over full
duplication.
Darkest Dungeon uses an unusual technique for drawing its frames: a
fixed 1920x1080 frame buffer used in place of the backbuffer, which is
then stretched to fit the size of the screen (whether the screen is
bigger or smaller than the actual texture).
The custom frame would cause glReadBuffer to initially fail with an
error. When this happens, their custom frame buffer is in use, so all
that needs to be done is simply reset the capture and force the current
output size to 1920x1080 while that custom frame is in use.
They presumably did this in order to ensure the game looks the same at
any resolution. Instead of having to use power-of-two sprites and
mipmaps for every single game sprite and stretch/skew each of them
(which would risk the final output "not looking quite right" at
different resolutions), they simply use non-pow-2 sprites with no
mipmaps and render them all on to one texture of a fixed size and then
stretch that final output texture. That ensures that the actual
composite of the game still looks the same at any resolution, while
reducing texture memory by not requiring each sprite to use a
power-of-two texture and mipmaps.
Adds the option of making the media file restart when the source becomes
active (such as switching to a scene with it).
Due to lack of libff features to start/stop/pause/seek media files,
currently this just destroys the demuxer and recreates it. Ideally,
libff should have some functions to allow a more optimal means of doing
those things.
Refactors a bit of code related to starting up FFmpeg and makes it so the
initial view for the media source's properties displays the most
commonly desired settings.
Instead of the media source properties showing the URL mode by default
along with a whole bunch of properties that are confusing to most users,
it now starts in file mode and changes defaults to be a bit more
sensible for file input.
Also, as a temporary measure for fixing color format issues (some video
files would display their color information incorrectly), forced format
conversion is now enabled by default, and has been moved to advanced
settings. Ideally, the actual bug causing color format issues in either
media-io or libff should be fixed at some point.
When browsing for a file, it would also just use *.* for the file
filter, which is a pain to use. This has been changed to use a
reasonable file filter related to common video/audio files so you don't
have to wade through non-media files just to select a media file. A
filter to show all files is still available as well.
Prevents it from loading the entire image in the graphics thread, and
allows for animated gifs in the filter (not that anyone would ever do
that.. right?)
Fixes what is arguably the most annoying feature of the mask/blend
filter, the fact that the image always stretches to the entire source.
It now centers and preserves aspect ratio by default, with an option to
make it stretch and discard aspect ratio to make it operate as it did
before.
Images continually loading/unloading every time a transition occurs
adds a lot of unnecessary transition lag. The user can always change
this value manually and/or use scene collections, so changing the
default so that images don't unload/reload feels a bit safer.
This is probably not necessary but might fix an issue where errors pass
through to other parts of the program, possibly causing the crash on
exit related to the xcomposite capture.
Some games don't catch GL errors via glGetError, so there's a
possibility that an error will pass through to the capture calls,
causing a false failure.
The most simple solution is to just clear the error flag on each capture
call.
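In essence, a minimal sketch:

#include <GL/gl.h>

/* drain any stale error left by the game so a pre-existing error is
 * not mistaken for a failure of our own capture calls */
static void clear_gl_error_flag(void)
{
	while (glGetError() != GL_NO_ERROR)
		;
}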
To first render the filter, the width/height values must be set, but
currently they're only set in the render function, which means that the
crop filter can never be rendered when the program first starts up.
This would cause the filter to fail to render at all under those
circumstances.
This patch moves the calculations from render to tick to ensure that
they're always called and the values are always set.
The virtual address table values for Reset/ResetEx can sometimes point
to functions that are in libraries outside of D3D8.dll and D3D9.dll, and
will cause a crash if used. Instead, just hook Reset/ResetEx when one
of the Present* functions are called.
Not calling recv when data is received will accumulate data in the
internal receive buffer until it's full, at which point it will stop
acknowledging. This can lead to unjustified disconnections.
If there was an attempt to destroy the rtmp-stream output while it was
already connecting and stopping at the same time, it would try to join
with the stop thread rather than with the connect thread. The connect
thread would then continue past destruction.
Sometimes stopping a connection can lock up due to data that still
remains to be sent, and this would lock up the thread requesting the
stop (typically the UI thread). So instead of locking up the calling
thread, spawn a new thread specifically for stopping so the calling
thread can continue uninterrupted. If the user attempts to reconnect,
it will wait for the stop thread to complete in the connect thread
before attempting to connect.
API removed:
--------------------
gs_effect_t *obs_get_default_effect(void);
gs_effect_t *obs_get_default_rect_effect(void);
gs_effect_t *obs_get_opaque_effect(void);
gs_effect_t *obs_get_solid_effect(void);
gs_effect_t *obs_get_bicubic_effect(void);
gs_effect_t *obs_get_lanczos_effect(void);
gs_effect_t *obs_get_bilinear_lowres_effect(void);
API added:
--------------------
gs_effect_t *obs_get_base_effect(enum obs_base_effect effect);
Summary:
--------------------
Combines multiple near-identical functions into a single function with
an enum parameter.
Use explicit UTF-8 byte sequence for the "no-break space" character.
Prevents issues with certain editors, and fixes the following compiler
warning on Visual C++:
warning C4819: The file contains a character that cannot be represented
in the current code page (X). Save the file in Unicode format to prevent
data loss
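For reference, U+00A0 encodes as the two bytes C2 A0 in UTF-8, so the
escaped form looks like:

static const char *nbsp = "\xc2\xa0"; /* UTF-8 no-break space (U+00A0) */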
This replaces the name-based detection of the Intensity Pro 4K, and
allows other devices to be able to use the BGRA pixel format, if the
user so chooses.
Another thread could be manipulating the active_log_contexts array
while the current thread is trying to read it, resulting in an
uninitialized memory crash, as the da_push_back call was not protected
by the mutex.
Polls for file changes like the text plugin does. This is an interim
solution; both the text plugin and image source should use a file
monitoring API, preferably implemented through libobs.
Closes jp9000/obs-studio#482
When using a text file with the source and the font face is changed,
it would fail to update the glyphs and text accordingly. It would
trigger an error jump at line 392 of text-freetype2.c, ultimately
resulting in the text rendering garbled after that.
How to reproduce:
Set the source to get text from a file, then just change the font face
(but not the size or anything else).
When updating text from file periodically, newer glyphs that weren't
already cached would not end up being rendered. This fixes the issue by
calling cache_glyphs after the file has been updated.
How to reproduce the original issue:
Set a text-freetype2 source to load an english-only text file. Then
overwrite the text in the file with non-english characters. The
non-english characters will then fail to render.
Reported at https://obsproject.com/mantis/view.php?id=336
Closes jp9000/obs-studio#481
Instead of using shell functions to get the Windows system directory,
use the kernel32 functions (GetSystemDirectory and
GetSystemWow64Directory). Reduces a bit of unnecessary overhead.
This caches the font list data to a file to minimize load times. Font
data will be refreshed when any font files are added/removed, based upon
a checksum of the font file names and dates (if available).
Microphones and other input devices can often have bad or erroneous
timestamps. Although we handle bad timestamps much better in
obs-studio, there are still lingering issues that can crop up from time
to time with device QPC timestamps that lead to mic data not playing
back properly. It's best for this to be off by default rather than on,
which will now cause it to use system timestamps for input devices by
default. This matches OBS1's handling of this case.
The new 'offset' value was not being passed back to the caller, which
caused the caller to continue to use the old value and thus would cause
an invalid hook and crash.
The call to CoInitializeEx in the win-mf module caused some sort of
conflict with the decklink module, causing the decklink module to crash
on exit. Instead, let libobs handle COM initialization.
If the GL capture part of the game capture hook fails to initialize for
whatever reason, it will go in to an infinite reacquire loop. If it
fails to initialize shared texture capture, try shared memory capture
instead.
For game capture, if a game is running at, for example, 800 FPS and limit
capture framerate is off, it would try to capture all 800 of those
frames, dramatically reducing performance more than what would ever be
necessary.
When limit capture framerate is off, instead of capturing all frames,
capture frames at an interval of twice the OBS FPS, identical to how
OBS1 works by default. This should greatly increase performance under
that circumstance.
This also adds the ability to detect whether it stopped due to lack of
space or not -- particularly useful for the FFmpeg output due to
lossless file format support.
For the FFmpeg output, the encoder ids are sort of superfluous. They
really should be optional. If they're not set, it should use the
encoder name string instead to determine the ids automatically.
It seems that certain encoders (quicksync) do not have proper back-end
support in the windows media foundation libraries for certain CPUs.
Quicksync doesn't appear to support CPUs that are not Haswell (4xxx) or
above. It's really annoying, but there's not much we can do about it
until we implement our own custom quicksync implementation.
This check simply attempts to spawn an encoder to see whether the
encoder can actually be created before registering it.
The previous commit (672378d20) was supposed to fix issues with the
encoder releasing while data was still being processed, but did not
account for when the encoder has never started up. That was my fault.
Furthermore, the way in which it was waiting to drain events was
incorrect. The encoder may still be active even though there aren't any
events queued. The proper way to wait for an async encoder to finish up
is to process output samples until it requests more input samples.
After I made it so that the encoder internal data gets destroyed when
all outputs stop using it (fa7286f8), the media foundation h264 encoder
started having crashes on shutdown. After a lot of testing, I realized
that the reason it started happening is almost assuredly because active
encoding events had not yet been completed.
After making it wait on those events by calling DrainEvents(true), the
crashes stopped. So asynchronous actions were clearly still occurring
and it was shutting down while data was still being processed, thus
leading to a crash.
Adds a VideoToolbox based H264 encoder for OSX, which most notably
allows the use of hardware encoding (Quicksync).
NOTES:
- Hardware encoding is handled by Apple itself internally. The plugin
itself has little control over many details due to the way that Apple
designed the VideoToolbox interface. Generally however, quicksync is
used if available on the CPU, and quicksync is almost always available
due to the fact that Macs are exclusively Intel.
- The VideoToolbox does not seem to implement CBR, so it won't be
available. These encoders are generally not recommended for
streaming.
Implements hardware encoders through the Media Foundation interface
provided by Microsoft.
Supports:
- Quicksync (Intel)
- VCE (AMD)
- NVENC (NVIDIA, might only be supported through MF on Windows 10)
Notes:
- NVENC and VCE do not appear to have proper CBR implementations. This
isn't a fault of our code, but the Media Foundation libraries.
Quicksync however appears to be fine.
API changed from:
obs_source_info::get_name(void)
obs_output_info::get_name(void)
obs_encoder_info::get_name(void)
obs_service_info::get_name(void)
API changed to:
obs_source_info::get_name(void *type_data)
obs_output_info::get_name(void *type_data)
obs_encoder_info::get_name(void *type_data)
obs_service_info::get_name(void *type_data)
This allows the type data to be used when getting the name of the
object (useful for plugin wrappers primarily).
NOTE: Though a parameter was added, this is backward-compatible with
older plugins due to calling convention. The new parameter will simply
be ignored by older plugins, and the stack (if used) will be cleaned up
by the caller.
This is used by some muxers that set AVFMT_NOFILE and doesn't seem to
hurt muxers that don't set it; notably, this makes the hls muxer output
its m3u8 playlist with the proper filename in the proper directory.
Cleans up my previous commit a bit. We can just keep the appropriate
BMDPixelFormat as a data member and keep StartCapture() a bit clearer.
This might also be helpful if (when?) the detection code needs to be
more robust or configurable.
Detect the device type when initializing the device instance and
determine whether to capture YUV or RGB. Tested with a Blackmagic
Intensity Pro and a Blackmagic Intensity Pro 4K in the same machine,
capturing at the same time, on Linux.
For both cases the cur_level calculations were "wrong". For the
one-channel case, I assume that was only an oversight; for the
two-channel case, getting the level from downmixing to mono results in
a more attenuated level than expected. One solution is to use the
highest level of both channels to drive the gate.
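For the two-channel case, the idea looks roughly like this (a fragment;
get_channel_level is a hypothetical helper):

#include <math.h>

/* drive the gate with the louder channel instead of a mono downmix,
 * which attenuates the detected level */
float cur_level = fmaxf(get_channel_level(left, num_samples),
			get_channel_level(right, num_samples));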
..This is rather embarrassing. I used the parameter variable, and the
actual variable that I wanted to use went completely unused. Would
static analysis catch something like this, I wonder? It would probably
have to be really good static analysis.
YouTube Gaming went live today (26 August 2015), and people will ask
for it.
This makes it a bit clearer that YouTube and YouTube Gaming
(which share the same ingestion system) work with OBS MP.
This will use the services.json file present in the cache, or if it has
the wrong format version or is corrupted for whatever reason, uses the
local version instead.
Also a minor refactor, makes it so that you call the open_services_file
function to get the services array, rather than having to get the file
name each time.
This reverts commit 74354dc4cf. I really
shouldn't have modified this, especially not in this way. Was the wrong
approach. The thing I was trying to fix was very rare as well.
When a window being captured is closed, it never tries to reacquire.
This just searches for the window in video_tick and reacquires if the
currently set window is found again.
Closes jp9000/obs-studio#465
I made the rather tough call of not showing all services by default; I
didn't want to have to do this, but too many services are asking to be
put into the program, and any time I add a service to the list, I
feel uncomfortable because I feel like I'm potentially advertising them,
and/or they're using our program to advertise as well. Some of these
services are particularly bad at policing illegal/copyrighted content,
host content that I personally find distasteful or incredibly stupid
(what the heck is up with these "vaping" streams?), or are just fairly
terrible websites in general that I just feel uncomfortable with showing
by default.
However, I do not really want to reject anyone either; I want to let
their users be able to use our program with relative ease, but more than
anything I just simply don't want to be seen as "endorsing" some of
these websites (more than others in particular). I know that a "show
all services" checkbox is probably a pretty pointless/superfluous thing
to do, but I feel like it's at the very least a means of saying "hey, I
don't really endorse these guys," or "use at your own risk," or
"warning: this website is incredibly terrible."
Honestly, I couldn't really think of any better solution that would
a.) still list all services without outright censoring them, and
b.) prevent us from being seen as "endorsing" all services.
(Although maybe this whole thing feels a bit.. passive aggressive. I
feel like I'm tipping over someone's garden gnome in the middle of the
night while they're sleeping. Still, it's something.)
NOTE: This code is backward compatible; i.e., if you previously had a
service selected that's not common but don't have the "show all"
checkbox checked, it'll still show that service for convenience.
Services almost always recommend this be enabled, and I generally want
to make configuration easier for users; with CBR they don't have to set
things like the CRF value.
The single darray solution was potentially unsafe, since you're not
allowed to modify the (encode) buffer between calls to
complex_input_data_proc, which is potentially violated if the darray
had to be resized due to capacity being < 2 * in_bytes_required.
This is my fault; I made an idiotic assumption about the data and it
ended up causing the plugin to crash. This is definitely one of my more
embarrassing moments.
This just changes the x264 encoder settings; it doesn't actually change
the framerate of OBS. OBS will always output at a constant framerate
regardless of whether this option is on or off; this just changes how
the encoder encodes the data.
With no stream key, no streams were actually being created.
This is a crazy configuration anyway, but it resulted in OBS getting
stuck in the "Connecting" state with no way to cancel.
We now just use the blank key and hope for the best.
Reinstate flag checks in RTMP_Close that were erroneously removed.
Clear out the Link state before we establish a new connection. There is
too much state carried around during authentication that has no good
place to clear it in librtmp, which assumes a clean structure when the
connection is initially established.
Adds Microsoft Media Foundation AAC Encoder that supports
96k to 192k bitrates. This plugin is only enabled on Microsoft
Windows 8+ due to performance issues found on Windows 7.
Authentication code has been updated as per the changes to support
multiple streams.
Authentication is now also enabled by default, and should be a no-op
if the server does not request authentication or username and password
details are not provided.
I've come to realize that it's probably not wise to deviate from the
original version's functionality due to the fact that the original
version works without issues. I'm wondering if some of the capture
problems have been due to the fact that the direct hook method (via
CreateRemoteThread) was removed, so I put it back in, made it default,
and added an option to use anti-cheat compatibility just like in the
original version.
This particularly affected audio encoding; audio encoding previously
would count samples and use them to create an encoding timestamp, but
because I was using a standard integer (which is 32-bit by default on
x86), it would max out at about 0x7FFFFFFF samples, which is about 12
hours of samples at a 48000 sample rate. After that, it would start
going into negative territory (overflowing). By changing it to int64_t,
audio at 48000 samples per second would only be able to overflow after
about.. 6.09 million years. In other words, this should fix the issue
for good.
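The arithmetic behind those numbers, as a sketch (the surrounding
callback is assumed):

#include <stdint.h>

/* at 48000 samples per second:
 *   INT32_MAX / 48000  = ~44739 seconds  (~12.4 hours)
 *   INT64_MAX / 48000 ~= 1.9e14 seconds (~6.09 million years) */
static int64_t total_samples = 0; /* was a plain (32-bit) int */

static void on_audio(size_t frames)
{
	total_samples += (int64_t)frames; /* no longer wraps in practice */
}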
Livecoding.tv (coding), gaminglive.tv (gaming), and beam.pro
(gaming/music)
I really don't see any problems with adding these particular services to
the local list while the actual remote ingest lookup code has yet to be
even started (as of this writing). They seem to be harmless
services that are dedicated to specific types of content (stated above).
When hooking 64bit functions, sometimes the offset between the function
being hooked and the hook itself can be large enough to where it
requires a 64bit offset to be used. However, because a 64bit jump
requires overwriting so many code instructions in the function, it can
sometimes overwrite code into an adjacent function, thereby causing a
crash.
The 64bit hook bounce (created by R1CH) is designed to prevent using
very long jumps in the target by creating executable memory within a
32bit offset of that target, and then writing it with the 64bit long
jump instruction instead. Then in the target function, it will jump to
that memory instead, thus forcing the actual hooked function to use a
32bit hook instead of a 64bit hook, and using at most 5 bytes for the
actual hook, preventing any likelihood of it overwriting an adjacent
function.
System timestamps were being used instead of timestamps from the
audio/video input. This would cause potential desync as well as
incremental buffering when using devices with the blackmagic video
source. Using the timestamps direct from the SDK itself fixes those
issues, and causes audio/video to play back properly and in sync.
This filter simply modifies the volume of the signal as a convenient way
of modifying the volume before other filters, or amplifying the volume
without having to mess with advanced audio properties.
In addition to the flv file format, this allows the ability to save to
container formats such as mp4, ts, mkv, and any other containers that
support the current codecs being used.
It pipes the encoded data to the ffmpeg-mux process, which then safely
muxes the file from the encoded data. If the main program unexpectedly
terminates, the ffmpeg-mux piped program will safely close the file and
write trailer data, preventing file corruption.
Instead of using system timestamps for playback, use the timestamps
directly from the video/audio data to ensure all the data is synced up
using the obs_source back-end.
I think the original misconception when this was written was that OBS
would not handle timestamp resets/loops, which isn't the case; it will
actually handle all timestamp resets and abnormalities. It's always
best to use the obs_source back-end for all playback and syncing.
When the bitrate was set to 64 CoreAudio would call
complex_input_data_proc more than once, which in turn would cause
consumed bytes in the input buffer to be "freed" more than once (once
for every additional call of complex_input_data_proc and once in
aac_encode)
This allows the ability to output the audio of the device as desktop
audio (via the WaveOut or DirectSound audio renderers) instead of
capturing the audio only.
In the future, we'll implement audio monitoring which will make this
feature obsolete, but for the time being I decided to add this option as
a temporary measure to allow users to play the audio from their devices
via the DirectShow output.
Found via UBSan, actual (sample) error:
"plugins/text-freetype2/text-functionality.c:284:26: runtime error: left
shift of 194 by 24 places cannot be represented in type 'int'"
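The class of fix, as a minimal sketch:

#include <stdint.h>

/* (194 << 24) overflows a signed int, which is undefined behavior;
 * converting to an unsigned 32-bit type first is well-defined */
static uint32_t pack_high_byte(uint8_t b)
{
	return (uint32_t)b << 24;
}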
Add the include directories found by cmake to the jack plugin.
This allows for the plugin to compile when the jack headers were
found in a directory that is not normally in the search path of the
compiler (e.g. /usr/local/include)
Add the include directories found by cmake to the v4l2 plugin.
This allows for the plugin to compile when the v4l2 headers were
found in a directory that is not normally in the search path of the
compiler (e.g. /usr/local/include)
Add the include directories found by cmake to the pulseaudio plugin.
This allows for the plugin to compile when the pulseaudio headers were
found in a directory that is not normally in the search path of the
compiler (e.g. /usr/local/include)
Fix build errors for older versions of the api where
VIDIOC_ENUM_DV_TIMINGS was defined but V4L2_IN_CAP_DV_TIMINGS was not.
I was under the impression that they were added at the same time, but
apparently I was wrong there.
Thanks to kmoore@FreeBSD.org for spotting this on FreeBSD.
In the settings, if you select the default container, the format
becomes null. If null, the audio/video codec ids should not be set on
the output format, as they will both be AV_CODEC_ID_NONE, causing a
context with no streams specified to be created (error).
Add compatibility with older versions of the api by not failing to
build when the VIDIOC_ENUM_DV_TIMINGS is missing. In older versions
of the api there was a different system to get dv-timing presets, which
was replaced by the current enumeration system with Linux 3.4.
This will allow for the plugin to be built against older versions of the
api by disabling the enumeration support, thus reducing the
functionality for some devices.
Improve compatibility with older versions of the api by not requiring
V4L2_CAP_DEVICE_CAPS. If we don't have this, we fall back to using the
capabilities member for the whole device instead of the device_caps
member for the currently selected subdevice. Just like we would do if
the device would not support this.
The new device_caps field was introduced with Linux 3.3.
Add BGRX and BGRA as supported video formats, since obs can handle them
directly. I unfortunately missed those when I initially wrote this
mapping due to my webcam not offering those formats.
API Changed (in struct obs_encoder_info):
----------------------------------------
bool (*get_audio_info)(void *data, struct audio_convert_info *info);
bool (*get_video_info)(void *data, struct video_scale_info *info);
To:
----------------------------------------
void (*get_audio_info)(void *data, struct audio_convert_info *info);
void (*get_video_info)(void *data, struct video_scale_info *info);
The encoder video/audio information callbacks no longer need to manually
query the libobs video/audio information, that information is now passed
via the parameter, which the callbacks can modify.
The refactor that reduces boilerplate in the encoder video/audio
information callbacks also removes the need for their return values, so
change the return types to void.
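Under the new signature a callback just adjusts the pre-filled info; a
hypothetical encoder's video callback might look like:

static void my_encoder_video_info(void *data,
				  struct video_scale_info *info)
{
	/* info arrives pre-filled with the current video settings;
	 * override only what this encoder requires */
	info->format = VIDEO_FORMAT_NV12;

	UNUSED_PARAMETER(data);
}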
I'm putting this option in due to the fact that there are legitimate
cases where a device may flip the output unexpectedly (such as the
Datapath VisionDVI-DL running in RGB video format), and that a user may
want to be able to view the source in a projector or source properties
without the image being inverted.
My original line of thinking was that they can just use a transform to
flip the image, but I felt this problem impacts rendering everywhere,
such as in the projector and in the source properties, so having it as
an option in the source itself feels like the best way to ensure that a
user can get it to render everywhere properly.
Check the actual name of the codec before applying an x264-specific
preset so we don't encounter an "Invalid argument" error when using
other h264 encoders in FFmpeg (such as NVEnc).
Closes jp9000/obs-studio#412
Changes:
- Prevent concurrent calls to EnumDevices (resolves a crash with
some device filters (like the XCAPTURE-1) with multiple active
dshow sources)
Adds the following changes:
- Prioritize YUV formats over non-YUV formats for performance and to
prevent intermediary filters
- Directly connect filters when possible to avoid intermediary filters
When frames were dropped, it would also drop I-frames, which can mess
with the keyframe calculation of certain services that depend on
I-frames in their output protocol (such as HLS).
The kCGDisplayStreamShowCursor option used with the dictionary does not
work if you assign @true or @false to it. After some testing, it needs
to point to the id cast of either kCFBooleanTrue or kCFBooleanFalse in
order for it to work properly.
If it doesn't use either of those values, the display stream seems to
use its internal default, which on 10.8 and 10.9 is visible, and 10.10+
is invisible, which would explain why people on 10.10 couldn't get the
cursor to capture.
Some formats (like WMV) would send out audio packets that
had channels set but did not specify a channel layout.
Solution is to no longer rely on channel layout to get the
channels and just get the channel count directly off the
FFmpeg audio frame.
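The gist, using the AVFrame field names FFmpeg provided at the time:

/* take the channel count straight off the decoded frame instead of
 * deriving it from channel_layout, which some containers (e.g. WMV)
 * leave unset */
int channels = frame->channels;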
If capture starts too quickly, the file mapping will return 2, which
means file not found, and it would then reset the capture and try again.
Sometimes this would result in long intervals where it wouldn't capture.
This fixes the issue by simply making game capture retry if file mapping
returns error number 2.
This allows applying a mask based upon the chroma value (U/V) of a
specific color in YUV color space. Commonly used to mask out
greenscreens and bluescreens in live video.