Authentication code has been updated to match the changes that added
support for multiple streams.
Authentication is now also enabled by default, and should be a no-op
if the server does not request authentication or if username and
password details are not provided.
I've come to realize that it's probably not wise to deviate from the
original version's functionality, because the original version works
without issues. I'm wondering if some of the capture problems have
been due to the removal of the direct hook method (via
CreateRemoteThread), so I put it back in, made it the default, and
added an option for anti-cheat compatibility just like in the
original version.
This particularly affected audio encoding. Audio encoding previously
counted samples and used that count to create an encoding timestamp,
but because it used a standard integer (which is 32bit by default on
x86), it would max out at about 0x7FFFFFFF samples, which is roughly
12 hours of samples at a 48000 sample rate. After that, it would
overflow into negative territory. By changing it to int64_t, audio at
48000 samples per second can only overflow after about 6.09 million
years. In other words, this should fix the issue for good.
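As a sanity check on those figures, here is a small standalone sketch
(not code from the patch) that computes the overflow boundaries:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        const int64_t rate = 48000;

        /* 32-bit sample counter: overflows after ~12.4 hours */
        printf("int32 overflow after %lld hours\n",
               (long long)(INT32_MAX / rate / 3600));

        /* 64-bit sample counter: overflows after ~6.09 million years */
        printf("int64 overflow after %lld years\n",
               (long long)(INT64_MAX / rate / 3600 / 24 / 365));

        return 0;
}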
Livecoding.tv (coding), gaminglive.tv (gaming), and beam.pro
(gaming/music)
I really don't see any problems with adding these particular services to
the local list while the actual remote ingest lookup code has yet to
even be started (as of this writing). They seem to be harmless
services that are dedicated to specific types of content (stated above).
When hooking 64bit functions, sometimes the offset between the function
being hooked and the hook itself can be large enough that it requires
a 64bit offset to be used. However, because a 64bit jump requires
overwriting so many code instructions in the function, it can
sometimes overwrite code into an adjacent function, thereby causing a
crash.
The 64bit hook bounce (created by R1CH) is designed to prevent using
very long jumps in the target by creating executable memory within a
32bit offset of that target and writing the 64bit long jump
instruction there. The target function then jumps to that memory,
forcing the actual hooked function to use a 32bit hook instead of a
64bit hook, using at most 5 bytes for the actual hook and preventing
any likelihood of it overwriting an adjacent function.
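A rough sketch of the bounce idea, assuming a simple 5-byte rel32
patch (the function names and the page-scanning strategy here are
illustrative; this is not R1CH's actual implementation):

#include <windows.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* allocate an executable "bounce" page within rel32 range of target */
static uint8_t *alloc_near(uint8_t *target)
{
        for (uint8_t *p = target + 0x10000; p < target + 0x7FF00000;
             p += 0x10000) {
                void *mem = VirtualAlloc(p, 4096,
                                MEM_COMMIT | MEM_RESERVE,
                                PAGE_EXECUTE_READWRITE);
                if (mem)
                        return mem;
        }
        return NULL;
}

static bool hook_with_bounce(uint8_t *target, void *hook_func)
{
        uint8_t *bounce = alloc_near(target);
        if (!bounce)
                return false;

        /* bounce page: FF 25 00000000 = jmp [rip+0], followed by the
         * 8-byte absolute address of the hook function */
        static const uint8_t abs_jmp[6] = {0xFF, 0x25, 0, 0, 0, 0};
        memcpy(bounce, abs_jmp, sizeof(abs_jmp));
        memcpy(bounce + sizeof(abs_jmp), &hook_func, sizeof(hook_func));

        /* target: a single 5-byte rel32 jmp to the bounce page */
        DWORD old;
        if (!VirtualProtect(target, 5, PAGE_EXECUTE_READWRITE, &old))
                return false;
        int32_t rel = (int32_t)(bounce - (target + 5));
        target[0] = 0xE9; /* jmp rel32 */
        memcpy(target + 1, &rel, sizeof(rel));
        VirtualProtect(target, 5, old, &old);
        return true;
}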
System timestamps were being used instead of timestamps from the
audio/video input. This would cause potential desync as well as
incremental buffering when using devices with the blackmagic video
source. Using the timestamps directly from the SDK itself fixes those
issues, and causes audio/video to play back properly and in sync.
This filter simply modifies the volume of the signal as a convenient
way of adjusting the volume before other filters, or of amplifying the
volume without having to mess with advanced audio properties.
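At its core the filter is just a per-sample multiply; a minimal
sketch, assuming planar float audio (names are illustrative, not the
actual plugin code):

#include <math.h>
#include <stddef.h>

/* convert a dB gain setting to a linear multiplier and apply it to
 * every sample of every (planar float) channel */
static void apply_gain(float **planes, size_t channels, size_t frames,
                       float gain_db)
{
        const float mul = powf(10.0f, gain_db / 20.0f);
        for (size_t c = 0; c < channels; c++)
                for (size_t i = 0; i < frames; i++)
                        planes[c][i] *= mul;
}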
In addition to the flv file format, this allows saving to container
formats such as mp4, ts, mkv, and any other containers that support
the current codecs being used.
It pipes the encoded data to the ffmpeg-mux process, which then safely
muxes the file from the encoded data. If the main program unexpectedly
terminates, the ffmpeg-mux piped program will safely close the file and
write trailer data, preventing file corruption.
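Conceptually the scheme looks like this; a hedged sketch using popen
for brevity (the real code spawns ffmpeg-mux with explicit pipe
handles, and the command line here is hypothetical):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        FILE *mux = popen("ffmpeg-mux recording.mkv", "w");
        if (!mux) {
                perror("popen");
                return EXIT_FAILURE;
        }

        /* write each encoded packet to the muxer as it is produced,
         * e.g. fwrite(pkt_data, 1, pkt_size, mux); if this process
         * dies, the muxer sees EOF on the pipe and still finalizes
         * the file */

        pclose(mux); /* normal shutdown: muxer writes the trailer */
        return EXIT_SUCCESS;
}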
Instead of using system timestamps for playback, use the timestamps
directly from the video/audio data to ensure all the data is synced up
using the obs_source back-end.
I think the original misconception when this was written was that OBS
would not handle timestamp resets/loops, which isn't the case; it will
actually handle all timestamp resets and abnormalities. It's always
best to use the obs_source back-end for all playback and syncing.
When the bitrate was set to 64, CoreAudio would call
complex_input_data_proc more than once, which in turn would cause
consumed bytes in the input buffer to be "freed" more than once (once
for every additional call of complex_input_data_proc and once in
aac_encode).
This allows outputting the device's audio as desktop audio (via the
WaveOut or DirectSound audio renderers) instead of capturing the
audio only.
In the future, we'll implement audio monitoring which will make this
feature obsolete, but for the time being I decided to add this option as
a temporary measure to allow users to play the audio from their devices
via the DirectShow output.
Found via UBSan, actual (sample) error:
"plugins/text-freetype2/text-functionality.c:284:26: runtime error: left
shift of 194 by 24 places cannot be represented in type 'int'"
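The usual fix for this class of error is to shift an unsigned value; a
small sketch with illustrative names (the actual code in
text-functionality.c may differ):

#include <stdint.h>

static uint32_t pack_high_byte(uint8_t b)
{
        /* was effectively `b << 24` after int promotion, which is
         * undefined for b >= 0x80 (e.g. 194); casting to uint32_t
         * first makes the shift well-defined */
        return (uint32_t)b << 24;
}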
Add the include directories found by cmake to the jack plugin.
This allows for the plugin to compile when the jack headers were
found in a directory that is not normally in the search path of the
compiler (e.g. /usr/local/include)
Add the include directories found by cmake to the v4l2 plugin.
This allows for the plugin to compile when the v4l2 headers were
found in a directory that is not normally in the search path of the
compiler (e.g. /usr/local/include)
Add the include directories found by cmake to the pulseaudio plugin.
This allows for the plugin to compile when the pulseaudio headers were
found in a directory that is not normally in the search path of the
compiler (e.g. /usr/local/include)
Fix build errors for older versions of the api where
VIDIOC_ENUM_DV_TIMINGS was defined but V4L2_IN_CAP_DV_TIMINGS was not.
I was under the impression that they were added at the same time, but
apparently I was wrong there.
Thanks to kmoore@FreeBSD.org for spotting this on FreeBSD.
In the settings, if you select the default container, the format
becomes null. If it is null, the audio/video codec ids should not
be set on the output format, as they will both be AV_CODEC_ID_NONE,
causing a context with no streams specified to be created (an error).
Add compatibility with older versions of the api by not failing to
build when VIDIOC_ENUM_DV_TIMINGS is missing. In older versions
of the api there was a different system to get dv-timing presets, which
was replaced by the current enumeration system with Linux 3.4.
This will allow for the plugin to be built against older versions of the
api by disabling the enumeration support, thus reducing the
functionality for some devices.
Improve compatibility with older versions of the api by not requiring
V4L2_CAP_DEVICE_CAPS. If we don't have this, we fall back to using the
capabilities member for the whole device instead of the device_caps
member for the currently selected subdevice, just as we would if the
device did not support this.
The new device_caps field was introduced with Linux 3.3.
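A minimal sketch of that fallback (the function name is illustrative;
the plugin's actual code differs):

#include <linux/videodev2.h>
#include <stdint.h>

static uint32_t usable_caps(const struct v4l2_capability *cap)
{
#ifdef V4L2_CAP_DEVICE_CAPS
        /* prefer the per-subdevice caps when the driver provides
         * them (device_caps was introduced with Linux 3.3) */
        if (cap->capabilities & V4L2_CAP_DEVICE_CAPS)
                return cap->device_caps;
#endif
        /* fall back to the capabilities of the device as a whole */
        return cap->capabilities;
}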
Add BGRX and BGRA as supported video formats, since obs can handle them
directly. I unfortunately missed those when I initially wrote this
mapping due to my webcam not offering those formats.
API Changed (in struct obs_encoder_info):
----------------------------------------
bool (*get_audio_info)(void *data, struct audio_convert_info *info);
bool (*get_video_info)(void *data, struct video_scale_info *info);
To:
----------------------------------------
void (*get_audio_info)(void *data, struct audio_convert_info *info);
void (*get_video_info)(void *data, struct video_scale_info *info);
The encoder video/audio information callbacks no longer need to manually
query the libobs video/audio information; that information is now passed
via the parameter, which the callbacks can modify.
The refactor that reduces boilerplate in the encoder video/audio
information callbacks also removes the need for their return values, so
change the return types to void.
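For example, a callback under the new signature might look like this
(a sketch; the NV12 override is illustrative):

#include <obs-module.h>

static void my_get_video_info(void *data, struct video_scale_info *info)
{
        (void)data;
        /* 'info' arrives pre-filled with the libobs video info; only
         * override what the encoder actually requires */
        info->format = VIDEO_FORMAT_NV12;
}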
I'm putting this option in because there are legitimate cases where a
device may flip the output unexpectedly (such as the Datapath
VisionDVI-DL running in RGB video format), and a user may want to be
able to view the source in a projector or source properties without
the image being inverted.
My original line of thinking was that they can just use a transform to
flip the image, but I felt this problem impacts rendering everywhere,
such as in the projector and in the source properties, so having it as
an option in the source itself feels like the best way to ensure that a
user can get it to render everywhere properly.
Check the actual name of the codec before applying an x264-specific
preset so we don't encounter an "Invalid argument" error when using
other h264 encoders in FFmpeg (such as NVEnc).
Closes jp9000/obs-studio#412
Changes:
- Prevent concurrent calls to EnumDevices (resolves a crash with
some device filters (like the XCAPTURE-1) with multiple active
dshow sources)
Adds the following changes:
- Prioritize YUV formats over non-YUV formats for performance and to
prevent intermediary filters
- Directly connect filters when possible to avoid intermediary filters
When frames were dropped, it would also drop I-frames, which can mess
with the keyframe calculation of certain services that depend on
I-frames in their output protocol (such as HLS).
The kCGDisplayStreamShowCursor option used with the dictionary does not
work if you assign @true or @false to it. After some testing, it needs
to point to the id cast of either kCFBooleanTrue or kCFBooleanFalse in
order for it to work properly.
If it doesn't use either of those values, the display stream seems to
use its internal default, which on 10.8 and 10.9 is visible, and 10.10+
is invisible, which would explain why people on 10.10 couldn't get the
cursor to capture.
Some formats (like WMV) would send out audio packets that
had channels set but did not specify a channel layout.
The solution is to no longer rely on the channel layout to get
the channel count, and to instead get the count directly off the
FFmpeg audio frame.
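A sketch of the change, assuming the AVFrame API of this FFmpeg era
(where frames carry a channels field alongside channel_layout):

#include <libavutil/frame.h>

static int frame_channel_count(const AVFrame *frame)
{
        /* frame->channels is populated even when channel_layout is 0,
         * as with some WMV audio packets */
        return frame->channels;
}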
If capture starts too quickly, the file mapping will return 2, which
means file not found, and it would then reset the capture and try again.
Sometimes this would result in long intervals where it wouldn't capture.
This fixes the issue by simply making game capture retry if file mapping
returns error number 2.
This allows applying a mask based upon the chroma value (U/V) of a
specific color in YUV color space. Commonly used to mask out
greenscreens and bluescreens in live video.
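The core of the masking math is a distance test in the U/V plane; a
plain-C sketch (the actual filter runs as a GPU shader, and the
parameter names here are illustrative):

#include <math.h>

static float chroma_alpha(float u, float v, float key_u, float key_v,
                          float similarity, float smoothness)
{
        float du = u - key_u, dv = v - key_v;
        float dist = sqrtf(du * du + dv * dv);

        /* fully transparent within 'similarity' of the key color,
         * ramping up to fully opaque over 'smoothness' */
        float a = (dist - similarity) / smoothness;
        return a < 0.0f ? 0.0f : (a > 1.0f ? 1.0f : a);
}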
Allows any source to be cropped, though note that this renders to
texture first, so for more optimal results, cropping values should
probably be put into capture sources that can be cropped as they're
actually rendered by the source.
This filter allows using a texture to modify the video output of a
source, providing the ability to:
- apply a color-based mask (dark = transparent, light = opaque)
- apply a mask based upon the alpha of an image
- blend an image via subtraction, addition, or multiplication
Adds a filter for delaying asynchronous video/audio data, for example
from webcams, video devices, or media playback sources. Mainly intended
to allow users to sync up their webcams to other devices.
If the settings are reset to defaults or if the settings are just bad,
the video would get stuck on the last frame that was displayed, which
feels a bit awkward. Best to make it stop video output entirely rather
than get stuck on the last video frame.
Certain devices (particularly certain mixers and soundflower 64ch) would
have an arbitrary number of channels, and wouldn't really be mappable to
a specific speaker layout supported by libobs.
So to fix this issue, if the channel count is above 8, force the data to
stereo to ensure playback can still occur, rather than cause it to just
fail.
This fixes an issue primarily with filter rendering: when capturing
windows and displays, their alpha channel is almost always 0, causing
the image to be completely invisible unintentionally. The original fix
for this for many sources was just to turn off the blending, which would
be fine if you're not rendering any filters, but filters will render to
render targets first, and that lack of alpha will end up carrying over
into the final image.
This doesn't apply to any mac captures because mac actually seems to set
the alpha channel to 1.
The code specific to Windows: helps convert `BSTR` instances to
`std::string`s; provides a Windows COM-specific implementation of
`CreateDeckLinkDiscoveryInstance`; aliases CFUUIDGetUUIDBytes,
CFUUIDBytes, and IUnknownUUID (the Linux SDK does this, but for some
reason the Windows SDK does not).
Some changes were made to the stock DeckLink SDK: removed all references
to legacy API headers in DeckLinkAPI.idl; removed all instances of
`importlib("stdole2.tlb");`.
Core API functions changed:
-----------------------------
EXPORT bool obs_reset_audio(struct audio_output_info *aoi);
EXPORT bool obs_get_audio_info(struct audio_output_info *aoi);
To:
-----------------------------
EXPORT bool obs_reset_audio(const struct obs_audio_info *oai);
EXPORT bool obs_get_audio_info(struct obs_audio_info *oai);
Core structure added:
-----------------------------
struct obs_audio_info {
        uint32_t samples_per_sec;
        enum speaker_layout speakers;
        uint64_t buffer_ms;
};
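Example usage under the new API (a sketch; the specific values are
illustrative):

#include <obs.h>

static bool reset_audio_example(void)
{
        struct obs_audio_info oai = {
                .samples_per_sec = 48000,
                .speakers = SPEAKERS_STEREO,
                .buffer_ms = 1000,
        };
        return obs_reset_audio(&oai);
}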
Non-interleaved (planar) floating point output is standard with audio
filtering, so to prevent audio filters from having to worry about
different audio format implementations, and for the sake of consistency
between user interfaces, make it so that audio is always set to
non-interleaved floating point output.
API changed from:
------------------------
EXPORT void obs_service_apply_encoder_settings(obs_service_t *service,
                obs_encoder_t *video_encoder,
                obs_encoder_t *audio_encoder);
void obs_service_info::apply_encoder_settings(void *data,
                obs_encoder_t *video_encoder,
                obs_encoder_t *audio_encoder);
To:
------------------------
EXPORT void obs_service_apply_encoder_settings(obs_service_t *service,
                obs_data_t *video_encoder_settings,
                obs_data_t *audio_encoder_settings);
void obs_service_info::apply_encoder_settings(void *data,
                obs_data_t *video_encoder_settings,
                obs_data_t *audio_encoder_settings);
These changes make it so that instead of an encoder potentially being
updated more than once with different settings, these functions
will be called with the specific settings being used, and the settings
will be updated according to what's required by the service.
This fixes that design flaw and ensures that there's no case where
obs_encoder_update is called where the settings might not have
service-specific settings applied.
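A sketch of a service implementation under the new signature (the
settings keys here are illustrative):

#include <obs-module.h>

static void my_apply_encoder_settings(void *data,
                obs_data_t *video_encoder_settings,
                obs_data_t *audio_encoder_settings)
{
        (void)data;
        /* tweak the settings objects directly instead of calling
         * obs_encoder_update with service-specific values */
        if (video_encoder_settings)
                obs_data_set_int(video_encoder_settings, "keyint_sec", 2);
        if (audio_encoder_settings)
                obs_data_set_int(audio_encoder_settings, "bitrate", 128);
}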
If this option is on, the image will unload when the image isn't
displayed anywhere, and then reload when it's displayed again, to
reduce the amount of VRAM that images take up at any given time.
If this option is off, then it's always loaded in VRAM regardless of
whether it's displayed or not.
Closes Pull Request #380
(edited by Jim)
Add a source property to enable buffering of frames, which is enabled by
default. This replaces the old "Use system timing" option by setting the
unbuffered source flag instead of using different timestamps.
While similar in intentions to the old option, this method should reduce
latency even more.
Fix bug 0000151: File loading not properly handled.
Link to bug: https://obsproject.com/mantis/view.php?id=151
A newly selected font is not loaded properly if "read from file" is
active without a valid file. The old error handling led to random
memory being displayed.
Closes Pull Request #390
(message edited by Jim)
By default, video plays back based upon the timestamp for each frame,
and buffers the frames as needed to ensure that they play back at the
expected timing.
However, this can add some minor additional delay to the video, and may
not be ideal for certain devices such as webcams and generally any
device that has minimal latency. Because those are the only types
of devices that typically have drivers, there's no real need to
have it on by default.
This adds an option to use buffering, and leaves it off by default.
Closes pull request #384
(message added by jim)
Use the macro from the mac capture plugin to convert the fourcc integer
value to a string. This makes the debug statement for the pixel format
slightly more readable for the casual developer.
Remove the "Leave Unchanged" option for the input and video format
select.
This option was primarily added for cases in which the
resolution and framerate are set by another program or the capture
device itself and the values are not directly supported by the plugin.
One major use case here would be capture devices for tv signals which
might be set to a specific resolution and refresh rate, and might fail
to initialize in case any other combination of those settings is forced.
In the case of the input, this option did not make much sense, as the
first input is probably the best default option in most cases.
For the video format this default is even bad in some cases. If a
format emulated by libv4l2 is selected, for example, this will usually
configure the device to use mjpeg with libv4l2 converting it to some
format obs can use. When obs or the source is then restarted and the
"Leave Unchanged" option is enabled, the plugin will fail, because it
will only notice that the device is set to mjpeg, without any knowledge
about the possibility of letting libv4l2 handle the conversion.
Using the first available and supported format is not necessarily the
best choice, but it is still preferable to some random format that will
cause the plugin to not capture at all. Forcing a choice here will
hopefully also make the plugin behaviour more predictable for the user.
Remove the constraint for device inputs to be of the type "CAMERA".
This was added under the false assumption that inputs of the type
"TUNER" are only used for control purposes.
This was caused by the new RTMP code that added support for multiple
streams; the stream index needs to be reset on RTMP_Close, otherwise it
will keep using the wrong stream information.
I had this issue where IDXGISwapChain::ResizeBuffers would fail in the
hooks, causing games to crash when they resized their backbuffers
because ResizeBuffers would return an 'invalid call' HRESULT value. In
the ResizeBuffers documentation it says that it will only happen if a
backbuffer currently has any outstanding references, but there's no way
this would happen unless ResizeBuffers internally calls Present or vice
versa.
After ResizeBuffers has been called, the very first call to Present will
somehow seemingly invalidate and/or destroy the current backbuffer.
It's very strange, but that seems to be what's going on, at least for
the game I was testing. So if you are performing a post-overlay
capture, then you must ignore the capture on the very first call to
Present.
It's Microsoft's code so you can't really know what's going on, you just
have to work around these strange issues seemingly in the dark.
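The workaround reduces to a one-frame skip; a sketch with illustrative
hook names:

#include <stdbool.h>

static bool resize_pending = false;

static void on_resize_buffers(void)
{
        resize_pending = true;
}

static void on_present(void)
{
        if (resize_pending) {
                /* first Present after ResizeBuffers: the backbuffer
                 * may be invalidated, so skip the capture */
                resize_pending = false;
                return;
        }
        /* capture_frame(); */
}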
Martell changed this function without realizing that this was calling a
function below it, not recursively calling itself. The reason why he
got the warning was because there was no forward declaration of the
function that was being called; I think he's used to C where only one
function definition can exist with the same name. In this case, it was
another function with the same name but with different parameters,
something that's permitted in C++. I wish I had realized this sooner.
This fixes the crashes people have been having with devices.
Apparently someone dumb (aka me) neglected to properly handle the inline
graphics hook API functions. You're not supposed to 'extern' inline
functions; they need to be defined in each file wherever they're used.
Apparently I neglected to use the reference operator. I think this may
partially be one of the reasons why many developers still choose to use
pointers instead of references, but fortunately an actual GOOD compiler
warns about this (aka anything but vc).
Clears up a warning (to prevent && and || confusion), and clarifies what
specifically the if statement is trying to accomplish (check to see if
the capture is valid)
On windows, for whatever reason sockets use the SOCKET type which is not
a signed integer. Still, even though it's not a signed integer, -1 is
used to indicate an invalid socket, but the way you use it is via
microsoft's fabulously dumb little INVALID_SOCKET define, so we have to
make librtmp use that instead.
The HWND type is a void pointer, but HWND values are global and always
32bit despite the 64bit pointer size, so casting to 32bit can cause
cast warnings on actual good compilers like gcc via mingw. This change
correctly handles the casting to 32bits without producing unwanted
warnings or errors on mingw.
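A sketch of the warning-free truncation (the helper name is
illustrative):

#include <stdint.h>
#include <windows.h>

static uint32_t hwnd_to_u32(HWND hwnd)
{
        /* go through uintptr_t so the pointer-to-smaller-integer
         * truncation is explicit rather than an implicit cast */
        return (uint32_t)(uintptr_t)hwnd;
}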
win-capture should not postfix .lib to psapi.
The graphics hook also requires psapi when linking.
Also change some link libs as mingw-w64 libraries are not postfixed
.lib.
Hopefully we can get this function merged for mingw-w64 4.1. As for the
4.0 release, adding a new header is a big change, it'll have to wait for
the next version.
Remove the .lib postfix from strmiids.
ksuser provides KSCATEGORY_ENCODER and similar GUIDs that are used.
wmcodecdspuuid provides MEDIASUBTYPE_H264, MEDIASUBTYPE_RAW_AAC1, and
MEDIASUBTYPE_I420, so there is no need to define them in dshow-formats.
The submodule will have to be updated to support this change.
I feel like, due to lack of user understanding, it's important to
specify that the higher the preset is (veryfast/superfast/ultrafast),
the less CPU the encoder will use.
Add CBR, CRF to properties so that it can be changed by the user. If
CBR is on, CRF will be disabled. Also added a 'Use Custom Buffer Size'
option to make it so that the buffer size will automatically be set to
the bitrate if its value is false. This is primarily a convenience
feature for users.
Certain RTMP services will support multi audio tracks via RTMP. This
updates librtmp with custom code that enables multiple streams per
connection to be used; each subsequent stream typically containing extra
audio tracks. The audio encoder names are used to indicate the names of
tracks, and the name of the tracks are used for the stream keys for
those subsequent tracks.
This makes FFmpeg usable as an output, and removes or changes most of
the code that was originally intended for testing purposes.
Changes the settings for the FFmpeg output to the following:
* url: Sets the output URL or file path
* video_bitrate: Sets the video bitrate
* audio_bitrate: Sets the audio bitrate
* video_encoder: Sets the video encoder (by name, blank for default)
* audio_encoder: Sets the audio encoder (by name, blank for default)
* video_settings: Sets custom video encoder FFmpeg settings
* audio_settings: Sets custom audio encoder FFmpeg settings
* scale_width: Image scale width (0 if none)
* scale_height: Image scale height (0 if none)
The reason why scale_width and scale_height are provided is because it
may internally convert formats, and it may be a bit more optimal to use
that scaler instead of the pre-output scaler just in case it already has
to convert formats internally anyway (though you can do it either way
you wish).
Video format handling has also changed; it will now attempt to use the
closest format to the current format if available for a given video
codec.
API changed:
--------------------------
void obs_output_set_audio_encoder(
                obs_output_t *output,
                obs_encoder_t *encoder);
obs_encoder_t *obs_output_get_audio_encoder(
                const obs_output_t *output);
obs_encoder_t *obs_audio_encoder_create(
                const char *id,
                const char *name,
                obs_data_t *settings);
Changed to:
--------------------------
/* 'idx' specifies the track index of the output */
void obs_output_set_audio_encoder(
                obs_output_t *output,
                obs_encoder_t *encoder,
                size_t idx);
/* 'idx' specifies the track index of the output */
obs_encoder_t *obs_output_get_audio_encoder(
                const obs_output_t *output,
                size_t idx);
/* 'mixer_idx' specifies the mixer index to capture audio from */
obs_encoder_t *obs_audio_encoder_create(
                const char *id,
                const char *name,
                obs_data_t *settings,
                size_t mixer_idx);
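Example usage of the new API (a sketch; the encoder id and names are
illustrative):

#include <obs.h>

/* attach a second AAC track, fed from mixer index 1, to an existing
 * output (track index 0 would carry the first encoder as before) */
static void add_second_track(obs_output_t *output)
{
        obs_encoder_t *enc = obs_audio_encoder_create("ffmpeg_aac",
                        "track 2 aac", NULL, 1);
        obs_output_set_audio_encoder(output, enc, 1);
}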
Overview
--------------------------
This feature allows multiple audio mixers to be used at a time. This
capability was added with surprisingly little extra overhead. Audio
will not be mixed unless it's assigned to a specific mixer, and mixers
will not mix unless they have an active mix connection.
Mostly this will be useful for being able to separate out specific audio
for recording versus streaming, but will also be useful for certain
streaming services that support multiple audio streams via RTMP.
I didn't want to use a variable number of mixers due to the desire to
reduce heap allocations, so currently I set the limit to 4 simultaneous
mixers; this number can be increased later if needed, but honestly I
feel like it's just the right number to use.
Sources:
Sources can now specify which audio mixers their audio is mixed to; this
can be a single mixer or multiple mixers at a time. The
obs_source_set_audio_mixers function sets the audio mixers which an
audio source applies to. For example, 0xF would mean that the source
applies
to all four mixers.
Audio Encoders:
Audio encoders now must specify which specific audio mixer they use when
they encode audio data.
Outputs:
Outputs that use encoders can now support multiple audio tracks at once
if they have the OBS_OUTPUT_MULTI_TRACK capability flag set. This is
mostly only useful for certain types of RTMP transmissions, though may
be useful for file formats that support multiple audio tracks as well
later on.
Add a check to the cursor render function to ensure the cursor texture
exists. It seems very unlikely, but still possible, that the first
tick which should set the texture might fail. In that case obs would
crash in the render function.
Previously a DirectShow hardware encoder would get 'stuck' and couldn't
be recreated due to a strange issue with the graph filter not properly
shutting down the encoder. This would make it so that the user could
only use the encoder once, and then it wouldn't work anymore any time it
was initialized again. dshowcapture version 0.4.2 ensures that the
encoder can restart properly by manually shutting down the filter graph.
Add support for Static Z Software's Sound Siphon audio routing software
(http://staticz.com/soundsiphon/) which provides more advanced audio routing
possibilities.
If the PSAPI_VERSION macro is not set to 1 when using
GetProcessImageFileName, it will attempt to import it as
K32GetProcessImageFileName from kernel32.dll instead of psapi.dll, which
breaks compatibility with vista and xp.
Profile was being set as a bool rather than a string, resulting in an
embarrassing situation where the profile was being set to 'true' rather
than the actual profile name. ..There really needs to be a compiler
warning for using non-bools as bools. This is one of the reasons I
started using !! to change non-bools to bools.
Having macros that state what these numbers mean is much more ideal than
just having a random number thrown in there, wondering why it was used
and what its purpose is (magic numbers).
The activate button is just silly for configuration in retrospect. It's
confusing to users, and was even confusing to some other developers.
Instead of using an 'Activate' button for game capture every time you
want to capture a window, just make the 'window' list have a default 'no
window' value (empty), and then have it always active when an actual
window is selected. The way syphon handles this on mac is actually
where I took the idea from (as suggested by Palana).
With the new code that checks to see if the source is visible, I didn't
realize that I actually didn't set the source variable, so it would end
up never actually drawing.
I didn't check to see if the size of the string was 0; when it's 0 it
won't create the converted string, and it'll send a null pointer to
CFStringGetCString, causing it to crash.
If shared memory file mapping fails, I've found that it's somewhat
normal due to something in windows -- the capture will usually start
up after a few tries. It only seems to apply to some games though; for
example, it seems to happen with counterstrike a lot for some strange
reason. Capture always eventually starts back up though. I remember
seeing this with OBS1 as well in many cases, but always thought it was
some sort of fluke.
If using the auto-fullscreen feature to hook into a fullscreen
application, I found that if you don't wait a few seconds before
initializing the hook, you can catch the process when it's just
starting up and loading important libraries (especially things such as
steam/uplay/etc), which can cause a little bit of interference with
the process and on rare occasions cause it to crash.
To help prevent the likelihood of that happening, this just makes it so
that the hook waits at least 3 seconds before even attempting to inject
the hook when using auto-fullscreen mode. After some extensive testing
I haven't had any issues since.
The design of not retrying the hooks on most general errors is just
bad. There are plenty of legitimate cases where it should retry the
hook.
This changes it so that if a general failure occurs or if it isn't
capturing when the inject helper exits, it retries and increases the
length of time between retries.
Variables that track time should not have the name 'interval', they
should have the name 'time' instead so it's crystal clear that the
variable is tracking time.
Adds a variable 'retry_interval' to game capture that allows the
interval at which game capture checks to update to be lengthened if
the hook initialization has some sort of failure.
The reason why I want to do this is because I don't really like it when
the hook updates too often in failure; it just leads to log file spam
that I feel can be reduced, and the frequent updates feel a bit
invasive. I just generally feel more comfortable reducing the rate at
which the hook retries after failure.
This makes a minor adjustment to the interval at which the inject
helper tries to post the inject message to the target process: it was
only 2 seconds before, and is now up to 4 seconds, with
PostThreadMessage called every half second for the duration.
The reason I did this is because I noticed that, on rare occasions,
it wouldn't hook due to the low interval; usually just because the
target process is busy and isn't able to process its message queue, and
therefore the hook wouldn't go through due to the fact that
SetWindowsHookEx won't inject until the set event has occurred. The
inject helper program would just close before the thread message had
finally been processed, which would cancel the SetWindowsHookEx hooking.
The code neglected to take into account that start_capture can also be
called when the texture updates its size/format in the hook and 'ready'
is signaled again, so it's possible that existing variables in the game
capture structure could be overwritten with new ones unintentionally.
The game capture 'Activate' button is likely to fool users into
thinking it's not actually active if the game capture displays black, so
if it's active, rename the button to 'Reactivate' in order to sort of
hint at the user that it's actually active.
This is a bit of an optimization to reduce load a little bit if any of
the video capture sources are not currently being displayed on the
screen. They will simply not capture or update their texture data if
they are not currently being shown anywhere.
This doesn't really apply to the mac and windows game capture sources,
due to the fact that their textures aren't updated on the source's end
(they update inside of the hooks).
The DirectShow input source would always turn on first use, whether the
user wanted it to or not. I feel like having an activate/deactivate
option is a really nice thing to have, and makes configuration feel a
little bit less awkward.
I originally had it set the color space and color range in the video
info callback, but I forgot that it's a function that's called after the
encoder is initialized. You can change the color space and color range,
but you have to reconfigure the encoder, and there's no real reason to
do that.
Uses the output duplicator API in order to get a high performance
monitor capture on windows 8+. This is actually designed to be
interchangeable with regular GDI-based monitor capture (uses the same
source id).
Allow the user to select whether to buffer the source or not. The
settings are auto-detect, on, and off. Auto-Detect turns it off for
non-encoded devices, and on for encoded devices.
Webcams, internal devices, and other such things on windows do not
really need to be buffered, and buffering incurs a tiny bit of delay, so
turning off buffering is actually a little better for non-encoded
devices.
The 'sent_headers' member variable of the FLV output would not be reset
when the output was restarted, causing important data to not be written,
thus creating an invalid FLV file.
On i3wm, windows aren't unmapped when switching away from a window's
workspace, but it does cause OBS to lose the capture. Because
switching back will not trigger a MapNotify, the capture fails to
restart unless you resize or move the window (ConfigureNotify). An
Expose event is fired by the wm, however, so catching this correctly
restarts the capture.
Add a new helper library to handle the mouse cursor using xcb.
Since porting the old library without either keeping legacy code or
breaking the api would have been non-trivial, this is added as a
completely separate implementation. Once all code is ported over to
use this library, the old one can be removed.
This adds support for the AverMedia C985 encoder (which is available on
C985 capture cards) as well as the C353 hardware encoder (which is
currently available on the X99S Gaming 9 motherboards).
These encoders have some limitations, such as limited resolutions
(1280x720 and 1024x768), a max GOP size of 30, and the encoder format
only supports YV12, which requires conversion if the current output
format isn't the same. The C985 and C353 encoders seem to be pretty
much identical, although it seems like the C353 has a bit more efficient
encoding.
I don't believe these are really suitable for streaming, as they do not
really have the encoding efficiency needed to stream at lower bitrates,
and seem to only support variable bitrate. However, for recording these
encoders are quite nice to have available, and work quite well.
The main module code was originally all packed in to the win-dshow.cpp
file, which isn't exactly ideal or clean if one wants to add other
things to the module as a whole.