Fixed a few files that went over 80 columns, mostly just a nitpick on my
part.
libobs/obs-nix.c had a rather bad case of leading whitespace.
Also, fixed the x86 obs-studio project files so that they would properly
output to the right directory. It couldn't find libobs.lib because
obs-studio's project settings had it outputting to a different place
than the rest of the projects.
- I seem to have fixed the issues with the main preview widget. It
seems you just need to set the right window attributes to stop it from
breaking. Though when OpenGL is enabled, there appears to be a weird
background glitch in the Qt stuff -- I'm not entirely sure what's
going on. Bug in Qt?
Also fixed the layout issues, and the widget now properly resizes and
centers within its parent widget.
- Prevent the render loop from accessing data if the data isn't valid.
Because obs->data is freed before the graphics stuff, it can cause
the graphics to keep trying to query the obs->data.displays_mutex
after it has already been destroyed.
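A minimal sketch of the kind of guard involved (the struct and field
names here are hypothetical, not the actual libobs code): the render
loop simply bails out before touching obs->data once that data has been
marked as torn down.

    #include <stdbool.h>
    #include <pthread.h>

    struct obs_core_data {
            pthread_mutex_t displays_mutex;
            volatile bool   valid; /* cleared before the data is freed */
    };

    static void render_displays(struct obs_core_data *data)
    {
            /* skip the frame entirely if the data is being freed, so we
             * never lock a mutex that may already have been destroyed */
            if (!data->valid)
                    return;

            pthread_mutex_lock(&data->displays_mutex);
            /* ... draw each display ... */
            pthread_mutex_unlock(&data->displays_mutex);
    }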
--------------------------------------------------
Notes and details
--------------------------------------------------
Why was this done? Because wxWidgets was just lacking in many areas. I
know wxWidgets is designed to be used with native controls, and that's
great, but wxWidgets just is not a feature-complete toolkit for
multiplatform applications. It lacks dialog editors, its code is
archaic and outdated, and I just feel frustrated every time I try to do
things with it.
Qt, on the other hand... I had to actually try Qt to realize how much
better it was as a toolkit. They've got everything from dialog editors,
to an IDE, a debugger, build tools, just everything, and it's all
top-notch and highly maintained. The focus of the toolkit is
application development, and they spend their time trying to help
people do exactly that: make programs. Great support, great tools,
and because of that, great toolkit. I just didn't want to alienate any
developers by being stubborn about native widgets.
There *are* some things about it that are rather lackluster, and design
choices I disagree with, though. For example, I realize that to have an
easy to use toolkit you have to have some level of code generation.
However, in my personal and humble opinion, moc just feels like a
terrible way to approach the problem. Even now I feel like there are a
variety of ways you could handle code generation and automatic
management of things like that. I don't like the idea of circumventing
the language itself like that. It feels like one giant massive hack.
--------------------------------------------------
Things that aren't working properly:
--------------------------------------------------
- Settings dialog is not implemented. The dialog is complete but the
code to handle the dialog hasn't been constructed yet.
- There is a problem with using Qt widgets as a device target on
Windows, with at least OpenGL: if I have the preview widget
automatically resize itself, it seems to cause some sort of video
card failure that I don't understand.
- Because of the above, resizing the preview widget has been disabled
until I can figure out what's going on, so it's currently only a
32x32 area.
- Direct3D doesn't seem to render correctly either, seems that the
viewport is messed up or something. I'm sort of confused about
what's going on with it.
- The new main window seems to be triggering more race conditions than
the wxWidgets main window dialog did. I'm not entirely sure what's
going on here, but this may just be existing race conditions within
libobs itself that I just never spotted before (even though I tend to
be very thorough with race conditions any time I use variables
cross-thread).
- Added some code for FFmpeg output that I'm still playing around with.
Right now I'm just trying to get it to output to file and try to
understand the FFmpeg/libav APIs. Hopefully in the future this plugin
can be used for any sort of output to FFmpeg.
- Fixed a cast warning in audio-io.c with size_t -> uint32_t
- Renamed the 'video_info' and 'audio_info' structures to
'video_convert_info' and 'audio_convert_info' to better represent their
actual purpose, and to avoid confusion with 'audio_output_info' and
'video_output_info' structures.
- Removed a few macros from obs-def.h that were at one point going to be
used but are no longer needed (at least for now).
Just a minor fix mostly because I noticed that I kept accidentally
forgetting to add checks to the code properly. This is one of those
cases where macros come in useful, as macros can automate the process
and help prevent these mistakes from happening by accident.
Changed the comments to properly reflect the new callbacks, as I had
forgotten to update the comments for them both.
Also, changed "setbitrate" and "request_keyframe" return values to be
boolean.
- First, I redid the output interface for libobs. I feel like it's
going in a pretty good direction in terms of design.
Right now, the design is so that outputs and encoders are separate.
One or more outputs can connect to a specific encoder to receive its
data, or the output can connect directly to raw data from libobs
output itself, if the output doesn't want to use a designated encoder.
Data is received via callbacks set when you connect to the encoder or
raw output. Multiple outputs can receive the data from a single
encoder context if need be (such as for streaming to multiple channels
at once, and/or recording with the same data).
When an encoder is first connected to, it will connect to raw output,
and start encoding. Additional connections will receive that same
data being encoded as well after that. When the last output has
disconnected, the encoder will stop encoding. If for some reason the encoder
needs to stop, it will use the callback with NULL to signal that
encoding has stopped. Some of these things may be subject to change
in the future, though it feels pretty good with this design so far.
Will have to see how well it works out in practice versus theory.
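To make the intent concrete, here's a rough sketch of that
connect/disconnect flow; the names (encoder_connect, encoder_packet_cb,
and so on) are made up for illustration and only show the
reference-counted start/stop behavior, not the actual interface.

    #include <stddef.h>

    struct encoder_packet;

    /* called for each encoded packet; a NULL packet signals that
     * encoding has stopped */
    typedef void (*encoder_packet_cb)(void *param,
                    struct encoder_packet *packet);

    struct encoder {
            size_t num_connections;
            /* ... codec state, list of (callback, param) receivers ... */
    };

    void encoder_connect(struct encoder *enc, encoder_packet_cb cb,
                    void *param)
    {
            if (enc->num_connections++ == 0) {
                    /* first output connected: hook up to the raw
                     * output and start encoding */
            }
            /* add (cb, param) to the receiver list */
            (void)cb; (void)param;
    }

    void encoder_disconnect(struct encoder *enc, encoder_packet_cb cb,
                    void *param)
    {
            /* remove (cb, param) from the receiver list */
            (void)cb; (void)param;
            if (--enc->num_connections == 0) {
                    /* last output disconnected: stop encoding and
                     * detach from the raw output */
            }
    }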
- Second, started adding preliminary RTMP/x264 output plugin code.
To speed things up, I might just make a direct raw->FFmpeg output to
create a quick output plugin that we can start using for testing all
the subsystems.
Completely revamped the entire media i/o data and handlers. The
original idea was to have a system that would have connecting media
inputs and outputs, but at a certain point I realized that this was an
unnecessary complexity for what we wanted to do. (Also, it reminded me
of directshow filters, and I HATE directshow with a passion, and
wouldn't wish it upon my greatest enemy)
Now, audio/video outputs are connected to directly, with better callback
handlers, and will eventually have the ability to automatically handle
conversions such as 4:4:4 to 4:2:0 when connecting to an input that uses
them. Doing this will allow the video/audio i/o handlers to also
prevent duplicate conversion, as well as make it easier/simpler to use.
My true goal for this is to make output and encoder plugins as simple to
create as possible. I want to be able to create an output
plugin with almost no real hassle of having to worry about image
conversions, media inputs/outputs, etc. A plugin developer shouldn't
have to handle that sort of stuff when he/she doesn't really need to.
Plugins will be able to simply create a callback via obs_video() and/or
obs_audio(), and they will automatically receive the audio/video data in
the formats requested via a simple callback, without needing to do
almost anything else at all.
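Purely as a guess at what the plugin-facing side could end up looking
like (the connect function and signatures below are assumptions for
illustration; only obs_video() itself is mentioned above):

    #include <stdint.h>

    struct video_data;         /* raw frame handed to the callback    */
    struct video_convert_info; /* requested output format, e.g. 4:2:0 */

    /* assumed for illustration: register a callback along with the
     * format the frames should be converted to before delivery */
    extern void video_output_connect(void *video,
                    const struct video_convert_info *conversion,
                    void (*callback)(void *param, struct video_data *frame),
                    void *param);
    extern void *obs_video(void);

    static void receive_video(void *param, struct video_data *frame)
    {
            /* the frame arrives already in the requested format, so the
             * plugin can hand it straight to its encoder or output */
            (void)param;
            (void)frame;
    }

    static void my_output_start(void)
    {
            video_output_connect(obs_video(), NULL, receive_video, NULL);
    }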
Audio timing was not being set when the first async video frame was
used, so I moved that code into obs_source_getframe. Also, might consider renaming
obs_source_getframe. "Query frame" instead perhaps? Will sleep on it,
might not even bother.
- Add preliminary (yet to be tested) handling of timestamp invalidation
issues that can happen with specific devices, where timestamps can
reset or go backward/forward in time with no rhyme or reason. Spent
the entire day just trying to figure out the best way to handle this.
If both audio and video are present, it will increment a reference
counter if video timestamps invalidate, and decrement the reference
counter when the audio timestamps invalidate. When the reference
counter is not 0, it will not send audio as the audio will have
invalid timing. What this does is it ensures audio data will never go
out of bounds in relation to the video, and waits for both audio and
video timestamps to "jump" together before resuming audio.
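In code form, the bookkeeping is roughly this (names are illustrative):
the counter goes up when video timestamps jump, comes back down when
audio timestamps jump, and audio is withheld while it's nonzero so the
two streams can only resume together.

    #include <stdbool.h>

    struct source_timing {
            int av_sync_ref; /* video jumped but audio hasn't caught up */
    };

    static void on_video_ts_jump(struct source_timing *t)
    {
            t->av_sync_ref++;
    }

    static void on_audio_ts_jump(struct source_timing *t)
    {
            if (t->av_sync_ref > 0)
                    t->av_sync_ref--;
    }

    static bool can_send_audio(const struct source_timing *t)
    {
            /* audio timing is only trustworthy again once both streams
             * have jumped together */
            return t->av_sync_ref == 0;
    }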
- Moved async video frame timing adjustment code into
obs_source_getframe instead so it's automatically handled whenever
called.
- Removed the 'audio wait buffer' as it was an unnecessary complexity
that could have had problems in the future. Instead, audio will not
be added until video starts for sources that have both async
audio/video. Audio could have buffered for too long of a time anyway;
who knows what devices are going to do.
- Fixed a minor conversion warning in audio-io.c
- In the audio I/O code, if there's a pause in the program or its
threads (especially the audio thread), it'll cause it to sample too
much data, and increase line->base_timestamp to a potentially higher
value than the next audio timestamp that may be added to the line.
This would cause it to crash originally, because it expects audio
data that is within the designated buffering limit.
Because the buffer cannot be filled by that data anyway, just
ignore the audio data until it goes back to the right timing (which
it will as long as the code that is using the line accounts for its
current system time).
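The check itself is tiny; something along these lines (base_timestamp
comes from the description above, the surrounding structure is
assumed):

    #include <stdbool.h>
    #include <stdint.h>

    struct audio_line_sketch {
            uint64_t base_timestamp; /* start of the current mix window */
    };

    static bool audio_line_can_insert(const struct audio_line_sketch *line,
                    uint64_t data_timestamp)
    {
            /* if a stall pushed base_timestamp ahead of the incoming
             * data, that data lands before the mix window and can't be
             * used, so it gets skipped instead of crashing */
            return data_timestamp >= line->base_timestamp;
    }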
- Often, timestamps will go "back" in time with certain... terrible
devices that no one should use. When this occurs, timing is now
reset so that the new audio comes in directly after the old audio
seamlessly.
- Audio data was just being popped to the "front" of the mix buffer, so
instead it now properly pops into the correct position in the mix
buffer (proper mixing still needs to be implemented)
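As a hypothetical illustration of what "the correct position" means
(the nanosecond timestamps and the frames-to-bytes math here are
assumptions about how the offset would be derived):

    #include <stddef.h>
    #include <stdint.h>

    static size_t mix_buffer_offset(uint64_t buffer_start_ts,
                    uint64_t data_ts, uint32_t sample_rate,
                    size_t bytes_per_frame)
    {
            /* convert the time difference into frames, then into a
             * byte offset from the start of the mix buffer */
            uint64_t diff_ns = data_ts - buffer_start_ts;
            uint64_t frames  = diff_ns * sample_rate / 1000000000ULL;
            return (size_t)frames * bytes_per_frame;
    }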
- Added a sine wave audio test source that should just play a sine
wave of the middle C note. Using unsigned 8 bit mono to test
FFmpeg's audio resampler; seems to work pretty well.
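For reference, generating that kind of test signal looks roughly like
this (a sketch only; the 48000 Hz rate and buffer handling are
assumptions, the middle C frequency and unsigned 8-bit mono format come
from the description above):

    #include <math.h>
    #include <stddef.h>
    #include <stdint.h>

    #define TWO_PI      (2.0 * 3.14159265358979323846)
    #define MIDDLE_C_HZ 261.63
    #define SAMPLE_RATE 48000

    static void fill_sine_u8(uint8_t *samples, size_t count, uint64_t *pos)
    {
            for (size_t i = 0; i < count; i++) {
                    double t = (double)((*pos)++) / (double)SAMPLE_RATE;
                    double v = sin(TWO_PI * MIDDLE_C_HZ * t);

                    /* map [-1, 1] to the unsigned 8-bit range [0, 255] */
                    samples[i] = (uint8_t)((v * 0.5 + 0.5) * 255.0);
            }
    }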
- Fixed a boolean trap in threading.h for the event_init function; it
now uses enum event_type, which can be EVENT_TYPE_MANUAL or
EVENT_TYPE_AUTO, to specify whether the event is automatically reset
or not.
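Usage then reads like this at the call site (the include path and the
event_t type name are guesses; the enum values are as described above):

    #include "util/threading.h" /* assumed path */

    static event_t stop_event;

    static void example_init(void)
    {
            /* the enum makes the intent obvious at the call site,
             * unlike the old bare boolean parameter */
            event_init(&stop_event, EVENT_TYPE_AUTO); /* resets after wait */
    }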
- Changed display names of test sources to something a little less
vague.
- Removed the whole "if timestamp is 0 just use current system time"
behavior when outputting source audio; if you want to use system time,
you should just use system time yourself. Using 0 as some sort of
"indicator" like that just makes things confusing, and prevents you
from legitimately using 0 as a timestamp for your audio data.
- Circular buffer code wasn't correctly handling the splitting of
newly placed data segments; the code was untested and turned out to
just be backwards. It now copies the data to the back and front of
the buffer properly.
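The wrap-around copy in question looks conceptually like this (a
simplified sketch, not the actual circlebuf code, which has more
bookkeeping): when a write would run past the end of the storage, the
first chunk goes at the back and the remainder wraps to the front.

    #include <stddef.h>
    #include <string.h>

    struct circlebuf_sketch {
            unsigned char *data;
            size_t         capacity;
            size_t         end_pos; /* where the next byte is written */
    };

    static void circlebuf_sketch_push(struct circlebuf_sketch *cb,
                    const void *src, size_t size)
    {
            size_t back_space = cb->capacity - cb->end_pos;

            if (size <= back_space) {
                    memcpy(cb->data + cb->end_pos, src, size);
            } else {
                    /* split: fill the tail of the buffer first, then
                     * wrap the remainder around to the front */
                    memcpy(cb->data + cb->end_pos, src, back_space);
                    memcpy(cb->data, (const unsigned char *)src + back_space,
                                    size - back_space);
            }
            cb->end_pos = (cb->end_pos + size) % cb->capacity;
    }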
For one, I added a new member to gs_window for future use.
The member is "display", which represents our connection to X11.
Ideally, we should use this specific connection to deal with our Window.
For now, it's disabled. Read comment for more information.
Secondly, wxGTK apparently doesn't map our window in some cases.
This causes the window ID passed to be bad and will stop (or segfault)
our program. This might be related to the first commit above.
For now, all this commit does is realize the window manually.
- Mixing still isn't implemented, but the audio system should be able
to start up, and mix at least one audio line for the time being.
Will have to write some test audio sources to verify things are
working properly, and build the rest of the output functionality.
- Using a recursive mutex fixes issues where objects need to enter the
main libobs sources mutex while already within the mutex in the same
thread. Otherwise it would keep getting locked on itself on
destruction.
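For what it's worth, creating a recursive mutex with pthreads is just a
mutex attribute (whether libobs wraps it exactly like this is beside
the point; the point is that the same thread can re-lock it without
deadlocking on itself):

    #include <pthread.h>

    static int recursive_mutex_init(pthread_mutex_t *mutex)
    {
            pthread_mutexattr_t attr;
            int ret = pthread_mutexattr_init(&attr);
            if (ret != 0)
                    return ret;

            pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
            ret = pthread_mutex_init(mutex, &attr);
            pthread_mutexattr_destroy(&attr);
            return ret;
    }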
- Apply the volume specified with the audio data packet before
inserting the audio data into the circular buffer. Added functions
for multiplying the volume with all the different audio bit depths.
(Could probably be greatly optimized later)
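For one of those bit depths, the multiply is basically this (signed
16-bit shown; the 0.0-1.0 float volume and the clamping are
assumptions):

    #include <stddef.h>
    #include <stdint.h>

    static void apply_volume_s16(int16_t *samples, size_t count, float volume)
    {
            for (size_t i = 0; i < count; i++) {
                    float val = (float)samples[i] * volume;

                    /* clamp so an over-unity volume can't wrap around */
                    if (val > 32767.0f)
                            val = 32767.0f;
                    else if (val < -32768.0f)
                            val = -32768.0f;

                    samples[i] = (int16_t)val;
            }
    }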
- Added a volume variable to the obs_source structure and implemented
functions for manipulating source volume.
- Added a volume variable to the audio_data structure so that the
volume will be applied when mixing.
- Made it so that when a source is added to or removed from a scene, it
updates a count in sourceSceneRefs (std::unordered_map). Each
source adds a reference there every time it is added to a scene,
and releases a reference when it is removed from a scene.
When the value reaches 0, the source is no longer in any scenes, and
is then marked for removal and destroyed.
Before, I was using the source's internal reference counter, which is a
really bad thing to do because I don't know what might actually be
referencing it. So using a separate discrete reference counter for
the number of scenes it's in is better in this case.