Fixed a few files that went over 80 columns, mostly just a nitpick on my
part.
libobs/obs-nix.c had a rather bad case of leading whitespace.
Also, fixed the x86 obs-studio project files so that they output to the
correct directory. The build couldn't find libobs.lib because
obs-studio's project settings had it outputting to a different place
than the rest of the projects.
--------------------------------------------------
Notes and details
--------------------------------------------------
Why was this done? Because wxWidgets was just lacking in many areas. I
know wxWidgets is designed to be used with native controls, and that's
great, but wxWidgets just is not a feature-complete toolkit for
multiplatform applications. It's lacking in dialog editors, its code is
archaic and outdated, and I just feel frustrated every time I try to do
things with it.
Qt, on the other hand... I had to actually try Qt to realize how much
better it was as a toolkit. They've got everything from dialog editors,
to an IDE, a debugger, build tools, just everything, and it's all
top-notch and highly maintained. The focus of the toolkit is
application development, and they spend their time trying to help
people do exactly that: make programs. Great support, great tools,
and because of that, great toolkit. I just didn't want to alienate any
developers by being stubborn about native widgets.
There *are* some things about it that are rather lackluster, though,
and design choices I disagree with. For example, I realize that to have
an easy-to-use toolkit you have to have some level of code generation.
However, in my personal and humble opinion, moc just feels like a
terrible way to approach the problem. Even now I feel like there are a
variety of ways you could handle code generation and automatic
management of things like that. I don't like the idea of circumventing
the language itself like that. It feels like one giant massive hack.
--------------------------------------------------
Things that aren't working properly:
--------------------------------------------------
- Settings dialog is not implemented. The dialog is complete but the
code to handle the dialog hasn't been constructed yet.
- There is a problem with using Qt widgets as a device target on
Windows, at least with OpenGL: if I have the preview widget
automatically resize itself, it seems to cause some sort of video
card failure that I don't understand.
- Because of the above, resizing the preview widget has been disabled
until I can figure out what's going on, so it's currently only a
32x32 area.
- Direct3D doesn't seem to render correctly either; it seems that the
viewport is messed up or something. I'm sort of confused about
what's going on with it.
- The new main window seems to be triggering more race conditions than
the wxWidgets main window dialog did. I'm not entirely sure what's
going on here, but these may just be existing race conditions within
libobs itself that I never spotted before (even though I tend to be
very thorough with race conditions any time I use variables
cross-thread).
- Added some code for FFmpeg output that I'm still playing around with.
Right now I'm just trying to get it to output to file and try to
understand the FFmpeg/libav APIs. Hopefully in the future this plugin
can be used for any sort of output to FFmpeg.
- Fixed a cast warning in audio-io.c with size_t -> uint32_t
- Renamed the 'video_info' and 'audio_info' structures to
'video_convert_info' and 'audio_convert_info' to better represent their
actual purpose, and to avoid confusion with 'audio_output_info' and
'video_output_info' structures.
- Removed a few macros from obs-def.h that were at one point going to
be used but are no longer needed (at least for now).
- First, I redid the output interface for libobs. I feel like it's
going in a pretty good direction in terms of design.
Right now, the design is so that outputs and encoders are separate.
One or more outputs can connect to a specific encoder to receive its
data, or the output can connect directly to raw data from libobs
output itself, if the output doesn't want to use a designated encoder.
Data is received via callbacks set when you connect to the encoder or
raw output. Multiple outputs can receive the data from a single
encoder context if need be (such as for streaming to multiple channels
at once, and/or recording with the same data).
When an encoder is first connected to, it will connect to raw output,
and start encoding. Additional connections will receive that same
data being encoded as well after that. When the last output has
disconnected, the encoder will stop encoding. If for some reason the
encoder needs to stop, it will use the callback with NULL to signal
that encoding has stopped. Some of these things may be subject to
change in the future, though it feels pretty good with this design so
far. Will have to see how well it works out in practice versus theory
(a rough sketch of the intended connection flow is included below).
- Second, started adding preliminary RTMP/x264 output plugin code.
To speed things up, I might just make a direct raw->FFmpeg output to
create a quick output plugin that we can start using for testing all
the subsystems.
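To make the output/encoder connection flow described above a bit more
concrete, here is a minimal sketch of how a shared encoder context
might hand packets to multiple connected outputs. All of the names
here (encoder_connect, encoder_packet, and so on) are hypothetical and
only illustrate the idea; they are not the actual libobs interface.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_CONNECTIONS 8

struct encoder_packet {
	const uint8_t *data;
	size_t size;
	uint64_t pts;
};

/* a NULL packet pointer tells a connected output that encoding stopped */
typedef void (*encoded_callback_t)(void *param,
		struct encoder_packet *packet);

struct encoder {
	encoded_callback_t callbacks[MAX_CONNECTIONS];
	void *params[MAX_CONNECTIONS];
	size_t num_connections;
	bool active;
};

static void encoder_start(struct encoder *e)
{
	/* would hook into raw libobs output and begin encoding here */
	e->active = true;
}

static void encoder_stop(struct encoder *e)
{
	e->active = false;

	/* if outputs are still connected (e.g. the encoder had to stop on
	 * its own), send them a NULL packet to signal encoding stopped */
	for (size_t i = 0; i < e->num_connections; i++)
		e->callbacks[i](e->params[i], NULL);
}

/* the first connection starts encoding; later ones share the same data */
void encoder_connect(struct encoder *e, encoded_callback_t cb, void *param)
{
	if (e->num_connections == MAX_CONNECTIONS)
		return;

	e->callbacks[e->num_connections] = cb;
	e->params[e->num_connections] = param;
	e->num_connections++;

	if (e->num_connections == 1)
		encoder_start(e);
}

/* the last disconnection stops encoding */
void encoder_disconnect(struct encoder *e, encoded_callback_t cb,
		void *param)
{
	for (size_t i = 0; i < e->num_connections; i++) {
		if (e->callbacks[i] == cb && e->params[i] == param) {
			e->callbacks[i] = e->callbacks[e->num_connections - 1];
			e->params[i] = e->params[e->num_connections - 1];
			e->num_connections--;
			break;
		}
	}

	if (e->num_connections == 0)
		encoder_stop(e);
}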
Completely revamped the entire media i/o data and handlers. The
original idea was to have a system that would have connecting media
inputs and outputs, but at a certain point I realized that this was an
unnecessary complexity for what we wanted to do. (Also, it reminded me
of DirectShow filters, and I HATE DirectShow with a passion, and
wouldn't wish it upon my greatest enemy.)
Now, audio/video outputs are connected to directly, with better callback
handlers, and will eventually have the ability to automatically handle
conversions such as 4:4:4 to 4:2:0 when connecting to an input that uses
them. Doing this will allow the video/audio i/o handlers to also
prevent duplicate conversions, as well as make it easier/simpler to use.
My true goal for this is to make output and encoder plugins as simple to
create as possible. I want to be able to create an output plugin with
almost no real hassle of having to worry about image
conversions, media inputs/outputs, etc. A plugin developer shouldn't
have to handle that sort of stuff when he/she doesn't really need to.
Plugins will be able to simply create a callback via obs_video() and/or
obs_audio(), and they will automatically receive the audio/video data in
the formats requested via a simple callback, without needing to do
almost anything else at all.
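As a rough illustration of what that could look like for a video output
plugin under this design (the video_output_connect call, the
video_convert_info fields, and the VIDEO_FORMAT_I420 constant below are
assumptions made for the sake of the sketch, not a finalized API):

#include <stdbool.h>
#include <stdint.h>

#include "obs.h"

struct my_output {
	uint64_t frames_received;
};

/* frames arrive here already converted to the format requested below */
static void receive_video(void *param, struct video_data *frame)
{
	struct my_output *output = param;
	output->frames_received++;
	(void)frame; /* a real plugin would encode or write the frame */
}

static bool my_output_start(struct my_output *output)
{
	struct video_convert_info conversion = {
		.format = VIDEO_FORMAT_I420, /* e.g. ask for 4:2:0 planar */
	};

	/* connect directly to the raw video output; the 4:4:4 -> 4:2:0
	 * conversion is handled by the i/o handler, not by the plugin */
	return video_output_connect(obs_video(), &conversion,
			receive_video, output);
}

The same idea would apply to audio via obs_audio() and
'audio_convert_info'.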
- Added a test sinewave audio source that should just play a sine wave
of the middle C note. It uses unsigned 8-bit mono to test FFmpeg's
audio resampler, and seems to work pretty well (a small sketch of the
tone generation is included at the end of these notes).
- Fixed a boolean trap in threading.h for the event_init function; it
now uses enum event_type, which can be EVENT_TYPE_MANUAL or
EVENT_TYPE_AUTO, to specify whether the event is automatically reset
or not.
- Changed display names of test sources to something a little less
vague.
- Removed the whole "if timestamp is 0 just use current system time"
behavior when outputting source audio; if you want to use system time,
you should just use system time yourself. Using 0 as some sort of
"indicator" like that just makes things confusing, and prevents you
from legitimately using 0 as a timestamp for your audio data.
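As noted above, here is a small sketch of how that middle C test tone
can be generated as unsigned 8-bit mono samples. The 48000 Hz sample
rate and the buffer handling are assumptions for illustration only.

#include <math.h>
#include <stddef.h>
#include <stdint.h>

#define SAMPLE_RATE 48000  /* assumed sample rate */
#define MIDDLE_C_HZ 261.63 /* frequency of middle C */

/* fill 'frames' samples of unsigned 8-bit mono, centered on 128; the
 * phase is kept across calls so the tone stays continuous */
static void fill_sine_u8(uint8_t *samples, size_t frames, double *phase)
{
	const double two_pi = 6.283185307179586;
	const double delta = two_pi * MIDDLE_C_HZ / (double)SAMPLE_RATE;

	for (size_t i = 0; i < frames; i++) {
		samples[i] = (uint8_t)(128.0 + 127.0 * sin(*phase));
		*phase += delta;
		if (*phase >= two_pi)
			*phase -= two_pi;
	}
}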