/******************************************************************************
    Copyright (C) 2014 by Hugh Bailey <obs.jim@gmail.com>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#include <obs-module.h>

#include <util/circlebuf.h>
#include <util/threading.h>
#include <util/dstr.h>
#include <util/darray.h>
#include <util/platform.h>

#include <libavutil/opt.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>

#include "obs-ffmpeg-formats.h"
#include "closest-pixel-format.h"
#include "obs-ffmpeg-compat.h"

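/* Configuration for one ffmpeg output: target URL, container format,
 * encoder names/IDs, bitrates, per-encoder option strings, and the
 * source and scaled video dimensions. */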
struct ffmpeg_cfg {
	const char *url;
	const char *format_name;
	const char *format_mime_type;
	int video_bitrate;
	int audio_bitrate;
	const char *video_encoder;
	int video_encoder_id;
	const char *audio_encoder;
	int audio_encoder_id;
	const char *video_settings;
	const char *audio_settings;
	enum AVPixelFormat format;
	int scale_width;
	int scale_height;
	int width;
	int height;
};

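/* Per-connection ffmpeg state: the muxer context, audio/video streams and
 * codecs, scratch frames and sample buffers used for conversion, and the
 * timing/frame counters. */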
struct ffmpeg_data {
	AVStream *video;
	AVStream *audio;
	AVCodec *acodec;
	AVCodec *vcodec;
	AVFormatContext *output;
	struct SwsContext *swscale;

	AVPicture dst_picture;
	AVFrame *vframe;
	int frame_size;
	int total_frames;

	uint64_t start_timestamp;

	uint32_t audio_samplerate;
	enum audio_format audio_format;
	size_t audio_planes;
	size_t audio_size;
	struct circlebuf excess_frames[MAX_AV_PLANES];
	uint8_t *samples[MAX_AV_PLANES];
	AVFrame *aframe;
	int total_samples;

	struct ffmpeg_cfg config;

	bool initialized;
};

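/* The OBS output object itself, plus the write thread, its synchronization
 * primitives, and the queue of encoded packets waiting to be written. */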
struct ffmpeg_output {
	obs_output_t *output;
	volatile bool active;
	struct ffmpeg_data ff_data;

	bool connecting;
	pthread_t start_thread;

	bool write_thread_active;
	pthread_mutex_t write_mutex;
	pthread_t write_thread;
	os_sem_t *write_sem;
	os_event_t *stop_event;

	DARRAY(AVPacket) packets;
};

/* ------------------------------------------------------------------------- */

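/* Look up an encoder (by name if one is given, otherwise by codec ID) and
 * create a corresponding stream on the output context. */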
static bool new_stream(struct ffmpeg_data *data, AVStream **stream,
		AVCodec **codec, enum AVCodecID id, const char *name)
{
	*codec = (!!name && *name) ?
		avcodec_find_encoder_by_name(name) :
		avcodec_find_encoder(id);

	if (!*codec) {
		blog(LOG_WARNING, "Couldn't find encoder '%s'",
				avcodec_get_name(id));
		return false;
	}

	*stream = avformat_new_stream(data->output, *codec);
	if (!*stream) {
		blog(LOG_WARNING, "Couldn't create stream for encoder '%s'",
				avcodec_get_name(id));
		return false;
	}

	(*stream)->id = data->output->nb_streams-1;
	return true;
}

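/* Apply "name=value" option strings to a codec's private options via
 * av_opt_set(); entries without '=' are ignored. */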
static void parse_params(AVCodecContext *context, char **opts)
{
	if (!context || !context->priv_data)
		return;

	while (*opts) {
		char *opt = *opts;
		char *assign = strchr(opt, '=');

		if (assign) {
			char *name = opt;
			char *value;

			*assign = 0;
			value = assign+1;

			av_opt_set(context->priv_data, name, value, 0);
		}

		opts++;
	}
}

static bool open_video_codec(struct ffmpeg_data *data)
{
	AVCodecContext *context = data->video->codec;
	char **opts = strlist_split(data->config.video_settings, ' ', false);
	int ret;

	if (data->vcodec->id == AV_CODEC_ID_H264)
		av_opt_set(context->priv_data, "preset", "veryfast", 0);

	if (opts) {
		parse_params(context, opts);
		strlist_free(opts);
	}

	ret = avcodec_open2(context, data->vcodec, NULL);
	if (ret < 0) {
		blog(LOG_WARNING, "Failed to open video codec: %s",
				av_err2str(ret));
		return false;
	}

	data->vframe = av_frame_alloc();
	if (!data->vframe) {
		blog(LOG_WARNING, "Failed to allocate video frame");
		return false;
	}

	data->vframe->format = context->pix_fmt;
	data->vframe->width = context->width;
	data->vframe->height = context->height;

	ret = avpicture_alloc(&data->dst_picture, context->pix_fmt,
			context->width, context->height);
	if (ret < 0) {
		blog(LOG_WARNING, "Failed to allocate dst_picture: %s",
				av_err2str(ret));
		return false;
	}

	*((AVPicture*)data->vframe) = data->dst_picture;
	return true;
}

static bool init_swscale(struct ffmpeg_data *data, AVCodecContext *context)
{
	data->swscale = sws_getContext(
			data->config.width, data->config.height,
			data->config.format,
			data->config.scale_width, data->config.scale_height,
			context->pix_fmt,
			SWS_BICUBIC, NULL, NULL, NULL);

	if (!data->swscale) {
		blog(LOG_WARNING, "Could not initialize swscale");
		return false;
	}

	return true;
}

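/* Create and open the video stream: pick the closest supported pixel format,
 * fill in bitrate/size/framerate, and set up swscale when the output pixel
 * format or dimensions differ from the source. */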
static bool create_video_stream(struct ffmpeg_data *data)
{
	enum AVPixelFormat closest_format;
	AVCodecContext *context;
	struct obs_video_info ovi;

	if (!obs_get_video_info(&ovi)) {
		blog(LOG_WARNING, "No active video");
		return false;
	}

	if (!new_stream(data, &data->video, &data->vcodec,
				data->output->oformat->video_codec,
				data->config.video_encoder))
		return false;

	closest_format = get_closest_format(data->config.format,
			data->vcodec->pix_fmts);

	context = data->video->codec;
	context->bit_rate = data->config.video_bitrate * 1000;
	context->width = data->config.scale_width;
	context->height = data->config.scale_height;
	context->time_base = (AVRational){ ovi.fps_den, ovi.fps_num };
	context->gop_size = 120;
	context->pix_fmt = closest_format;

	data->video->time_base = context->time_base;

	if (data->output->oformat->flags & AVFMT_GLOBALHEADER)
		context->flags |= CODEC_FLAG_GLOBAL_HEADER;

	if (!open_video_codec(data))
		return false;

	if (context->pix_fmt != data->config.format ||
	    data->config.width != data->config.scale_width ||
	    data->config.height != data->config.scale_height) {

		if (!init_swscale(data, context))
			return false;
	}

	return true;
}

static bool open_audio_codec(struct ffmpeg_data *data)
{
	AVCodecContext *context = data->audio->codec;
	char **opts = strlist_split(data->config.audio_settings, ' ', false);
	int ret;

	if (opts) {
		parse_params(context, opts);
		strlist_free(opts);
	}

	data->aframe = av_frame_alloc();
	if (!data->aframe) {
		blog(LOG_WARNING, "Failed to allocate audio frame");
		return false;
	}

	context->strict_std_compliance = -2;

	ret = avcodec_open2(context, data->acodec, NULL);
	if (ret < 0) {
		blog(LOG_WARNING, "Failed to open audio codec: %s",
				av_err2str(ret));
		return false;
	}

	data->frame_size = context->frame_size ? context->frame_size : 1024;

	ret = av_samples_alloc(data->samples, NULL, context->channels,
			data->frame_size, context->sample_fmt, 0);
	if (ret < 0) {
		blog(LOG_WARNING, "Failed to create audio buffer: %s",
				av_err2str(ret));
		return false;
	}

	return true;
}

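/* Create and open the audio stream using the current OBS audio settings
 * (sample rate, channel count) and the encoder's preferred sample format. */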
static bool create_audio_stream(struct ffmpeg_data *data)
{
	AVCodecContext *context;
	struct obs_audio_info aoi;

	if (!obs_get_audio_info(&aoi)) {
		blog(LOG_WARNING, "No active audio");
		return false;
	}

	if (!new_stream(data, &data->audio, &data->acodec,
				data->output->oformat->audio_codec,
				data->config.audio_encoder))
		return false;

	context = data->audio->codec;
	context->bit_rate = data->config.audio_bitrate * 1000;
	context->time_base = (AVRational){ 1, aoi.samples_per_sec };
	context->channels = get_audio_channels(aoi.speakers);
	context->sample_rate = aoi.samples_per_sec;
	context->sample_fmt = data->acodec->sample_fmts ?
		data->acodec->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;

	data->audio->time_base = context->time_base;

	data->audio_samplerate = aoi.samples_per_sec;
	data->audio_format = convert_ffmpeg_sample_format(context->sample_fmt);
	data->audio_planes = get_audio_planes(data->audio_format, aoi.speakers);
	data->audio_size = get_audio_size(data->audio_format, aoi.speakers, 1);

	if (data->output->oformat->flags & AVFMT_GLOBALHEADER)
		context->flags |= CODEC_FLAG_GLOBAL_HEADER;

	return open_audio_codec(data);
}

static inline bool init_streams(struct ffmpeg_data *data)
{
	AVOutputFormat *format = data->output->oformat;

	if (format->video_codec != AV_CODEC_ID_NONE)
		if (!create_video_stream(data))
			return false;

	if (format->audio_codec != AV_CODEC_ID_NONE)
		if (!create_audio_stream(data))
			return false;

	return true;
}

static inline bool open_output_file(struct ffmpeg_data *data)
{
	AVOutputFormat *format = data->output->oformat;
	int ret;

	if ((format->flags & AVFMT_NOFILE) == 0) {
		ret = avio_open(&data->output->pb, data->config.url,
				AVIO_FLAG_WRITE);
		if (ret < 0) {
			blog(LOG_WARNING, "Couldn't open '%s', %s",
					data->config.url, av_err2str(ret));
			return false;
		}
	}

	ret = avformat_write_header(data->output, NULL);
	if (ret < 0) {
		blog(LOG_WARNING, "Error opening '%s': %s",
				data->config.url, av_err2str(ret));
		return false;
	}

	return true;
}

static void close_video(struct ffmpeg_data *data)
{
	avcodec_close(data->video->codec);
	avpicture_free(&data->dst_picture);

	// This format for some reason derefs video frame
	// too many times
	if (data->vcodec->id == AV_CODEC_ID_A64_MULTI ||
	    data->vcodec->id == AV_CODEC_ID_A64_MULTI5)
		return;

	av_frame_free(&data->vframe);
}

static void close_audio(struct ffmpeg_data *data)
{
	for (size_t i = 0; i < MAX_AV_PLANES; i++)
		circlebuf_free(&data->excess_frames[i]);

	av_freep(&data->samples[0]);
	avcodec_close(data->audio->codec);
	av_frame_free(&data->aframe);
}

static void ffmpeg_data_free(struct ffmpeg_data *data)
{
	if (data->initialized)
		av_write_trailer(data->output);

	if (data->video)
		close_video(data);
	if (data->audio)
		close_audio(data);

	if (data->output) {
		if ((data->output->oformat->flags & AVFMT_NOFILE) == 0)
			avio_close(data->output->pb);

		avformat_free_context(data->output);
	}

	memset(data, 0, sizeof(struct ffmpeg_data));
}

static inline const char *safe_str(const char *s)
{
	if (s == NULL)
		return "(NULL)";
	else
		return s;
}

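/* Initialize the ffmpeg muxer for the given configuration: guess the output
 * format (forcing FLV for rtmp:// URLs), create the streams, and open the
 * output file or URL. */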
static bool ffmpeg_data_init(struct ffmpeg_data *data,
		struct ffmpeg_cfg *config)
{
	bool is_rtmp = false;

	memset(data, 0, sizeof(struct ffmpeg_data));
	data->config = *config;

	if (!config->url || !*config->url)
		return false;

	av_register_all();
	avformat_network_init();

	is_rtmp = (astrcmpi_n(config->url, "rtmp://", 7) == 0);

	AVOutputFormat *output_format = av_guess_format(
			is_rtmp ? "flv" : data->config.format_name,
			data->config.url,
			is_rtmp ? NULL : data->config.format_mime_type);

	if (output_format == NULL) {
		blog(LOG_WARNING, "Couldn't find matching output format with "
				"parameters: name=%s, url=%s, mime=%s",
				safe_str(is_rtmp ?
					"flv" : data->config.format_name),
				safe_str(data->config.url),
				safe_str(is_rtmp ?
					NULL : data->config.format_mime_type));
		goto fail;
	}

	avformat_alloc_output_context2(&data->output, output_format,
			NULL, NULL);

	if (!data->output) {
		blog(LOG_WARNING, "Couldn't create avformat context");
		goto fail;
	}

	if (is_rtmp) {
		data->output->oformat->video_codec = AV_CODEC_ID_H264;
		data->output->oformat->audio_codec = AV_CODEC_ID_AAC;
	} else {
		data->output->oformat->video_codec =
			data->config.video_encoder_id;
		data->output->oformat->audio_codec =
			data->config.audio_encoder_id;
	}

	if (!init_streams(data))
		goto fail;
	if (!open_output_file(data))
		goto fail;

	av_dump_format(data->output, 0, NULL, 1);

	data->initialized = true;
	return true;

fail:
	blog(LOG_WARNING, "ffmpeg_data_init failed");
	ffmpeg_data_free(data);
	return false;
}

/* ------------------------------------------------------------------------- */

static const char *ffmpeg_output_getname(void)
{
	return obs_module_text("FFmpegOutput");
}

static void ffmpeg_log_callback(void *param, int level, const char *format,
		va_list args)
{
	if (level <= AV_LOG_INFO)
		blogva(LOG_DEBUG, format, args);

	UNUSED_PARAMETER(param);
}

static void *ffmpeg_output_create(obs_data_t *settings, obs_output_t *output)
{
	struct ffmpeg_output *data = bzalloc(sizeof(struct ffmpeg_output));
	pthread_mutex_init_value(&data->write_mutex);
	data->output = output;

	if (pthread_mutex_init(&data->write_mutex, NULL) != 0)
		goto fail;
	if (os_event_init(&data->stop_event, OS_EVENT_TYPE_AUTO) != 0)
		goto fail;
	if (os_sem_init(&data->write_sem, 0) != 0)
		goto fail;

	av_log_set_callback(ffmpeg_log_callback);

	UNUSED_PARAMETER(settings);
	return data;

fail:
	pthread_mutex_destroy(&data->write_mutex);
	os_event_destroy(data->stop_event);
	bfree(data);
	return NULL;
}

static void ffmpeg_output_stop(void *data);

static void ffmpeg_output_destroy(void *data)
{
	struct ffmpeg_output *output = data;

	if (output) {
		if (output->connecting)
			pthread_join(output->start_thread, NULL);

		ffmpeg_output_stop(output);

		pthread_mutex_destroy(&output->write_mutex);
		os_sem_destroy(output->write_sem);
		os_event_destroy(output->stop_event);
		bfree(data);
	}
}

static inline void copy_data(AVPicture *pic, const struct video_data *frame,
		int height)
{
	for (int plane = 0; plane < MAX_AV_PLANES; plane++) {
		if (!frame->data[plane])
			continue;

		int frame_rowsize = (int)frame->linesize[plane];
		int pic_rowsize = pic->linesize[plane];
		int bytes = frame_rowsize < pic_rowsize ?
			frame_rowsize : pic_rowsize;
		int plane_height = plane == 0 ? height : height/2;

		for (int y = 0; y < plane_height; y++) {
			int pos_frame = y * frame_rowsize;
			int pos_pic = y * pic_rowsize;

			memcpy(pic->data[plane] + pos_pic,
			       frame->data[plane] + pos_frame,
			       bytes);
		}
	}
}

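/* Raw video callback: convert or copy the frame into dst_picture, encode it
 * (or pass the raw picture through for AVFMT_RAWPICTURE formats), and queue
 * the packet for the write thread. */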
static void receive_video(void *param, struct video_data *frame)
{
	struct ffmpeg_output *output = param;
	struct ffmpeg_data *data = &output->ff_data;

	// codec doesn't support video or none configured
	if (!data->video)
		return;

	AVCodecContext *context = data->video->codec;
	AVPacket packet = {0};
	int ret = 0, got_packet;

	av_init_packet(&packet);

	if (!data->start_timestamp)
		data->start_timestamp = frame->timestamp;

	if (!!data->swscale)
		sws_scale(data->swscale, (const uint8_t *const *)frame->data,
				(const int*)frame->linesize,
				0, data->config.height, data->dst_picture.data,
				data->dst_picture.linesize);
	else
		copy_data(&data->dst_picture, frame, context->height);

	if (data->output->flags & AVFMT_RAWPICTURE) {
		packet.flags |= AV_PKT_FLAG_KEY;
		packet.stream_index = data->video->index;
		packet.data = data->dst_picture.data[0];
		packet.size = sizeof(AVPicture);

		pthread_mutex_lock(&output->write_mutex);
		da_push_back(output->packets, &packet);
		pthread_mutex_unlock(&output->write_mutex);
		os_sem_post(output->write_sem);

	} else {
		data->vframe->pts = data->total_frames;
		ret = avcodec_encode_video2(context, &packet, data->vframe,
				&got_packet);
		if (ret < 0) {
			blog(LOG_WARNING, "receive_video: Error encoding "
					"video: %s", av_err2str(ret));
			return;
		}

		if (!ret && got_packet && packet.size) {
			packet.pts = rescale_ts(packet.pts, context,
					data->video->time_base);
			packet.dts = rescale_ts(packet.dts, context,
					data->video->time_base);
			packet.duration = (int)av_rescale_q(packet.duration,
					context->time_base,
					data->video->time_base);

			pthread_mutex_lock(&output->write_mutex);
			da_push_back(output->packets, &packet);
			pthread_mutex_unlock(&output->write_mutex);
			os_sem_post(output->write_sem);
		} else {
			ret = 0;
		}
	}

	if (ret != 0) {
		blog(LOG_WARNING, "receive_video: Error writing video: %s",
				av_err2str(ret));
	}

	data->total_frames++;
}

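/* Encode one frame_size block of audio from data->samples and queue the
 * resulting packet for the write thread. */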
static void encode_audio(struct ffmpeg_output *output,
|
2014-02-09 04:51:06 -08:00
|
|
|
struct AVCodecContext *context, size_t block_size)
|
2014-01-19 02:16:41 -08:00
|
|
|
{
|
2014-03-10 13:10:35 -07:00
|
|
|
struct ffmpeg_data *data = &output->ff_data;
|
|
|
|
|
2014-02-07 02:03:54 -08:00
|
|
|
AVPacket packet = {0};
|
|
|
|
int ret, got_packet;
|
2014-03-10 13:10:35 -07:00
|
|
|
size_t total_size = data->frame_size * block_size * context->channels;
|
2014-02-07 02:03:54 -08:00
|
|
|
|
2014-03-10 13:10:35 -07:00
|
|
|
data->aframe->nb_samples = data->frame_size;
|
|
|
|
data->aframe->pts = av_rescale_q(data->total_samples,
|
2014-02-07 02:03:54 -08:00
|
|
|
(AVRational){1, context->sample_rate},
|
|
|
|
context->time_base);
|
|
|
|
|
2014-03-10 13:10:35 -07:00
|
|
|
ret = avcodec_fill_audio_frame(data->aframe, context->channels,
|
|
|
|
context->sample_fmt, data->samples[0],
|
Revamp API and start using doxygen
The API used to be designed in such a way to where it would expect
exports for each individual source/output/encoder/etc. You would export
functions for each and it would automatically load those functions based
on a specific naming scheme from the module.
The idea behind this was that I wanted to limit the usage of structures
in the API so only functions could be used. It was an interesting idea
in theory, but this idea turned out to be flawed in a number of ways:
1.) Requiring exports to create sources/outputs/encoders/etc meant that
you could not create them by any other means, which meant that
things like faruton's .net plugin would become difficult.
2.) Export function declarations could not be checked, therefore if you
created a function with the wrong parameters and parameter types,
the compiler wouldn't know how to check for that.
3.) Required overly complex load functions in libobs just to handle it.
It makes much more sense to just have a load function that you call
manually. Complexity is the bane of all good programs.
4.) It required that you have functions of specific names, which looked
and felt somewhat unsightly.
So, to fix these issues, I replaced it with a more commonly used API
scheme, seen commonly in places like kernels and typical C libraries
with abstraction. You simply create a structure that contains the
callback definitions, and you pass it to a function to register that
definition (such as obs_register_source), which you call in the
obs_module_load of the module.
It will also automatically check the structure size and ensure that it
only loads the required values if the structure happened to add new
values in an API change.
The "main" source file for each module must include obs-module.h, and
must use OBS_DECLARE_MODULE() within that source file.
Also, started writing some doxygen documentation in to the main library
headers. Will add more detailed documentation as I go.
2014-02-12 07:04:50 -08:00
|
|
|
(int)total_size, 1);
|
2014-02-09 04:51:06 -08:00
|
|
|
if (ret < 0) {
|
Implement encoder interface (still preliminary)
- Implement OBS encoder interface. It was previously incomplete, but
now is reaching some level of completion, though probably should
still be considered preliminary.
I had originally implemented it so that encoders only have a 'reset'
function to reset their parameters, but I felt that having both a
'start' and 'stop' function would be useful.
Encoders are now assigned to a specific video/audio media output each
rather than implicitely assigned to the main obs video/audio
contexts. This allows separate encoder contexts that aren't
necessarily assigned to the main video/audio context (which is useful
for things such as recording specific sources). Will probably have
to do this for regular obs outputs as well.
When creating an encoder, you must now explicitely state whether that
encoder is an audio or video encoder.
Audio and video can optionally be automatically converted depending
on what the encoder specifies.
When something 'attaches' to an encoder, the first attachment starts
the encoder, and the encoder automatically attaches to the media
output context associated with it. Subsequent attachments won't have
the same effect, they will just start receiving the same encoder data
when the next keyframe plays (along with SEI if any). When detaching
from the encoder, the last detachment will fully stop the encoder and
detach the encoder from the media output context associated with the
encoder.
SEI must actually be exported separately; because new encoder
attachments may not always be at the beginning of the stream, the
first keyframe they get must have that SEI data in it. If the
encoder has SEI data, it needs only add one small function to simply
query that SEI data, and then that data will be handled automatically
by libobs for all subsequent encoder attachments.
- Implement x264 encoder plugin, move x264 files to separate plugin to
separate necessary dependencies.
- Change video/audio frame output structures to not use const
qualifiers to prevent issues with non-const function usage elsewhere.
This was an issue when writing the x264 encoder, as the x264 encoder
expects non-const frame data.
Change stagesurf_map to return a non-const data type to prevent this
as well.
- Change full range parameter of video scaler to be an enum rather than
boolean
2014-03-16 16:21:34 -07:00
|
|
|
blog(LOG_WARNING, "encode_audio: avcodec_fill_audio_frame "
|
2014-02-28 19:02:29 -08:00
|
|
|
"failed: %s", av_err2str(ret));
|
2014-02-09 04:51:06 -08:00
|
|
|
return;
|
2014-02-07 02:03:54 -08:00
|
|
|
}
|
|
|
|
|
2014-03-10 13:10:35 -07:00
|
|
|
data->total_samples += data->frame_size;
|
2014-02-07 02:03:54 -08:00
|
|
|
|
2014-03-10 13:10:35 -07:00
|
|
|
ret = avcodec_encode_audio2(context, &packet, data->aframe,
|
2014-02-07 02:03:54 -08:00
|
|
|
&got_packet);
|
|
|
|
if (ret < 0) {
blog(LOG_WARNING, "encode_audio: Error encoding audio: %s",
|
2014-02-07 02:03:54 -08:00
|
|
|
av_err2str(ret));
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!got_packet)
|
|
|
|
return;
|
|
|
|
|
2014-04-05 01:13:11 -07:00
|
|
|
packet.pts = rescale_ts(packet.pts, context, data->audio->time_base);
|
|
|
|
packet.dts = rescale_ts(packet.dts, context, data->audio->time_base);
|
2014-02-07 02:03:54 -08:00
|
|
|
packet.duration = (int)av_rescale_q(packet.duration, context->time_base,
|
2014-03-10 13:10:35 -07:00
|
|
|
data->audio->time_base);
|
|
|
|
packet.stream_index = data->audio->index;
|
2014-02-07 02:03:54 -08:00
|
|
|
|
2014-02-28 02:50:30 -08:00
|
|
|
pthread_mutex_lock(&output->write_mutex);
|
2014-03-10 13:10:35 -07:00
|
|
|
da_push_back(output->packets, &packet);
|
2014-02-28 02:50:30 -08:00
|
|
|
pthread_mutex_unlock(&output->write_mutex);
|
2014-03-10 19:04:00 -07:00
|
|
|
os_sem_post(output->write_sem);
|
2014-01-19 02:16:41 -08:00
|
|
|
}
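/* Trims any audio data that precedes the output's start timestamp; returns
 * false if the entire block ends before that point and should be skipped. */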
static bool prepare_audio(struct ffmpeg_data *data,
		const struct audio_data *frame, struct audio_data *output)
{
	*output = *frame;

	if (frame->timestamp < data->start_timestamp) {
		uint64_t duration = (uint64_t)frame->frames * 1000000000 /
			(uint64_t)data->audio_samplerate;
		uint64_t end_ts = (frame->timestamp + duration);
		uint64_t cutoff;

		if (end_ts <= data->start_timestamp)
			return false;

		cutoff = data->start_timestamp - frame->timestamp;
		cutoff = cutoff * (uint64_t)data->audio_samplerate /
			1000000000;

		for (size_t i = 0; i < data->audio_planes; i++)
			output->data[i] += data->audio_size * (uint32_t)cutoff;
		output->frames -= (uint32_t)cutoff;
	}

	return true;
}

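/* Raw audio callback: buffers incoming samples per plane in circlebufs and
 * encodes a chunk each time a full encoder frame of data is available. */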
static void receive_audio(void *param, struct audio_data *frame)
{
	struct ffmpeg_output *output = param;
	struct ffmpeg_data *data = &output->ff_data;
	size_t frame_size_bytes;
	struct audio_data in;

	// codec doesn't support audio or none configured
	if (!data->audio)
		return;

	AVCodecContext *context = data->audio->codec;

	if (!data->start_timestamp)
		return;
	if (!prepare_audio(data, frame, &in))
		return;

	frame_size_bytes = (size_t)data->frame_size * data->audio_size;

	for (size_t i = 0; i < data->audio_planes; i++)
		circlebuf_push_back(&data->excess_frames[i], in.data[i],
				in.frames * data->audio_size);

	while (data->excess_frames[0].size >= frame_size_bytes) {
		for (size_t i = 0; i < data->audio_planes; i++)
			circlebuf_pop_front(&data->excess_frames[i],
					data->samples[i], frame_size_bytes);

		encode_audio(output, context, data->audio_size);
	}
}

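/* Pops the oldest queued packet (if any) and writes it to the muxer;
 * returns false on a write error so the writer thread can stop the output. */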
static bool process_packet(struct ffmpeg_output *output)
{
	AVPacket packet;
	bool new_packet = false;
	int ret;

	pthread_mutex_lock(&output->write_mutex);
	if (output->packets.num) {
		packet = output->packets.array[0];
		da_erase(output->packets, 0);
		new_packet = true;
	}
	pthread_mutex_unlock(&output->write_mutex);

	if (!new_packet)
		return true;

	/*blog(LOG_DEBUG, "size = %d, flags = %lX, stream = %d, "
			"packets queued: %lu",
			packet.size, packet.flags,
			packet.stream_index, output->packets.num);*/

	ret = av_interleaved_write_frame(output->ff_data.output, &packet);
	if (ret < 0) {
		av_free_packet(&packet);
		blog(LOG_WARNING, "process_packet: Error writing packet: %s",
				av_err2str(ret));
		return false;
	}

	return true;
}

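/* Writer thread: waits on the semaphore for queued packets, exits when the
 * stop event is signaled, and shuts the output down on write failure. */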
static void *write_thread(void *data)
{
	struct ffmpeg_output *output = data;

	while (os_sem_wait(output->write_sem) == 0) {
		/* check to see if shutting down */
		if (os_event_try(output->stop_event) == 0)
			break;

		if (!process_packet(output)) {
			pthread_detach(output->write_thread);
			output->write_thread_active = false;

			ffmpeg_output_stop(output);
			break;
		}
	}

	output->active = false;
	return NULL;
}

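/* Returns the named string setting, or NULL when it is unset or empty. */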
static inline const char *get_string_or_null(obs_data_t *settings,
		const char *name)
{
	const char *value = obs_data_get_string(settings, name);
	if (!value || !strlen(value))
		return NULL;
	return value;
}

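/* Reads the output settings into an ffmpeg_cfg, initializes the FFmpeg
 * output, and starts the writer thread and raw data capture. */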
static bool try_connect(struct ffmpeg_output *output)
{
	video_t *video = obs_output_video(output->output);
	struct ffmpeg_cfg config;
	obs_data_t *settings;
	bool success;
	int ret;

	settings = obs_output_get_settings(output->output);
	config.url = obs_data_get_string(settings, "url");
	config.format_name = get_string_or_null(settings, "format_name");
	config.format_mime_type = get_string_or_null(settings,
			"format_mime_type");
	config.video_bitrate = (int)obs_data_get_int(settings, "video_bitrate");
	config.audio_bitrate = (int)obs_data_get_int(settings, "audio_bitrate");
	config.video_encoder = get_string_or_null(settings, "video_encoder");
	config.video_encoder_id = (int)obs_data_get_int(settings,
			"video_encoder_id");
	config.audio_encoder = get_string_or_null(settings, "audio_encoder");
	config.audio_encoder_id = (int)obs_data_get_int(settings,
			"audio_encoder_id");
	config.video_settings = obs_data_get_string(settings, "video_settings");
	config.audio_settings = obs_data_get_string(settings, "audio_settings");
	config.scale_width = (int)obs_data_get_int(settings, "scale_width");
	config.scale_height = (int)obs_data_get_int(settings, "scale_height");
	config.width = (int)obs_output_get_width(output->output);
	config.height = (int)obs_output_get_height(output->output);
	config.format = obs_to_ffmpeg_video_format(
			video_output_get_format(video));

	if (config.format == AV_PIX_FMT_NONE) {
		blog(LOG_DEBUG, "invalid pixel format used for FFmpeg output");
		return false;
	}

	if (!config.scale_width)
		config.scale_width = config.width;
	if (!config.scale_height)
		config.scale_height = config.height;

	success = ffmpeg_data_init(&output->ff_data, &config);
	obs_data_release(settings);

	if (!success)
		return false;

	struct audio_convert_info aci = {
		.format = output->ff_data.audio_format
	};

	output->active = true;

	if (!obs_output_can_begin_data_capture(output->output, 0))
		return false;

	ret = pthread_create(&output->write_thread, NULL, write_thread, output);
	if (ret != 0) {
		blog(LOG_WARNING, "ffmpeg_output_start: failed to create write "
				"thread.");
		ffmpeg_output_stop(output);
		return false;
	}

	obs_output_set_video_conversion(output->output, NULL);
	obs_output_set_audio_conversion(output->output, &aci);
	obs_output_begin_data_capture(output->output, 0);
	output->write_thread_active = true;
	return true;
}

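/* Runs try_connect() off the calling thread and signals
 * OBS_OUTPUT_CONNECT_FAILED if the connection attempt fails. */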
static void *start_thread(void *data)
{
	struct ffmpeg_output *output = data;

	if (!try_connect(output))
		obs_output_signal_stop(output->output,
				OBS_OUTPUT_CONNECT_FAILED);

	output->connecting = false;
	return NULL;
}

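/* 'start' callback: spawns start_thread so that connecting never blocks the
 * caller; returns false if a connection attempt is already in progress. */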
static bool ffmpeg_output_start(void *data)
{
	struct ffmpeg_output *output = data;
	int ret;

	if (output->connecting)
		return false;

	ret = pthread_create(&output->start_thread, NULL, start_thread, output);
	return (output->connecting = (ret == 0));
}

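/* 'stop' callback: ends data capture, joins the writer thread, frees any
 * packets still queued, and releases the FFmpeg data. */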
static void ffmpeg_output_stop(void *data)
{
	struct ffmpeg_output *output = data;

	if (output->active) {
		obs_output_end_data_capture(output->output);

		if (output->write_thread_active) {
			os_event_signal(output->stop_event);
			os_sem_post(output->write_sem);
			pthread_join(output->write_thread, NULL);
			output->write_thread_active = false;
		}

		pthread_mutex_lock(&output->write_mutex);

		for (size_t i = 0; i < output->packets.num; i++)
			av_free_packet(output->packets.array+i);
		da_free(output->packets);

		pthread_mutex_unlock(&output->write_mutex);

		ffmpeg_data_free(&output->ff_data);
	}
}

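/* Output definition consumed by libobs.  A module would typically register
 * it from obs_module_load(), e.g.:
 *
 *     obs_register_output(&ffmpeg_output);
 *
 * libobs then delivers raw frames to the receive_video/receive_audio
 * callbacks and calls start/stop as needed. */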
struct obs_output_info ffmpeg_output = {
	.id = "ffmpeg_output",
	.flags = OBS_OUTPUT_AUDIO | OBS_OUTPUT_VIDEO,
	.get_name = ffmpeg_output_getname,
	.create = ffmpeg_output_create,
	.destroy = ffmpeg_output_destroy,
	.start = ffmpeg_output_start,
	.stop = ffmpeg_output_stop,
	.raw_video = receive_video,
	.raw_audio = receive_audio,
};