/******************************************************************************
    Copyright (C) 2014 by Hugh Bailey <obs.jim@gmail.com>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#include <obs-module.h>
#include <util/circlebuf.h>
#include <util/threading.h>
#include <util/dstr.h>
#include <util/darray.h>
#include <util/platform.h>
|
Revamp API and start using doxygen
The API used to be designed in such a way to where it would expect
exports for each individual source/output/encoder/etc. You would export
functions for each and it would automatically load those functions based
on a specific naming scheme from the module.
The idea behind this was that I wanted to limit the usage of structures
in the API so only functions could be used. It was an interesting idea
in theory, but this idea turned out to be flawed in a number of ways:
1.) Requiring exports to create sources/outputs/encoders/etc meant that
you could not create them by any other means, which meant that
things like faruton's .net plugin would become difficult.
2.) Export function declarations could not be checked, therefore if you
created a function with the wrong parameters and parameter types,
the compiler wouldn't know how to check for that.
3.) Required overly complex load functions in libobs just to handle it.
It makes much more sense to just have a load function that you call
manually. Complexity is the bane of all good programs.
4.) It required that you have functions of specific names, which looked
and felt somewhat unsightly.
So, to fix these issues, I replaced it with a more commonly used API
scheme, seen commonly in places like kernels and typical C libraries
with abstraction. You simply create a structure that contains the
callback definitions, and you pass it to a function to register that
definition (such as obs_register_source), which you call in the
obs_module_load of the module.
It will also automatically check the structure size and ensure that it
only loads the required values if the structure happened to add new
values in an API change.
The "main" source file for each module must include obs-module.h, and
must use OBS_DECLARE_MODULE() within that source file.
Also, started writing some doxygen documentation in to the main library
headers. Will add more detailed documentation as I go.
2014-02-12 07:04:50 -08:00
|
|
|
|
2019-07-01 05:26:43 -07:00
|
|
|
#include "obs-ffmpeg-output.h"
|
2014-04-04 23:21:19 -07:00
|
|
|
#include "obs-ffmpeg-formats.h"
|
2014-04-05 07:12:32 -07:00
|
|
|
#include "obs-ffmpeg-compat.h"
|
2014-04-04 23:21:19 -07:00
|
|
|
|
Revamp API and start using doxygen
The API used to be designed in such a way to where it would expect
exports for each individual source/output/encoder/etc. You would export
functions for each and it would automatically load those functions based
on a specific naming scheme from the module.
The idea behind this was that I wanted to limit the usage of structures
in the API so only functions could be used. It was an interesting idea
in theory, but this idea turned out to be flawed in a number of ways:
1.) Requiring exports to create sources/outputs/encoders/etc meant that
you could not create them by any other means, which meant that
things like faruton's .net plugin would become difficult.
2.) Export function declarations could not be checked, therefore if you
created a function with the wrong parameters and parameter types,
the compiler wouldn't know how to check for that.
3.) Required overly complex load functions in libobs just to handle it.
It makes much more sense to just have a load function that you call
manually. Complexity is the bane of all good programs.
4.) It required that you have functions of specific names, which looked
and felt somewhat unsightly.
So, to fix these issues, I replaced it with a more commonly used API
scheme, seen commonly in places like kernels and typical C libraries
with abstraction. You simply create a structure that contains the
callback definitions, and you pass it to a function to register that
definition (such as obs_register_source), which you call in the
obs_module_load of the module.
It will also automatically check the structure size and ensure that it
only loads the required values if the structure happened to add new
values in an API change.
The "main" source file for each module must include obs-module.h, and
must use OBS_DECLARE_MODULE() within that source file.
Also, started writing some doxygen documentation in to the main library
headers. Will add more detailed documentation as I go.
2014-02-12 07:04:50 -08:00
|
|
|
struct ffmpeg_output {
	obs_output_t *output;
	volatile bool active;
	struct ffmpeg_data ff_data;

	bool connecting;
	pthread_t start_thread;

	uint64_t total_bytes;

	uint64_t audio_start_ts;
	uint64_t video_start_ts;
	uint64_t stop_ts;
	volatile bool stopping;

	bool write_thread_active;
	pthread_mutex_t write_mutex;
	pthread_t write_thread;
	os_sem_t *write_sem;
	os_event_t *stop_event;

	DARRAY(AVPacket) packets;
};

/* ------------------------------------------------------------------------- */

static void ffmpeg_output_set_last_error(struct ffmpeg_data *data,
					 const char *error)
{
	if (data->last_error)
		bfree(data->last_error);

	data->last_error = bstrdup(error);
}

void ffmpeg_log_error(int log_level, struct ffmpeg_data *data,
		      const char *format, ...)
{
	va_list args;
	char out[4096];

	va_start(args, format);
	vsnprintf(out, sizeof(out), format, args);
	va_end(args);

	ffmpeg_output_set_last_error(data, out);

	blog(log_level, "%s", out);
}

/* Finds an encoder (by name if one is given, otherwise by codec id) and
 * creates a new stream for it on the output context. */
static bool new_stream(struct ffmpeg_data *data, AVStream **stream,
		       AVCodec **codec, enum AVCodecID id, const char *name)
{
	*codec = (!!name && *name) ? avcodec_find_encoder_by_name(name)
				   : avcodec_find_encoder(id);

	if (!*codec) {
		ffmpeg_log_error(LOG_WARNING, data,
				 "Couldn't find encoder '%s'",
				 avcodec_get_name(id));
		return false;
	}

	*stream = avformat_new_stream(data->output, *codec);
	if (!*stream) {
		ffmpeg_log_error(LOG_WARNING, data,
				 "Couldn't create stream for encoder '%s'",
				 avcodec_get_name(id));
		return false;
	}

	(*stream)->id = data->output->nb_streams - 1;
	return true;
}

/* Applies space-separated "name=value" options to the codec context.
 * Returns false if any option failed to apply. */
static bool parse_params(AVCodecContext *context, char **opts)
{
	bool ret = true;

	if (!context || !context->priv_data)
		return true;

	while (*opts) {
		char *opt = *opts;
		char *assign = strchr(opt, '=');

		if (assign) {
			char *name = opt;
			char *value;

			*assign = 0;
			value = assign + 1;

			if (av_opt_set(context, name, value,
				       AV_OPT_SEARCH_CHILDREN)) {
				blog(LOG_WARNING, "Failed to set %s=%s", name,
				     value);
				ret = false;
			}
		}

		opts++;
	}

	return ret;
}

static bool open_video_codec(struct ffmpeg_data *data)
{
	AVCodecContext *context = data->video->codec;
	char **opts = strlist_split(data->config.video_settings, ' ', false);
	int ret;

	if (strcmp(data->vcodec->name, "libx264") == 0)
		av_opt_set(context->priv_data, "preset", "veryfast", 0);

	if (opts) {
		// libav requires x264 parameters in a special format which may be non-obvious
		if (!parse_params(context, opts) &&
		    strcmp(data->vcodec->name, "libx264") == 0)
			blog(LOG_WARNING,
			     "If you're trying to set x264 parameters, use x264-params=name=value:name=value");
		strlist_free(opts);
	}

	ret = avcodec_open2(context, data->vcodec, NULL);
	if (ret < 0) {
		ffmpeg_log_error(LOG_WARNING, data,
				 "Failed to open video codec: %s",
				 av_err2str(ret));
		return false;
	}

	data->vframe = av_frame_alloc();
	if (!data->vframe) {
		ffmpeg_log_error(LOG_WARNING, data,
				 "Failed to allocate video frame");
		return false;
	}

	data->vframe->format = context->pix_fmt;
	data->vframe->width = context->width;
	data->vframe->height = context->height;
	data->vframe->colorspace = data->config.color_space;
	data->vframe->color_range = data->config.color_range;

	ret = av_frame_get_buffer(data->vframe, base_get_alignment());
	if (ret < 0) {
		ffmpeg_log_error(LOG_WARNING, data,
				 "Failed to allocate vframe: %s",
				 av_err2str(ret));
		return false;
	}

	return true;
}

static bool init_swscale(struct ffmpeg_data *data, AVCodecContext *context)
{
	data->swscale = sws_getContext(
		data->config.width, data->config.height, data->config.format,
		data->config.scale_width, data->config.scale_height,
		context->pix_fmt, SWS_BICUBIC, NULL, NULL, NULL);

	if (!data->swscale) {
		ffmpeg_log_error(LOG_WARNING, data,
				 "Could not initialize swscale");
		return false;
	}

	return true;
}

static bool create_video_stream(struct ffmpeg_data *data)
{
	enum AVPixelFormat closest_format;
	AVCodecContext *context;
	struct obs_video_info ovi;

	if (!obs_get_video_info(&ovi)) {
		ffmpeg_log_error(LOG_WARNING, data, "No active video");
		return false;
	}

	if (!new_stream(data, &data->video, &data->vcodec,
			data->output->oformat->video_codec,
			data->config.video_encoder))
		return false;

	closest_format = avcodec_find_best_pix_fmt_of_list(
		data->vcodec->pix_fmts, data->config.format, 0, NULL);

	context = data->video->codec;
	context->bit_rate = (int64_t)data->config.video_bitrate * 1000;
	context->width = data->config.scale_width;
	context->height = data->config.scale_height;
	context->time_base = (AVRational){ovi.fps_den, ovi.fps_num};
	context->gop_size = data->config.gop_size;
	context->pix_fmt = closest_format;
	context->colorspace = data->config.color_space;
	context->color_range = data->config.color_range;
	context->thread_count = 0;

	data->video->time_base = context->time_base;

	if (data->output->oformat->flags & AVFMT_GLOBALHEADER)
		context->flags |= CODEC_FLAG_GLOBAL_H;

	if (!open_video_codec(data))
		return false;

	if (context->pix_fmt != data->config.format ||
	    data->config.width != data->config.scale_width ||
	    data->config.height != data->config.scale_height) {

		if (!init_swscale(data, context))
			return false;
	}

	return true;
}

static bool open_audio_codec(struct ffmpeg_data *data, int idx)
{
	AVCodecContext *context = data->audio_streams[idx]->codec;
	char **opts = strlist_split(data->config.audio_settings, ' ', false);
	int ret;

	if (opts) {
		parse_params(context, opts);
		strlist_free(opts);
	}

	data->aframe[idx] = av_frame_alloc();
	if (!data->aframe[idx]) {
		ffmpeg_log_error(LOG_WARNING, data,
				 "Failed to allocate audio frame");
		return false;
	}

	data->aframe[idx]->format = context->sample_fmt;
	data->aframe[idx]->channels = context->channels;
	data->aframe[idx]->channel_layout = context->channel_layout;
	data->aframe[idx]->sample_rate = context->sample_rate;

	context->strict_std_compliance = -2;

	ret = avcodec_open2(context, data->acodec, NULL);
	if (ret < 0) {
		ffmpeg_log_error(LOG_WARNING, data,
				 "Failed to open audio codec: %s",
				 av_err2str(ret));
		return false;
	}

	data->frame_size = context->frame_size ? context->frame_size : 1024;

	ret = av_samples_alloc(data->samples[idx], NULL, context->channels,
			       data->frame_size, context->sample_fmt, 0);
	if (ret < 0) {
		ffmpeg_log_error(LOG_WARNING, data,
				 "Failed to create audio buffer: %s",
				 av_err2str(ret));
		return false;
	}

	return true;
}

static bool create_audio_stream(struct ffmpeg_data *data, int idx)
{
	AVCodecContext *context;
	AVStream *stream;
	struct obs_audio_info aoi;

	if (!obs_get_audio_info(&aoi)) {
		ffmpeg_log_error(LOG_WARNING, data, "No active audio");
		return false;
	}

	if (!new_stream(data, &stream, &data->acodec,
			data->output->oformat->audio_codec,
			data->config.audio_encoder))
		return false;

	data->audio_streams[idx] = stream;
	context = data->audio_streams[idx]->codec;
	context->bit_rate = (int64_t)data->config.audio_bitrate * 1000;
	context->time_base = (AVRational){1, aoi.samples_per_sec};
	context->channels = get_audio_channels(aoi.speakers);
	context->sample_rate = aoi.samples_per_sec;
	context->channel_layout =
		av_get_default_channel_layout(context->channels);

	//AVlib default channel layout for 5 channels is 5.0 ; fix for 4.1
	if (aoi.speakers == SPEAKERS_4POINT1)
		context->channel_layout = av_get_channel_layout("4.1");

	context->sample_fmt = data->acodec->sample_fmts
				      ? data->acodec->sample_fmts[0]
				      : AV_SAMPLE_FMT_FLTP;

	data->audio_streams[idx]->time_base = context->time_base;

	data->audio_samplerate = aoi.samples_per_sec;
	data->audio_format = convert_ffmpeg_sample_format(context->sample_fmt);
	data->audio_planes = get_audio_planes(data->audio_format, aoi.speakers);
	data->audio_size = get_audio_size(data->audio_format, aoi.speakers, 1);

	if (data->output->oformat->flags & AVFMT_GLOBALHEADER)
		context->flags |= CODEC_FLAG_GLOBAL_H;

	return open_audio_codec(data, idx);
}

static inline bool init_streams(struct ffmpeg_data *data)
{
	AVOutputFormat *format = data->output->oformat;

	if (format->video_codec != AV_CODEC_ID_NONE)
		if (!create_video_stream(data))
			return false;

	if (format->audio_codec != AV_CODEC_ID_NONE &&
	    data->num_audio_streams) {
		data->audio_streams =
			calloc(1, data->num_audio_streams * sizeof(void *));
		for (int i = 0; i < data->num_audio_streams; i++) {
			if (!create_audio_stream(data, i))
				return false;
		}
	}

	return true;
}

/* Parses the user's muxer settings into an AVDictionary, opens the output
 * file/URL if the format requires one, and writes the container header.
 * Entries left in the dictionary after avformat_write_header() were not
 * consumed by the muxer and are logged as invalid. */
static inline bool open_output_file(struct ffmpeg_data *data)
{
	AVOutputFormat *format = data->output->oformat;
	int ret;

	AVDictionary *dict = NULL;
	if ((ret = av_dict_parse_string(&dict, data->config.muxer_settings, "=",
					" ", 0))) {
		ffmpeg_log_error(LOG_WARNING, data,
				 "Failed to parse muxer settings: %s\n%s",
				 av_err2str(ret), data->config.muxer_settings);

		av_dict_free(&dict);
		return false;
	}

	if (av_dict_count(dict) > 0) {
		struct dstr str = {0};

		AVDictionaryEntry *entry = NULL;
		while ((entry = av_dict_get(dict, "", entry,
					    AV_DICT_IGNORE_SUFFIX)))
			dstr_catf(&str, "\n\t%s=%s", entry->key, entry->value);

		blog(LOG_INFO, "Using muxer settings: %s", str.array);
		dstr_free(&str);
	}

	if ((format->flags & AVFMT_NOFILE) == 0) {
		ret = avio_open2(&data->output->pb, data->config.url,
				 AVIO_FLAG_WRITE, NULL, &dict);
		if (ret < 0) {
			ffmpeg_log_error(LOG_WARNING, data,
					 "Couldn't open '%s', %s",
					 data->config.url, av_err2str(ret));
			av_dict_free(&dict);
			return false;
		}
	}

	strncpy(data->output->filename, data->config.url,
		sizeof(data->output->filename));
	data->output->filename[sizeof(data->output->filename) - 1] = 0;

	ret = avformat_write_header(data->output, &dict);
	if (ret < 0) {
		ffmpeg_log_error(LOG_WARNING, data, "Error opening '%s': %s",
				 data->config.url, av_err2str(ret));
		return false;
	}

	if (av_dict_count(dict) > 0) {
		struct dstr str = {0};

		AVDictionaryEntry *entry = NULL;
		while ((entry = av_dict_get(dict, "", entry,
					    AV_DICT_IGNORE_SUFFIX)))
			dstr_catf(&str, "\n\t%s=%s", entry->key, entry->value);

		blog(LOG_INFO, "Invalid muxer settings: %s", str.array);
		dstr_free(&str);
	}

	av_dict_free(&dict);

	return true;
}

static void close_video(struct ffmpeg_data *data)
{
	avcodec_close(data->video->codec);
	av_frame_unref(data->vframe);

	// This format for some reason derefs video frame
	// too many times
	if (data->vcodec->id == AV_CODEC_ID_A64_MULTI ||
	    data->vcodec->id == AV_CODEC_ID_A64_MULTI5)
		return;

	av_frame_free(&data->vframe);
}

static void close_audio(struct ffmpeg_data *data)
{
	for (int idx = 0; idx < data->num_audio_streams; idx++) {
		for (size_t i = 0; i < MAX_AV_PLANES; i++)
			circlebuf_free(&data->excess_frames[idx][i]);

		if (data->samples[idx][0])
			av_freep(&data->samples[idx][0]);
		if (data->audio_streams[idx])
			avcodec_close(data->audio_streams[idx]->codec);
		if (data->aframe[idx])
			av_frame_free(&data->aframe[idx]);
	}
}

void ffmpeg_data_free(struct ffmpeg_data *data)
{
	if (data->initialized)
		av_write_trailer(data->output);

	if (data->video)
		close_video(data);
	if (data->audio_streams) {
		close_audio(data);
		free(data->audio_streams);
		data->audio_streams = NULL;
	}

	if (data->output) {
		if ((data->output->oformat->flags & AVFMT_NOFILE) == 0)
			avio_close(data->output->pb);

		avformat_free_context(data->output);
	}

	if (data->last_error)
		bfree(data->last_error);

	memset(data, 0, sizeof(struct ffmpeg_data));
}

static inline const char *safe_str(const char *s)
{
	if (s == NULL)
		return "(NULL)";
	else
		return s;
}

static enum AVCodecID get_codec_id(const char *name, int id)
{
	AVCodec *codec;

	if (id != 0)
		return (enum AVCodecID)id;

	if (!name || !*name)
		return AV_CODEC_ID_NONE;

	codec = avcodec_find_encoder_by_name(name);
	if (!codec)
		return AV_CODEC_ID_NONE;

	return codec->id;
}

static void set_encoder_ids(struct ffmpeg_data *data)
{
	data->output->oformat->video_codec = get_codec_id(
		data->config.video_encoder, data->config.video_encoder_id);

	data->output->oformat->audio_codec = get_codec_id(
		data->config.audio_encoder, data->config.audio_encoder_id);
}

bool ffmpeg_data_init(struct ffmpeg_data *data, struct ffmpeg_cfg *config)
{
	bool is_rtmp = false;

	memset(data, 0, sizeof(struct ffmpeg_data));
	data->config = *config;
	data->num_audio_streams = config->audio_mix_count;
	data->audio_tracks = config->audio_tracks;
	if (!config->url || !*config->url)
		return false;

#if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(58, 9, 100)
	av_register_all();
#endif
	avformat_network_init();

	is_rtmp = (astrcmpi_n(config->url, "rtmp://", 7) == 0);

	AVOutputFormat *output_format = av_guess_format(
		is_rtmp ? "flv" : data->config.format_name, data->config.url,
		is_rtmp ? NULL : data->config.format_mime_type);

	if (output_format == NULL) {
		ffmpeg_log_error(
			LOG_WARNING, data,
			"Couldn't find matching output format with "
			"parameters: name=%s, url=%s, mime=%s",
			safe_str(is_rtmp ? "flv" : data->config.format_name),
			safe_str(data->config.url),
			safe_str(is_rtmp ? NULL
					 : data->config.format_mime_type));

		goto fail;
	}

	avformat_alloc_output_context2(&data->output, output_format, NULL,
				       NULL);

	if (!data->output) {
		ffmpeg_log_error(LOG_WARNING, data,
				 "Couldn't create avformat context");
		goto fail;
	}

	if (is_rtmp) {
		data->output->oformat->video_codec = AV_CODEC_ID_H264;
		data->output->oformat->audio_codec = AV_CODEC_ID_AAC;
	} else {
		if (data->config.format_name)
			set_encoder_ids(data);
	}

	if (!init_streams(data))
		goto fail;
	if (!open_output_file(data))
		goto fail;

	av_dump_format(data->output, 0, NULL, 1);

	data->initialized = true;
	return true;

fail:
	blog(LOG_WARNING, "ffmpeg_data_init failed");
	return false;
}

/* ------------------------------------------------------------------------- */

static inline bool stopping(struct ffmpeg_output *output)
{
	return os_atomic_load_bool(&output->stopping);
}

static const char *ffmpeg_output_getname(void *unused)
{
	UNUSED_PARAMETER(unused);
	return obs_module_text("FFmpegOutput");
}

static void ffmpeg_log_callback(void *param, int level, const char *format,
				va_list args)
{
	if (level <= AV_LOG_INFO)
		blogva(LOG_DEBUG, format, args);

	UNUSED_PARAMETER(param);
}

static void *ffmpeg_output_create(obs_data_t *settings, obs_output_t *output)
{
	struct ffmpeg_output *data = bzalloc(sizeof(struct ffmpeg_output));
	pthread_mutex_init_value(&data->write_mutex);
	data->output = output;

	if (pthread_mutex_init(&data->write_mutex, NULL) != 0)
		goto fail;
	if (os_event_init(&data->stop_event, OS_EVENT_TYPE_AUTO) != 0)
		goto fail;
	if (os_sem_init(&data->write_sem, 0) != 0)
		goto fail;

	av_log_set_callback(ffmpeg_log_callback);

	UNUSED_PARAMETER(settings);
	return data;

fail:
	pthread_mutex_destroy(&data->write_mutex);
	os_event_destroy(data->stop_event);
	bfree(data);
	return NULL;
}

static void ffmpeg_output_full_stop(void *data);
static void ffmpeg_deactivate(struct ffmpeg_output *output);

static void ffmpeg_output_destroy(void *data)
{
	struct ffmpeg_output *output = data;

	if (output) {
		if (output->connecting)
			pthread_join(output->start_thread, NULL);

		ffmpeg_output_full_stop(output);

		pthread_mutex_destroy(&output->write_mutex);
		os_sem_destroy(output->write_sem);
		os_event_destroy(output->stop_event);
		bfree(data);
	}
}

static inline void copy_data(AVFrame *pic, const struct video_data *frame,
			     int height, enum AVPixelFormat format)
{
	int h_chroma_shift, v_chroma_shift;
	av_pix_fmt_get_chroma_sub_sample(format, &h_chroma_shift,
					 &v_chroma_shift);
	for (int plane = 0; plane < MAX_AV_PLANES; plane++) {
		if (!frame->data[plane])
			continue;

		int frame_rowsize = (int)frame->linesize[plane];
		int pic_rowsize = pic->linesize[plane];
		int bytes = frame_rowsize < pic_rowsize ? frame_rowsize
							: pic_rowsize;
		int plane_height = height >> (plane ? v_chroma_shift : 0);

		for (int y = 0; y < plane_height; y++) {
			int pos_frame = y * frame_rowsize;
			int pos_pic = y * pic_rowsize;

			memcpy(pic->data[plane] + pos_pic,
			       frame->data[plane] + pos_frame, bytes);
		}
	}
}

static void receive_video(void *param, struct video_data *frame)
{
	struct ffmpeg_output *output = param;
	struct ffmpeg_data *data = &output->ff_data;

	// codec doesn't support video or none configured
	if (!data->video)
		return;

	AVCodecContext *context = data->video->codec;
	AVPacket packet = {0};
	int ret = 0, got_packet;

	av_init_packet(&packet);

	if (!output->video_start_ts)
		output->video_start_ts = frame->timestamp;
	if (!data->start_timestamp)
		data->start_timestamp = frame->timestamp;

	if (!!data->swscale)
		sws_scale(data->swscale, (const uint8_t *const *)frame->data,
			  (const int *)frame->linesize, 0, data->config.height,
			  data->vframe->data, data->vframe->linesize);
	else
		copy_data(data->vframe, frame, context->height,
			  context->pix_fmt);
#if LIBAVFORMAT_VERSION_MAJOR < 58
	if (data->output->flags & AVFMT_RAWPICTURE) {
		packet.flags |= AV_PKT_FLAG_KEY;
		packet.stream_index = data->video->index;
		packet.data = data->vframe->data[0];
		packet.size = sizeof(AVPicture);

		pthread_mutex_lock(&output->write_mutex);
		da_push_back(output->packets, &packet);
		pthread_mutex_unlock(&output->write_mutex);
		os_sem_post(output->write_sem);

	} else {
#endif
		data->vframe->pts = data->total_frames;
#if LIBAVFORMAT_VERSION_INT >= AV_VERSION_INT(57, 40, 101)
		ret = avcodec_send_frame(context, data->vframe);
		if (ret == 0)
			ret = avcodec_receive_packet(context, &packet);

		got_packet = (ret == 0);

		if (ret == AVERROR_EOF || ret == AVERROR(EAGAIN))
			ret = 0;
#else
		ret = avcodec_encode_video2(context, &packet, data->vframe,
					    &got_packet);
#endif
		if (ret < 0) {
			blog(LOG_WARNING,
			     "receive_video: Error encoding "
			     "video: %s",
			     av_err2str(ret));
			//FIXME: stop the encode with an error
			return;
		}

		if (!ret && got_packet && packet.size) {
			packet.pts = rescale_ts(packet.pts, context,
						data->video->time_base);
			packet.dts = rescale_ts(packet.dts, context,
						data->video->time_base);
			packet.duration = (int)av_rescale_q(
				packet.duration, context->time_base,
				data->video->time_base);

			pthread_mutex_lock(&output->write_mutex);
			da_push_back(output->packets, &packet);
			pthread_mutex_unlock(&output->write_mutex);
			os_sem_post(output->write_sem);
		} else {
			ret = 0;
		}
#if LIBAVFORMAT_VERSION_MAJOR < 58
	}
#endif
	if (ret != 0) {
		blog(LOG_WARNING, "receive_video: Error writing video: %s",
		     av_err2str(ret));
		//FIXME: stop the encode with an error
	}

	data->total_frames++;
}

static void encode_audio(struct ffmpeg_output *output, int idx,
			 struct AVCodecContext *context, size_t block_size)
{
	struct ffmpeg_data *data = &output->ff_data;

	AVPacket packet = {0};
	int ret, got_packet;
	size_t total_size = data->frame_size * block_size * context->channels;

	data->aframe[idx]->nb_samples = data->frame_size;
	data->aframe[idx]->pts = av_rescale_q(
		data->total_samples[idx], (AVRational){1, context->sample_rate},
		context->time_base);

	ret = avcodec_fill_audio_frame(data->aframe[idx], context->channels,
				       context->sample_fmt,
				       data->samples[idx][0], (int)total_size,
				       1);
	if (ret < 0) {
		blog(LOG_WARNING,
		     "encode_audio: avcodec_fill_audio_frame "
		     "failed: %s",
		     av_err2str(ret));
		//FIXME: stop the encode with an error
		return;
	}

	data->total_samples[idx] += data->frame_size;

#if LIBAVFORMAT_VERSION_INT >= AV_VERSION_INT(57, 40, 101)
	ret = avcodec_send_frame(context, data->aframe[idx]);
	if (ret == 0)
		ret = avcodec_receive_packet(context, &packet);

	got_packet = (ret == 0);

	if (ret == AVERROR_EOF || ret == AVERROR(EAGAIN))
		ret = 0;
#else
	ret = avcodec_encode_audio2(context, &packet, data->aframe[idx],
				    &got_packet);
#endif
	if (ret < 0) {
		blog(LOG_WARNING, "encode_audio: Error encoding audio: %s",
		     av_err2str(ret));
		//FIXME: stop the encode with an error
		return;
	}

	if (!got_packet)
		return;

	packet.pts = rescale_ts(packet.pts, context,
				data->audio_streams[idx]->time_base);
	packet.dts = rescale_ts(packet.dts, context,
				data->audio_streams[idx]->time_base);
	packet.duration =
		(int)av_rescale_q(packet.duration, context->time_base,
				  data->audio_streams[idx]->time_base);
	packet.stream_index = data->audio_streams[idx]->index;

	pthread_mutex_lock(&output->write_mutex);
	da_push_back(output->packets, &packet);
	pthread_mutex_unlock(&output->write_mutex);
	os_sem_post(output->write_sem);
}

/* Given a bitmask for the selected tracks and the mix index,
 * this returns the stream index which will be passed to the muxer. */
static int get_track_order(int track_config, size_t mix_index)
{
	int position = 0;
	for (size_t i = 0; i < mix_index; i++) {
		if (track_config & 1 << i)
			position++;
	}
	return position;
}

static void receive_audio(void *param, size_t mix_idx, struct audio_data *frame)
{
	struct ffmpeg_output *output = param;
	struct ffmpeg_data *data = &output->ff_data;
	size_t frame_size_bytes;
	struct audio_data in = *frame;
	int track_order;

	// codec doesn't support audio or none configured
	if (!data->audio_streams)
		return;

	/* check that the track was selected */
	if ((data->audio_tracks & (1 << mix_idx)) == 0)
		return;

	/* get track order (first selected, etc ...) */
	track_order = get_track_order(data->audio_tracks, mix_idx);

	AVCodecContext *context = data->audio_streams[track_order]->codec;

	if (!data->start_timestamp)
		return;

	if (!output->audio_start_ts)
		output->audio_start_ts = in.timestamp;

	frame_size_bytes = (size_t)data->frame_size * data->audio_size;

	for (size_t i = 0; i < data->audio_planes; i++)
		circlebuf_push_back(&data->excess_frames[track_order][i],
				    in.data[i], in.frames * data->audio_size);

	while (data->excess_frames[track_order][0].size >= frame_size_bytes) {
		for (size_t i = 0; i < data->audio_planes; i++)
			circlebuf_pop_front(
				&data->excess_frames[track_order][i],
				data->samples[track_order][i],
				frame_size_bytes);

		encode_audio(output, track_order, context, data->audio_size);
	}
}

static uint64_t get_packet_sys_dts(struct ffmpeg_output *output,
				   AVPacket *packet)
{
	struct ffmpeg_data *data = &output->ff_data;
	uint64_t pause_offset = obs_output_get_pause_offset(output->output);
	uint64_t start_ts;

	AVRational time_base;

	if (data->video && data->video->index == packet->stream_index) {
		time_base = data->video->time_base;
		start_ts = output->video_start_ts;
	} else {
		time_base = data->audio_streams[0]->time_base;
		start_ts = output->audio_start_ts;
	}

	return start_ts + pause_offset +
	       (uint64_t)av_rescale_q(packet->dts, time_base,
				      (AVRational){1, 1000000000});
}

static int process_packet(struct ffmpeg_output *output)
{
	AVPacket packet;
	bool new_packet = false;
	int ret;

	pthread_mutex_lock(&output->write_mutex);
	if (output->packets.num) {
		packet = output->packets.array[0];
		da_erase(output->packets, 0);
		new_packet = true;
	}
	pthread_mutex_unlock(&output->write_mutex);

	if (!new_packet)
		return 0;

	/*blog(LOG_DEBUG, "size = %d, flags = %lX, stream = %d, "
			"packets queued: %lu",
			packet.size, packet.flags,
			packet.stream_index, output->packets.num);*/

	if (stopping(output)) {
		uint64_t sys_ts = get_packet_sys_dts(output, &packet);
		if (sys_ts >= output->stop_ts) {
			ffmpeg_output_full_stop(output);
			return 0;
		}
	}

	output->total_bytes += packet.size;

	ret = av_interleaved_write_frame(output->ff_data.output, &packet);
	if (ret < 0) {
		av_free_packet(&packet);
		ffmpeg_log_error(LOG_WARNING, &output->ff_data,
				 "receive_audio: Error writing packet: %s",
				 av_err2str(ret));
		return ret;
	}

	return 0;
}

static void *write_thread(void *data)
{
	struct ffmpeg_output *output = data;

	while (os_sem_wait(output->write_sem) == 0) {
		/* check to see if shutting down */
		if (os_event_try(output->stop_event) == 0)
			break;

		int ret = process_packet(output);
		if (ret != 0) {
			int code = OBS_OUTPUT_ERROR;

			pthread_detach(output->write_thread);
			output->write_thread_active = false;

			if (ret == -ENOSPC)
				code = OBS_OUTPUT_NO_SPACE;

			obs_output_signal_stop(output->output, code);
			ffmpeg_deactivate(output);
			break;
		}
	}

	output->active = false;
	return NULL;
}

static inline const char *get_string_or_null(obs_data_t *settings,
					     const char *name)
{
	const char *value = obs_data_get_string(settings, name);
	if (!value || !strlen(value))
		return NULL;
	return value;
}

static int get_audio_mix_count(int audio_mix_mask)
{
	int mix_count = 0;
	for (int i = 0; i < MAX_AUDIO_MIXES; i++) {
		if ((audio_mix_mask & (1 << i)) != 0) {
			mix_count++;
		}
	}

	return mix_count;
}

static bool try_connect(struct ffmpeg_output *output)
{
	video_t *video = obs_output_video(output->output);
	const struct video_output_info *voi = video_output_get_info(video);
	struct ffmpeg_cfg config;
	obs_data_t *settings;
	bool success;
	int ret;

	settings = obs_output_get_settings(output->output);

	obs_data_set_default_int(settings, "gop_size", 120);

	config.url = obs_data_get_string(settings, "url");
	config.format_name = get_string_or_null(settings, "format_name");
	config.format_mime_type =
		get_string_or_null(settings, "format_mime_type");
	config.muxer_settings = obs_data_get_string(settings, "muxer_settings");
	config.video_bitrate = (int)obs_data_get_int(settings, "video_bitrate");
	config.audio_bitrate = (int)obs_data_get_int(settings, "audio_bitrate");
	config.gop_size = (int)obs_data_get_int(settings, "gop_size");
	config.video_encoder = get_string_or_null(settings, "video_encoder");
	config.video_encoder_id =
		(int)obs_data_get_int(settings, "video_encoder_id");
	config.audio_encoder = get_string_or_null(settings, "audio_encoder");
	config.audio_encoder_id =
		(int)obs_data_get_int(settings, "audio_encoder_id");
	config.video_settings = obs_data_get_string(settings, "video_settings");
	config.audio_settings = obs_data_get_string(settings, "audio_settings");
	config.scale_width = (int)obs_data_get_int(settings, "scale_width");
	config.scale_height = (int)obs_data_get_int(settings, "scale_height");
	config.width = (int)obs_output_get_width(output->output);
	config.height = (int)obs_output_get_height(output->output);
	config.format =
		obs_to_ffmpeg_video_format(video_output_get_format(video));
	config.audio_tracks = (int)obs_output_get_mixers(output->output);
	config.audio_mix_count = get_audio_mix_count(config.audio_tracks);

	if (format_is_yuv(voi->format)) {
		config.color_range = voi->range == VIDEO_RANGE_FULL
					     ? AVCOL_RANGE_JPEG
					     : AVCOL_RANGE_MPEG;
		config.color_space = voi->colorspace == VIDEO_CS_709
					     ? AVCOL_SPC_BT709
					     : AVCOL_SPC_BT470BG;
	} else {
		config.color_range = AVCOL_RANGE_UNSPECIFIED;
		config.color_space = AVCOL_SPC_RGB;
	}

	if (config.format == AV_PIX_FMT_NONE) {
		blog(LOG_DEBUG, "invalid pixel format used for FFmpeg output");
		return false;
	}

	if (!config.scale_width)
		config.scale_width = config.width;
	if (!config.scale_height)
		config.scale_height = config.height;

	success = ffmpeg_data_init(&output->ff_data, &config);
	obs_data_release(settings);

	if (!success) {
		if (output->ff_data.last_error) {
			obs_output_set_last_error(output->output,
						  output->ff_data.last_error);
		}
		ffmpeg_data_free(&output->ff_data);
		return false;
	}

	struct audio_convert_info aci = {.format =
						 output->ff_data.audio_format};

	output->active = true;

	if (!obs_output_can_begin_data_capture(output->output, 0))
		return false;

	ret = pthread_create(&output->write_thread, NULL, write_thread, output);
	if (ret != 0) {
		ffmpeg_log_error(LOG_WARNING, &output->ff_data,
				 "ffmpeg_output_start: failed to create write "
				 "thread.");
		ffmpeg_output_full_stop(output);
		return false;
	}

	obs_output_set_video_conversion(output->output, NULL);
	obs_output_set_audio_conversion(output->output, &aci);
	obs_output_begin_data_capture(output->output, 0);
	output->write_thread_active = true;
	return true;
}

static void *start_thread(void *data)
{
	struct ffmpeg_output *output = data;

	if (!try_connect(output))
		obs_output_signal_stop(output->output,
				       OBS_OUTPUT_CONNECT_FAILED);

	output->connecting = false;
	return NULL;
}

static bool ffmpeg_output_start(void *data)
{
	struct ffmpeg_output *output = data;
	int ret;

	if (output->connecting)
		return false;

	os_atomic_set_bool(&output->stopping, false);
	output->audio_start_ts = 0;
	output->video_start_ts = 0;
	output->total_bytes = 0;

	ret = pthread_create(&output->start_thread, NULL, start_thread, output);
	return (output->connecting = (ret == 0));
}

static void ffmpeg_output_full_stop(void *data)
{
	struct ffmpeg_output *output = data;

	if (output->active) {
		obs_output_end_data_capture(output->output);
		ffmpeg_deactivate(output);
	}
}

static void ffmpeg_output_stop(void *data, uint64_t ts)
{
	struct ffmpeg_output *output = data;

	if (output->active) {
		if (ts == 0) {
			ffmpeg_output_full_stop(output);
		} else {
			os_atomic_set_bool(&output->stopping, true);
			output->stop_ts = ts;
		}
	}
}

static void ffmpeg_deactivate(struct ffmpeg_output *output)
{
	if (output->write_thread_active) {
		os_event_signal(output->stop_event);
		os_sem_post(output->write_sem);
		pthread_join(output->write_thread, NULL);
		output->write_thread_active = false;
	}

	pthread_mutex_lock(&output->write_mutex);

	for (size_t i = 0; i < output->packets.num; i++)
		av_free_packet(output->packets.array + i);
	da_free(output->packets);

	pthread_mutex_unlock(&output->write_mutex);

	ffmpeg_data_free(&output->ff_data);
}

static uint64_t ffmpeg_output_total_bytes(void *data)
{
	struct ffmpeg_output *output = data;
	return output->total_bytes;
}

struct obs_output_info ffmpeg_output = {
	.id = "ffmpeg_output",
	.flags = OBS_OUTPUT_AUDIO | OBS_OUTPUT_VIDEO | OBS_OUTPUT_MULTI_TRACK |
		 OBS_OUTPUT_CAN_PAUSE,
	.get_name = ffmpeg_output_getname,
	.create = ffmpeg_output_create,
	.destroy = ffmpeg_output_destroy,
	.start = ffmpeg_output_start,
	.stop = ffmpeg_output_stop,
	.raw_video = receive_video,
	.raw_audio2 = receive_audio,
	.get_total_bytes = ffmpeg_output_total_bytes,
};