Add preliminary output/encoder interface

- First, I redid the output interface for libobs. I feel like it's going
  in a pretty good direction in terms of design.
  Right now, the design keeps outputs and encoders separate. One or more
  outputs can connect to a specific encoder to receive its data, or an
  output can connect directly to raw data from the libobs output itself
  if it doesn't want to use a designated encoder.
  Data is received via callbacks set when you connect to the encoder or
  to the raw output. Multiple outputs can receive the data from a single
  encoder context if need be (such as for streaming to multiple channels
  at once and/or recording with the same data).
  When an encoder receives its first connection, it connects to the raw
  output and starts encoding. Additional connections then receive that
  same encoded data. When the last output disconnects, the encoder stops
  encoding. If for some reason the encoder needs to stop on its own, it
  calls the callback with NULL to signal that encoding has stopped. Some
  of this may be subject to change in the future, though the design feels
  pretty good so far; we will have to see how well it works out in
  practice versus theory. (A rough sketch of this connection model
  follows this note.)
- Second, I started adding preliminary RTMP/x264 output plugin code.
  To speed things up, I might just make a direct raw->FFmpeg output to
  create a quick output plugin that we can start using for testing all
  the subsystems.
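
To make the connection model above concrete, here is a minimal sketch of one
encoder context fanning encoded data out to several connected outputs, with
NULL used as the "encoding stopped" signal. Every name in it (encoder_ctx,
enc_connection, encoder_connect, encoder_dispatch, encoded_packet) is a
hypothetical illustration of the design described above, not the actual
libobs interface.

/* Hypothetical sketch only -- not the real libobs API. */
#include <stdbool.h>
#include <stddef.h>

struct encoded_packet;  /* stand-in for an encoded audio/video packet */

typedef void (*new_packet_cb)(void *param, struct encoded_packet *pkt);

struct enc_connection {
        new_packet_cb callback;          /* set when an output connects */
        void *param;
        struct enc_connection *next;
};

struct encoder_ctx {
        struct enc_connection *connections;  /* connected outputs */
        size_t num_connections;
        bool encoding;
};

/* The first connection hooks the encoder up to raw output and starts it;
 * later connections simply join the existing stream of packets. */
static void encoder_connect(struct encoder_ctx *enc,
                struct enc_connection *conn)
{
        conn->next = enc->connections;
        enc->connections = conn;

        if (enc->num_connections++ == 0)
                enc->encoding = true;
}

/* Every connected output receives each encoded packet; passing NULL tells
 * the outputs that encoding has stopped. */
static void encoder_dispatch(struct encoder_ctx *enc,
                struct encoded_packet *pkt)
{
        for (struct enc_connection *c = enc->connections; c; c = c->next)
                c->callback(c->param, pkt);
}
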
/******************************************************************************
    Copyright (C) 2014 by Hugh Bailey <obs.jim@gmail.com>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#include <obs-module.h>
#include <obs-avc.h>

Implement RTMP module (still needs drop code)

- Implement the RTMP output module. This time around, we just use a
  simple FLV muxer and then write to the stream with RTMP_Write.
  Easy and effective.
- Fix the FLV muxer; the muxer now outputs proper FLV packets.
- Output API:
  * When using encoders, automatically interleave encoded packets
    before sending them to the output.
  * Pair encoders and have them automatically wait for each other to
    start, to ensure sync.
  * Change 'obs_output_signal_start_fail' to 'obs_output_signal_stop'
    because it was a bit confusing, and doing this makes a lot more
    sense for outputs that need to stop suddenly (disconnections, etc.).
- Encoder API:
  * Remove some unnecessary encoder functions from the actual API and
    make them internal. Most of the encoder functions are handled
    automatically by outputs anyway, so there's no real need to expose
    them and end up inadvertently confusing plugin writers.
  * Have audio encoders wait for the video encoder to get a frame, then
    start at the exact data point where the first video frame starts, to
    ensure the most accurate video/audio sync possible.
  * Add a required 'frame_size' callback for audio encoders that returns
    the number of audio frames the encoder expects per packet. This way,
    the libobs encoder API can handle the circular buffering internally
    for the encoder modules, so encoder writers don't have to do it
    themselves (see the sketch after this list).
- Fix a few bugs in the serializer interface. It was passing the wrong
  variable for the data in a few cases.
- If a source has video, make obs_source_update defer the actual update
  callback until the tick function is called, to prevent threading
  issues.
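
As a rough illustration of the 'frame_size' idea in the encoder API notes
above, the sketch below buffers incoming audio and hands the encoder
fixed-size chunks. The buffer type and function names here are made up for
illustration; this is not the libobs circular-buffer implementation.

/* Illustrative only -- hypothetical names, not the libobs implementation. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define MAX_BUFFERED_SAMPLES 16384

struct audio_accum {
        float samples[MAX_BUFFERED_SAMPLES];  /* mono samples, for brevity */
        size_t count;
};

/* Append raw audio as it arrives from the audio output. */
static bool accum_push(struct audio_accum *a, const float *data, size_t n)
{
        if (a->count + n > MAX_BUFFERED_SAMPLES)
                return false;                  /* illustration: just refuse */
        memcpy(&a->samples[a->count], data, n * sizeof(float));
        a->count += n;
        return true;
}

/* Feed the encoder exactly 'frame_size' samples at a time, the way the
 * frame_size callback lets libobs chunk audio for encoder modules. */
static size_t accum_encode(struct audio_accum *a, size_t frame_size,
                void (*encode_chunk)(const float *chunk, size_t frames))
{
        size_t chunks = 0;

        while (a->count >= frame_size) {
                encode_chunk(a->samples, frame_size);
                a->count -= frame_size;
                memmove(a->samples, &a->samples[frame_size],
                                a->count * sizeof(float));
                chunks++;
        }
        return chunks;
}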

#include <util/platform.h>
#include <util/circlebuf.h>
#include <util/dstr.h>
#include <util/threading.h>
#include <inttypes.h>

#include "librtmp/rtmp.h"
#include "librtmp/log.h"
#include "flv-mux.h"
#include "net-if.h"

#ifdef _WIN32
#include <Iphlpapi.h>
#else
#include <sys/ioctl.h>
#endif

#define do_log(level, format, ...) \
        blog(level, "[rtmp stream: '%s'] " format, \
                        obs_output_get_name(stream->output), ##__VA_ARGS__)

#define warn(format, ...)  do_log(LOG_WARNING, format, ##__VA_ARGS__)
#define info(format, ...)  do_log(LOG_INFO,    format, ##__VA_ARGS__)
#define debug(format, ...) do_log(LOG_DEBUG,   format, ##__VA_ARGS__)

#define OPT_DROP_THRESHOLD "drop_threshold_ms"
#define OPT_MAX_SHUTDOWN_TIME_SEC "max_shutdown_time_sec"
#define OPT_BIND_IP "bind_ip"

RTMP output: Implement frame drop code

A little bit of history about frame dropping: I did a large number of
experiments with frame dropping in old versions of OBS1, and it's not an
easy thing to deal with. I tried just about everything, from standard
i-frame delay, to large buffers, to dumping packets, to
super-unnecessarily-complex things that just ended up causing more problems
than they were worth.

When I did my experiments, I found that the most ideal frame drop system (in
terms of reducing the total amount of data that needed to be dropped) was in
the 0.4xx days, where I had a 3 second frame-drop buffer: I could calculate
the actual buffer size in bytes and then intelligently choose packets in
that buffer to trim it down to a specific size, minimizing the number of
p-frames and i-frames dropped and limiting the actual impact of dropped
frames on the stream. The downside was that it required too much extra
latency, and far too many people complained about it, so it was removed in
favor of the current system.

The current system, which I refer to simply as 'packet dumping', is the
next-best method in my experience when combined with the low keyframe
intervals most services use these days. Just dump the buffer when you reach
a threshold of buffering (which I prefer to measure in time rather than
size), then wait for a new i-frame. Simple, effective, and it reduces the
risk of consecutive buffering while still having fairly low impact on the
stream output thanks to the low keyframe interval of services. A sketch of
this threshold check follows below.

By the way, audio will not (and should never) be dropped, lest you end up
with sync issues (among other nasty, server-implementation-specific
problems).
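
As a rough illustration of the "dump on a time threshold, then wait for an
i-frame" approach, here is a minimal sketch. The struct and function names
are hypothetical, and the real drop code in this module also weighs packet
priorities; this only shows the time-based trigger and the keyframe wait.

/* Illustrative sketch of time-threshold packet dumping; all names here are
 * hypothetical stand-ins, not this module's actual drop code. */
#include <stdbool.h>
#include <stdint.h>

struct queued_packet {
        int64_t dts_usec;       /* decode timestamp in microseconds */
        bool is_video;
        bool is_keyframe;
};

struct drop_state {
        int64_t drop_threshold_usec;    /* e.g. 700 ms expressed in usec */
        bool waiting_for_keyframe;
};

/* Buffered duration = newest DTS minus oldest DTS still queued. */
static inline bool over_threshold(const struct drop_state *ds,
                int64_t oldest_dts_usec, int64_t newest_dts_usec)
{
        return (newest_dts_usec - oldest_dts_usec) >= ds->drop_threshold_usec;
}

/* Returns true if this packet should be discarded: audio is never dropped,
 * and video is dumped from the moment the threshold is crossed until the
 * next i-frame arrives. */
static bool should_drop(struct drop_state *ds, const struct queued_packet *pkt,
                int64_t oldest_dts_usec, int64_t newest_dts_usec)
{
        if (!pkt->is_video)
                return false;

        if (!ds->waiting_for_keyframe &&
            over_threshold(ds, oldest_dts_usec, newest_dts_usec))
                ds->waiting_for_keyframe = true;

        if (ds->waiting_for_keyframe) {
                if (pkt->is_keyframe) {
                        ds->waiting_for_keyframe = false;
                        return false;   /* resume cleanly at the keyframe */
                }
                return true;
        }

        return false;
}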

//#define TEST_FRAMEDROPS

Implement encoder interface (still preliminary)

- Implement the OBS encoder interface. It was previously incomplete, but it
  is now reaching some level of completion, though it should probably still
  be considered preliminary.
  I had originally implemented it so that encoders only have a 'reset'
  function to reset their parameters, but I felt that having both a 'start'
  and a 'stop' function would be useful.
  Each encoder is now assigned to a specific video/audio media output rather
  than implicitly assigned to the main obs video/audio contexts. This allows
  separate encoder contexts that aren't necessarily assigned to the main
  video/audio context (which is useful for things such as recording specific
  sources). Will probably have to do this for regular obs outputs as well.
  When creating an encoder, you must now explicitly state whether that
  encoder is an audio or a video encoder.
  Audio and video can optionally be converted automatically depending on
  what the encoder specifies.
  When something 'attaches' to an encoder, the first attachment starts the
  encoder, and the encoder automatically attaches to the media output
  context associated with it. Subsequent attachments won't have the same
  effect; they will just start receiving the same encoder data when the next
  keyframe plays (along with SEI, if any). When detaching from the encoder,
  the last detachment fully stops the encoder and detaches it from the media
  output context associated with it. (A rough sketch of this attach/detach
  behavior follows this list.)
  SEI must actually be exported separately; because new encoder attachments
  may not always be at the beginning of the stream, the first keyframe they
  get must have that SEI data in it. If the encoder has SEI data, it only
  needs to add one small function to query that SEI data, and then that data
  will be handled automatically by libobs for all subsequent encoder
  attachments.
- Implement the x264 encoder plugin; move the x264 files to a separate
  plugin to separate the necessary dependencies.
- Change the video/audio frame output structures to not use const
  qualifiers, to prevent issues with non-const function usage elsewhere.
  This was an issue when writing the x264 encoder, as the x264 encoder
  expects non-const frame data.
  Change stagesurf_map to return a non-const data type to prevent this as
  well.
- Change the full range parameter of the video scaler to be an enum rather
  than a boolean.
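
To make the attach/detach behavior above concrete, here is a minimal sketch
of the first-attachment-starts / last-detachment-stops rule. The names
(encoder_state, encoder_attach, encoder_detach) are hypothetical, and the
real libobs bookkeeping also handles pairing, keyframe alignment, and SEI,
which this omits.

/* Hypothetical illustration of first-attach-starts / last-detach-stops. */
#include <stdbool.h>
#include <stddef.h>

struct encoder_state {
        size_t refs;            /* number of attached outputs          */
        bool started;           /* encoder currently producing packets */
};

static void encoder_start(struct encoder_state *enc) { enc->started = true; }
static void encoder_stop(struct encoder_state *enc)  { enc->started = false; }

/* The first attachment starts the encoder and hooks it to its media output;
 * later attachments simply wait for the next keyframe. */
static void encoder_attach(struct encoder_state *enc)
{
        if (enc->refs++ == 0)
                encoder_start(enc);
}

/* The last detachment fully stops the encoder again. */
static void encoder_detach(struct encoder_state *enc)
{
        if (enc->refs > 0 && --enc->refs == 0)
                encoder_stop(enc);
}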

struct rtmp_stream {
        obs_output_t *output;
        pthread_mutex_t packets_mutex;
        struct circlebuf packets;
        bool sent_headers;

        volatile bool connecting;
        pthread_t connect_thread;

        volatile bool active;
        volatile bool disconnected;
        pthread_t send_thread;

        int max_shutdown_time_sec;

        os_sem_t *send_sem;
        os_event_t *stop_event;
        uint64_t stop_ts;

        struct dstr path, key;
        struct dstr username, password;
        struct dstr encoder_name;
        struct dstr bind_ip;

        /* frame drop variables */
        int64_t drop_threshold_usec;
        int64_t min_drop_dts_usec;
        int min_priority;

        int64_t last_dts_usec;

        uint64_t total_bytes_sent;
        int dropped_frames;

        RTMP rtmp;
};

static const char *rtmp_stream_getname(void *unused)
{
        UNUSED_PARAMETER(unused);
        return obs_module_text("RTMPStream");
}

static void log_rtmp(int level, const char *format, va_list args)
{
        if (level > RTMP_LOGWARNING)
                return;

        blogva(LOG_INFO, format, args);
}

static inline size_t num_buffered_packets(struct rtmp_stream *stream);

static inline void free_packets(struct rtmp_stream *stream)
{
        size_t num_packets;

        pthread_mutex_lock(&stream->packets_mutex);

        num_packets = num_buffered_packets(stream);
        if (num_packets)
                info("Freeing %d remaining packets", (int)num_packets);

        while (stream->packets.size) {
                struct encoder_packet packet;
                circlebuf_pop_front(&stream->packets, &packet, sizeof(packet));
                obs_free_encoder_packet(&packet);
        }
        pthread_mutex_unlock(&stream->packets_mutex);
}

static inline bool stopping(struct rtmp_stream *stream)
{
        return os_event_try(stream->stop_event) != EAGAIN;
}

static inline bool connecting(struct rtmp_stream *stream)
{
        return os_atomic_load_bool(&stream->connecting);
}

static inline bool active(struct rtmp_stream *stream)
{
        return os_atomic_load_bool(&stream->active);
}

static inline bool disconnected(struct rtmp_stream *stream)
{
        return os_atomic_load_bool(&stream->disconnected);
}

static void rtmp_stream_destroy(void *data)
{
        struct rtmp_stream *stream = data;

        if (stopping(stream) && !connecting(stream)) {
                pthread_join(stream->send_thread, NULL);

        } else if (connecting(stream) || active(stream)) {
                if (stream->connecting)
                        pthread_join(stream->connect_thread, NULL);

                stream->stop_ts = 0;
                os_event_signal(stream->stop_event);

                if (active(stream)) {
                        os_sem_post(stream->send_sem);
                        obs_output_end_data_capture(stream->output);
                        pthread_join(stream->send_thread, NULL);
                }
        }

        if (stream) {
                free_packets(stream);
                dstr_free(&stream->path);
                dstr_free(&stream->key);
                dstr_free(&stream->username);
                dstr_free(&stream->password);
                dstr_free(&stream->encoder_name);
                dstr_free(&stream->bind_ip);
                os_event_destroy(stream->stop_event);
                os_sem_destroy(stream->send_sem);
                pthread_mutex_destroy(&stream->packets_mutex);
                circlebuf_free(&stream->packets);
                bfree(stream);
        }
}

static void *rtmp_stream_create(obs_data_t *settings, obs_output_t *output)
{
        struct rtmp_stream *stream = bzalloc(sizeof(struct rtmp_stream));
        stream->output = output;
        pthread_mutex_init_value(&stream->packets_mutex);

        RTMP_Init(&stream->rtmp);
        RTMP_LogSetCallback(log_rtmp);
        RTMP_LogSetLevel(RTMP_LOGWARNING);

        if (pthread_mutex_init(&stream->packets_mutex, NULL) != 0)
                goto fail;
        if (os_event_init(&stream->stop_event, OS_EVENT_TYPE_MANUAL) != 0)
                goto fail;

        UNUSED_PARAMETER(settings);
        return stream;

fail:
        rtmp_stream_destroy(stream);
        return NULL;
}

static void rtmp_stream_stop(void *data, uint64_t ts)
{
        struct rtmp_stream *stream = data;

        if (stopping(stream))
                return;

        if (connecting(stream))
                pthread_join(stream->connect_thread, NULL);

        stream->stop_ts = ts / 1000ULL;
        os_event_signal(stream->stop_event);

        if (active(stream)) {
                if (stream->stop_ts == 0)
                        os_sem_post(stream->send_sem);
        }
}

static inline void set_rtmp_str(AVal *val, const char *str)
{
        bool valid = (str && *str);
        val->av_val = valid ? (char*)str : NULL;
        val->av_len = valid ? (int)strlen(str) : 0;
}

static inline void set_rtmp_dstr(AVal *val, struct dstr *str)
{
        bool valid = !dstr_is_empty(str);
        val->av_val = valid ? str->array : NULL;
        val->av_len = valid ? (int)str->len : 0;
}

static inline bool get_next_packet(struct rtmp_stream *stream,
                struct encoder_packet *packet)
{
        bool new_packet = false;

        pthread_mutex_lock(&stream->packets_mutex);
        if (stream->packets.size) {
                circlebuf_pop_front(&stream->packets, packet,
                                sizeof(struct encoder_packet));
                new_packet = true;
        }
        pthread_mutex_unlock(&stream->packets_mutex);

        return new_packet;
}
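
/* discard_recv_data() below drains whatever the server has sent back so
 * that replies do not accumulate in the socket's receive buffer while this
 * output only ever writes. */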
static bool discard_recv_data(struct rtmp_stream *stream, size_t size)
{
        RTMP *rtmp = &stream->rtmp;
        uint8_t buf[512];
#ifdef _WIN32
        int ret;
#else
        ssize_t ret;
#endif

        do {
                size_t bytes = size > 512 ? 512 : size;
                size -= bytes;

#ifdef _WIN32
                ret = recv(rtmp->m_sb.sb_socket, buf, (int)bytes, 0);
#else
                ret = recv(rtmp->m_sb.sb_socket, buf, bytes, 0);
#endif

                if (ret <= 0) {
#ifdef _WIN32
                        int error = WSAGetLastError();
#else
                        int error = errno;
#endif
                        if (ret < 0) {
                                do_log(LOG_ERROR, "recv error: %d (%d bytes)",
                                                error, (int)size);
                        }
                        return false;
                }
        } while (size > 0);

        return true;
}

static int send_packet(struct rtmp_stream *stream,
                struct encoder_packet *packet, bool is_header, size_t idx)
{
        uint8_t *data;
        size_t size;
        int recv_size = 0;
        int ret = 0;
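
        /* Check how much data the server has sent back (FIONREAD) and
         * discard it before writing, so the receive buffer never fills. */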
#ifdef _WIN32
        ret = ioctlsocket(stream->rtmp.m_sb.sb_socket, FIONREAD,
                        (u_long*)&recv_size);
#else
        ret = ioctl(stream->rtmp.m_sb.sb_socket, FIONREAD, &recv_size);
#endif

        if (ret >= 0 && recv_size > 0) {
                if (!discard_recv_data(stream, (size_t)recv_size))
                        return -1;
        }

        flv_packet_mux(packet, &data, &size, is_header);
#ifdef TEST_FRAMEDROPS
        os_sleep_ms(rand() % 40);
#endif
        ret = RTMP_Write(&stream->rtmp, (char*)data, (int)size, (int)idx);
        bfree(data);

        obs_free_encoder_packet(packet);

        stream->total_bytes_sent += size;
        return ret;
}

static inline bool send_headers(struct rtmp_stream *stream);
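
/* send_thread: waits on the send semaphore, pops queued packets, sends the
 * FLV headers on the first packet, and writes each packet with send_packet().
 * It exits when the stream is stopped (honoring stop_ts) or when a send
 * fails and the stream is marked disconnected. */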
static void *send_thread(void *data)
{
        struct rtmp_stream *stream = data;

        os_set_thread_name("rtmp-stream: send_thread");

        while (os_sem_wait(stream->send_sem) == 0) {
                struct encoder_packet packet;

                if (stopping(stream) && stream->stop_ts == 0) {
                        break;
                }

                if (!get_next_packet(stream, &packet))
                        continue;

                if (stopping(stream)) {
                        if (packet.sys_dts_usec >= (int64_t)stream->stop_ts) {
                                obs_free_encoder_packet(&packet);
                                break;
                        }
                }

                if (!stream->sent_headers) {
                        if (!send_headers(stream)) {
                                os_atomic_set_bool(&stream->disconnected, true);
                                break;
                        }
                }

                if (send_packet(stream, &packet, false, packet.track_idx) < 0) {
                        os_atomic_set_bool(&stream->disconnected, true);
                        break;
                }
        }

        if (disconnected(stream)) {
                info("Disconnected from %s", stream->path.array);
        } else {
                info("User stopped the stream");
        }

        RTMP_Close(&stream->rtmp);

        if (!stopping(stream)) {
                pthread_detach(stream->send_thread);
                obs_output_signal_stop(stream->output, OBS_OUTPUT_DISCONNECTED);
        } else {
                obs_output_end_data_capture(stream->output);
        }

        free_packets(stream);
        os_event_reset(stream->stop_event);
        os_atomic_set_bool(&stream->active, false);
        stream->sent_headers = false;
        return NULL;
}

static bool send_meta_data(struct rtmp_stream *stream, size_t idx, bool *next)
{
        uint8_t *meta_data;
        size_t meta_data_size;
        bool success = true;

        *next = flv_meta_data(stream->output, &meta_data,
                        &meta_data_size, false, idx);

        if (*next) {
                success = RTMP_Write(&stream->rtmp, (char*)meta_data,
                                (int)meta_data_size, (int)idx) >= 0;
                bfree(meta_data);
        }

        return success;
}

static bool send_audio_header(struct rtmp_stream *stream, size_t idx,
                bool *next)
{
        obs_output_t *context = stream->output;
        obs_encoder_t *aencoder = obs_output_get_audio_encoder(context, idx);
        uint8_t *header;

        struct encoder_packet packet = {
                .type = OBS_ENCODER_AUDIO,
                .timebase_den = 1
        };

        if (!aencoder) {
                *next = false;
                return true;
        }

        obs_encoder_get_extra_data(aencoder, &header, &packet.size);
        packet.data = bmemdup(header, packet.size);
        return send_packet(stream, &packet, true, idx) >= 0;
}

static bool send_video_header(struct rtmp_stream *stream)
{
        obs_output_t *context = stream->output;
        obs_encoder_t *vencoder = obs_output_get_video_encoder(context);
        uint8_t *header;
        size_t size;

        struct encoder_packet packet = {
                .type = OBS_ENCODER_VIDEO,
                .timebase_den = 1,
                .keyframe = true
        };

        obs_encoder_get_extra_data(vencoder, &header, &size);
        packet.size = obs_parse_avc_header(&packet.data, header, size);
        return send_packet(stream, &packet, true, 0) >= 0;
}

static inline bool send_headers(struct rtmp_stream *stream)
{
        stream->sent_headers = true;
        size_t i = 0;
        bool next = true;

        if (!send_audio_header(stream, i++, &next))
                return false;
        if (!send_video_header(stream))
                return false;

        while (next) {
                if (!send_audio_header(stream, i++, &next))
                        return false;
        }

        return true;
}

static inline bool reset_semaphore(struct rtmp_stream *stream)
{
        os_sem_destroy(stream->send_sem);
        return os_sem_init(&stream->send_sem, 0) == 0;
}

#ifdef _WIN32
#define socklen_t int
#endif

RTMP output: Implement frame drop code
A little bit of history about frame dropping:
I did a large number of experiments with frame dropping in old versions
of OBS1, and it's not an easy thing to deal with. I tried just about
everything from standard i-frame delay, to large buffers, to dumping
packets, to super-unnecessarily-complex things that just ended up
causing more problems than they were worth.
When I did my experiments, I found that the most ideal frame drop system
(in terms of reducing the amount of total data that needed to be
dropped) was in the 0.4xx days where I had a 3 second frame-drop buffer
where I could calculate the actual buffer size in bytes, and then
intelligently choose packets in that buffer to trim it down to a specific
size while minimizing the number of p-frames and i-frames dropped, and
preventing the actual impact of dropped frames on the stream. The
downside of it was that it required too much extra latency, and far too
many people complained about it, so it was removed in favor of the
current system.
The current system I refer to simply as 'packet dumping', which, when
combined with low keyframe intervals (like most services use these
days), is the next-best method from my experience. Just dump the buffer
when you reach a threshold of buffering (which I prefer to measure in
time rather than in size), then wait for a new i-frame (see the sketch
below). Simple,
effective, and reduces the risk of consecutive buffering, while still
having fairly low impact on the stream output due to the low keyframe
interval of services.
By the way, audio will not (and should not ever) be dropped, lest you
end up with syncing issues (among other nasty things) specific to server
implementation.
2014-04-12 04:34:15 -07:00
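The policy described above is what check_to_drop_frames() and drop_frames() implement further down in this file. The compact sketch below uses a hypothetical fake_packet type (not the real encoder_packet) to isolate the two decisions involved: detect that the queued packets span more stream time than the drop threshold, then discard everything except audio and keyframes so output resumes cleanly at the next i-frame. The threshold itself comes from the OPT_DROP_THRESHOLD setting, read in milliseconds and converted to microseconds in init_connect().

/* Compact illustration of the packet-dumping policy above; 'fake_packet'
 * and these helpers are hypothetical stand-ins, not the encoder_packet
 * handling done by the real drop_frames()/check_to_drop_frames(). */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct fake_packet {
	bool    is_audio;
	bool    is_keyframe;
	int64_t dts_usec;
};

/* decision 1: the queue is "too far behind" when the time span it covers
 * exceeds the configured drop threshold */
static bool should_dump(int64_t newest_dts_usec, int64_t oldest_dts_usec,
		int64_t drop_threshold_usec)
{
	return (newest_dts_usec - oldest_dts_usec) > drop_threshold_usec;
}

/* decision 2: when dumping, keep audio and keyframes, discard the rest;
 * returns the new queue length after compacting in place */
static size_t dump_queue(struct fake_packet *queue, size_t count)
{
	size_t kept = 0;

	for (size_t i = 0; i < count; i++) {
		if (queue[i].is_audio || queue[i].is_keyframe)
			queue[kept++] = queue[i];
	}

	return kept;
}

int main(void)
{
	struct fake_packet queue[4] = {
		{ .is_audio = true,    .dts_usec = 0     },
		{ .is_keyframe = true, .dts_usec = 33000 },
		{ .dts_usec = 66000 },                    /* droppable p-frame */
		{ .dts_usec = 99000 },                    /* droppable p-frame */
	};
	size_t len = 4;

	/* 99 ms of queued data against a 50 ms threshold -> dump */
	if (should_dump(queue[len - 1].dts_usec, queue[0].dts_usec, 50000))
		len = dump_queue(queue, len);             /* len becomes 2 */
	return 0;
}

Note the asymmetry the commit message insists on: audio is never dropped, only non-keyframe video, which is also how add_video_packet() and drop_frames() below behave.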
|
|
|
#define MIN_SENDBUF_SIZE 65535
|
|
|
|
|
|
|
|
static void adjust_sndbuf_size(struct rtmp_stream *stream, int new_size)
|
2014-04-01 11:55:18 -07:00
|
|
|
{
|
2014-04-12 04:34:15 -07:00
|
|
|
int cur_sendbuf_size = new_size;
|
|
|
|
socklen_t int_size = sizeof(int);
|
2014-04-01 11:55:18 -07:00
|
|
|
|
|
|
|
getsockopt(stream->rtmp.m_sb.sb_socket, SOL_SOCKET, SO_SNDBUF,
|
2014-04-12 04:34:15 -07:00
|
|
|
(char*)&cur_sendbuf_size, &int_size);
|
2014-04-01 11:55:18 -07:00
|
|
|
|
2014-04-12 04:34:15 -07:00
|
|
|
if (cur_sendbuf_size < new_size) {
|
|
|
|
cur_sendbuf_size = new_size;
|
2014-04-01 11:55:18 -07:00
|
|
|
setsockopt(stream->rtmp.m_sb.sb_socket, SOL_SOCKET, SO_SNDBUF,
|
2014-04-12 04:34:15 -07:00
|
|
|
(const char*)&cur_sendbuf_size, int_size);
|
2014-04-01 11:55:18 -07:00
|
|
|
}
|
2014-04-12 04:34:15 -07:00
|
|
|
}
|
|
|
|
|
|
|
|
static int init_send(struct rtmp_stream *stream)
|
|
|
|
{
|
|
|
|
int ret;
|
2015-01-28 20:45:58 -08:00
|
|
|
size_t idx = 0;
|
2016-04-26 20:14:31 -07:00
|
|
|
bool next = true;
|
2014-04-12 04:34:15 -07:00
|
|
|
|
2014-07-01 13:26:20 -07:00
|
|
|
#if defined(_WIN32)
|
2014-04-12 04:34:15 -07:00
|
|
|
adjust_sndbuf_size(stream, MIN_SENDBUF_SIZE);
|
|
|
|
#endif
|
2014-04-01 11:55:18 -07:00
|
|
|
|
|
|
|
reset_semaphore(stream);
|
|
|
|
|
|
|
|
ret = pthread_create(&stream->send_thread, NULL, send_thread, stream);
|
2014-04-02 00:42:12 -07:00
|
|
|
if (ret != 0) {
|
|
|
|
RTMP_Close(&stream->rtmp);
|
2014-07-02 00:20:50 -07:00
|
|
|
warn("Failed to create send thread");
|
2014-05-12 15:30:36 -07:00
|
|
|
return OBS_OUTPUT_ERROR;
|
2014-04-02 00:42:12 -07:00
|
|
|
}
|
2014-04-01 11:55:18 -07:00
|
|
|
|
2015-11-18 07:48:27 -08:00
|
|
|
os_atomic_set_bool(&stream->active, true);
|
2016-04-26 20:14:31 -07:00
|
|
|
while (next) {
|
|
|
|
if (!send_meta_data(stream, idx++, &next)) {
|
|
|
|
warn("Disconnected while attempting to connect to "
|
|
|
|
"server.");
|
|
|
|
return OBS_OUTPUT_DISCONNECTED;
|
|
|
|
}
|
|
|
|
}
|
2014-04-01 11:55:18 -07:00
|
|
|
obs_output_begin_data_capture(stream->output, 0);
|
2014-04-07 22:00:10 -07:00
|
|
|
|
2014-04-02 00:42:12 -07:00
|
|
|
return OBS_OUTPUT_SUCCESS;
|
2014-04-01 11:55:18 -07:00
|
|
|
}
|
|
|
|
|
2015-11-11 02:48:33 -08:00
|
|
|
#ifdef _WIN32
|
|
|
|
static void win32_log_interface_type(struct rtmp_stream *stream)
|
|
|
|
{
|
|
|
|
RTMP *rtmp = &stream->rtmp;
|
|
|
|
MIB_IPFORWARDROW route;
|
|
|
|
uint32_t dest_addr, source_addr;
|
|
|
|
char hostname[256];
|
|
|
|
HOSTENT *h;
|
|
|
|
|
|
|
|
if (rtmp->Link.hostname.av_len >= sizeof(hostname) - 1)
|
|
|
|
return;
|
|
|
|
|
|
|
|
strncpy(hostname, rtmp->Link.hostname.av_val, sizeof(hostname));
|
|
|
|
hostname[rtmp->Link.hostname.av_len] = 0;
|
|
|
|
|
|
|
|
h = gethostbyname(hostname);
|
|
|
|
if (!h)
|
|
|
|
return;
|
|
|
|
|
|
|
|
dest_addr = *(uint32_t*)h->h_addr_list[0];
|
|
|
|
|
|
|
|
if (rtmp->m_bindIP.addrLen == 0)
|
|
|
|
source_addr = 0;
|
2016-04-13 02:10:06 +02:00
|
|
|
else if (rtmp->m_bindIP.addr.ss_family == AF_INET)
|
2015-11-11 02:48:33 -08:00
|
|
|
source_addr = (*(struct sockaddr_in*)&rtmp->m_bindIP)
|
|
|
|
.sin_addr.S_un.S_addr;
|
|
|
|
else
|
|
|
|
return;
|
|
|
|
|
|
|
|
if (!GetBestRoute(dest_addr, source_addr, &route)) {
|
|
|
|
MIB_IFROW row;
|
|
|
|
memset(&row, 0, sizeof(row));
|
|
|
|
row.dwIndex = route.dwForwardIfIndex;
|
|
|
|
|
|
|
|
if (!GetIfEntry(&row)) {
|
|
|
|
uint32_t speed = row.dwSpeed / 1000000;
|
|
|
|
char *type;
|
|
|
|
struct dstr other = {0};
|
|
|
|
|
|
|
|
if (row.dwType == IF_TYPE_ETHERNET_CSMACD) {
|
|
|
|
type = "ethernet";
|
|
|
|
} else if (row.dwType == IF_TYPE_IEEE80211) {
|
|
|
|
type = "802.11";
|
|
|
|
} else {
|
|
|
|
dstr_printf(&other, "type %lu", row.dwType);
|
|
|
|
type = other.array;
|
|
|
|
}
|
|
|
|
|
|
|
|
info("Interface: %s (%s, %lu mbps)", row.bDescr, type,
|
|
|
|
speed);
|
|
|
|
|
|
|
|
dstr_free(&other);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2014-04-01 11:55:18 -07:00
|
|
|
static int try_connect(struct rtmp_stream *stream)
|
|
|
|
{
|
2014-08-05 13:38:24 -07:00
|
|
|
if (dstr_is_empty(&stream->path)) {
|
2014-07-02 00:20:50 -07:00
|
|
|
warn("URL is empty");
|
2014-05-12 15:30:36 -07:00
|
|
|
return OBS_OUTPUT_BAD_PATH;
|
|
|
|
}
|
|
|
|
|
2014-07-02 00:20:50 -07:00
|
|
|
info("Connecting to RTMP URL %s...", stream->path.array);
|
2014-04-24 21:11:46 -07:00
|
|
|
|
2015-07-11 13:53:07 +09:00
|
|
|
memset(&stream->rtmp.Link, 0, sizeof(stream->rtmp.Link));
|
2015-01-28 20:45:58 -08:00
|
|
|
if (!RTMP_SetupURL(&stream->rtmp, stream->path.array))
|
2014-04-01 11:55:18 -07:00
|
|
|
return OBS_OUTPUT_BAD_PATH;
|
|
|
|
|
2014-04-07 22:00:10 -07:00
|
|
|
RTMP_EnableWrite(&stream->rtmp);
|
|
|
|
|
2015-08-14 17:47:43 -07:00
|
|
|
dstr_copy(&stream->encoder_name, "FMLE/3.0 (compatible; obs-studio/");
|
|
|
|
|
|
|
|
#ifdef HAVE_OBSCONFIG_H
|
|
|
|
dstr_cat(&stream->encoder_name, OBS_VERSION);
|
|
|
|
#else
|
|
|
|
dstr_catf(&stream->encoder_name, "%d.%d.%d",
|
|
|
|
LIBOBS_API_MAJOR_VER,
|
|
|
|
LIBOBS_API_MINOR_VER,
|
|
|
|
LIBOBS_API_PATCH_VER);
|
|
|
|
#endif
|
|
|
|
|
|
|
|
dstr_cat(&stream->encoder_name, "; FMSc/1.0)");
|
|
|
|
|
2014-04-01 11:55:18 -07:00
|
|
|
set_rtmp_dstr(&stream->rtmp.Link.pubUser, &stream->username);
|
|
|
|
set_rtmp_dstr(&stream->rtmp.Link.pubPasswd, &stream->password);
|
2015-08-14 17:47:43 -07:00
|
|
|
set_rtmp_dstr(&stream->rtmp.Link.flashVer, &stream->encoder_name);
|
2014-04-01 11:55:18 -07:00
|
|
|
stream->rtmp.Link.swfUrl = stream->rtmp.Link.tcUrl;
|
|
|
|
|
2016-07-29 08:30:30 -07:00
|
|
|
if (dstr_is_empty(&stream->bind_ip) ||
|
|
|
|
dstr_cmp(&stream->bind_ip, "default") == 0) {
|
|
|
|
memset(&stream->rtmp.m_bindIP, 0, sizeof(stream->rtmp.m_bindIP));
|
|
|
|
} else {
|
|
|
|
bool success = netif_str_to_addr(&stream->rtmp.m_bindIP.addr,
|
|
|
|
&stream->rtmp.m_bindIP.addrLen,
|
|
|
|
stream->bind_ip.array);
|
|
|
|
if (success)
|
|
|
|
info("Binding to IP");
|
|
|
|
}
|
|
|
|
|
2015-01-28 20:45:58 -08:00
|
|
|
RTMP_AddStream(&stream->rtmp, stream->key.array);
|
|
|
|
|
|
|
|
for (size_t idx = 1;; idx++) {
|
|
|
|
obs_encoder_t *encoder = obs_output_get_audio_encoder(
|
|
|
|
stream->output, idx);
|
|
|
|
const char *encoder_name;
|
|
|
|
|
|
|
|
if (!encoder)
|
|
|
|
break;
|
|
|
|
|
|
|
|
encoder_name = obs_encoder_get_name(encoder);
|
|
|
|
RTMP_AddStream(&stream->rtmp, encoder_name);
|
|
|
|
}
|
|
|
|
|
2014-04-01 11:55:18 -07:00
|
|
|
stream->rtmp.m_outChunkSize = 4096;
|
|
|
|
stream->rtmp.m_bSendChunkSizeInfo = true;
|
|
|
|
stream->rtmp.m_bUseNagle = true;
|
|
|
|
|
2015-11-11 02:48:33 -08:00
|
|
|
#ifdef _WIN32
|
|
|
|
win32_log_interface_type(stream);
|
|
|
|
#endif
|
|
|
|
|
2014-04-01 11:55:18 -07:00
|
|
|
if (!RTMP_Connect(&stream->rtmp, NULL))
|
|
|
|
return OBS_OUTPUT_CONNECT_FAILED;
|
|
|
|
if (!RTMP_ConnectStream(&stream->rtmp, 0))
|
|
|
|
return OBS_OUTPUT_INVALID_STREAM;
|
2014-04-24 21:11:46 -07:00
|
|
|
|
2014-07-02 00:20:50 -07:00
|
|
|
info("Connection to %s successful", stream->path.array);
|
2014-04-01 11:55:18 -07:00
|
|
|
|
2014-04-02 00:42:12 -07:00
|
|
|
return init_send(stream);
|
2014-04-01 11:55:18 -07:00
|
|
|
}
|
|
|
|
|
2015-11-02 15:52:05 -08:00
|
|
|
static bool init_connect(struct rtmp_stream *stream)
|
2015-11-01 15:15:20 -08:00
|
|
|
{
|
2015-11-02 15:52:05 -08:00
|
|
|
obs_service_t *service;
|
2015-11-01 15:15:20 -08:00
|
|
|
obs_data_t *settings;
|
2016-07-29 08:30:30 -07:00
|
|
|
const char *bind_ip;
|
2015-11-01 15:15:20 -08:00
|
|
|
|
2015-11-02 18:21:31 -08:00
|
|
|
if (stopping(stream))
|
2015-11-18 13:59:13 -08:00
|
|
|
pthread_join(stream->send_thread, NULL);
|
2015-11-01 15:15:20 -08:00
|
|
|
|
2015-11-17 07:51:46 -08:00
|
|
|
free_packets(stream);
|
|
|
|
|
2015-11-02 15:52:05 -08:00
|
|
|
service = obs_output_get_service(stream->output);
|
|
|
|
if (!service)
|
|
|
|
return false;
|
|
|
|
|
2015-11-03 14:09:48 -08:00
|
|
|
os_atomic_set_bool(&stream->disconnected, false);
|
2015-11-01 15:15:20 -08:00
|
|
|
stream->total_bytes_sent = 0;
|
|
|
|
stream->dropped_frames = 0;
|
|
|
|
stream->min_drop_dts_usec = 0;
|
|
|
|
stream->min_priority = 0;
|
|
|
|
|
|
|
|
settings = obs_output_get_settings(stream->output);
|
|
|
|
dstr_copy(&stream->path, obs_service_get_url(service));
|
|
|
|
dstr_copy(&stream->key, obs_service_get_key(service));
|
|
|
|
dstr_copy(&stream->username, obs_service_get_username(service));
|
|
|
|
dstr_copy(&stream->password, obs_service_get_password(service));
|
2016-03-24 13:42:33 -07:00
|
|
|
dstr_depad(&stream->path);
|
|
|
|
dstr_depad(&stream->key);
|
2015-11-01 15:15:20 -08:00
|
|
|
stream->drop_threshold_usec =
|
|
|
|
(int64_t)obs_data_get_int(settings, OPT_DROP_THRESHOLD) * 1000;
|
|
|
|
stream->max_shutdown_time_sec =
|
|
|
|
(int)obs_data_get_int(settings, OPT_MAX_SHUTDOWN_TIME_SEC);
|
2016-07-29 08:30:30 -07:00
|
|
|
|
|
|
|
bind_ip = obs_data_get_string(settings, OPT_BIND_IP);
|
|
|
|
dstr_copy(&stream->bind_ip, bind_ip);
|
|
|
|
|
2015-11-01 15:15:20 -08:00
|
|
|
obs_data_release(settings);
|
2015-11-02 15:52:05 -08:00
|
|
|
return true;
|
2015-11-01 15:15:20 -08:00
|
|
|
}
|
|
|
|
|
2014-04-01 11:55:18 -07:00
|
|
|
static void *connect_thread(void *data)
|
|
|
|
{
|
2014-04-02 00:42:12 -07:00
|
|
|
struct rtmp_stream *stream = data;
|
2015-11-01 15:15:20 -08:00
|
|
|
int ret;
|
|
|
|
|
2015-11-02 13:57:22 -08:00
|
|
|
os_set_thread_name("rtmp-stream: connect_thread");
|
|
|
|
|
2015-11-02 15:52:05 -08:00
|
|
|
if (!init_connect(stream)) {
|
|
|
|
obs_output_signal_stop(stream->output, OBS_OUTPUT_BAD_PATH);
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2015-11-01 15:15:20 -08:00
|
|
|
ret = try_connect(stream);
|
2014-04-02 00:42:12 -07:00
|
|
|
|
2014-04-24 21:11:46 -07:00
|
|
|
if (ret != OBS_OUTPUT_SUCCESS) {
|
2014-04-07 22:00:10 -07:00
|
|
|
obs_output_signal_stop(stream->output, ret);
|
2014-07-02 00:20:50 -07:00
|
|
|
info("Connection to %s failed: %d", stream->path.array, ret);
|
2014-04-24 21:11:46 -07:00
|
|
|
}
|
2014-04-02 00:42:12 -07:00
|
|
|
|
2015-11-02 18:21:31 -08:00
|
|
|
if (!stopping(stream))
|
2014-04-02 00:42:12 -07:00
|
|
|
pthread_detach(stream->connect_thread);
|
|
|
|
|
2015-11-03 14:09:48 -08:00
|
|
|
os_atomic_set_bool(&stream->connecting, false);
|
2014-04-01 11:55:18 -07:00
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
static bool rtmp_stream_start(void *data)
|
|
|
|
{
|
|
|
|
struct rtmp_stream *stream = data;
|
2015-11-01 14:57:55 -08:00
|
|
|
|
2014-04-01 11:55:18 -07:00
|
|
|
if (!obs_output_can_begin_data_capture(stream->output, 0))
|
|
|
|
return false;
|
2014-04-07 22:00:10 -07:00
|
|
|
if (!obs_output_initialize_encoders(stream->output, 0))
|
|
|
|
return false;
|
2014-04-01 11:55:18 -07:00
|
|
|
|
2015-11-03 14:09:48 -08:00
|
|
|
os_atomic_set_bool(&stream->connecting, true);
|
2014-04-01 11:55:18 -07:00
|
|
|
return pthread_create(&stream->connect_thread, NULL, connect_thread,
|
2014-04-07 22:00:10 -07:00
|
|
|
stream) == 0;
|
2013-11-13 06:24:20 -07:00
|
|
|
}
|
|
|
|
|
2014-04-12 04:34:15 -07:00
|
|
|
static inline bool add_packet(struct rtmp_stream *stream,
|
|
|
|
struct encoder_packet *packet)
|
|
|
|
{
|
|
|
|
circlebuf_push_back(&stream->packets, packet,
|
|
|
|
sizeof(struct encoder_packet));
|
|
|
|
stream->last_dts_usec = packet->dts_usec;
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline size_t num_buffered_packets(struct rtmp_stream *stream)
|
|
|
|
{
|
|
|
|
return stream->packets.size / sizeof(struct encoder_packet);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void drop_frames(struct rtmp_stream *stream)
|
|
|
|
{
|
|
|
|
struct circlebuf new_buf = {0};
|
|
|
|
int drop_priority = 0;
|
|
|
|
uint64_t last_drop_dts_usec = 0;
|
2014-07-06 17:28:01 -07:00
|
|
|
int num_frames_dropped = 0;
|
2014-04-12 04:34:15 -07:00
|
|
|
|
2014-07-02 00:20:50 -07:00
|
|
|
debug("Previous packet count: %d", (int)num_buffered_packets(stream));
|
2014-04-12 04:34:15 -07:00
|
|
|
|
|
|
|
circlebuf_reserve(&new_buf, sizeof(struct encoder_packet) * 8);
|
|
|
|
|
|
|
|
while (stream->packets.size) {
|
|
|
|
struct encoder_packet packet;
|
|
|
|
circlebuf_pop_front(&stream->packets, &packet, sizeof(packet));
|
|
|
|
|
|
|
|
last_drop_dts_usec = packet.dts_usec;
|
|
|
|
|
2015-04-08 05:55:46 -07:00
|
|
|
/* do not drop audio data or video keyframes */
|
|
|
|
if (packet.type == OBS_ENCODER_AUDIO ||
|
|
|
|
packet.drop_priority == OBS_NAL_PRIORITY_HIGHEST) {
|
2014-04-12 04:34:15 -07:00
|
|
|
circlebuf_push_back(&new_buf, &packet, sizeof(packet));
|
|
|
|
|
|
|
|
} else {
|
|
|
|
if (drop_priority < packet.drop_priority)
|
|
|
|
drop_priority = packet.drop_priority;
|
|
|
|
|
2014-07-06 17:28:01 -07:00
|
|
|
num_frames_dropped++;
|
2014-04-12 04:34:15 -07:00
|
|
|
obs_free_encoder_packet(&packet);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
circlebuf_free(&stream->packets);
|
|
|
|
stream->packets = new_buf;
|
|
|
|
stream->min_priority = drop_priority;
|
|
|
|
stream->min_drop_dts_usec = last_drop_dts_usec;
|
|
|
|
|
2014-07-06 17:28:01 -07:00
|
|
|
stream->dropped_frames += num_frames_dropped;
|
2014-07-02 00:20:50 -07:00
|
|
|
debug("New packet count: %d", (int)num_buffered_packets(stream));
|
2014-04-12 04:34:15 -07:00
|
|
|
}
|
|
|
|
|
|
|
|
static void check_to_drop_frames(struct rtmp_stream *stream)
|
|
|
|
{
|
|
|
|
struct encoder_packet first;
|
|
|
|
int64_t buffer_duration_usec;
|
|
|
|
|
|
|
|
if (num_buffered_packets(stream) < 5)
|
|
|
|
return;
|
|
|
|
|
|
|
|
circlebuf_peek_front(&stream->packets, &first, sizeof(first));
|
|
|
|
|
|
|
|
/* do not drop frames if frames were just dropped within this time */
|
|
|
|
if (first.dts_usec < stream->min_drop_dts_usec)
|
|
|
|
return;
|
|
|
|
|
|
|
|
/* if the amount of time stored in the buffered packets waiting to be
|
|
|
|
* sent is higher than threshold, drop frames */
|
|
|
|
buffer_duration_usec = stream->last_dts_usec - first.dts_usec;
|
2014-07-06 17:28:01 -07:00
|
|
|
|
2014-04-12 04:34:15 -07:00
|
|
|
if (buffer_duration_usec > stream->drop_threshold_usec) {
|
|
|
|
drop_frames(stream);
|
2014-07-02 00:20:50 -07:00
|
|
|
debug("dropping %" PRId64 " worth of frames",
|
2014-04-12 04:34:15 -07:00
|
|
|
buffer_duration_usec);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static bool add_video_packet(struct rtmp_stream *stream,
|
|
|
|
struct encoder_packet *packet)
|
|
|
|
{
|
|
|
|
check_to_drop_frames(stream);
|
|
|
|
|
|
|
|
/* if currently dropping frames, drop packets until it reaches the
|
|
|
|
* desired priority */
|
2014-07-06 17:28:01 -07:00
|
|
|
if (packet->priority < stream->min_priority) {
|
|
|
|
stream->dropped_frames++;
|
2014-04-12 04:34:15 -07:00
|
|
|
return false;
|
2014-07-06 17:28:01 -07:00
|
|
|
} else {
|
2014-04-12 04:34:15 -07:00
|
|
|
stream->min_priority = 0;
|
2014-07-06 17:28:01 -07:00
|
|
|
}
|
2014-04-12 04:34:15 -07:00
|
|
|
|
|
|
|

        return add_packet(stream, packet);
}

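/* Encoded-packet callback (wired up as .encoded_packet in rtmp_output_info
 * below): parses or duplicates the incoming packet, queues it under the
 * packets mutex, and wakes the sender via send_sem. */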
static void rtmp_stream_data(void *data, struct encoder_packet *packet)
{
        struct rtmp_stream *stream = data;
        struct encoder_packet new_packet;
        bool added_packet = false;

        if (disconnected(stream) || !active(stream))
                return;

        /* video packets are run through the AVC parser; everything else is
         * copied as-is */
        if (packet->type == OBS_ENCODER_VIDEO)
                obs_parse_avc_packet(&new_packet, packet);
        else
                obs_duplicate_encoder_packet(&new_packet, packet);

        pthread_mutex_lock(&stream->packets_mutex);
        /* re-check under the lock in case the stream was flagged as
         * disconnected in the meantime */
        if (!disconnected(stream)) {
                added_packet = (packet->type == OBS_ENCODER_VIDEO) ?
                                add_video_packet(stream, &new_packet) :
                                add_packet(stream, &new_packet);
        }
        pthread_mutex_unlock(&stream->packets_mutex);

        /* wake the send thread if the packet was queued; otherwise free the
         * copy made above */
        if (added_packet)
                os_sem_post(stream->send_sem);
        else
                obs_free_encoder_packet(&new_packet);
}

static void rtmp_stream_defaults(obs_data_t *defaults)
{
        obs_data_set_default_int(defaults, OPT_DROP_THRESHOLD, 600);
        obs_data_set_default_int(defaults, OPT_MAX_SHUTDOWN_TIME_SEC, 5);
        obs_data_set_default_string(defaults, OPT_BIND_IP, "default");
}
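
These defaults can be overridden per output through the usual obs_data settings path. A rough sketch of what a front end might do (the 1500 value and the output name are arbitrary, the OPT_* macros stand in for whatever key strings they expand to earlier in this file, and the exact obs_output_create() signature has varied across libobs versions):

        obs_data_t *settings = obs_data_create();
        obs_data_set_int(settings, OPT_DROP_THRESHOLD, 1500);
        obs_data_set_string(settings, OPT_BIND_IP, "default");

        obs_output_t *output = obs_output_create("rtmp_output", "rtmp stream",
                        settings, NULL);
        obs_data_release(settings);
        /* ...attach encoders and a service, then obs_output_start(output)... */
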
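/* Builds the user-facing properties: the frame-drop threshold plus a list of
 * local addresses the connection may be bound to. */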
static obs_properties_t *rtmp_stream_properties(void *unused)
{
        UNUSED_PARAMETER(unused);

        obs_properties_t *props = obs_properties_create();
        struct netif_saddr_data addrs = {0};
        obs_property_t *p;

        obs_properties_add_int(props, OPT_DROP_THRESHOLD,
                        obs_module_text("RTMPStream.DropThreshold"),
                        200, 10000, 100);

        p = obs_properties_add_list(props, OPT_BIND_IP,
                        obs_module_text("RTMPStream.BindIP"),
                        OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_STRING);

        obs_property_list_add_string(p, obs_module_text("Default"), "default");

        /* offer every local interface address as a possible bind address */
        netif_get_addrs(&addrs);
        for (size_t i = 0; i < addrs.addrs.num; i++) {
                struct netif_saddr_item item = addrs.addrs.array[i];
                obs_property_list_add_string(p, item.name, item.addr);
        }
        netif_saddr_data_free(&addrs);

        return props;
}

static uint64_t rtmp_stream_total_bytes_sent(void *data)
{
        struct rtmp_stream *stream = data;
        return stream->total_bytes_sent;
}

static int rtmp_stream_dropped_frames(void *data)
{
        struct rtmp_stream *stream = data;
        return stream->dropped_frames;
}
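
These two getters back the generic per-output statistics queries. A hypothetical polling snippet from a front end, assuming the usual libobs accessors and logging are available:

        static void log_stream_stats(obs_output_t *output)
        {
                uint64_t bytes   = obs_output_get_total_bytes(output);
                int      dropped = obs_output_get_frames_dropped(output);

                blog(LOG_INFO, "sent %llu bytes, dropped %d frames",
                                (unsigned long long)bytes, dropped);
        }
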
struct obs_output_info rtmp_output_info = {
        .id                 = "rtmp_output",
        .flags              = OBS_OUTPUT_AV |
                              OBS_OUTPUT_ENCODED |
                              OBS_OUTPUT_SERVICE |
                              OBS_OUTPUT_MULTI_TRACK,
        .get_name           = rtmp_stream_getname,
        .create             = rtmp_stream_create,
        .destroy            = rtmp_stream_destroy,
        .start              = rtmp_stream_start,
        .stop               = rtmp_stream_stop,
        .encoded_packet     = rtmp_stream_data,
        .get_defaults       = rtmp_stream_defaults,
        .get_properties     = rtmp_stream_properties,
        .get_total_bytes    = rtmp_stream_total_bytes_sent,
        .get_dropped_frames = rtmp_stream_dropped_frames
};
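
For completeness, an output info structure like this only takes effect once it is registered with libobs. A minimal sketch of the usual pattern from a plugin's module entry point (the actual registration lives in the plugin's main source file, not here):

        #include <obs-module.h>

        OBS_DECLARE_MODULE()

        extern struct obs_output_info rtmp_output_info;

        bool obs_module_load(void)
        {
                obs_register_output(&rtmp_output_info);
                return true;
        }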