Hello,
I have a video/audio FLV stream coming in on a TCP port. I need to read the input stream, edit each video frame and audio sample, and re-send the edited audio and video to another RTMP server. Video and audio must stay in sync.
I'm writing a C++ application with the GStreamer library, using the following pipeline:
tcpclientsrc host=192.168.16.10 port=5000 ! \
flvdemux name=demux \
flvmux name=mux \
demux.audio ! queue ! appsink name=mysinkaudio \
demux.video ! h264parse ! avdec_h264 ! queue ! appsink name=mysink \
appsrc name=mysrc format=3 is-live=true ! nvh264enc ! h264parse ! queue ! mux.video \
appsrc name=mysrcaudio format=3 ! queue ! mux.audio \
mux. ! rtmpsink location=rtmp://localhost/show/stream sync=false async=false
This is my callback for each received video sample:
/* The appsink has received a buffer */
GstFlowReturn EditableStreamCapture::new_sample_video (GstElement *sink, EditableStreamCapture *data)
{
  auto start = std::chrono::high_resolution_clock::now ();

  GstSample *sample = nullptr;
  GstFlowReturn ret = GST_FLOW_ERROR;

  /* Retrieve the sample from the appsink */
  g_signal_emit_by_name (sink, "pull-sample", &sample);
  if (sample) {
    if (!data->is_enough_data) {
      /* Forward the sample to the video appsrc */
      g_signal_emit_by_name (data->app_src_video, "push-sample", sample, &ret);
    }
    gst_sample_unref (sample);
    ret = GST_FLOW_OK;
  }

  auto stop = std::chrono::high_resolution_clock::now ();
  auto duration = std::chrono::duration_cast<std::chrono::milliseconds> (stop - start);
  qDebug () << "[EditableStreamCapture] Duration:" << duration.count ();

  return ret;
}
My audio callback is this:
GstFlowReturn EditableStreamCapture::new_sample_audio (GstElement *sink, EditableStreamCapture *data)
{
  GstSample *sample = nullptr;
  GstFlowReturn ret = GST_FLOW_ERROR;

  /* Retrieve the sample from the appsink */
  g_signal_emit_by_name (sink, "pull-sample", &sample);
  if (sample) {
    if (!data->is_enough_data) {
      /* Forward the sample to the audio appsrc */
      g_signal_emit_by_name (data->app_src_audio, "push-sample", sample, &ret);
    }
    gst_sample_unref (sample);
    return GST_FLOW_OK;
  }
  return GST_FLOW_ERROR;
}
My problem is that the output stream is choppy and lags badly, while the input is very smooth. I'm currently using nvh264enc for H.264 encoding on an NVIDIA Quadro P600 Mobile.
Is there something wrong with my pipeline? The input video is 1920x1080 at 25 fps. The application runs on an 8th-gen Intel i7 with 12 cores. I tried removing the audio, but the problem persists. Thank you :)