From williamtroup at gmail.com Fri Jul 1 14:38:20 2016 From: williamtroup at gmail.com (William Troup) Date: Fri, 1 Jul 2016 13:38:20 +0100 Subject: [Libav-user] movflags > faststart Message-ID: I'm trying to use the following code to set faststart: Avutil.av_dict_set(&headerOptions, "movflags", "+faststart", 0); This is passed through as follows: int writeHeaderResult = Avformat.avformat_write_header(formatContext, &headerOptions); The file writes correctly, but it's an invalid mp4 that will not play. I'm totally confused and have been searching for days for an answer. The DLLs I'm using are: avcodec-56.dll avdevice-56.dll avfilter-5.dll avformat-56.dll avutil-54.dll postproc-53.dll swresample-1.dll swscale-3.dll -------------- next part -------------- An HTML attachment was scrubbed... URL:
From cehoyos at ag.or.at Fri Jul 1 14:48:03 2016 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Fri, 1 Jul 2016 12:48:03 +0000 (UTC) Subject: [Libav-user] movflags > faststart References: Message-ID: William Troup writes: > avformat-56.dll This looks outdated. Carl Eugen
From williamtroup at gmail.com Fri Jul 1 15:08:27 2016 From: williamtroup at gmail.com (William Troup) Date: Fri, 1 Jul 2016 14:08:27 +0100 Subject: [Libav-user] movflags > faststart In-Reply-To: References: Message-ID: I've tried this with the latest version of the DLLs; I get the same result. On Fri, Jul 1, 2016 at 1:48 PM, Carl Eugen Hoyos wrote: > William Troup writes: > > > avformat-56.dll > > This looks outdated. > > Carl Eugen > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From loveall0926 at gmail.com Tue Jul 5 07:18:46 2016 From: loveall0926 at gmail.com (Hwangho Kim) Date: Tue, 5 Jul 2016 14:18:46 +0900 Subject: [Libav-user] Decoding mpeg4 has delay in 3.1.1 (3.0 works correctly) Message-ID: <7FE00B87-3BD0-4C27-A32E-82CC43171BCB@gmail.com> Hi, First of all, please excuse my poor English. I'm writing a video player with ffmpeg. I've been using ffmpeg 3.0 for 1920x1280 video on an iPhone 5S. It worked well (decoded without latency) with ffmpeg 3.0, but lagging occurred when I updated to 3.1.1 (and to 3.0.2 as well); the build tools are the same for both. What I found, and what I think is the problem, is this: avcodec_decode_video2 takes longer and is called less often than in 3.0 (a threading issue?). Has anyone experienced this problem? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL:
From zhangweili at ragile.com Tue Jul 5 07:27:59 2016 From: zhangweili at ragile.com (zhangweili at ragile.com) Date: Tue, 5 Jul 2016 13:27:59 +0800 Subject: [Libav-user] Could not find codec for mpeg4 on rtsp? Message-ID: <2016070513275806920111@ragile.com> Hi all, I use gstreamer to serve an mp4 file as an RTSP stream, and play it with ffplay.exe on Windows, but ffplay reports that it cannot find the video codec for this stream. Below is the debug log. What's wrong with it?
ffplay.exe -debug er rtsp://127.0.0.1:8554/test ffplay version N-80906-gd5edb6c Copyright (c) 2003-2016 the FFmpeg developers built with gcc 5.4.0 (GCC) configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-w32threads --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib libavutil 55. 28.100 / 55. 28.100 libavcodec 57. 48.101 / 57. 48.101 libavformat 57. 41.100 / 57. 41.100 libavdevice 57. 0.102 / 57. 0.102 libavfilter 6. 47.100 / 6. 47.100 libswscale 4. 1.100 / 4. 1.100 libswresample 2. 1.100 / 2. 1.100 libpostproc 54. 0.100 / 54. 0.100 [tcp @ 00000000000cc700] No default whitelist set sq= 0B f=0/0 [rtsp @ 00000000000ccb20] SDP: v=0 o=- 14003480297848734151 1 IN IP4 127.0.0.1 s=Session streamed with GStreamer i=rtsp-server t=0 0 a=tool:GStreamer a=type:broadcast a=control:* a=range:npt=0-308.731666666 m=video 0 RTP/AVP 96 c=IN IP4 0.0.0.0 b=AS:327 a=rtpmap:96 MPEG4-GENERIC/90000 a=framerate:30 a=fmtp:96 streamtype=4;profile-level-id=1;mode=generic;config=000001b001000001b58913000001000000012000c48d8800f50a041e1463000001b2476f6f676c65;sizelength=13;indexlength=3;indexdeltalength=3 a=control:stream=0 m=audio 0 RTP/AVP 97 c=IN IP4 0.0.0.0 b=AS:35 a=rtpmap:97 MP4A-LATM/22050 a=fmtp:97 cpresent=0;config=400027100000000000000000000000000000 a=control:stream=1 [rtsp @ 00000000000ccb20] video codec set to: (null)= 0B f=0/0 [rtsp @ 00000000000ccb20] audio codec set to: aac [rtsp @ 00000000000ccb20] audio samplerate set to: 22050 [rtsp @ 00000000000ccb20] audio channels set to: 1 [rtp @ 00000000000ceee0] No default whitelist set sq= 0B f=0/0 [udp @ 000000000222edc0] No default whitelist set [udp @ 000000000222edc0] end receive buffer size reported is 65536 [udp @ 0000000002240520] No default whitelist set [udp @ 0000000002240520] end receive buffer size reported is 65536 [rtsp @ 00000000000ccb20] setting jitter buffer size to 500 f=0/0 [rtp @ 0000000002250b20] No default whitelist set [udp @ 0000000002252d80] No default whitelist set [udp @ 0000000002252d80] end receive buffer size reported is 65536 [udp @ 0000000002263020] No default whitelist set [udp @ 0000000002263020] end receive buffer size reported is 65536 [rtsp @ 00000000000ccb20] setting jitter buffer size to 500 [rtsp @ 00000000000ccb20] hello state=0vq= 0KB sq= 0B f=0/0 [rtsp @ 00000000000ccb20] Non-increasing DTS in stream 0: packet 2 with DTS 0, packet 3 with DTS 0 [rtsp @ 00000000000ccb20] Non-increasing DTS in stream 0: packet 3 with DTS 0, packet 4 with DTS 0 [rtsp @ 00000000000ccb20] Non-increasing DTS in stream 0: packet 4 with DTS 0, packet 5 with DTS 0 [rtsp @ 00000000000ccb20] Non-increasing DTS in stream 0: packet 65 with DTS 180000, packet 66 with DTS 180000 [rtsp @ 00000000000ccb20] Non-increasing DTS in stream 0: packet 66 with DTS 180000, packet 67 with DTS 
180000 [rtsp @ 00000000000ccb20] Non-increasing DTS in stream 0: packet 67 with DTS 180000, packet 68 with DTS 180000 [rtsp @ 00000000000ccb20] Non-increasing DTS in stream 0: packet 128 with DTS 360000, packet 129 with DTS 360000 [rtsp @ 00000000000ccb20] Non-increasing DTS in stream 0: packet 129 with DTS 360000, packet 130 with DTS 360000 [rtsp @ 00000000000ccb20] Non-increasing DTS in stream 0: packet 130 with DTS 360000, packet 131 with DTS 360000 [rtsp @ 00000000000ccb20] max_analyze_duration 5000000 reached at 5015510 microseconds st:1 [rtsp @ 00000000000ccb20] rfps: 29.916667 0.014984 Last message repeated 1 times [rtsp @ 00000000000ccb20] rfps: 30.000000 0.000001 [rtsp @ 00000000000ccb20] rfps: 60.000000 0.000002 [rtsp @ 00000000000ccb20] rfps: 120.000000 0.000010 [rtsp @ 00000000000ccb20] rfps: 240.000000 0.000040 [rtsp @ 00000000000ccb20] rfps: 29.970030 0.001923 Last message repeated 1 times [rtsp @ 00000000000ccb20] rfps: 59.940060 0.007691 Last message repeated 1 times [rtsp @ 00000000000ccb20] Setting avg frame rate based on r frame rate [rtsp @ 00000000000ccb20] Could not find codec parameters for stream 0 (Video: none, 1 reference frame, none): unknown codec Consider increasing the value for the 'analyzeduration' and 'probesize' options Input #0, rtsp, from 'rtsp://127.0.0.1:8554/test': Metadata:: 0.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0 title : Session streamed with GStreamer comment : rtsp-server Duration: 00:05:08.73, start: 0.000000, bitrate: N/A Stream #0:0, 165, 1/90000: Video: none, 1 reference frame, none, 30 fps, 30 tbr, 90k tbn, 90k tbc Stream #0:1 nan : 0.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0 , 110, 1/22050: Audio: aac (LC), 22050 Hz, mono, fltp detected 4 logical cores [ffplay_abuffer @ 00000000022c8600] Setting 'sample_rate' to value '22050' [ffplay_abuffer @ 00000000022c8600] Setting 'sample_fmt' to value 'fltp' [ffplay_abuffer @ 00000000022c8600] Setting 'channels' to value '1' [ffplay_abuffer @ 00000000022c8600] Setting 'time_base' to value '1/22050' [ffplay_abuffer @ 00000000022c8600] Setting 'channel_layout' to value '0x4' [ffplay_abuffer @ 00000000022c8600] tb:1/22050 samplefmt:fltp samplerate:22050 chlayout:0x4 [ffplay_abuffersink @ 000000000229a940] auto-inserting filter 'auto-inserted resampler 0' between the filter 'ffplay_abuffer' and the filter 'ffplay_abuffersink' [AVFilterGraph @ 00000000022d8260] query_formats: 2 queried, 0 merged, 3 already done, 0 delayed [auto-inserted resampler 0 @ 000000000227bd80] [SWR @ 00000000030b6fc0] Using fltp internally between filters [auto-inserted resampler 0 @ 000000000227bd80] ch:1 chl:mono fmt:fltp r:22050Hz -> ch:1 chl:mono fmt:s16 r:22050Hz No codec could be found with id 0 Audio frame changed from rate:22050 ch:1 fmt:fltp layout:mono serial:-1 to rate:22050 ch:1 fmt:fltp layout:mono serial:1 [ffplay_abuffer @ 00000000022a1100] Setting 'sample_rate' to value '22050' [ffplay_abuffer @ 00000000022a1100] Setting 'sample_fmt' to value 'fltp' [ffplay_abuffer @ 00000000022a1100] Setting 'channels' to value '1' [ffplay_abuffer @ 00000000022a1100] Setting 'time_base' to value '1/22050' [ffplay_abuffer @ 00000000022a1100] Setting 'channel_layout' to value '0x4' [ffplay_abuffer @ 00000000022a1100] tb:1/22050 samplefmt:fltp samplerate:22050 chlayout:0x4 [ffplay_abuffersink @ 000000000229a940] auto-inserting filter 'auto-inserted resampler 0' between the filter 'ffplay_abuffer' and the filter 'ffplay_abuffersink' [AVFilterGraph @ 00000000022d7d40] query_formats: 2 queried, 0 merged, 3 already done, 0 delayed 
[auto-inserted resampler 0 @ 00000000022b1a20] [SWR @ 00000000030b6f40] Using fltp internally between filters [auto-inserted resampler 0 @ 00000000022b1a20] ch:1 chl:mono fmt:fltp r:22050Hz -> ch:1 chl:mono fmt:s16 r:22050Hz 9.55 M-A: 0.000 fd= 0 aq= 31KB vq= 0KB sq= 0B f=0/0 zhangweili at ragile.com -------------- next part -------------- An HTML attachment was scrubbed... URL:
From loveall0926 at gmail.com Tue Jul 5 07:54:48 2016 From: loveall0926 at gmail.com (Hwangho Kim) Date: Tue, 5 Jul 2016 14:54:48 +0900 Subject: [Libav-user] Could not find codec for mpeg4 on rtsp? In-Reply-To: <2016070513275806920111@ragile.com> References: <2016070513275806920111@ragile.com> Message-ID: <30DA743E-8950-41BC-B865-12453E9D2CEC@gmail.com> > > [rtsp @ 00000000000ccb20] Could not find codec parameters for stream 0 (Video: none, 1 reference frame, none): unknown codec > Consider increasing the value for the 'analyzeduration' and 'probesize' options > Input #0, rtsp, from 'rtsp://127.0.0.1:8554/test': Does rtsp://127.0.0.1:8554/test work correctly? According to the log, ffmpeg needs more information to identify the codec. Please see the links below: https://ffmpeg.org/pipermail/ffmpeg-user/2013-March/014297.html https://trac.ffmpeg.org/wiki/StreamingGuide -------------- next part -------------- An HTML attachment was scrubbed... URL:
From zhangweili at ragile.com Tue Jul 5 08:49:38 2016 From: zhangweili at ragile.com (zhangweili at ragile.com) Date: Tue, 5 Jul 2016 14:49:38 +0800 Subject: [Libav-user] Could not find codec for mpeg4 on rtsp? References: <2016070513275806920111@ragile.com>, <30DA743E-8950-41BC-B865-12453E9D2CEC@gmail.com> Message-ID: <2016070514493678483818@ragile.com> In the RTSP SDP section, the codec parameters are declared: m=video 0 RTP/AVP 96 c=IN IP4 0.0.0.0 b=AS:327 a=rtpmap:96 MPEG4-GENERIC/90000 a=framerate:30 a=fmtp:96 streamtype=4;profile-level-id=1;mode=generic;config=000001b001000001b58913000001000000012000c48d8800f50a041e1463000001b2476f6f676c65;sizelength=13;indexlength=3;indexdeltalength=3 a=control:stream=0 Is there something wrong in this section? zhangweili at ragile.com From: Hwangho Kim Date: 2016-07-05 13:54 To: This list is about using libavcodec, libavformat, libavutil, libavdevice and libavfilter. Subject: Re: [Libav-user] Could not find codec for mpeg4 on rtsp? [rtsp @ 00000000000ccb20] Could not find codec parameters for stream 0 (Video: none, 1 reference frame, none): unknown codec Consider increasing the value for the 'analyzeduration' and 'probesize' options Input #0, rtsp, from 'rtsp://127.0.0.1:8554/test': Does rtsp://127.0.0.1:8554/test work correctly? According to the log, ffmpeg needs more information to identify the codec. Please see the links below: https://ffmpeg.org/pipermail/ffmpeg-user/2013-March/014297.html https://trac.ffmpeg.org/wiki/StreamingGuide -------------- next part -------------- An HTML attachment was scrubbed... URL:
From parth5151 at yahoo.com Tue Jul 5 08:49:00 2016 From: parth5151 at yahoo.com (parth.pancholi) Date: Mon, 4 Jul 2016 23:49:00 -0700 (PDT) Subject: [Libav-user] [livav-users] where can I change read frame size ? Message-ID: <1467701340353-4662324.post@n4.nabble.com> Hi All, I am adding support for the NV12 Tiled pixel format in ffmpeg. NV12 Tiled format conversion needs a bigger frame size than the actual given resolution because it arranges frame data into 64x32 (tile size) micro blocks in a Z and flipped-Z pattern. See this link for more details about the NV12 Tiled format.
So my question is Where can I change my frame size or src[0] & src[1] buffer size ?? src[0] - contains Y frame data, src[1] - contains UV interleaved data Below are my actual height and width calculations for NV12 Tiled format. #define ROUND_UP_X(num,x) (((num)+(x-1))&~(x-1)) static int tile_size = 64*32; int wTiles = ROUND_UP_X(width,128)/64; int hTiles = ROUND_UP_X(height,32)/32; int hTiles_UV = ROUND_UP_X(height/2,32)/32; //Size of a single frame in source, source is in NV12Tile format int frame_size_src_Y = (tile_size * wTiles * hTiles); int frame_size_src_UV = (tile_size * wTiles * hTiles_UV); int frame_size_src = frame_size_src_Y + frame_size_src_UV; -- View this message in context: http://libav-users.943685.n4.nabble.com/livav-users-where-can-I-change-read-frame-size-tp4662324.html Sent from the libav-users mailing list archive at Nabble.com. From cehoyos at ag.or.at Tue Jul 5 12:22:28 2016 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Tue, 5 Jul 2016 10:22:28 +0000 (UTC) Subject: [Libav-user] Decoding mpeg4 has delay in 3.1.1 (3.0 works correctly) References: <7FE00B87-3BD0-4C27-A32E-82CC43171BCB@gmail.com> Message-ID: Hwangho Kim writes: > It worked well (decoded without latency) with ffmpeg 3.0 but > lagging occurred when I updated it to 3.1.1. Which change introduced the issue you see? Carl Eugen From cehoyos at ag.or.at Tue Jul 5 12:23:49 2016 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Tue, 5 Jul 2016 10:23:49 +0000 (UTC) Subject: [Libav-user] [livav-users] where can I change read frame size ? References: <1467701340353-4662324.post@n4.nabble.com> Message-ID: parth.pancholi writes: > NV12 Tiled format conversion need bigger frame size than > actual given resolution because it adjusts frame data > into 64x32 (Tile size) micro blocks Does you code already work for 640x320 input? Fixes are always possible later... Carl Eugen From andreas.akerlund at svt.se Tue Jul 5 14:09:11 2016 From: andreas.akerlund at svt.se (=?iso-8859-1?Q?Andreas_=C5kerlund?=) Date: Tue, 5 Jul 2016 12:09:11 +0000 Subject: [Libav-user] Is there a way to store the correct timecode (not a calculated one) for a frame in the mxf-timecode stream? Message-ID: Hi I'm creating a ingest software that will encode dnxhd files from a SDI-source and can't find any hints on how to specify the timecode for a frame. I did get the "timecode" option to work on the videostream, but I need to be able to store timecodes that might be out of order or from media that might have been recorded, stopped and resumed through out the day (or week). >From what iv'e been able to find in the sources it seem to be hardcoded in mxfenc.c in the function mxf_write_system_item frame = mxf->last_indexed_edit_unit + mxf->edit_units_count; time_code = av_timecode_get_smpte_from_framenum(&mxf->tc, frame); which would indicate that the mxf encoder doesn't provide an interface to supply individual frames with timecode. Have I missed something or are there any known workarounds to inject timecodes? /Andreas -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikes at ioindustries.com Mon Jul 4 17:44:14 2016 From: mikes at ioindustries.com (Mike Simpson) Date: Mon, 4 Jul 2016 11:44:14 -0400 Subject: [Libav-user] avcodec_encode_video2 hangs when using Quick Sync h264_qsv encoder Message-ID: When I use the mpeg4 or h264 encoders, I am able to successfully encode images to make a valid AVI file using the API for ffmpeg 3.1.0. 
However, when I use the Quick Sync encoder (h264_qsv), avcodec_encode_video2 will hang some of the time. I found that when using images that are 1920x1080, it was rare that avcodec_encode_video2 would hang. When using 256x256 images, it was very likely that the function would hang. I have created the test code below that demonstrates the hang of avcodec_encode_video2. The code will create a 1000 frame, 256x256 AVI with a bit rate of 400000. The frames are simply allocated, so the output video should just be green frames. The problem was observed using Windows 7 and Windows 10, using the 32-bit or 64-bit test application. If anyone has any idea on how I can avoid the avcodec_encode_video2 hang I would be very grateful! Thanks in advance for any assistance. |extern "C" { #ifndef __STDC_CONSTANT_MACROS #define __STDC_CONSTANT_MACROS #endif #include "avcodec.h" #include "avformat.h" #include "swscale.h" #include "avutil.h" #include "imgutils.h" #include "opt.h" #include } #include // Globals AVCodec* m_pCodec = NULL; AVStream *m_pStream = NULL; AVOutputFormat* m_pFormat = NULL; AVFormatContext* m_pFormatContext = NULL; AVCodecContext* m_pCodecContext = NULL; AVFrame* m_pFrame = NULL; int m_frameIndex; // Output format AVPixelFormat m_pixType = AV_PIX_FMT_NV12; // Use for mpeg4 //AVPixelFormat m_pixType = AV_PIX_FMT_YUV420P; // Output frame rate int m_frameRate = 30; // Output image dimensions int m_imageWidth = 256; int m_imageHeight = 256; // Number of frames to export int m_frameCount = 1000; // Output file name const char* m_fileName = "c:/test/test.avi"; // Output file type const char* m_fileType = "AVI"; // Codec name used to encode const char* m_encoderName = "h264_qsv"; // use for mpeg4 //const char* m_encoderName = "mpeg4"; // Target bit rate int m_targetBitRate = 400000; void addVideoStream() { m_pStream = avformat_new_stream( m_pFormatContext, m_pCodec ); m_pStream->id = m_pFormatContext->nb_streams - 1; m_pStream->time_base = m_pCodecContext->time_base; m_pStream->codec->pix_fmt = m_pixType; m_pStream->codec->flags = m_pCodecContext->flags; m_pStream->codec->width = m_pCodecContext->width; m_pStream->codec->height = m_pCodecContext->height; m_pStream->codec->time_base = m_pCodecContext->time_base; m_pStream->codec->bit_rate = m_pCodecContext->bit_rate; } AVFrame* allocatePicture( enum AVPixelFormat pix_fmt, int width, int height ) { AVFrame *frame; frame = av_frame_alloc(); if ( !frame ) { return NULL; } frame->format = pix_fmt; frame->width = width; frame->height = height; int checkImage = av_image_alloc( frame->data, frame->linesize, width, height, pix_fmt, 32 ); if ( checkImage < 0 ) { return NULL; } return frame; } bool initialize() { AVRational frameRate; frameRate.den = m_frameRate; frameRate.num = 1; av_register_all(); m_pCodec = avcodec_find_encoder_by_name(m_encoderName); if( !m_pCodec ) { return false; } m_pCodecContext = avcodec_alloc_context3( m_pCodec ); m_pCodecContext->width = m_imageWidth; m_pCodecContext->height = m_imageHeight; m_pCodecContext->time_base = frameRate; m_pCodecContext->gop_size = 0; m_pCodecContext->pix_fmt = m_pixType; m_pCodecContext->codec_id = m_pCodec->id; m_pCodecContext->bit_rate = m_targetBitRate; av_opt_set( m_pCodecContext->priv_data, "+CBR", "", 0 ); return true; } bool startExport() { m_frameIndex = 0; char fakeFileName[512]; int checkAllocContext = avformat_alloc_output_context2( &m_pFormatContext, NULL, m_fileType, fakeFileName ); if ( checkAllocContext < 0 ) { return false; } if ( !m_pFormatContext ) { return false; } m_pFormat = 
m_pFormatContext->oformat; if ( m_pFormat->video_codec != AV_CODEC_ID_NONE ) { addVideoStream(); int checkOpen = avcodec_open2( m_pCodecContext, m_pCodec, NULL ); if ( checkOpen < 0 ) { return false; } m_pFrame = allocatePicture( m_pCodecContext->pix_fmt, m_pCodecContext->width, m_pCodecContext->height ); if( !m_pFrame ) { return false; } m_pFrame->pts = 0; } int checkOpen = avio_open( &m_pFormatContext->pb, m_fileName, AVIO_FLAG_WRITE ); if ( checkOpen < 0 ) { return false; } av_dict_set( &(m_pFormatContext->metadata), "title", "QS Test", 0 ); int checkHeader = avformat_write_header( m_pFormatContext, NULL ); if ( checkHeader < 0 ) { return false; } return true; } int processFrame( AVPacket& avPacket ) { avPacket.stream_index = 0; avPacket.pts = av_rescale_q( m_pFrame->pts, m_pStream->codec->time_base, m_pStream->time_base ); avPacket.dts = av_rescale_q( m_pFrame->pts, m_pStream->codec->time_base, m_pStream->time_base ); m_pFrame->pts++; int retVal = av_interleaved_write_frame( m_pFormatContext, &avPacket ); return retVal; } bool exportFrame() { int success = 1; int result = 0; AVPacket avPacket; av_init_packet( &avPacket ); avPacket.data = NULL; avPacket.size = 0; fflush(stdout); std::cout << "Before avcodec_encode_video2 for frame: " << m_frameIndex << std::endl; success = avcodec_encode_video2( m_pCodecContext, &avPacket, m_pFrame, &result ); std::cout << "After avcodec_encode_video2 for frame: " << m_frameIndex << std::endl; if( result ) { success = processFrame( avPacket ); } av_packet_unref( &avPacket ); m_frameIndex++; return ( success == 0 ); } void endExport() { int result = 0; int success = 0; if (m_pFrame) { while ( success == 0 ) { AVPacket avPacket; av_init_packet( &avPacket ); avPacket.data = NULL; avPacket.size = 0; fflush(stdout); success = avcodec_encode_video2( m_pCodecContext, &avPacket, NULL, &result ); if( result ) { success = processFrame( avPacket ); } av_packet_unref( &avPacket ); if (!result) { break; } } } if (m_pFormatContext) { av_write_trailer( m_pFormatContext ); if( m_pFrame ) { av_frame_free( &m_pFrame ); } avio_closep( &m_pFormatContext->pb ); avformat_free_context( m_pFormatContext ); m_pFormatContext = NULL; } } void cleanup() { if( m_pFrame || m_pCodecContext ) { if( m_pFrame ) { av_frame_free( &m_pFrame ); } if( m_pCodecContext ) { avcodec_close( m_pCodecContext ); av_free( m_pCodecContext ); } } } int main() { bool success = true; if (initialize()) { if (startExport()) { for (int loop = 0; loop < m_frameCount; loop++) { if (!exportFrame()) { std::cout << "Failed to export frame\n"; success = false; break; } } endExport(); } else { std::cout << "Failed to start export\n"; success = false; } cleanup(); } else { std::cout << "Failed to initialize export\n"; success = false; } if (success) { std::cout << "Successfully exported file\n"; } return 1; } | -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidbarmatz at gmail.com Wed Jul 6 13:47:07 2016 From: davidbarmatz at gmail.com (=?UTF-8?B?15PXldeTINeR16jXntel?=) Date: Wed, 6 Jul 2016 14:47:07 +0300 Subject: [Libav-user] Translation of FFmpeg command line command into C API Message-ID: Hello. right now I'm using FFmpeg as stand-alone application from command line. I'm get media from multiple input and write them into multiple outputs on hard drive. 
I'm using the following command (this is for only one input and output): *ffmpeg -i 'udp://xxx.xxx.xxx.xxx:1234?overrun_nonfatal=1&fifo_size=50000000 -map 0 -c copy -vn -f segment -segment_time 30 -ar 8000 -acodec pcm_s32le -ac 1 -strftime 1 /home/Path/On/HardDrive/"T000001_%Y_%m_%d__%H_%M_%S.wav"* I want to write a program using the FFmpeg C API (I want to write the output into an array instead of to the hard drive). How can I re-write this command as a program (I have already looked at and compiled the FFmpeg examples myself)? Maybe you have a similar example? Thanks a lot, DB -------------- next part -------------- An HTML attachment was scrubbed... URL:
From cehoyos at ag.or.at Wed Jul 6 17:18:05 2016 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Wed, 6 Jul 2016 15:18:05 +0000 (UTC) Subject: [Libav-user] Translation of FFmpeg command line command into C API References: Message-ID: דוד ברמץ writes: > I want to write a program using the FFmpeg C API Did you already look at the sample code in doc/examples? Carl Eugen
From loveall0926 at gmail.com Thu Jul 7 06:15:54 2016 From: loveall0926 at gmail.com (Hwangho Kim) Date: Thu, 7 Jul 2016 13:15:54 +0900 Subject: [Libav-user] Decoding mpeg4 has delay in 3.1.1 (3.0 works correctly) In-Reply-To: References: <7FE00B87-3BD0-4C27-A32E-82CC43171BCB@gmail.com> Message-ID: <6ED7C148-9D7B-4AB7-B9CA-D3B9D09201E0@gmail.com> I built ffmpeg 3.1.1 for iOS (arm64 only, for the iOS 8.0 target) and decode with this algorithm: - (NSArray *) decodeFrames: (CGFloat) minDuration { if(-1 == _videoStream && -1 == _audioStream) return nil; NSMutableArray *result = [NSMutableArray array]; AVPacket packet; CGFloat decodedDuration = 0; BOOL finished = NO; while(!finished) { if(!_formatCtx) break; /* * reads in a packet and stores it in the AVPacket struct */ if(0 > av_read_frame(_formatCtx, &packet)) { _isEOF = YES; break; } if(packet.stream_index ==_videoStream) { int pktSize = packet.size; while(0 < pktSize) { int gotframe = 0; /* * Decode video frame */ int len = avcodec_decode_video2(_videoCodecCtx, _videoFrame, &gotframe, &packet); if(0 > len) { LoggerVideo(0, @"decode video error, skip packet"); break; } if(gotframe) { …. I only updated the ffmpeg version, and avcodec_decode_video2 takes twice as long as in 3.0 for 1920x1280 video. Please let me know what I should try. Thanks. > On 2016. 7. 5., at 7:22 PM, Carl Eugen Hoyos wrote: > >> mpeg 3.0 but >> lagging occurred when I updated it to 3.1.1. -------------- next part -------------- An HTML attachment was scrubbed... URL:
From cehoyos at ag.or.at Thu Jul 7 08:42:27 2016 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Thu, 7 Jul 2016 06:42:27 +0000 (UTC) Subject: [Libav-user] Decoding mpeg4 has delay in 3.1.1 (3.0 works correctly) References: <7FE00B87-3BD0-4C27-A32E-82CC43171BCB@gmail.com> <6ED7C148-9D7B-4AB7-B9CA-D3B9D09201E0@gmail.com> Message-ID: Hwangho Kim writes: > I only updated the ffmpeg version, and avcodec_decode_video2 > takes twice as long as in 3.0 for 1920x1280 video. > > Please let me know what I should try. Please use git bisect to find out which change introduced the issue you see. And please avoid top-posting here, it is considered rude. Carl Eugen
From przemyslaw.sobala at gmail.com Thu Jul 7 11:29:49 2016 From: przemyslaw.sobala at gmail.com (Przemysław Sobala) Date: Thu, 7 Jul 2016 11:29:49 +0200 Subject: [Libav-user] How to check whether video is interlaced and deinterlace in one pass? Message-ID: Hi, I wonder if there's a way to check whether a video is interlaced (idet?)
and then deinterlace (yadif?) in one pass so that progressive video timecode/fps wouldn't be changed? /Przemysław Sobala From ssshukla26 at gmail.com Fri Jul 8 11:32:54 2016 From: ssshukla26 at gmail.com (ssshukla26) Date: Fri, 8 Jul 2016 02:32:54 -0700 (PDT) Subject: [Libav-user] Unable to link libavutil with libswscale ! Message-ID: <1467970374678-4662334.post@n4.nabble.com> We are adding *nv12 tile* to *yuv420p* format conversion support in *libswscale*. While adding the support we also found out that the size calculation under *av_image_fill_pointers function* under *libavutil/imgutils.c* file also needed to be changed. Hence we added support under *libavutil* by making two new files *special_format.c* and *special_format.h*, and used it along with *imgutils.c*. This special_format.h is consist of four functions namely, nv12tile_calc_wTiles nv12tile_calc_hTiles nv12tile_calc_boundary_padding nv12tile_calc_plane_size We used this functions by including *special_format.h* inside *nv12tileconversion.c* file (our conversion algo is placed in this file) inside *libswscale* as follows. ... ... *#include "libavutil/special_format.h"* ... ... But on compiling its giving the below errors. ----------------------------------------------------------------------------------- libswscale/libswscale.so: undefined reference to `nv12tile_calc_wTiles' libswscale/libswscale.so: undefined reference to `nv12tile_calc_plane_size' libswscale/libswscale.so: undefined reference to `nv12tile_calc_hTiles' collect2: error: ld returned 1 exit status make: *** [ffplay_g] Error 1 make: *** Waiting for unfinished jobs.... libswscale/libswscale.so: undefined reference to `nv12tile_calc_wTiles' libswscale/libswscale.so: undefined reference to `nv12tile_calc_plane_size' libswscale/libswscale.so: undefined reference to `nv12tile_calc_hTiles' collect2: error: ld returned 1 exit status make: *** [ffprobe_g] Error 1 libswscale/libswscale.so: undefined reference to `nv12tile_calc_wTiles' libswscale/libswscale.so: undefined reference to `nv12tile_calc_plane_size' libswscale/libswscale.so: undefined reference to `nv12tile_calc_hTiles' collect2: error: ld returned 1 exit status make: *** [ffserver_g] Error 1 libswscale/libswscale.so: undefined reference to `nv12tile_calc_wTiles' libswscale/libswscale.so: undefined reference to `nv12tile_calc_plane_size' libswscale/libswscale.so: undefined reference to `nv12tile_calc_hTiles' collect2: error: ld returned 1 exit status make: *** [ffmpeg_g] Error 1 ----------------------------------------------------------------------------------- To solve this we added the lines (in bold as show below) in *Makefile* under *libswscale*, ----------------------------------------------------------------------------------- include $(SUBDIR)../config.mak NAME = swscale HEADERS = swscale.h \ version.h \ OBJS = alphablend.o \ hscale.o \ hscale_fast_bilinear.o \ gamma.o \ input.o \ options.o \ output.o \ rgb2rgb.o \ slice.o \ swscale.o \ swscale_unscaled.o \ utils.o \ yuv2rgb.o \ vscale.o \ nv12tiledconversion.o \ * ./libavutil/special_format.o \* OBJS-$(CONFIG_SHARED) += log2_tab.o # Windows resource file SLIBOBJS-$(HAVE_GNU_WINDRES) += swscaleres.o TESTPROGS = colorspace \ swscale ----------------------------------------------------------------------------------- My question is how can I compile the ffmpeg libraries *without including* special_format.o object reference under libswscale Makefile ? 
The reason why am asking this questions is that I have seen many files under libswscale using files from under libavutil and they are compiling fine, but am not able to find out why we are facing this errors ! Please help. *Note* :- configuration of ffmpeg is as show below. export SDL_PATH=../sdl/ ./configure --enable-shared --enable-nonfree --enable-pic --enable-gpl --extra-cflags="-I$SDL_PATH/include/SDL/" --extra-ldflags="-L$SDL_PATH/lib/ -lSDL" -- View this message in context: http://libav-users.943685.n4.nabble.com/Libav-user-Unable-to-link-libavutil-with-libswscale-tp4662334.html Sent from the libav-users mailing list archive at Nabble.com. From cehoyos at ag.or.at Fri Jul 8 13:36:09 2016 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Fri, 8 Jul 2016 11:36:09 +0000 (UTC) Subject: [Libav-user] Unable to link libavutil with libswscale ! References: <1467970374678-4662334.post@n4.nabble.com> Message-ID: ssshukla26 writes: > My question is how can I compile the ffmpeg libraries > *without including* special_format.o object reference > under libswscale Makefile ? Please provide the patch you have prepared so far so we can understand your issue more easily. Thank you, Carl Eugen From leonardonahra at gmail.com Fri Jul 8 17:59:15 2016 From: leonardonahra at gmail.com (Leonardo Nahra) Date: Fri, 8 Jul 2016 12:59:15 -0300 Subject: [Libav-user] RTP streaming input and output setting local port fails socket bind Message-ID: Hello, How can I use the same local port for input and output of RTP streams and avoid the UDP error below? [udp @ 0e5efd00] bind failed: Error number -10048 occurred I have one AVFormatContext for output and one for intput, my guess is that each AVFormatContext creates its own socket and binds to the same port. I think the correct way is to use the same AVFormatContext for both input and output, but doing that crashes the application without giving a hint of what the error is. Thanks, Nahra -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryaowe at gmail.com Fri Jul 8 20:32:21 2016 From: ryaowe at gmail.com (Ryan Owen) Date: Fri, 8 Jul 2016 12:32:21 -0600 Subject: [Libav-user] Missing audio frames in mpegts muxer output Message-ID: I'm attempting to build something that will let me manipulate audio and video frames as I receive them over a socket connection, then send them back out over a different socket. It is long-running, so it might modify hours worth of audio/video. For this reason, I want it all to happen in memory, processing data as it arrives and writing data out as it completes. The data coming in is MPEG2 TS with H.264 and AAC+ADTS. I use libavformat to demux it into encoded audio and video frames. From there, my code decides if it wants to modify the frames, delete them, or just pass them through. From there, the encoded frames get fed back into libavformat to be muxed back into MPEG TS format. The challenge I'm coming across is that when I close out a stream, the resulting TS data is missing the last couple of ADTS frames. I've verified that I am in fact sending all of the frames to the muxer. They're just not all making it into the output for some reason. I set up libavformat to mux to TS and call a callback method to receive the muxed data: /* ic is the input context. 
I just want to copy the streams from it */ int setup_muxer(AVFormatContext *ic, int (*write_callback)(void *, uint8_t *, int)) { int i; int buf_size = 8192; unsigned char *buf; AVFormatContext *oc = NULL; AVOutputFormat *fmt = av_guess_format("mpegts", NULL, NULL); avformat_alloc_output_context2(&oc, fmt, NULL, NULL); if (oc == NULL) return 1; buf = (unsigned char *)av_malloc(buf_size); if (buf == NULL) return 2; oc->pb = avio_alloc_context(buf, buf_size, 1, NULL, NULL, write_callback, NULL); if (oc->pb == NULL) return 3; /*Grab all the streams from the input and add them to the output*/ for (i = 0; i < ic->nb_streams; i++) { AVStream *in_stream = ic->streams[i]; AVCodec *codec = avcodec_find_encoder(in_stream->codec->codec_id); AVStream *out_stream = avformat_new_stream(oc, codec); avcodec_parameters_from_context(out_stream->codecpar, in_stream->codec); /*Copy common fields Some of this may not be necessary anymore with codecpar, but ffmpeg.c still uses it*/ out_stream->codec->codec_id = in_stream->codec->codec_id; out_stream->codec->codec_type = in_stream->codec->codec_type; out_stream->codec->bit_rate = in_stream->codec->bit_rate; out_stream->codec->extradata = av_memdup(in_stream->codec->extradata, in_stream->codec->extradata_size); out_stream->codec->extradata_size = in_stream->codec->extradata_size; out_stream->time_base.den = in_stream->time_base.den; out_stream->time_base.num = in_stream->time_base.num; /*copy audio and video specific fields*/ if (in_stream->codec->codec_type == AVMEDIA_TYPE_VIDEO) { out_stream->codec->width = in_stream->codec->width; out_stream->codec->height = in_stream->codec->height; out_stream->codec->pix_fmt = in_stream->codec->pix_fmt; } else { out_stream->codec->sample_fmt = in_stream->codec->sample_fmt; out_stream->codec->sample_rate = in_stream->codec->sample_rate; out_stream->codec->channels = in_stream->codec->channels; } if (oc->oformat->flags & AVFMT_GLOBALHEADER) out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER; } return avformat_write_header(oc, NULL); } I then write frames out with: av_interleaved_write_frame(oc, pkt); And then when I close out a stream, I do: av_write_trailer(oc); for (i = 0; i < oc->nb_streams; i++) { AVStream *stream = oc->streams[i]; avcodec_close(stream->codec); } I've also tried adding: av_interleaved_write_frame(oc, NULL); But I still end up one or two ADTS frames short. Is there something else I should be flushing? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From adaheemus at hotmail.com Fri Jul 8 03:19:09 2016 From: adaheemus at hotmail.com (Masen Baz) Date: Fri, 8 Jul 2016 04:19:09 +0300 Subject: [Libav-user] avstream.codecpar Message-ID: Dear all When encoding a live stream to udp , ffmpeg says to replace AVstream.codec with AVSTREAM.codecpar , The question is how to do it? Please help -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggarra13 at gmail.com Sat Jul 9 02:47:32 2016 From: ggarra13 at gmail.com (Gonzalo) Date: Fri, 8 Jul 2016 21:47:32 -0300 Subject: [Libav-user] avstream.codecpar In-Reply-To: References: Message-ID: <578049A4.4000704@gmail.com> El 07/07/16 a las 22:19, Masen Baz escribió: > Dear all > > When encoding a live stream to udp , ffmpeg says to replace > AVstream.codec with AVSTREAM.codecpar , > > The question is how to do it? > > Please help My suggestion is: don't do it yet, as codecpar support seems broken. I recently posted a bug which shows that fps are not kept properly. 
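That said, the basic pattern is short -- a rough, untested sketch (the function and variable names here are just placeholders): keep configuring and opening your own AVCodecContext as before, then copy its settings into the stream with avcodec_parameters_from_context() instead of writing into st->codec:

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* add a stream for an encoder context that has already been
   configured and opened with avcodec_open2() */
static int add_stream_from_encoder(AVFormatContext *fmt_ctx,
                                   AVCodecContext *enc_ctx)
{
    AVStream *st = avformat_new_stream(fmt_ctx, NULL);
    if (!st)
        return AVERROR(ENOMEM);

    /* this replaces the old "fill in st->codec" step */
    int ret = avcodec_parameters_from_context(st->codecpar, enc_ctx);
    if (ret < 0)
        return ret;

    st->time_base = enc_ctx->time_base;
    return st->index;
}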
If you want some sample code on how to do it easily, check doc/examples/muxing.c -- Gonzalo Garramuño ggarra13 at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ssshukla26 at gmail.com Mon Jul 11 00:05:20 2016 From: ssshukla26 at gmail.com (ssshukla26) Date: Sun, 10 Jul 2016 15:05:20 -0700 (PDT) Subject: [Libav-user] Unable to link libavutil with libswscale ! In-Reply-To: References: <1467970374678-4662334.post@n4.nabble.com> Message-ID: <1468188320381-4662340.post@n4.nabble.com> Attaching the patch. PFA. 0001-NV12-Tile-pixel-format-support-added.patch I have just changed the file name *nv12tiledconversion.c* to *nv12tiled2nv12.c*. Note :- We are planning to upstream this patch when everything falls right in its place. -- View this message in context: http://libav-users.943685.n4.nabble.com/Libav-user-Unable-to-link-libavutil-with-libswscale-tp4662334p4662340.html Sent from the libav-users mailing list archive at Nabble.com. From fayuso83 at yahoo.es Mon Jul 11 11:31:49 2016 From: fayuso83 at yahoo.es (=?UTF-8?Q?Ferm=C3=ADn_Ayuso_M=C3=A1rquez?=) Date: Mon, 11 Jul 2016 09:31:49 +0000 (UTC) Subject: [Libav-user] Linking error with FFmpeg 3.1.1 References: <1263920186.1328318.1468229509088.JavaMail.yahoo.ref@mail.yahoo.com> Message-ID: <1263920186.1328318.1468229509088.JavaMail.yahoo@mail.yahoo.com> Hello! I've just compiled new version of FFmpeg (3.1.1). I'm using the resulting libraries to decode video from ip cameras. Compilation of FFmpeg was OK, but when I try to use them in my own program, I always have multiple linking errors like this:Error    1    error LNK2019: unresolved external symbol _FreeContextBuffer at 4 referenced in function _tls_shutdown_client    C:\***\libavformat.lib(tls_schannel.o) My configure line is: ./configure --toolchain=msvc --yasmexe='../dependencies/yasm/yasm.exe' --prefix=ffmpeg/ --disable-doc --disable-ffmpeg --disable-ffplay --disable-ffprobe --disable-ffserver --disable-avdevice --disable-encoders I tried './configure --toolchain=msvc' too, but linking errors persist. If I use the --disable-network option, the linking errors disappear, but I can't use functions like "avformat_open_input" (my code use it). Can anyone help me? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From h.leppkes at gmail.com Mon Jul 11 12:00:49 2016 From: h.leppkes at gmail.com (Hendrik Leppkes) Date: Mon, 11 Jul 2016 12:00:49 +0200 Subject: [Libav-user] Linking error with FFmpeg 3.1.1 In-Reply-To: <1263920186.1328318.1468229509088.JavaMail.yahoo@mail.yahoo.com> References: <1263920186.1328318.1468229509088.JavaMail.yahoo.ref@mail.yahoo.com> <1263920186.1328318.1468229509088.JavaMail.yahoo@mail.yahoo.com> Message-ID: On Mon, Jul 11, 2016 at 11:31 AM, Fermín Ayuso Márquez wrote: > Hello! > > I've just compiled new version of FFmpeg (3.1.1). I'm using the resulting > libraries to decode video from ip cameras. 
Compilation of FFmpeg was OK, but > when I try to use them in my own program, I always have multiple linking > errors like this: > Error 1 error LNK2019: unresolved external symbol _FreeContextBuffer at 4 > referenced in function _tls_shutdown_client > C:\***\libavformat.lib(tls_schannel.o) > > My configure line is: > ./configure --toolchain=msvc --yasmexe='../dependencies/yasm/yasm.exe' > --prefix=ffmpeg/ --disable-doc --disable-ffmpeg --disable-ffplay > --disable-ffprobe --disable-ffserver --disable-avdevice --disable-encoders > > I tried './configure --toolchain=msvc' too, but linking errors persist. > > If I use the --disable-network option, the linking errors disappear, but I > can't use functions like "avformat_open_input" (my code use it). > > Can anyone help me? > If you are using static linking, you need to add the additional libraries FFmpeg might need, in this case "Secur32.lib" (a Microsoft library for security functions) - Hendrik From fayuso83 at yahoo.es Mon Jul 11 12:24:00 2016 From: fayuso83 at yahoo.es (=?UTF-8?Q?Ferm=C3=ADn_Ayuso_M=C3=A1rquez?=) Date: Mon, 11 Jul 2016 10:24:00 +0000 (UTC) Subject: [Libav-user] Linking error with FFmpeg 3.1.1 In-Reply-To: References: <1263920186.1328318.1468229509088.JavaMail.yahoo.ref@mail.yahoo.com> <1263920186.1328318.1468229509088.JavaMail.yahoo@mail.yahoo.com> Message-ID: <1219573856.1365586.1468232640604.JavaMail.yahoo@mail.yahoo.com> Thanks! That's worked. Is this behavior caused for any new change? In the past, I didn't use the 'Secur32.lib' and all were working OK. El Lunes 11 de julio de 2016 12:08, Hendrik Leppkes escribió: On Mon, Jul 11, 2016 at 11:31 AM, Fermín Ayuso Márquez wrote: > Hello! > > I've just compiled new version of FFmpeg (3.1.1). I'm using the resulting > libraries to decode video from ip cameras. Compilation of FFmpeg was OK, but > when I try to use them in my own program, I always have multiple linking > errors like this: > Error    1    error LNK2019: unresolved external symbol _FreeContextBuffer at 4 > referenced in function _tls_shutdown_client > C:\***\libavformat.lib(tls_schannel.o) > > My configure line is: > ./configure --toolchain=msvc --yasmexe='../dependencies/yasm/yasm.exe' > --prefix=ffmpeg/ --disable-doc --disable-ffmpeg --disable-ffplay > --disable-ffprobe --disable-ffserver --disable-avdevice --disable-encoders > > I tried './configure --toolchain=msvc' too, but linking errors persist. > > If I use the --disable-network option, the linking errors disappear, but I > can't use functions like "avformat_open_input" (my code use it). > > Can anyone help me? > If you are using static linking, you need to add the additional libraries FFmpeg might need, in this case "Secur32.lib" (a Microsoft library for security functions) - Hendrik _______________________________________________ Libav-user mailing list Libav-user at ffmpeg.org http://ffmpeg.org/mailman/listinfo/libav-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryaowe at gmail.com Mon Jul 11 21:10:43 2016 From: ryaowe at gmail.com (Ryan Owen) Date: Mon, 11 Jul 2016 13:10:43 -0600 Subject: [Libav-user] Enable multiple ADTS frames per PES frame? Message-ID: I'm trying to do an all in-memory demux, mess around with data, then remux of a TS stream. I'm finding that the output TS data has a PES frame for every ADTS frame, which makes for some extra overhead. The ffmepg command doesn't behave this way, so it seems like I must be missing an option or something. A simplified version of my code is attached. 
I've stripped the error checking to make it super simple. I also changed the in-memory reading/writing to just read/write from a file to make testing easier. I'm building against the latest from git. Is there an option to have the muxer handle bundling ADTS frames together for me? Or do I need to do it myself before passing it to av_interleaved_write_frame? Also, is there anything that seems off with the way I'm using the libavformat API? (aside from no error checking; that's intentional to make the example code easier to read) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: demux_then_mux.c Type: text/x-csrc Size: 4336 bytes Desc: not available URL:
From applemax82 at 163.com Tue Jul 12 04:10:08 2016 From: applemax82 at 163.com (qw) Date: Tue, 12 Jul 2016 10:10:08 +0800 (CST) Subject: [Libav-user] does ffmpeg support AEC? Message-ID: <5dec3fe3.4335.155dcde5ea3.Coremail.applemax82@163.com> Hi, Does ffmpeg support AEC, i.e. Acoustic Echo Cancellation? Or does ffmpeg have a third-party dependency lib that supports AEC? Thanks! Regards Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL:
From applemax82 at 163.com Tue Jul 12 05:58:29 2016 From: applemax82 at 163.com (qw) Date: Tue, 12 Jul 2016 11:58:29 +0800 (CST) Subject: [Libav-user] which is better: ffmpeg's native rtmp lib or rtmpdump? Message-ID: <72ffabb.6d27.155dd4191f0.Coremail.applemax82@163.com> Hi, I found that ffmpeg supports two rtmp libs: one is ffmpeg's native rtmp lib, and the other is rtmpdump. Which is better? Thanks! Regards Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL:
From pepavo at gmail.com Tue Jul 12 14:12:45 2016 From: pepavo at gmail.com (Josef Vosyka) Date: Tue, 12 Jul 2016 14:12:45 +0200 Subject: [Libav-user] Mix audio with video sample API calls Message-ID: I'm successfully using libavcodec.a in my project for video encoding. Now I need to add an audio track to an existing video. It works great from the command line using this: ffmpeg -i raw_sequence.mp4 -i voices_1.cif -c copy -map 0:v:0 -map 1:a:0 -shortest out.mp4 I cannot find how to achieve the same thing by making API calls. I even tried, as a desperate option, to compile the ffmpeg program, rename main() and call it internally. This is, however, failing due to a lot of dependencies. 1. could you give me a hint on how to make the API calls? 2. is the ffmpeg inclusion really stupid, or could you give me a hint for this too? It is quite elegant if you consider that there are a lot of other use cases where you need to understand the API calls, and this way you "only" need to understand the command line options.
From williamtroup at gmail.com Tue Jul 12 14:38:30 2016 From: williamtroup at gmail.com (William Troup) Date: Tue, 12 Jul 2016 13:38:30 +0100 Subject: [Libav-user] Mix audio with video sample API calls In-Reply-To: References: Message-ID: I've had the same problem. Look at these two questions I raised and figured out: http://stackoverflow.com/questions/37806882/mix-pcm-data-from-two-decoded-ffmpeg-avframe-objects http://stackoverflow.com/questions/37570129/increase-decrease-audio-volume-using-ffmpeg Using a filter graph and the amix filter will do the job and won't affect performance that much. > On 12 Jul 2016, at 13:12, Josef Vosyka wrote: > > I'm successfully using libavcodec.a in my project for video encoding. > Now I need to add an audio track to an existing video.
> It works great from the command line using this: > > ffmpeg -i raw_sequence.mp4 -i voices_1.cif -c copy -map 0:v:0 > -map 1:a:0 -shortest out.mp4 > > I cannot find how to achieve the same thing by making API calls. > I even tried, as a desperate option, to compile the ffmpeg program, rename > main() and call it internally. > This is, however, failing due to a lot of dependencies. > > 1. could you give me a hint on how to make the API calls? > > 2. is the ffmpeg inclusion really stupid, or could you give me a hint > for this too? It is quite elegant if you consider that there are a lot > of other use cases where you need to understand the API calls, and this > way you "only" need to understand the command line options. > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user -------------- next part -------------- An HTML attachment was scrubbed... URL:
From george at nsup.org Tue Jul 12 15:16:21 2016 From: george at nsup.org (Nicolas George) Date: Tue, 12 Jul 2016 15:16:21 +0200 Subject: [Libav-user] Mix audio with video sample API calls In-Reply-To: References: Message-ID: <20160712131621.GA4059089@phare.normalesup.org> On quintidi, 25 Messidor CCXXIV, Josef Vosyka wrote: > I'm successfully using libavcodec.a in my project for video encoding. > Now I need to add an audio track to an existing video. > It works great from the command line using this: > > ffmpeg -i raw_sequence.mp4 -i voices_1.cif -c copy -map 0:v:0 > -map 1:a:0 -shortest out.mp4 > > I cannot find how to achieve the same thing by making API calls. The API calls allow you to read inputs and write outputs. Anything in between is the responsibility of your application. Regards, -- Nicolas George -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: Digital signature URL:
From robertot1 at libero.it Tue Jul 12 16:24:50 2016 From: robertot1 at libero.it (robertot1 at libero.it) Date: Tue, 12 Jul 2016 16:24:50 +0200 (CEST) Subject: [Libav-user] Delay to capture frames from live stream Message-ID: <1395033695.12894451468333490758.JavaMail.httpd@webmail-31.iol.local> Hi everyone, I'm trying to capture a live stream from a USB camera with a Raspberry Pi 2. I installed ffmpeg and I'm using the code below to get the live stream from the camera. I have some problems:
- the framerate decreases because every ~22 frames there is a big delay (~950 ms); it is as if the buffer becomes full and is then emptied, or another possibility is that every ~22 frames the camera adjusts some parameters for contrast, brightness, etc.
- according to the camera details I should be able to capture frames at 30 fps at 640x480 resolution, but, even without considering the delay, the difference between frames is 44 ms, i.e. ~23 fps. Why?
The USB camera is an ELP-USB30W02M-L36; I use 640x480 resolution with the YUY2 format. Thanks a lot for your help.
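For reference, the video4linux2 input device lets you request the capture size, frame rate and pixel format explicitly through an options dictionary when opening the input. A rough, untested sketch (option names as documented for the v4l2 demuxer; whether the camera actually delivers 30 fps still depends on the driver, USB bandwidth and auto-exposure):

    AVDictionary *opts = NULL;
    av_dict_set(&opts, "video_size",   "640x480", 0);
    av_dict_set(&opts, "framerate",    "30",      0);
    av_dict_set(&opts, "input_format", "yuyv422", 0);

    AVInputFormat *ifmt = av_find_input_format("video4linux2");
    if (avformat_open_input(&pFormatCtx, "/dev/video0", ifmt, &opts) != 0)
        return -12;
    av_dict_free(&opts);

Note that the code below asks for av_find_input_format("dshow"), which is a Windows-only device; on a Raspberry Pi build it will normally not be found, so the input format ends up being auto-detected.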
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------#include #include #include #include #include typedef struct Timestamp { int seconds; int useconds; } Timestamp; #define ONE_SECOND_IN_USEC 1000000 int difference_timestamp(Timestamp timestamp1, Timestamp timestamp2) { int difference = 0; //in usec if (timestamp1.seconds > timestamp2.seconds) { difference = (timestamp1.seconds - timestamp2.seconds) * ONE_SECOND_IN_USEC; if (timestamp1.useconds > timestamp2.useconds) { difference += timestamp1.useconds - timestamp2.useconds; } else { difference += timestamp2.useconds - timestamp1.useconds; difference -= ONE_SECOND_IN_USEC; } } else { difference = (timestamp2.seconds - timestamp1.seconds) * ONE_SECOND_IN_USEC; if (timestamp1.useconds > timestamp2.useconds) { difference += timestamp1.useconds - timestamp2.useconds; difference -= ONE_SECOND_IN_USEC; } else { difference += timestamp2.useconds - timestamp1.useconds; } } return difference; } void get_current_time(Timestamp* timestamp) { struct timeval tv; gettimeofday(&tv, NULL); timestamp->seconds = (int) (tv.tv_sec); timestamp->useconds = (int) (tv.tv_usec); } int main(int argc, char *argv[]) { avdevice_register_all(); avcodec_register_all(); const char *filenameSrc = "/dev/video0"; AVCodecContext *pCodecCtx; AVFormatContext *pFormatCtx = avformat_alloc_context(); AVCodec * pCodec; AVInputFormat *iformat = av_find_input_format("dshow"); AVFrame *pFrame, *pFrameRGB; AVCodecParameters *pCodecPrm = NULL; if (avformat_open_input(&pFormatCtx, filenameSrc, iformat, NULL) != 0) return -12; if (avformat_find_stream_info(pFormatCtx, NULL) < 0) return -13; av_dump_format(pFormatCtx, 0, filenameSrc, 0); int videoStream = 1; int i; for (i = 0; i < pFormatCtx->nb_streams; i++) { if (pFormatCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) { videoStream = i; break; } } // Get a pointer to the codec context for the video stream pCodecPrm = pFormatCtx->streams[videoStream]->codecpar; if (videoStream == -1) return -14; // Find the decoder for the video stream pCodec = avcodec_find_decoder(pCodecPrm->codec_id); if (pCodec == NULL) return -15; //codec not found pCodecCtx = avcodec_alloc_context3(pCodec); pCodecCtx->bit_rate = pCodecPrm->bit_rate; pCodecCtx->width = pCodecPrm->width; pCodecCtx->height = pCodecPrm->height; pCodecCtx->pix_fmt = AV_PIX_FMT_YUYV422; AVDictionary *codec_options = NULL; if (avcodec_open2(pCodecCtx, pCodec, &codec_options) < 0) return -16; pFrame = av_frame_alloc(); pFrameRGB = av_frame_alloc(); enum AVPixelFormat pFormat = AV_PIX_FMT_BGR24; int numBytes = av_image_get_buffer_size(pFormat, pCodecPrm->width, pCodecPrm->height, 1); uint8_t *buffer = NULL; buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t)); av_image_fill_arrays(pFrameRGB->data, pFrameRGB->linesize, buffer, pFormat, pCodecPrm->width, pCodecPrm->height, 1); int res = 0, diff; AVPacket packet; Timestamp timestamp_prec, timestamp_curr; get_current_time(&timestamp_prec); sleep(10); while (res >= 0) { res = av_read_frame(pFormatCtx, &packet); get_current_time(&timestamp_curr); diff = difference_timestamp(timestamp_prec, timestamp_curr) / 1000; //diff in ms printf("T_prec:%d.%d\tT_curr:%d.%d\ndiff:%d\n", timestamp_prec.seconds, timestamp_prec.useconds, timestamp_curr.seconds, timestamp_curr.useconds, diff); fflush(stdout); if (packet.stream_index == videoStream) { avcodec_send_packet(pCodecCtx, &packet); avcodec_receive_frame(pCodecCtx, pFrame); 
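/* NOTE: the return values of avcodec_send_packet() and
 * avcodec_receive_frame() above are not checked; avcodec_receive_frame()
 * can return AVERROR(EAGAIN), in which case pFrame does not contain a
 * new picture yet. Also, the SwsContext below is created and freed again
 * for every frame; creating it once before the loop (and freeing it after
 * the loop) would remove that per-frame overhead. */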
struct SwsContext * img_convert_ctx; img_convert_ctx = sws_getCachedContext(NULL, pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, AV_PIX_FMT_BGR24, SWS_BICUBIC, NULL, NULL, NULL); sws_scale(img_convert_ctx, (uint8_t const * const *) pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize); av_packet_unref(&packet); sws_freeContext(img_convert_ctx); } timestamp_prec = timestamp_curr; } av_packet_unref(&packet); avcodec_close(pCodecCtx); av_free(pFrame); av_free(pFrameRGB); avformat_close_input(&pFormatCtx); return 0; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From applemax82 at 163.com Wed Jul 13 11:03:30 2016 From: applemax82 at 163.com (qw) Date: Wed, 13 Jul 2016 17:03:30 +0800 (CST) Subject: [Libav-user] one questions about the usage of av_frame_ref() and av_frame_unref() Message-ID: <24f80c89.c0fa.155e37f2e30.Coremail.applemax82@163.com> Hi, I have one question about the usage of av_frame_ref() and av_frame_unref(). For avcodec_decode_video2(), the following url is its description: http://ffmpeg.org/doxygen/3.1/group__lavc__decoding.html#ga3ac51525b7ad8bca4ced9f3446e96532 When AVCodecContext.refcounted_frames is set to 1, the frame is reference counted and the returned reference belongs to the caller. The caller must release the frame using av_frame_unref() when the frame is no longer needed. For example, avcodec_decode_video2() is used to get one decoded frame, i.e. decoded_frame. If av_frame_ref(ref_frame, decoded_frame) is invoked, how to unreference two frames? Can I use two threads to invoke av_frame_unref() to unreference two frames respectively? Is av_frame_ref() thread-safe? Thanks! Regards Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From ssshukla26 at gmail.com Wed Jul 13 12:09:16 2016 From: ssshukla26 at gmail.com (ssshukla26) Date: Wed, 13 Jul 2016 03:09:16 -0700 (PDT) Subject: [Libav-user] Failed to open codec in av_find_stream_info ? Message-ID: <1468404556561-4662352.post@n4.nabble.com> Hi, we are adding a custom decoder for qualcomm chipset into ffmpeg. We have made a linux application which decodes the *h264* file into *nv12 tiled* yuv raw data file successfully. Our aim is to patch ffmpeg with the custom decoder and to upstream the same, so that anyone on qualcomm chipset are able to use the hardware decoding capability. Including opensource projects like chromium. I have implemented the custom decoder in ffmpeg, but when I use the decoder with ffmpeg pipe the commands fails as follows, Please help. $ *ffmpeg -f h264 -c:v h264_qhw -i no_mans_sky_1080p_10sec.h264 -vcodec rawvideo -pix_fmt nv12_tiled -s 1920x1080 -r 30 output.yuv -loglevel debug * ffmpeg version N-80912-gcc42fe8 Copyright (c) 2000-2016 the FFmpeg developers built with gcc 4.9.2 (crosstool-NG linaro-1.13.1-4.9-2014.09 - Linaro GCC 4.9-2014.09) 20140904 (prerelease) configuration: --prefix=comark_SBC_ffmpeg/ --enable-shared --enable-nonfree --enable-pic --enable-gpl --enable-cross-compile --cross-prefix=arm-linux-gnueabihf- --arch=arm --target-os=linux --extra-cflags=-I../sdl/source/SDL-1.2.15/comark_SBC_SDL//include/SDL/ --extra-ldflags='-L../sdl/source/SDL-1.2.15/comark_SBC_SDL//lib/ -lSDL' libavutil 55. 28.100 / 55. 28.100 libavcodec 57. 48.101 / 57. 48.101 libavformat 57. 41.100 / 57. 41.100 libavdevice 57. 0.102 / 57. 0.102 libavfilter 6. 47.100 / 6. 47.100 libswscale 4. 1.100 / 4. 1.100 libswresample 2. 1.100 / 2. 1.100 libpostproc 54. 
0.100 / 54. 0.100 Splitting the commandline. Reading option '-f' ... matched as option 'f' (force format) with argument 'h264'. Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'h264_qhw'. Reading option '-i' ... matched as input file with argument 'no_mans_sky_1080p_10sec.h264'. Reading option '-vcodec' ... matched as option 'vcodec' (force video codec ('copy' to copy stream)) with argument 'rawvideo'. Reading option '-pix_fmt' ... matched as option 'pix_fmt' (set pixel format) with argument 'nv12_tiled'. Reading option '-s' ... matched as option 's' (set frame size (WxH or abbreviation)) with argument '1920x1080'. Reading option '-r' ... matched as option 'r' (set frame rate (Hz value, fraction or abbreviation)) with argument '30'. Reading option 'output.yuv' ... matched as output file. Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'. Finished splitting the commandline. Parsing a group of options: global . Applying option loglevel (set logging level) with argument debug. Successfully parsed a group of options. Parsing a group of options: input file no_mans_sky_1080p_10sec.h264. Applying option f (force format) with argument h264. Applying option c:v (codec name) with argument h264_qhw. Successfully parsed a group of options. Opening an input file: no_mans_sky_1080p_10sec.h264. [file @ 0x4a970] Setting default whitelist 'file,crypto' [h264 @ 0x4a280] Before avformat_find_stream_info() pos: 0 bytes read:32768 seeks:0 nb_streams:1 Initialized Params De-Initialized Decoder De-Initialized Params *[h264 @ 0x4a280] Failed to open codec in av_find_stream_info* [NULL @ 0x53430] user data:"x264 - core 148 r2643 5c65704 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00" Initialized Params De-Initialized Decoder De-Initialized Params [h264 @ 0x4a280] max_analyze_duration 5000000 reached at 5040000 microseconds st:0 *[h264 @ 0x4a280] Could not find codec parameters for stream 0 (Video: h264 (High), 1 reference frame, none): unspecified size Consider increasing the value for the 'analyzeduration' and 'probesize' options* [h264 @ 0x4a280] After avformat_find_stream_info() pos: 1509376 bytes read:1540096 seeks:0 frames:65 Input #0, h264, from 'no_mans_sky_1080p_10sec.h264': Duration: N/A, bitrate: N/A Stream #0:0, 65, 1/1200000: Video: h264 (High), 1 reference frame, none, 12.50 fps, 25 tbr, 1200k tbn, 25 tbc Successfully opened the file. Parsing a group of options: output file output.yuv. Applying option vcodec (force video codec ('copy' to copy stream)) with argument rawvideo. Applying option pix_fmt (set pixel format) with argument nv12_tiled. Applying option s (set frame size (WxH or abbreviation)) with argument 1920x1080. Applying option r (set frame rate (Hz value, fraction or abbreviation)) with argument 30. Successfully parsed a group of options. Opening an output file: output.yuv. 
[file @ 0x1ea3c0] Setting default whitelist 'file,crypto'
Successfully opened the file.
detected 4 logical cores
[graph 0 input from stream 0:0 @ 0x1f2fb0] Setting 'video_size' to value '0x0'
*[buffer @ 0x1f3040] Unable to parse option value "0x0" as image size*
[graph 0 input from stream 0:0 @ 0x1f2fb0] Setting 'pix_fmt' to value '-1'
*[buffer @ 0x1f3040] Unable to parse option value "-1" as pixel format*
[graph 0 input from stream 0:0 @ 0x1f2fb0] Setting 'time_base' to value '1/1200000'
[graph 0 input from stream 0:0 @ 0x1f2fb0] Setting 'pixel_aspect' to value '0/1'
[graph 0 input from stream 0:0 @ 0x1f2fb0] Setting 'sws_param' to value 'flags=2'
[graph 0 input from stream 0:0 @ 0x1f2fb0] Setting 'frame_rate' to value '25/1'
*[buffer @ 0x1f3040] Unable to parse option value "0x0" as image size*
*[buffer @ 0x1f3040] Error setting option video_size to value 0x0.*
*[graph 0 input from stream 0:0 @ 0x1f2fb0] Error applying options to the filter. Error opening filters!*
[AVIOContext @ 0x1f2450] Statistics: 0 seeks, 0 writeouts
[AVIOContext @ 0x52aa0] Statistics: 1540096 bytes read, 0 seeks
-----------------------------------------------------------------------------------------------------------------
Note: "*h264_qhw*" is the name of our custom decoder.

--
View this message in context: http://libav-users.943685.n4.nabble.com/Libav-user-Failed-to-open-codec-in-av-find-stream-info-tp4662352.html
Sent from the libav-users mailing list archive at Nabble.com.

From ssshukla26 at gmail.com  Wed Jul 13 14:16:06 2016
From: ssshukla26 at gmail.com (ssshukla26)
Date: Wed, 13 Jul 2016 05:16:06 -0700 (PDT)
Subject: [Libav-user] Failed to open codec in av_find_stream_info ?
In-Reply-To: <1468404556561-4662352.post@n4.nabble.com>
References: <1468404556561-4662352.post@n4.nabble.com>
Message-ID: <1468412166421-4662353.post@n4.nabble.com>

The problem is with the init function of the decoder; its declaration is as follows:

*int qhw_decode_init (AVCodecContext *avctx);*

I am getting avctx->width as zero (0) and avctx->height as zero (0), so I initialize my decoder with constant values, decoder->width=1920 and decoder->height=1080. After that I am getting the following error:

*Assertion avctx->internal->buffer_frame->buf[0] failed at libavcodec/utils.c:2772*

Please help.

--
View this message in context: http://libav-users.943685.n4.nabble.com/Libav-user-Failed-to-open-codec-in-av-find-stream-info-tp4662352p4662353.html
Sent from the libav-users mailing list archive at Nabble.com.

From ssshukla26 at gmail.com  Thu Jul 14 15:37:50 2016
From: ssshukla26 at gmail.com (ssshukla26)
Date: Thu, 14 Jul 2016 06:37:50 -0700 (PDT)
Subject: [Libav-user] Where to define output format for a decoder ?
Message-ID: <1468503470755-4662354.post@n4.nabble.com>

I am implementing a custom decoder for a Qualcomm chipset. This custom decoder outputs the NV12 tiled format. We have already patched ffmpeg with an nv12 tile pixel format, along with software conversion from nv12 tile to yuv420p under libswscale.

My question is: how can I specify the output format for a decoder? Is the *pix_fmts* member of *struct AVCodec* the place where it needs to be added? If not, then where? Please help.

--
View this message in context: http://libav-users.943685.n4.nabble.com/Libav-user-Where-to-define-output-format-for-a-decoder-tp4662354.html
Sent from the libav-users mailing list archive at Nabble.com.
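As background to the question above: a decoder in libavcodec normally advertises the pixel formats it can produce through the AVCodec.pix_fmts array and, more importantly for decoders, sets avctx->pix_fmt (directly or through the get_format() callback) before the first frame is returned; that field is what avformat_find_stream_info() and the ffmpeg tools actually consult. A minimal sketch follows; the names and values are illustrative only and are not the actual h264_qhw code:

#include <libavcodec/avcodec.h>

static int qhw_init_sketch(AVCodecContext *avctx)
{
    /* Report the format the hardware actually produces; leaving this unset is
       what typically leads to "unspecified pixel format" probe failures. */
    avctx->pix_fmt = AV_PIX_FMT_NV12; /* or the custom tiled format once it is registered */
    return 0;
}

AVCodec ff_h264_qhw_decoder_sketch = {
    .name     = "h264_qhw",
    .type     = AVMEDIA_TYPE_VIDEO,
    .id       = AV_CODEC_ID_H264,
    .init     = qhw_init_sketch,
    /* AV_PIX_FMT_NONE-terminated list of formats this decoder can output */
    .pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_NV12, AV_PIX_FMT_NONE },
};

For decoders the pix_fmts table is largely informational; the per-context avctx->pix_fmt is the value the rest of the pipeline uses.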
From ssshukla26 at gmail.com  Fri Jul 15 06:44:21 2016
From: ssshukla26 at gmail.com (ssshukla26)
Date: Thu, 14 Jul 2016 21:44:21 -0700 (PDT)
Subject: [Libav-user] How to use different decoders under demuxing_decoding.c example ?
Message-ID: <1468557861605-4662355.post@n4.nabble.com>

Hi,

I have added a custom h264 decoder for a Qualcomm chipset under libavcodec, with codec ID "*AV_CODEC_ID_H264*" and name "*h264_qhw*". I want to test that it works through the ffmpeg APIs, since I am not able to see where it is failing on the ffmpeg command line. So I opted for the *demuxing_decoding.c* example, and it works fine with an h264 file - but it uses the default h264 decoder. How can I make sure that the *demuxing_decoding.c* example uses the "*h264_qhw*" decoder instead of the default "*h264*"?

Please help; I am quite stuck here and very new to the ffmpeg APIs.

--
View this message in context: http://libav-users.943685.n4.nabble.com/Libav-user-How-to-use-different-decoders-under-demuxing-decoding-c-example-tp4662355.html
Sent from the libav-users mailing list archive at Nabble.com.

From david2456 at gmail.com  Fri Jul 15 08:49:26 2016
From: david2456 at gmail.com (David Nguyen)
Date: Fri, 15 Jul 2016 08:49:26 +0200
Subject: [Libav-user] Memory leak using AVStream
Message-ID: 

Hi,

I am upgrading from FFmpeg 2.8.4 to 3.1.1 and changing my code to no longer use AVStream.codec. By doing so, I get a memory leak with valgrind; here is the trace:

==15481== at 0x4A07306: memalign (vg_replace_malloc.c:532)
==15481== by 0x4A0735F: posix_memalign (vg_replace_malloc.c:660)
==15481== by 0x6EA3E48: av_malloc (mem.c:97)
==15481== by 0x6EA4165: av_mallocz (mem.c:254)
==15481== by 0x599D3E8: init_context_defaults (options.c:127)
==15481== by 0x599D540: avcodec_alloc_context3 (options.c:163)
==15481== by 0x6BDED26: avformat_new_stream (utils.c:4098)

It seems AVCodecContext.priv_data is not being freed. Looking at the code, AVStream.codec is still being allocated by you and freed here https://www.ffmpeg.org/doxygen/3.1/libavformat_2utils_8c_source.html at line 3980. Because I am experiencing a leak, shouldn't that line be avcodec_free_context(&st->codec) or avcodec_close(st->codec), as it is an AVCodecContext? I used to close st->codec myself before this deprecation. What am I missing here?

Thanks,
David
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ssshukla26 at gmail.com  Fri Jul 15 16:56:19 2016
From: ssshukla26 at gmail.com (ssshukla26)
Date: Fri, 15 Jul 2016 07:56:19 -0700 (PDT)
Subject: [Libav-user] How to process incomming AVFrame pointer in a decoder ?
Message-ID: <1468594579346-4662357.post@n4.nabble.com>

Hi,

I am implementing a custom decoder for a Qualcomm chipset in ffmpeg and have a working Linux application for the same. Everything is working fine except one thing: I am unable to fill decoded data into the incoming AVFrame pointer. My decoder implements the following function to decode frames:

*int qhw_decode_frame (AVCodecContext *avctx, void *pframe, int *got_frame, AVPacket *avpkt);*

At first the do_decode call was asserting with the following error:

*Assertion avctx->internal->buffer_frame->buf[0] failed at libavcodec/utils.c:2772*

So I used the *av_frame_get_buffer* function to allocate memory for the data buffers, but the pipeline (as follows) still isn't working.
*$ ffmpeg -f h264 -c:v h264_qhw -i no_mans_sky_1080p_10sec.h264 -vcodec rawvideo -pix_fmt nv12_tiled -s 1920x1080 -r 30 output.yuv -loglevel debug *

Note: I confirmed my decoder is working by *dumping* the first decoded frame to a file; the decode function is only called once and then the command hangs - it just hangs and does nothing. Please help, I am quite stuck and need some insight on how to process and fill the incoming AVFrame pointer from the do_decode function.

--
View this message in context: http://libav-users.943685.n4.nabble.com/Libav-user-How-to-process-incomming-AVFrame-pointer-in-a-decoder-tp4662357.html
Sent from the libav-users mailing list archive at Nabble.com.

From applemax82 at 163.com  Sun Jul 17 08:10:13 2016
From: applemax82 at 163.com (qw)
Date: Sun, 17 Jul 2016 14:10:13 +0800 (CST)
Subject: [Libav-user] what's the general setting for network reading and writing?
Message-ID: <5c860169.2d39.155f779f743.Coremail.applemax82@163.com>

Hi,

If the input or output is a file, it is easy to set up the AVFormatContext. But for a network application there are many issues that should be considered, such as network delay, no network connection, and corrupted data. How should AVFormatContext be set up to deal with those general network issues?

Thanks!

Regards

Andrew
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From straycodemonkey at gmail.com  Sun Jul 17 09:36:54 2016
From: straycodemonkey at gmail.com (straycodemonkey)
Date: Sun, 17 Jul 2016 00:36:54 -0700 (PDT)
Subject: [Libav-user] av_buffersink_get_frame incompatible with sws_scale
Message-ID: <1468741014690-4662359.post@n4.nabble.com>

I'm using av_buffersink_get_frame to perform deinterlacing of video frames using the yadif filter. I then pass the resulting filtered frame on to a further process that composites various elements together to create a final frame. During this process sws_scale is used to resize / convert the filtered frame, but in this case the sws_scale function complains about a bad src pointer because the data in the deinterlaced, filtered frame is not 16-byte aligned.

I guess my question is: is this problem avoidable in any way? And if not, is it a known problem that may be addressed in the future? I'm using ffmpeg 3.1.1 taken from 11th July, 2016.

--
View this message in context: http://libav-users.943685.n4.nabble.com/av-buffersink-get-frame-incompatible-with-sws-scale-tp4662359.html
Sent from the libav-users mailing list archive at Nabble.com.

From applemax82 at 163.com  Sun Jul 17 12:24:45 2016
From: applemax82 at 163.com (qw)
Date: Sun, 17 Jul 2016 18:24:45 +0800 (CST)
Subject: [Libav-user] how to build ffmpeg lib for android and ios platform
Message-ID: <4af24351.3849.155f8630086.Coremail.applemax82@163.com>

Hi,

I know the steps of building the ffmpeg lib on Linux. However, how do I build the ffmpeg lib for the Android and iOS platforms?

Thanks!

Regards

andrew
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From loveall0926 at gmail.com  Sun Jul 17 14:40:55 2016
From: loveall0926 at gmail.com (Hwangho Kim)
Date: Sun, 17 Jul 2016 21:40:55 +0900
Subject: [Libav-user] how to build ffmpeg lib for android and ios platform
In-Reply-To: <4af24351.3849.155f8630086.Coremail.applemax82@163.com>
References: <4af24351.3849.155f8630086.Coremail.applemax82@163.com>
Message-ID: 

2016-07-17 19:24 GMT+09:00 qw :

> Hi,
>
> I know the steps of building ffmpeg lib on linux. However, how to build
> ffmpeg lib for android and ios platform?
>
> Thanks!
>
> Regards
>
> andrew
>
>
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
>

Hi, I use the script below and it works fine.

https://github.com/kewlbear/FFmpeg-iOS-build-script
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cehoyos at ag.or.at  Sun Jul 17 21:50:43 2016
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Sun, 17 Jul 2016 19:50:43 +0000 (UTC)
Subject: [Libav-user] how to build ffmpeg lib for android and ios platform
References: <4af24351.3849.155f8630086.Coremail.applemax82@163.com>
Message-ID: 

qw writes:

> Hi,I know the steps of building ffmpeg lib on linux. However,
> how to build ffmpeg lib for android and ios platform?

FFmpeg is supposed to build on both Android and iOS just as on Linux; if it does not work, please report back.

Carl Eugen

From cehoyos at ag.or.at  Sun Jul 17 21:52:13 2016
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Sun, 17 Jul 2016 19:52:13 +0000 (UTC)
Subject: [Libav-user] how to build ffmpeg lib for android and ios platform
References: <4af24351.3849.155f8630086.Coremail.applemax82@163.com>
Message-ID: 

Hwangho Kim writes:

> https://github.com/kewlbear/FFmpeg-iOS-build-script

Looks broken. Please report if FFmpeg compilation out-of-the-box for iOS or Android fails.

Carl Eugen

From rizvan.kuliev at ru.axxonsoft.com  Mon Jul 18 16:24:26 2016
From: rizvan.kuliev at ru.axxonsoft.com (rizvan.kuliev)
Date: Mon, 18 Jul 2016 17:24:26 +0300
Subject: [Libav-user] swr_convert_frame returns -1 while converting u8 to s16 audio format
Message-ID: 

Hi all!

I'm trying to use the *swresample* library to resample the audio format from AV_SAMPLE_FMT_U8 to AV_SAMPLE_FMT_S16. My file is:

>ffmpeg -i D:\cam_md_start_1.wav
ffmpeg version 0.11.5 Copyright (c) 2000-2014 the FFmpeg developers built on Sep 26 2014 01:10:12 with gcc 4.9-win32 (GCC) configuration: --enable-memalign-hack --arch=x86_64 --target-os=mingw32 --cross-prefix=x86_64-w64-mingw32- --disable-ffprobe --disable-ffplay --build-suffix=-ovs-3.1 --enable-shared --disable-static --enable-libass --prefix=/home/gzh/sdk/mingw-x86_64/ffmpeg-ovs-3.1 libavutil 54. 7.100 / 54. 7.100 libavcodec 56. 1.100 / 56. 1.100 libavformat 56. 4.101 / 56. 4.101 libavdevice 56. 0.100 / 56. 0.100 libavfilter 5. 1.100 / 5. 1.100 libswscale 3. 0.100 / 3. 0.100 libswresample 1.
1.100 Guessed Channel Layout for Input Stream #0.0 : mono
Input #0, wav, from 'D:\cam_md_start_1.wav':
Duration: 00:00:01.49, bitrate: 176 kb/s
Stream #0:0: Audio: *pcm_u8* ([1][0][0][0] / 0x0001), 22050 Hz, 1 channels, u8, 176 kb/s
At least one output file must be specified
d:\ngp.sdk64\ffmpeg-ovs-3.1\bin>

The code looks like this:

std::unique_ptr m_swr;
m_swr.reset(swr_alloc());
av_opt_set_int(m_swr.get(), "in_channel_layout", frame->channel_layout, 0);
av_opt_set_int(m_swr.get(), "out_channel_layout", frame->channel_layout, 0);
av_opt_set_int(m_swr.get(), "in_sample_rate", frame->sample_rate, 0);
av_opt_set_int(m_swr.get(), "out_sample_rate", frame->sample_rate, 0);
av_opt_set_sample_fmt(m_swr.get(), "in_sample_fmt", (AVSampleFormat)frame->format, 0);
av_opt_set_sample_fmt(m_swr.get(), "out_sample_fmt", AV_SAMPLE_FMT_S16, 0);
swr_init(m_swr.get());

std::unique_ptr converted(av_frame_alloc());
converted->channel_layout = frame->channel_layout;
converted->sample_rate = frame->sample_rate;
converted->format = AV_SAMPLE_FMT_S16;

int err = swr_convert_frame(m_swr.get(), converted.get(), frame.get());

The problem is that swr_convert_frame returns -1 and does not convert the frame. I have attached the input frame as frame.txt and the output as converted.txt. Please, could someone help me understand why swr_convert_frame fails to convert the format?

--
AxxonSoft
Rizvan Kuliev
Programmer
rizvan.kuliev at axxonsoft.com
CONFIDENTIALITY NOTICE
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: cioamlnhbpmblfmb.png
Type: image/png
Size: 4158 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: daedfhgehlcdekgi.png
Type: image/png
Size: 919 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: jomfjhiimogighgh.png
Type: image/png
Size: 806 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: djjkkmlpjkfgbakk.png
Type: image/png
Size: 741 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pmdplakeckjiandi.png
Type: image/png
Size: 597 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: jieimlliankgeocn.png
Type: image/png
Size: 699 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ficadgdeclopnbpa.png Type: image/png Size: 963 bytes Desc: not available URL: -------------- next part -------------- - frame unique_ptr {data=0x00000256dc92d260 {0x00000256dc938c60 "}~~~~~~}}}~~~~~~~~~~~~~~~~~~ЂЂЂЂ~~~~~~~~~~~~~~}}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}}}}}}}}}}}}}}}}}}}}}}}}}}}~~~~~~~~~~~}~~~~~}~~~~~~~~~~~~~ЂЂЂЂЂЂ..., ...} ...} std::unique_ptr & - [ptr] 0x00000256dc92d260 {data=0x00000256dc92d260 {0x00000256dc938c60 "}~~~~~~}}}~~~~~~~~~~~~~~~~~~ЂЂЂЂ~~~~~~~~~~~~~~}}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}}}}}}}}}}}}}}}}}}}}}}}}}}}~~~~~~~~~~~}~~~~~}~~~~~~~~~~~~~ЂЂЂЂЂЂ..., ...} ...} AVFrame * + data 0x00000256dc92d260 {0x00000256dc938c60 "}~~~~~~}}}~~~~~~~~~~~~~~~~~~ЂЂЂЂ~~~~~~~~~~~~~~}}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}}}}}}}}}}}}}}}}}}}}}}}}}}}~~~~~~~~~~~}~~~~~}~~~~~~~~~~~~~ЂЂЂЂЂЂ..., ...} unsigned char *[8] + linesize 0x00000256dc92d2a0 {4096, 0, 0, 0, 0, 0, 0, 0} int[8] + extended_data 0x00000256dc92d260 {0x00000256dc938c60 "}~~~~~~}}}~~~~~~~~~~~~~~~~~~ЂЂЂЂ~~~~~~~~~~~~~~}}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}}}}}}}}}}}}}}}}}}}}}}}}}}}~~~~~~~~~~~}~~~~~}~~~~~~~~~~~~~ЂЂЂЂЂЂ...} unsigned char * * width 0 int height 0 int nb_samples 4096 int format 0 int key_frame 1 int pict_type AV_PICTURE_TYPE_NONE (0) AVPictureType + base 0x00000256dc92d2e0 {0x0000000000000000 , 0x0000000000000000 , 0x0000000000000000 , ...} unsigned char *[8] + sample_aspect_ratio {num=0 den=1 } AVRational pts -9223372036854775808 __int64 pkt_pts 0 __int64 pkt_dts 0 __int64 coded_picture_number 0 int display_picture_number 0 int quality 0 int reference 0 int + qscale_table 0x0000000000000000 char * qstride 0 int qscale_type 0 int + mbskip_table 0x0000000000000000 unsigned char * + motion_val 0x00000256dc92d368 {0x0000000000000000 {???, ???}, 0x0000000000000000 {???, ???}} short[2] *[2] + mb_type 0x0000000000000000 {???} unsigned int * + dct_coeff 0x0000000000000000 {???} short * + ref_index 0x00000256dc92d388 {0x0000000000000000 , 0x0000000000000000 } char *[2] opaque 0x0000000000000000 void * + error 0x00000256dc92d3a0 {0, 0, 0, 0, 0, 0, 0, 0} unsigned __int64[8] type 1 int repeat_pict 0 int interlaced_frame 0 int top_field_first 0 int palette_has_changed 0 int buffer_hints 0 int + pan_scan 0x0000000000000000 AVPanScan * reordered_opaque -9223372036854775808 __int64 hwaccel_picture_private 0x0000000000000000 void * + owner 0x0000000000000000 AVCodecContext * thread_opaque 0x0000000000000000 void * motion_subsample_log2 0 '\0' unsigned char sample_rate 22050 int channel_layout 0 unsigned __int64 + buf 0x00000256dc92d430 {0x0000000000000000 , 0x0000000000000000 , 0x0000000000000000 , ...} AVBufferRef *[8] + extended_buf 0x0000000000000000 {???} AVBufferRef * * nb_extended_buf 0 int + side_data 0x0000000000000000 {???} AVFrameSideData * * nb_side_data 0 int flags 0 int color_range AVCOL_RANGE_UNSPECIFIED (0) AVColorRange color_primaries AVCOL_PRI_UNSPECIFIED (2) AVColorPrimaries color_trc AVCOL_TRC_UNSPECIFIED (2) AVColorTransferCharacteristic colorspace AVCOL_SPC_UNSPECIFIED (2) AVColorSpace chroma_location AVCHROMA_LOC_UNSPECIFIED (0) AVChromaLocation best_effort_timestamp 0 __int64 pkt_pos -1 __int64 pkt_duration 0 __int64 metadata 0x0000000000000000 AVDictionary * decode_error_flags 0 int channels 1 int pkt_size 4096 int + qp_table_buf 0x0000000000000000 AVBufferRef * -------------- next part -------------- + frame unique_ptr {data=0x00000256dc92d260 {0x00000256dc92aec0 
"}~~~~~~}}}~~~~~~~~~~~~~~~~~~ЂЂЂЂ~~~~~~~~~~~~~~}}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}}}}}}}}}}}}}}}}}}}}}}}}}}}~~~~~~~~~~~}~~~~~}~~~~~~~~~~~~~ЂЂЂЂЂЂ..., ...} ...} std::unique_ptr & formatContext identifier "formatContext" is undefined codecContext identifier "codecContext" is undefined m_codecContext identifier "m_codecContext" is undefined - converted unique_ptr {data=0x00000256dc939c80 {0x0000000000000000 , 0x0000000000000000 , 0x0000000000000000 , ...} ...} std::unique_ptr - [ptr] 0x00000256dc939c80 {data=0x00000256dc939c80 {0x0000000000000000 , 0x0000000000000000 , 0x0000000000000000 , ...} ...} AVFrame * + data 0x00000256dc939c80 {0x0000000000000000 , 0x0000000000000000 , 0x0000000000000000 , ...} unsigned char *[8] + linesize 0x00000256dc939cc0 {0, 0, 0, 0, 0, 0, 0, 0} int[8] + extended_data 0x00000256dc939c80 {0x0000000000000000 } unsigned char * * width 0 int height 0 int nb_samples 0 int format 1 int key_frame 1 int pict_type AV_PICTURE_TYPE_NONE (0) AVPictureType + base 0x00000256dc939d00 {0x0000000000000000 , 0x0000000000000000 , 0x0000000000000000 , ...} unsigned char *[8] + sample_aspect_ratio {num=0 den=1 } AVRational pts -9223372036854775808 __int64 pkt_pts -9223372036854775808 __int64 pkt_dts -9223372036854775808 __int64 coded_picture_number 0 int display_picture_number 0 int quality 0 int reference 0 int + qscale_table 0x0000000000000000 char * qstride 0 int qscale_type 0 int + mbskip_table 0x0000000000000000 unsigned char * + motion_val 0x00000256dc939d88 {0x0000000000000000 {???, ???}, 0x0000000000000000 {???, ???}} short[2] *[2] + mb_type 0x0000000000000000 {???} unsigned int * + dct_coeff 0x0000000000000000 {???} short * + ref_index 0x00000256dc939da8 {0x0000000000000000 , 0x0000000000000000 } char *[2] opaque 0x0000000000000000 void * + error 0x00000256dc939dc0 {0, 0, 0, 0, 0, 0, 0, 0} unsigned __int64[8] type 0 int repeat_pict 0 int interlaced_frame 0 int top_field_first 0 int palette_has_changed 0 int buffer_hints 0 int + pan_scan 0x0000000000000000 AVPanScan * reordered_opaque 0 __int64 hwaccel_picture_private 0x0000000000000000 void * + owner 0x0000000000000000 AVCodecContext * thread_opaque 0x0000000000000000 void * motion_subsample_log2 0 '\0' unsigned char sample_rate 22050 int channel_layout 0 unsigned __int64 + buf 0x00000256dc939e50 {0x0000000000000000 , 0x0000000000000000 , 0x0000000000000000 , ...} AVBufferRef *[8] + extended_buf 0x0000000000000000 {???} AVBufferRef * * nb_extended_buf 0 int + side_data 0x0000000000000000 {???} AVFrameSideData * * nb_side_data 0 int flags 0 int color_range AVCOL_RANGE_UNSPECIFIED (0) AVColorRange color_primaries AVCOL_PRI_UNSPECIFIED (2) AVColorPrimaries color_trc AVCOL_TRC_UNSPECIFIED (2) AVColorTransferCharacteristic colorspace AVCOL_SPC_UNSPECIFIED (2) AVColorSpace chroma_location AVCHROMA_LOC_UNSPECIFIED (0) AVChromaLocation best_effort_timestamp -9223372036854775808 __int64 pkt_pos -1 __int64 pkt_duration 0 __int64 metadata 0x0000000000000000 AVDictionary * decode_error_flags 0 int channels 0 int pkt_size -1 int + qp_table_buf 0x0000000000000000 AVBufferRef * From ssshukla26 at gmail.com Mon Jul 18 16:57:57 2016 From: ssshukla26 at gmail.com (ssshukla26) Date: Mon, 18 Jul 2016 07:57:57 -0700 (PDT) Subject: [Libav-user] How to process incomming AVFrame pointer in a decoder ? 
In-Reply-To: <1468594579346-4662357.post@n4.nabble.com> References: <1468594579346-4662357.post@n4.nabble.com> Message-ID: <1468853877237-4662365.post@n4.nabble.com> Hi guyz, What I found out is that the decoder is getting *avpkt->size* as *0 (zero)* under the below function after some 67 frames and hence it closing abruptly. *int qhw_decode_frame (AVCodecContext *avctx, void *pframe, int *got_frame, AVPacket *avpkt);* Can anyone please help. Am so much stuck. -- View this message in context: http://libav-users.943685.n4.nabble.com/Libav-user-How-to-process-incomming-AVFrame-pointer-in-a-decoder-tp4662357p4662365.html Sent from the libav-users mailing list archive at Nabble.com. From cehoyos at ag.or.at Mon Jul 18 17:42:59 2016 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Mon, 18 Jul 2016 15:42:59 +0000 (UTC) Subject: [Libav-user] =?utf-8?q?swr=5Fconvert=5Fframe_returns_-1_while_con?= =?utf-8?q?verting_u8_to_s16_audio_format?= References: Message-ID: writes: > /ffmpeg version 0.11.5 Copyright (c) 2000-2014 the FFmpeg developers This is not supported for a long time. Sorry, Carl Eugen From pepavo at gmail.com Tue Jul 19 22:16:41 2016 From: pepavo at gmail.com (Josef Vosyka) Date: Tue, 19 Jul 2016 22:16:41 +0200 Subject: [Libav-user] ffmpeg command line for video rendered from set of image snapshots Message-ID: What is the correct parameter for ffmpeg cmd line with libopenh264 to make a video from image snapshots. I used for example this: ffmpeg -start_number 1 -framerate 2 -i img_%d.jpg -c:v h264 -profile:v baseline -r 25 -s 208:160 movie.mp4 I've tried: - various jpeg, png sets of images - many other variants of cmd with/without -profile, -r, -s, ... -preset is not recognized ... LOG shows that proper encoder libopenh264 is recognized. The error message is: Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height Always this same error for all possible parameter variations. Appreciate your help, --Josef Vosyka From cehoyos at ag.or.at Wed Jul 20 10:50:02 2016 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Wed, 20 Jul 2016 08:50:02 +0000 (UTC) Subject: [Libav-user] ffmpeg command line for video rendered from set of image snapshots References: Message-ID: Josef Vosyka writes: > ffmpeg -start_number 1 -framerate 2 -i img_%d.jpg > -c:v h264 -profile:v baseline -r 25 -s 208:160 movie.mp4 Complete, uncut console output missing. Generally, this mailing list is for questions using the libraries, ffmpeg-user is for questions regarding command line usage. Carl Eugen From applemax82 at 163.com Wed Jul 20 11:57:51 2016 From: applemax82 at 163.com (qw) Date: Wed, 20 Jul 2016 17:57:51 +0800 (CST) Subject: [Libav-user] One question about timeout in rtmp Message-ID: <4daa5eac.c2e6.15607bd734c.Coremail.applemax82@163.com> Hi, I use native rtmp in ffmpeg 3.1.1 to demuxer rtmp stream from nginx server. avformat_open_input() is used to open input rtmp stream. Native rtmp plugin supports 'timeout' option as shown below: -timeout .D...... Maximum timeout (in seconds) to wait for incoming connections. -1 is infinite. Implies -rtmp_listen 1 (from INT_MIN to INT_MAX) (default -1) I add timeout of 10 seconds to rtmp option set, and call avformat_open_input(). 
AVDictionary *format_opts = NULL;
av_dict_set(&format_opts, "timeout", "10", 0);
avformat_open_input(&fmt_ctx, rtmp_url, NULL, &format_opts);

But avformat_open_input() returns immediately with error messages as follows:

[rtmp @ 0x8ff9a0] Cannot open connection tcp://localhost:1935?listen&listen_timeout=10000
Cannot open input rtmp url: rtmp://localhost:1935/live1/abc!
Fail to call openInputLiveStream()!

The nginx server did not provide a live streaming service, so it was expected that avformat_open_input() would return in 10 seconds with an error message. Why did the function return immediately instead?

Thanks!

Regards

Andrew
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From applemax82 at 163.com  Wed Jul 20 13:11:39 2016
From: applemax82 at 163.com (qw)
Date: Wed, 20 Jul 2016 19:11:39 +0800 (CST)
Subject: [Libav-user] how to deal with network timeout
Message-ID: <3abe85d9.d43b.15608010445.Coremail.applemax82@163.com>

Hi,

I'm writing a program to read and write rtmp streams to a server. Many ffmpeg functions are involved in network operation, such as connection, read and write: avformat_open_input(), avformat_find_stream_info(), av_read_frame(), and av_interleaved_write_frame(). If the network is not good, avformat_open_input() and av_read_frame() will wait indefinitely. How do I deal with this indefinite-waiting issue?

Thanks!

Regards

Andrew
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From applemax82 at 163.com  Wed Jul 20 13:32:21 2016
From: applemax82 at 163.com (qw)
Date: Wed, 20 Jul 2016 19:32:21 +0800 (CST)
Subject: [Libav-user] one question about network blocking issue
Message-ID: <67f1ea6.d7ae.1560813f8f8.Coremail.applemax82@163.com>

Hi,

I'm writing a program to read and write rtmp AV streams. If the network is bad, some ffmpeg functions will block indefinitely. The rtmp application uses the following functions: avformat_open_input(), avformat_find_stream_info(), avformat_alloc_output_context2(), avio_open(), avformat_write_header(), av_read_frame(), av_interleaved_write_frame(), av_write_trailer(), avformat_close_input(), and avformat_free_context(). These functions are also used in the ffmpeg and transcoding binary programs. Which of these functions have the indefinite-block issue, and how can it be avoided?

Thanks!

Regards

Andrew
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ssshukla26 at gmail.com  Wed Jul 20 12:55:22 2016
From: ssshukla26 at gmail.com (ssshukla26)
Date: Wed, 20 Jul 2016 03:55:22 -0700 (PDT)
Subject: [Libav-user] decoding for stream 0 failed ?
Message-ID: <1469012122340-4662372.post@n4.nabble.com>

Hi,

I am implementing a custom decoder for a Qualcomm chipset in ffmpeg and have a working Linux application for the same. I am able to run the decoder, but it exits with an error as shown below.

Note: The output format is NV12 tiled; we have already added support for nv12 tile to yuv420p conversion in libswscale.
0001-NV12-Tile-pixel-format-support-added.patch # *ffmpeg -f h264 -c:v h264_qhw -i no_mans_sky_1080p_10sec.h264 -pix_fmt nv12_tiled -s 1920x1080 -r 30 out_1080p.yuv* ffmpeg version N-80917-gc5096bc Copyright (c) 2000-2016 the FFmpeg developers built with gcc 4.9.2 (crosstool-NG linaro-1.13.1-4.9-2014.09 - Linaro GCC 4.9-2014.09) 20140904 (prerelease) configuration: --prefix=comark_SBC_ffmpeg/ --enable-shared --enable-nonfree --enable-pic --enable-gpl --enable-cross-compile --cross-prefix=arm-linux-gnueabihf- --arch=arm --target-os=linux --extra-cflags=-I../sdl/source/SDL-1.2.15/comark_SBC_SDL//include/SDL/ --extra-ldflags='-L../sdl/source/SDL-1.2.15/comark_SBC_SDL//lib/ -lSDL' libavutil 55. 28.100 / 55. 28.100 libavcodec 57. 48.101 / 57. 48.101 libavformat 57. 41.100 / 57. 41.100 libavdevice 57. 0.102 / 57. 0.102 libavfilter 6. 47.100 / 6. 47.100 libswscale 4. 1.100 / 4. 1.100 libswresample 2. 1.100 / 2. 1.100 libpostproc 54. 0.100 / 54. 0.100 Initialized Params @ 328 in init_params : width = 1920 & height = 1080 Initialized Decoder @ 680 in vdec_set_buffer_requirement : Input Buffer mincount=1 maxcount=32 actualcount=3 ,buffer_size=2097152 ,alignment=2048 @ 703 in vdec_set_buffer_requirement : Output Buffer mincount=6 maxcount=32 actualcount=10 ,buffer_size=3137536 ,alignment=8192 @ 910 in vdec_alloc_h264_mv : Entered vdec_alloc_h264_mv act_width: 1920, act_height: 1080, size: 5570560, alignment 8192 Start Decoder Buffers freed for reconfiguration @ 910 in vdec_alloc_h264_mv : Entered vdec_alloc_h264_mv act_width: 1920, act_height: 1080, size: 5570560, alignment 8192 Buffers allocated for reconfiguration *[h264 @ 0x4a260] decoding for stream 0 failed* Stop Decoder De-Initialized Decoder De-Initialized Params *[h264 @ 0x4a260] Could not find codec parameters for stream 0 (Video: h264 (High), none, 1920x1080): unspecified pixel format *Consider increasing the value for the 'analyzeduration' and 'probesize' options Input #0, h264, from 'no_mans_sky_1080p_10sec.h264': Duration: N/A, bitrate: N/A Stream #0:0: Video: h264 (High), none, 1920x1080, 12.50 fps, 25 tbr, 1200k tbn, 25 tbc File 'out_1080p.yuv' already exists. Overwrite ? [y/N] n Not overwriting - exiting Please help. -- View this message in context: http://libav-users.943685.n4.nabble.com/Libav-user-decoding-for-stream-0-failed-tp4662372.html Sent from the libav-users mailing list archive at Nabble.com. From pepavo at gmail.com Wed Jul 20 15:18:48 2016 From: pepavo at gmail.com (Josef Vosyka) Date: Wed, 20 Jul 2016 15:18:48 +0200 Subject: [Libav-user] ffmpeg command line for video rendered from set of image snapshots In-Reply-To: References: Message-ID: >> ffmpeg -start_number 1 -framerate 2 -i img_%d.jpg >> -c:v h264 -profile:v baseline -r 25 -s 208:160 movie.mp4 > > Complete, uncut console output missing. Working syntax: string img_mask = imagesRootPath + "/img_%7d.jpg"; cmd_str = string_format("ffmpeg -start_number 1 -framerate %d -i %s -b:v 132k -r 25 %s", framerate, img_mask.c_str(), out.c_str()); > Generally, this mailing list is for questions using the > libraries, ffmpeg-user is for questions regarding > command line usage. Sorry about that. Will use correct forum next time. From puneet.cse.iitd at gmail.com Wed Jul 20 22:46:49 2016 From: puneet.cse.iitd at gmail.com (Puneet Kapoor) Date: Thu, 21 Jul 2016 02:16:49 +0530 Subject: [Libav-user] Fast parsing of H.264 video stream using libavcodec Message-ID: Hi, I am using Libav* libraries to decode the frames from the byte stream data I am receiving from another thread. 
I have been able to successfully parse the byte stream and render video correctly using the OpenCV API. *Basic flow in my code using libav is like:* *init() { * *avcodec_find_decoder(AV_CODEC_ID_H264) ;* *avcodec_alloc_context3(codec);* *avcodec_open2(codec_context, codec, NULL)* *picture = av_frame_alloc();pictureBGR = av_frame_alloc();parser = av_parser_init(AV_CODEC_ID_H264);codec_context->thread_count = 2; // tried to increase threads}readFrame() {int len = av_parser_parse2(parser, codec_context, &data, &size, &buffer[0], buffer.size(), 0, 0, AV_NOPTS_VALUE); ...int len_dec = avcodec_decode_video2(codec_context, picture, &got_picture, &pkt); // takes most time (~77%)...iResult = sws_scale(...)* *}* The problem I am facing is related to performance of my decoding code. My program manages to do 10 frames per second (fps) for a 720p video. When I save the same video stream as a file and use OpenCV code to parse it, it is able to parse the same video at around 24 fps. I did basic profiling of my code and found that most of the time is going in decoding call "*avcodec_decode_video2()*" (~77% time) and SWS rescaling call "*sws_scale()*" (~18% time). I haven't profiled the OpenCV code, but one difference I noticed is that OpenCV code uses multiple cores from *top* and *time* command statistics(shared below). And, my code runs single threaded. My question here is: How can I improve the performance of my code ? By making it multithreaded ? Are there any library specific tweaks that can help ? *Time command statistics* *For OpenCV code:* real 0m17.496s user 0m48.030s sys 0m2.410s *For My code using Libav library:* real 0m39.478s user 0m39.380s sys 0m0.100s Thanks Puneet -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggarra13 at gmail.com Wed Jul 20 23:44:21 2016 From: ggarra13 at gmail.com (Gonzalo) Date: Wed, 20 Jul 2016 18:44:21 -0300 Subject: [Libav-user] Fast parsing of H.264 video stream using libavcodec In-Reply-To: References: Message-ID: <578FF0B5.5020100@gmail.com> El 20/07/16 a las 17:46, Puneet Kapoor escribió: > > By making it multithreaded ? > Yes. -- Gonzalo Garramuño ggarra13 at gmail.com From puneet.cse.iitd at gmail.com Wed Jul 20 23:58:20 2016 From: puneet.cse.iitd at gmail.com (Puneet Kapoor) Date: Thu, 21 Jul 2016 03:28:20 +0530 Subject: [Libav-user] Fast parsing of H.264 video stream using libavcodec In-Reply-To: <578FF0B5.5020100@gmail.com> References: <578FF0B5.5020100@gmail.com> Message-ID: I am new to H264 decoding and using this library, so pardon me if my questions seem too naive. Isn't the libav function avcodec_decode_video2() internally multi-threaded to be able to use multiple cores itself? Do we need to enable it somehow ? If you could share some examples of multi-threaded libav user code, it would be helpful. Thanks On Jul 21, 2016 3:14 AM, "Gonzalo" wrote: > > > El 20/07/16 a las 17:46, Puneet Kapoor escribió: > >> >> By making it multithreaded ? >> >> Yes. > > -- > Gonzalo Garramuño > ggarra13 at gmail.com > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ggarra13 at gmail.com Thu Jul 21 01:17:02 2016 From: ggarra13 at gmail.com (Gonzalo) Date: Wed, 20 Jul 2016 20:17:02 -0300 Subject: [Libav-user] Fast parsing of H.264 video stream using libavcodec In-Reply-To: References: <578FF0B5.5020100@gmail.com> Message-ID: <5790066E.4040105@gmail.com> El 20/07/16 a las 18:58, Puneet Kapoor escribió: > > If you could share some examples of multi-threaded libav user code, it > would be helpful. > > Thanks > Google for "dranger ffmpeg". The original page is a tutorial page, but it is a tad out of date. Some other repositories contain updated code. Some demuxers may support multithreaded operation, albeit for most operations you gain little. For that, you can use: AVDictionary* info = NULL; av_dict_set(&info, "threads", "auto", 0); avcodec_open2( video_ctx, video_codec, &info ); -- Gonzalo Garramuño ggarra13 at gmail.com From puneet.cse.iitd at gmail.com Thu Jul 21 10:32:12 2016 From: puneet.cse.iitd at gmail.com (Puneet Kapoor) Date: Thu, 21 Jul 2016 14:02:12 +0530 Subject: [Libav-user] Fast parsing of H.264 video stream using libavcodec In-Reply-To: <5790066E.4040105@gmail.com> References: <578FF0B5.5020100@gmail.com> <5790066E.4040105@gmail.com> Message-ID: Thanks a lot for the tutorial link and suggestion. I tried your suggestion of adding AVDictionary and it worked very well. I got good performance improvement, basically from 39 sec to 14 secs for the same video clip so pretty impressive ! Also, found the problem with my code was that the line "*codec_context->thread_count = 2;*" had to be called before opening the codec "*avcodec_open2()*", so when I move it up it works too. Thats how they have done it in OpenCV code as well. But I find your way much better as it automatically decides the threads based on the machine. Cheers On Thu, Jul 21, 2016 at 4:47 AM, Gonzalo wrote: > > > El 20/07/16 a las 18:58, Puneet Kapoor escribió: > >> >> If you could share some examples of multi-threaded libav user code, it >> would be helpful. >> >> Thanks >> >> Google for "dranger ffmpeg". The original page is a tutorial page, but > it is a tad out of date. Some other repositories contain updated code. > > Some demuxers may support multithreaded operation, albeit for most > operations you gain little. For that, you can use: > > AVDictionary* info = NULL; > av_dict_set(&info, "threads", "auto", 0); > > avcodec_open2( video_ctx, video_codec, &info ); > > > -- > Gonzalo Garramuño > ggarra13 at gmail.com > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anatas.torsten at gmx.de Mon Jul 25 12:49:16 2016 From: anatas.torsten at gmx.de (torte lehmann) Date: Mon, 25 Jul 2016 11:49:16 +0200 Subject: [Libav-user] Question to the "new" libav api Message-ID: An HTML attachment was scrubbed... URL: From straycodemonkey at gmail.com Tue Jul 26 05:58:16 2016 From: straycodemonkey at gmail.com (straycodemonkey) Date: Mon, 25 Jul 2016 19:58:16 -0700 (PDT) Subject: [Libav-user] Question to the "new" libav api In-Reply-To: References: Message-ID: <1469501896990-4662380.post@n4.nabble.com> I had this trouble also with the new API. In the end I had to revert my upgrade changes and wait for the API to become more stable. 
--
View this message in context: http://libav-users.943685.n4.nabble.com/Libav-user-Question-to-the-new-libav-api-tp4662379p4662380.html
Sent from the libav-users mailing list archive at Nabble.com.

From wadkes93 at gmail.com  Wed Jul 27 13:47:41 2016
From: wadkes93 at gmail.com (Ankush Wadke)
Date: Wed, 27 Jul 2016 16:17:41 +0530
Subject: [Libav-user] 4k video decoding planner data
Message-ID: 

Hi everybody,

I am stuck with a problem and I am not able to figure out what exactly is wrong. I am decoding a 4K video which gives me a YUV420P output frame. In order to convert it to UYVY I wrote a piece of code, which works fine for the first frame; then the second frame has a small green line, and the third frame has the colors totally displaced. I get correct frames sometimes, but I am not able to figure out what is wrong with the other frames. It would be really helpful if someone could point me to something that would help.

Here's the code I am using for the conversion:

void Convert_yuv420_to_yuv422(const AVFrame &videoBuffer, uint8_t * opBuffer, const int const &Width, const int const &Height)
{
    uint8_t *buffer_uyvy_1strow = opBuffer,
            *buffer_uyvy_2ndrow = opBuffer+(2*Width);

    uint8_t *y_1strow = videoBuffer.data[0],
            *y_2ndrow = videoBuffer.data[0]+Width;

    uint8_t * u_ptr = videoBuffer.data[1];
    uint8_t * v_ptr = videoBuffer.data[2];

    for (unsigned int i_conv_ht =0; i_conv_ht < Height/2; i_conv_ht++) {
        for (unsigned int i_conv_wt =0; i_conv_wt < Width/2; i_conv_wt++) {
            *buffer_uyvy_1strow++ = *u_ptr;
            *buffer_uyvy_1strow++ = *y_1strow++;
            *buffer_uyvy_1strow++ = *v_ptr;
            *buffer_uyvy_1strow++ = *y_1strow++;

            *buffer_uyvy_2ndrow++ = *u_ptr;
            *buffer_uyvy_2ndrow++ = *y_2ndrow++;
            *buffer_uyvy_2ndrow++ = *v_ptr;
            *buffer_uyvy_2ndrow++ = *y_2ndrow++;

            ++u_ptr;
            ++v_ptr;
        }
        y_1strow += Width;
        y_2ndrow += Width;

        buffer_uyvy_1strow += FFALIGN((2*Width),1);
        buffer_uyvy_2ndrow += FFALIGN((2*Width),1);
    }
}

Regards,
Ankush

From Harry at gps-laptimer.de  Wed Jul 27 17:21:01 2016
From: Harry at gps-laptimer.de (Harry)
Date: Wed, 27 Jul 2016 15:21:01 +0100
Subject: [Libav-user] 4k video decoding planner data
In-Reply-To: 
References: 
Message-ID: 

Green borders are a wrong offset / starting address for the UV planes.

Harry

Sent from my smartphone

> On 27 Jul 2016, at 11:47, Ankush Wadke wrote:
>
> Hi everybody,
> I am stuck with a problem to which I am not able to figure out what exactly is wrong.
> I am decoding a 4k video which gives me a YUV420P output frame. In order to convert it to UYVY I wrote a piece of code. which works fine for first frame, then second frame has a small green line and the third frame has the colors totally displaced. I am getting the correct frames sometimes but i am not able to figure out what is wrong with the other frames.
> It would be really helpful if someone would point me to something tat would help.
>
> hers the code i am using for conversion,
>
> void Convert_yuv420_to_yuv422(const AVFrame &videoBuffer, uint8_t * opBuffer, const int const &Width, const int const &Height)
> {
> uint8_t *buffer_uyvy_1strow = opBuffer,
> *buffer_uyvy_2ndrow = opBuffer+(2*Width);
>
> uint8_t *y_1strow = videoBuffer.data[0],
> *y_2ndrow = videoBuffer.data[0]+Width;
>
> uint8_t * u_ptr = videoBuffer.data[1];
> uint8_t * v_ptr = videoBuffer.data[2];
>
> for (unsigned int i_conv_ht =0; i_conv_ht < Height/2; i_conv_ht++) {
> for (unsigned int i_conv_wt =0; i_conv_wt < Width/2; i_conv_wt++) {
> *buffer_uyvy_1strow++ = *u_ptr;
> *buffer_uyvy_1strow++ = *y_1strow++;
> *buffer_uyvy_1strow++ = *v_ptr;
> *buffer_uyvy_1strow++ = *y_1strow++;
>
> *buffer_uyvy_2ndrow++ = *u_ptr;
> *buffer_uyvy_2ndrow++ = *y_2ndrow++;
> *buffer_uyvy_2ndrow++ = *v_ptr;
> *buffer_uyvy_2ndrow++ = *y_2ndrow++;
>
> ++u_ptr;
> ++v_ptr;
> }
> y_1strow += Width;
> y_2ndrow += Width;
>
> buffer_uyvy_1strow += FFALIGN((2*Width),1);
> buffer_uyvy_2ndrow += FFALIGN((2*Width),1);
> }
> }
>
> Regards,
> Ankush
>
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From wadkes93 at gmail.com Thu Jul 28 06:05:22 2016 From: wadkes93 at gmail.com (Ankush Wadke) Date: Thu, 28 Jul 2016 08:35:22 +0530 Subject: [Libav-user] 4k video decoding planner data In-Reply-To: References: Message-ID: Thanks for the reply Harry. And yes it has something to do with the offset of U and V planes but it works perfectly fine for a FHD stream i.e. 1920x1080 but dosent seem to for a 3840x2160 stream. Does planer data require alignment to be considered? On Wed, Jul 27, 2016 at 7:51 PM, Harry wrote: > Green borders are a wrong offset / starting address for the UV planes. > > Harry > > Sent from my smartphone > > On 27 Jul 2016, at 11:47, Ankush Wadke wrote: > > Hi everybody, > I am stuck with a problem to which I am not able to figure out what > exactly is wrong. > I am decoding a 4k video which gives me a YUV420P output frame. In > order to convert it to UYVY I wrote a piece of code. which works fine for > first frame, then second frame has a small green line and the third frame > has the colors totally displaced. I am getting the correct frames sometimes > but i am not able to figure out what is wrong with the other frames. > It would be really helpful if someone would point me to something tat > would help. > > hers the code i am using for conversion, > > void Convert_yuv420_to_yuv422(const AVFrame &videoBuffer, uint8_t * > opBuffer, const int const &Width, const int const &Height) > { > uint8_t *buffer_uyvy_1strow = opBuffer, > *buffer_uyvy_2ndrow = opBuffer+(2*Width); > > uint8_t *y_1strow = videoBuffer.data[0], > *y_2ndrow = videoBuffer.data[0]+Width; > > uint8_t * u_ptr = videoBuffer.data[1]; > uint8_t * v_ptr = videoBuffer.data[2]; > > for (unsigned int i_conv_ht =0; i_conv_ht for (unsigned int i_conv_wt =0; i_conv_wt *buffer_uyvy_1strow++ = *u_ptr; > *buffer_uyvy_1strow++ = *y_1strow++; > *buffer_uyvy_1strow++ = *v_ptr; > *buffer_uyvy_1strow++ = *y_1strow++; > > *buffer_uyvy_2ndrow++ = *u_ptr; > *buffer_uyvy_2ndrow++ = *y_2ndrow++; > *buffer_uyvy_2ndrow++ = *v_ptr; > *buffer_uyvy_2ndrow++ = *y_2ndrow++; > > ++u_ptr; > ++v_ptr; > } > y_1strow += Width; > y_2ndrow += Width; > > buffer_uyvy_1strow += FFALIGN((2*Width),1); > buffer_uyvy_2ndrow += FFALIGN((2*Width),1); > } > } > > Regards, > Ankush > > - > - > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > > -- *Ankush Wadke* Mob: 9673898604 Email: wadkes93 at gmail.com - - in.linkedin.com/pub/ankush-wadke/36/868/981/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From halfelf.ronin at gmail.com Thu Jul 28 13:46:45 2016 From: halfelf.ronin at gmail.com (Shu Wang) Date: Thu, 28 Jul 2016 18:46:45 +0800 Subject: [Libav-user] How ffmpeg insert/drop frames when applying pts filter? Message-ID: Hello everyone, I'd like to use libav* to speed up/slow down videos. 
The same function in command line: ffmpeg -i somevideo.mp4 -vf 'setpts=3*PTS' frames_%04d.jpg ffmpeg -i somevideo.mp4 -vf 'setpts=0.1*PTS' frames_%04d.jpg Since the "-vf" option means "video filter", I started testing with examples/filtering_video.c, and configured my filter graph with the same filter str like above : // I just modified the codes from examples/filtering_video.c avfilter_graph_parse_ptr(filter_graph, "setpts=0.1*PTS", &inputs, &outputs, NULL); avfilter_graph_config(filter_graph, NULL); Turns out this will only set the pts field of AVFrame. However the `ffmpeg` command-line tool will insert/drop some frames to keep fps unchanged. Maybe it just drop some frames evenly when speeding up, but how does it generate new frames when slow down? Much appreciated if anyone could show me where in the source ffmpeg inserts/drop those frames, or how to do it on my own. -- Regards, Shu Wang -------------- next part -------------- An HTML attachment was scrubbed... URL: From donmoir at comcast.net Thu Jul 28 18:59:56 2016 From: donmoir at comcast.net (Don Moir) Date: Thu, 28 Jul 2016 11:59:56 -0400 Subject: [Libav-user] 4k video decoding planner data In-Reply-To: References: Message-ID: <6498faa2-e320-efbf-806e-9c99b4bdf562@comcast.net> >y_1strow += Width; >y_2ndrow += Width; Check this to see if it is your problem. Appears you should be using the pitch for this and not the width. Normally width is width of video and pitch is the number of bytes per line and can be greater than width. On 7/27/2016 11:05 PM, Ankush Wadke wrote: > Thanks for the reply Harry. > And yes it has something to do with the offset of U and V planes but > it works perfectly fine for a FHD stream i.e. 1920x1080 but dosent > seem to for a 3840x2160 stream. Does planer data require alignment to > be considered? > > On Wed, Jul 27, 2016 at 7:51 PM, Harry > wrote: > > Green borders are a wrong offset / starting address for the UV planes. > > Harry > > Sent from my smartphone > > On 27 Jul 2016, at 11:47, Ankush Wadke > wrote: > >> Hi everybody, >> I am stuck with a problem to which I am not able to figure >> out what exactly is wrong. >> I am decoding a 4k video which gives me a YUV420P output >> frame. In order to convert it to UYVY I wrote a piece of code. >> which works fine for first frame, then second frame has a small >> green line and the third frame has the colors totally displaced. >> I am getting the correct frames sometimes but i am not able to >> figure out what is wrong with the other frames. >> It would be really helpful if someone would point me to >> something tat would help. 
>> >> hers the code i am using for conversion, >> >> void Convert_yuv420_to_yuv422(const AVFrame &videoBuffer, uint8_t >> * opBuffer, const int const &Width, const int const &Height) >> { >> uint8_t *buffer_uyvy_1strow = opBuffer, >> *buffer_uyvy_2ndrow = opBuffer+(2*Width); >> >> uint8_t*y_1strow = videoBuffer.data[0], >> *y_2ndrow = videoBuffer.data[0]+Width; >> >> uint8_t * u_ptr = videoBuffer.data[1]; >> uint8_t * v_ptr = videoBuffer.data[2]; >> >> for (unsigned int i_conv_ht =0; i_conv_ht> for (unsigned int i_conv_wt =0; i_conv_wt> *buffer_uyvy_1strow++ = *u_ptr; >> *buffer_uyvy_1strow++ = *y_1strow++; >> *buffer_uyvy_1strow++ = *v_ptr; >> *buffer_uyvy_1strow++ = *y_1strow++; >> >> *buffer_uyvy_2ndrow++ = *u_ptr; >> *buffer_uyvy_2ndrow++ = *y_2ndrow++; >> *buffer_uyvy_2ndrow++ = *v_ptr; >> *buffer_uyvy_2ndrow++ = *y_2ndrow++; >> >> ++u_ptr; >> ++v_ptr; >> } >> y_1strow += Width; >> y_2ndrow += Width; >> >> buffer_uyvy_1strow += FFALIGN((2*Width),1); >> buffer_uyvy_2ndrow += FFALIGN((2*Width),1); >> } >> } >> >> Regards, >> Ankush >> >> * >> >> * >> >> >> _______________________________________________ >> Libav-user mailing list >> Libav-user at ffmpeg.org >> http://ffmpeg.org/mailman/listinfo/libav-user > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > > > > > -- > *Ankush Wadke > > * > Mob: 9673898604 > Email: wadkes93 at gmail.com > > * > > * in.linkedin.com/pub/ankush-wadke/36/868/981/ > > > > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user --- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus -------------- next part -------------- An HTML attachment was scrubbed... URL: From applemax82 at 163.com Sun Jul 31 13:25:01 2016 From: applemax82 at 163.com (qw) Date: Sun, 31 Jul 2016 18:25:01 +0800 (CST) Subject: [Libav-user] questions about indefinite waiting for incoming rtmp stream Message-ID: <4d29d5e7.425a.156407c46d5.Coremail.applemax82@163.com> Hi, I use avformat_open_input(), and avformat_find_stream_info() to open rtmp stream, and use av_read_frame() to read av packet. If there is no incoming rtmp stream, avformat_open_input() and avformat_find_stream_info() will wait indefinitely. If incoming rtmp stream is terminated in the middle, av_read_frame() will wait indefinitely. Is there some method for ffmpeg lib to effective handle this sort of indefinite waiting in case of network issue? Thanks! Regards Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From gudenaua at gmail.com Thu Jul 28 19:42:43 2016 From: gudenaua at gmail.com (gudenau .) Date: Thu, 28 Jul 2016 11:42:43 -0500 Subject: [Libav-user] Encode and stream? Message-ID: I have a buffer that contains an image that is 640x480 that will be updated up to 60 times a second. I would like to encode each frame of this and stream it to a second computer, how could I do this with libav in such a way that there is low latency and a lossless image? Could I do this in such a way that it does not need to send updates at 60 hz when it is not needed? -------------- next part -------------- An HTML attachment was scrubbed... URL:
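One possible starting point for the encoding side of the question above, assuming the build includes libx264 (the muxing/streaming step, e.g. sending the packets over RTP or MPEG-TS with libavformat, is separate and not shown here); all values are illustrative:

#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>

/* Open a lossless, low-latency H.264 encoder for a 640x480 input.
   Assumes avcodec_register_all() has already been called.
   If the source buffer is RGB, the "libx264rgb" encoder avoids the
   colour-space conversion entirely. */
static AVCodecContext *open_lossless_encoder(void)
{
    AVCodec *codec = avcodec_find_encoder_by_name("libx264");
    if (!codec)
        return NULL;

    AVCodecContext *enc = avcodec_alloc_context3(codec);
    enc->width     = 640;
    enc->height    = 480;
    enc->pix_fmt   = AV_PIX_FMT_YUV444P;   /* no chroma subsampling */
    enc->time_base = (AVRational){1, 60};  /* up to 60 updates per second */

    /* libx264 private options: crf=0 is lossless, zerolatency disables frame buffering */
    av_opt_set(enc->priv_data, "preset", "ultrafast", 0);
    av_opt_set(enc->priv_data, "tune",   "zerolatency", 0);
    av_opt_set(enc->priv_data, "crf",    "0", 0);

    if (avcodec_open2(enc, codec, NULL) < 0) {
        avcodec_free_context(&enc);
        return NULL;
    }
    return enc;
}

Because each submitted AVFrame carries its own pts, frames only have to be sent to the encoder when the buffer actually changes, so the stream does not need to run at a constant 60 Hz.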