[Ffmpeg-devel-irc] ffmpeg.log.20171029

burek burek021 at gmail.com
Mon Oct 30 03:05:01 EET 2017


[03:45:31 CET] <dreamp> Hi there, how are you? I'm trying to build a simple decoder using libav, but I couldn't do it without using deprecated methods to consume the AVPacket I get from av_read_frame
[03:45:55 CET] <dreamp> Here's what I'm trying to build
[03:45:57 CET] <dreamp> https://gist.github.com/leandromoreira/818962406b4cc53f44fbd7ab6422ad4b#file-basic_decoder-c-L31
[03:46:56 CET] <dreamp> I used to feed packets by looping with: while (av_read_frame(pFormatCtx, &packet) >= 0) { avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet); }
[03:47:55 CET] <dreamp> av_read_frame fills a packet from the AVFormatContext, and then I call avcodec_decode_video2 to fill my frame... but that is now deprecated
[03:48:19 CET] <dreamp> and I tried to use the new ways: avcodec_send_packet and avcodec_receive_frame
[03:48:28 CET] <dreamp> but I couldn't :(
[03:49:47 CET] <dreamp> It keeps failing on receiving frames; in fact, I don't see how (in this new way) the AVFormatContext is connected to the packets being fed in.
[04:01:27 CET] <c_14> https://git.videolan.org/?p=ffmpeg.git;a=blob;f=doc/examples/decode_video.c
[04:01:51 CET] <c_14> You can't just feed it an empty packet, that won't work
[04:01:55 CET] <c_14> You need to parse the packet first
[04:02:06 CET] <c_14> (check the call for av_parser_parse2)
[04:07:02 CET] <c_14> actually
[04:07:09 CET] <c_14> can't you just use av_read_frame to fill the packet?
[04:08:09 CET] <c_14> I don't see any deprecation guards on that
[04:08:25 CET] <c_14> And it'll fill the pkt before the call to send_packet
[04:08:48 CET] <c_14> ^dreamp
[04:09:28 CET] <dreamp> c_14: thanks I'll take a look
[04:10:22 CET] <c_14> And if you get EAGAIN you have to send a new packet with send_packet
[04:10:28 CET] <c_14> erroring out there isn't the correct action
[04:10:37 CET] <c_14> Since some decoders will buffer packets etc
[04:11:35 CET] <dreamp> c_14: I saw this example earlier =/ I'm using an AVFormatContext, and this example seems to pass the data from the file itself to `av_parser_parse2`
[04:11:58 CET] <c_14> yeah, which is why I changed my mind and mentioned just using av_read_frame to fill the packet
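A minimal sketch of the loop c_14 describes, reusing dreamp's variable names from the gist; video_stream_index and the error handling here are illustrative assumptions, not dreamp's actual code:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* av_read_frame() fills the packet; avcodec_send_packet() feeds it to the
     * decoder; avcodec_receive_frame() is then drained until it returns
     * AVERROR(EAGAIN), which means "send the next packet", not failure. */
    static int decode_loop(AVFormatContext *pFormatCtx, AVCodecContext *pCodecCtx,
                           int video_stream_index /* assumed found earlier */)
    {
        AVPacket *packet = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();
        int ret = 0;

        while (av_read_frame(pFormatCtx, packet) >= 0) {
            if (packet->stream_index == video_stream_index) {
                ret = avcodec_send_packet(pCodecCtx, packet);
                if (ret < 0)
                    break;                      /* error feeding the decoder */
                while (ret >= 0) {
                    ret = avcodec_receive_frame(pCodecCtx, frame);
                    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                        ret = 0;                /* decoder wants more input */
                        break;
                    }
                    if (ret < 0)
                        break;                  /* genuine decoding error */
                    /* ... use frame here ... */
                }
            }
            av_packet_unref(packet);
            if (ret < 0)
                break;
        }
        av_frame_free(&frame);
        av_packet_free(&packet);
        return ret;
    }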
[04:12:59 CET] <dreamp> c_14: thanks a lot =)
[04:13:22 CET] <dreamp> I made a smaller version which loops using `av_read_frame`
[04:13:23 CET] <dreamp> https://gist.github.com/leandromoreira/818962406b4cc53f44fbd7ab6422ad4b
[04:13:45 CET] <dreamp> now I'm getting
[04:13:49 CET] <dreamp> [h264 @ 0x7fbe5d802c00] No start code is found.
[04:13:56 CET] <dreamp> [h264 @ 0x7fbe5d802c00] Error splitting the input into NAL units.
[04:13:59 CET] <dreamp> error sending packet
[04:14:08 CET] <c_14> can you open the video with ffmpeg/ffplay or something?
[04:14:25 CET] <dreamp> Yes, I downloaded the Big Buck Bunny
[04:14:30 CET] <dreamp> sample to work on.
[04:16:02 CET] <c_14> Is the call to avcodec_open2 returning success or error?
[04:16:44 CET] <dreamp> it returns success ;) I'll paste the full version
[04:16:50 CET] <dreamp> (with all checks)
[04:17:30 CET] <dreamp> https://gist.github.com/leandromoreira/131cfd8de665404b1cff6442d150ffe1
[04:18:14 CET] <dreamp> the only output from this code is the error I just pasted
[04:19:38 CET] <dreamp> I also tested with a small video I made (using ffmpeg itself, from 3 PNGs)
[04:20:44 CET] <dreamp> and the error is the same. I think it needs a step before av_read_frame, but I couldn't find it in the documentation; I'm going to check the source code.
[04:29:55 CET] <dreamp> c_14: I think I fixed it =D thank you very much!
[04:30:14 CET] <dreamp> It seems it was something very silly
[04:30:25 CET] <c_14> what was it?
[04:30:29 CET] <dreamp> response = avcodec_receive_frame(pCodecContext, pPacket);
[04:30:49 CET] <dreamp> I was passing the packet instead of the frame; I should be receiving frames from the context, not handing it the packet again...
[04:31:00 CET] <dreamp> now at least it runs until the end
[04:31:09 CET] <dreamp> with no major issues
[06:12:28 CET] <_Vi> Why does FFmpeg still transcode `vp9 (native) -> vp9 (libvpx-vp9)` when I specify `-c copy` (or `-c:v copy`)?
[06:16:47 CET] <_Vi> Answer: multiple input files, but the `-i` option was missing before the second one.
[06:28:19 CET] <_Vi> Could a `-o` option be added to FFmpeg, meaning "specify the output file here and overwrite only this file without confirmation"? `-y` and a missed `-i` before an input do not play well together.
[06:40:34 CET] <te-te_> i want to make an audio streaming system, which gets WAV audio from an embedded device and sends it to an android app.
[06:41:30 CET] <te-te_> can i use ffmpeg for that?
[06:43:45 CET] <te-te_> i hope the system can play the audio while the embedded device is still sampling it.
[06:53:36 CET] <c3r1c3-Win> te-te_: Why not use a proper audio streaming server? Something like icecast?
[07:29:26 CET] <pupp> I tried to merge a video with audio, but now the audio is 6 seconds longer. How can I extend the video with 6 seconds of white screen?
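Nobody answered in the channel; one hedged sketch of a possible approach, assuming the video is 1280x720 at 25 fps (the generated white clip must match the video's size, frame rate, and pixel format for the concat filter to accept it):

    ffmpeg -i video.mp4 -i audio.wav \
           -f lavfi -i "color=c=white:s=1280x720:r=25:d=6" \
           -filter_complex "[2:v]format=yuv420p[white];[0:v][white]concat=n=2:v=1:a=0[v]" \
           -map "[v]" -map 1:a out.mp4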
[08:46:07 CET] <adgtl> Hello folks
[08:46:43 CET] <adgtl> I have 2 files: the first is .mp4 (h264) and the other is .opus... I want to create a combined mp4 file out of them... so I am just doing
[08:48:09 CET] <adgtl> ffmpeg -i input.mp4 -i input.opus output.mp4
[08:49:11 CET] <adgtl> here is the log https://gist.github.com/352b571d8ec66a1a5ae7e3c9a4c4fa97
[08:49:42 CET] <adgtl> please let me know if this is the right way
[08:49:53 CET] <adgtl> thank you
[08:50:15 CET] <utack> and you need "copy" as the codec for both streams
[08:50:28 CET] <utack> -c:a copy -c:v copy
[08:51:41 CET] <utack> but seriously, the mkvmerge GUI would be easier unless it absolutely needs to be mp4... drag and drop and done
[08:53:46 CET] <adgtl> utack hmm
[08:54:03 CET] <adgtl> utack what are :a and :v?
[08:54:18 CET] <utack> audio and video
[08:54:31 CET] <utack> you want to copy both from the source files, without re-encoding, i guess
[08:54:35 CET] <adgtl> utack do you mean       ffmpeg -i input.mp4 -i input.opus  -c:a copy -c:v copy -strict -2 output.mp4?
[08:54:52 CET] <adgtl> utack right... could you confirm if the above command is right
[08:55:23 CET] <utack> i have no idea..i'd need to test it. does it do the right thing?
[08:58:23 CET] <adgtl> utack I just tried this
[08:58:24 CET] <adgtl> ffmpeg -i output.mp4 -i output.opus -map 0:v -map 1:a -codec copy -strict -2 zing444.mp4
[08:58:44 CET] <adgtl> but I'm missing audio there too... and the video goes black in between
[08:59:13 CET] <adgtl> but ffmpeg -i input.mp4 -i input.opus -strict -2 output.mp4 seems fine
[08:59:35 CET] <utack> yeah maybe you need to map... no idea about that. i was just pointing out that in any case you need to tell it to copy the streams instead of re-encoding
[09:00:08 CET] <adgtl> utack here is the log when I do use codec copy https://gist.github.com/anildigital/725d35a07767dff992de00cda4a04b47
[09:00:22 CET] <adgtl> okay
[09:00:51 CET] <utack> looks correct, what was wrong with the result?
[09:00:56 CET] <utack>   Stream #0:0 -> #0:0 (copy)
[09:00:56 CET] <utack> Stream #1:0 -> #0:1 (copy)
[09:01:00 CET] <utack> isn't that what you wanted?
[09:01:06 CET] <utack> both streams copied to one output file?
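For the record, the mapped command adgtl posted is the usual shape for this: -map 0:v takes the video from the first input, -map 1:a the audio from the second, -codec copy avoids re-encoding, and -strict -2 is there because muxing Opus into MP4 was still flagged experimental in ffmpeg at the time:

    ffmpeg -i input.mp4 -i input.opus -map 0:v -map 1:a -c copy -strict -2 output.mp4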
[11:42:07 CET] <diqidoq> how do I convert a 32-bit float / 96 kHz wav into an opus file playable in WhatsApp?
[11:44:07 CET] <diqidoq> ffmpeg -i session.wav -c:a libopus -b:a 256k session.opus ? is this correct or can I go higher in bitrate?
[11:52:39 CET] <sfan5> diqidoq: according to ffmpeg -h encoder=libopus 96kHz or 32-bit float is not supported by either ffmpeg or libopus
[11:52:53 CET] <sfan5> but in theory that command is correct
[11:54:01 CET] <sfan5> if you don't mind encoding 16-bit 48kHz you can just use that
[11:54:36 CET] <diqidoq> sfan5: before or while converting to opus?
[11:55:02 CET] <sfan5> ffmpeg will downsample automatically
[11:55:14 CET] <diqidoq> sfan5: I was not sure if you were referring to the original file or to the end result (support)
[11:55:30 CET] <sfan5> the end result will be 16-bit 48000Hz
[11:55:51 CET] <diqidoq> sfan5: true. tested.
[11:56:01 CET] <diqidoq> sfan5: ok, then I am fine  I think
[11:56:12 CET] <diqidoq> sfan5: thank you so much for chiming in!
[11:56:26 CET] <sfan5> sure no problem
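A hedged version of the command that makes the conversion explicit; the -ar 48000 is redundant since ffmpeg resamples automatically (as sfan5 says), but it documents the intent:

    ffmpeg -i session.wav -c:a libopus -b:a 256k -ar 48000 session.opus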
[11:56:42 CET] <diqidoq> sfan5: Greetings from stormy Berlin ;)
[11:57:08 CET] <sfan5> way less stormy here in NRW thankfully
[11:57:20 CET] <diqidoq> sfan5: haah :)
[11:57:45 CET] <diqidoq> sfan5: Have a nice Sunday, then. :)
[11:57:55 CET] <sfan5> likewise :)
[11:57:58 CET] <diqidoq> :)
[13:00:16 CET] <lynn-dance> I am compiling chromaprint and it needs the ffmpeg libraries.... which *.h files need to be copied to /usr/include?
[13:09:30 CET] <JEEB> lynn-dance: make install installs things properly. during configure you can set prefix with --prefix=/your/prefix , and when you build the other thing you can set PKG_CONFIG_PATH=/your/prefix/lib/pkgconfig
[13:12:26 CET] <lynn-dance> it doesn't install the header files, they need to be copied ..... CMake Error at src/cmd/CMakeLists.txt:27 (add_library):
[13:16:22 CET] <JEEB> all public headers get installed with make install
[13:17:58 CET] <JEEB> in theory it's possible it got broken but it is highly unlikely. there are other things that are more likely such as the other thing requiring an older FFmpeg or something. check that other library's actual failure point
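A sketch of the flow JEEB describes; /your/prefix and the chromaprint source path are placeholders, and this assumes chromaprint's build locates FFmpeg via pkg-config (if it uses its own finder, point that at the prefix instead):

    ./configure --prefix=/your/prefix
    make && make install   # installs the libs and all public headers under the prefix
    PKG_CONFIG_PATH=/your/prefix/lib/pkgconfig cmake /path/to/chromaprint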
[15:02:40 CET] <livingbeef> I made a gif using ffmpeg (with paletteuse) and now ffprobe insists that pix_fmt=bgra, even though ffmpeg's gif encoder only claims to support rgb8 bgr8 rgb4_byte bgr4_byte gray pal8. Any idea what's happening here?
[15:05:51 CET] <furq> it says that about all gifs
[15:05:55 CET] <furq> you can probably just ignore it
[15:08:18 CET] <livingbeef> Is that a bug or a feature? It's a bit of a problem when I want to find out the pixel format.
[15:10:49 CET] <furq> i'm pretty sure gif is always pal8
[15:11:55 CET] <sfan5> i remember reading a blogpost about true color gifs but not sure how that worked
[15:12:32 CET] <furq> you can have multiple blocks
[15:12:39 CET] <furq> https://upload.wikimedia.org/wikipedia/commons/a/aa/SmallFullColourGIF.gif
[15:12:54 CET] <furq> i don't think ffmpeg will let you do anything like that though
[15:13:20 CET] <sfan5> then that's technically not pal8 is it?
[15:13:35 CET] <furq> each block is pal8
[15:13:45 CET] <furq> idk how you'd describe the pixel format of the entire image or if it's useful to do so
[15:14:12 CET] <furq> but that's probably why ffmpeg interprets it as bgra
[15:15:21 CET] <livingbeef> Hmm... well, I didn't need it for anything important anyway. Just curious.
[16:04:12 CET] <CoreX> keep getting this on compile https://pastebin.com/raw/GdEaactM
[16:04:20 CET] <CoreX> anybody think they can solve it
[16:19:05 CET] <relaxed> CoreX: is this a recent git checkout of libx264? which ffmpeg version?
[16:21:36 CET] <CoreX> relaxed: compiled from sandbox, never had a problem till now, and ffmpeg is 2.7
[16:22:48 CET] <CoreX> i can compile x264.exe fine, but the ffmpeg compile just doesn't want to get past that
[16:24:39 CET] <furq> why are you building ffmpeg 2.7
[16:25:03 CET] <CoreX> everything i have configured for it is fine
[16:25:21 CET] <furq> that's more than two years old
[16:25:39 CET] <furq> i wouldn't be surprised if newest versions of external libs don't work properly with it
[16:26:52 CET] <relaxed> you could checkout x264 from around 2.7's release if you really need that ffmpeg version
[16:48:54 CET] <iive> CoreX: to me this error looks like your libx264 is too old
[16:50:05 CET] <iive> CoreX: x264_bitdepth is a constant defined in the system-installed x264.h file.
[16:50:29 CET] <iive> CoreX:  x264_bit_depth
[16:50:56 CET] <CoreX> yeah, cleared everything out from the lib/local folders wherever i could find x264.* and did it all again
[16:51:21 CET] <CoreX> the compile is in the t's now, past the l's
[20:29:01 CET] <nick1234> hi, I am trying to convert a webm video to mp4 using -qscale 0, but the filesize shrinks to almost 1/3. can anyone help me convert the video without reducing the quality?
[20:29:30 CET] <durandal_1707> post full command
[20:30:02 CET] <nick1234> ffmpeg -i "1.webm" -qscale 0 "1.mp4"
[20:33:04 CET] <nick1234> it also says in the output: "Please use -q:a or -q:v, -qscale is ambiguous". does that have anything to do with the issue?
[20:37:08 CET] <DHE> if you're just converting formats, use "-c copy" instead of "-qscale 0"
[20:37:25 CET] <DHE> though chances are you'll have a codec error somewhere. then you'll need to choose replacement codecs and options for them
[20:39:17 CET] <nick1234> DHE, ok, I just wanted to concat my 6 separate video files, 4 of them mp4 and 2 webm. couldn't do so. so then i tried to convert the webm to mp4, but i get the quality loss. Is there a way to concat all of them?
[20:39:49 CET] <nick1234> i tried -filter_complex but it gives an error
[20:44:28 CET] <nick1234> and btw -c copy is also giving an error: "Could not find tag for codec vp8 in stream #0, codec not currently supported in container"
[21:10:04 CET] <DHE> yep, mp4 doesn't support vp8, which also tells me that your existing mp4 files will have mixed-up codecs
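A hedged sketch of the route DHE points at: re-encode the webm clips so all six share codecs, then join them with the concat demuxer. Filenames and the crf value are placeholders; note that libx264 ignores -qscale (-crf is its quality knob), and the existing mp4s may need the same treatment if their parameters differ:

    ffmpeg -i 5.webm -c:v libx264 -crf 18 -c:a aac 5_conv.mp4
    ffmpeg -i 6.webm -c:v libx264 -crf 18 -c:a aac 6_conv.mp4
    # list.txt holds one line per clip, in order:  file '1.mp4'
    ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4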
[21:14:21 CET] <ErAzOr> Hi all. I'm trying to add a logo to my transcoded channel (via vaapi)
[21:14:32 CET] <ErAzOr> this is the command I'm using:
[21:14:44 CET] <ErAzOr> "/usr/bin/ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -i $URL -vf 'format=nv12,hwupload,scale_vaapi=w=1280:h=720 [0:v]; movie=/home/hts/HD.png [1:v]; [0:v][1:v] overlay=0:0 [out]' -c:v h264_vaapi -b:v 4000k -minrate 4000k -maxrate 4000k -bufsize 4000k -preset fast -c:a copy -f mpegts pipe:1"
[21:15:08 CET] <ErAzOr> But this is the error I get: Impossible to convert between the formats supported by the filter 'Parsed_scale_vaapi_2' and the filter 'auto-inserted scaler 1'
[21:15:34 CET] <jkqxz> The overlay filter doesn't support GPU-side frames.  You need to do the overlay before the upload.
[21:15:37 CET] <ErAzOr> Can anyone give me a hint on how to fix this? :)
[21:20:07 CET] <ErAzOr> hmm sorry, I can't follow you. Can you provide an example?
[21:22:42 CET] <jkqxz> The hwupload filter is uploading frames to the GPU, but then they're being given to the overlay filter.  It can only work on frames in CPU memory, hence your error.
[21:23:23 CET] <jkqxz> If you need the scale before the upload then you probably want to just use the normal scale filter for that, then upload the result for the encoder.
[21:23:45 CET] <sfan5>  -vf 'movie=/home/hts/HD.png [1:v]; [0:v][1:v] overlay=0:0,format=nv12,hwupload,scale_vaapi=w=1280:h=720 [out]'
[21:23:52 CET] <sfan5> should work unless i made a mistake
[21:25:55 CET] <ErAzOr> Works! Thank you so much :)
[22:28:55 CET] <geri> hi
[22:29:33 CET] <geri> i capture an image from my screen with Height=1800 Width=2880; the size of 1 frame is 20736000 bytes
[22:29:45 CET] <geri> so i wonder how to compress the data in real time and write it to disk
[22:30:25 CET] <Cracki> ^ 30 fps
[22:30:58 CET] <Cracki> guessing codec recommendations are in order
[22:35:33 CET] <furq> geri: -c:v ffv1
[22:35:39 CET] <furq> if that's too slow, try utvideo or ffvhuff
[22:35:55 CET] <JEEB> or lossless x264 in ultrafast or so
[22:36:05 CET] <JEEB> depends on which is actually fastest while giving you lossless
[22:36:16 CET] <JEEB> (i don't recall ffv1 ever being particularly fast)
[22:36:39 CET] <JEEB> utvideo recently got some patches on AVX2 SIMD which might have made it somewhat faster, but I haven't checked if they got merged nor benchmarked them yet
[22:36:53 CET] <furq> utvideo is still much worse compression though isn't it
[22:37:06 CET] <JEEB> yes, I mean the point was to make it simple
[22:38:05 CET] <JEEB> it's basically some byteswaps, some sort of prediction function and huffman coding
[22:38:16 CET] <Cracki> copying context: geri has an eyetracker and wants to record what the gaze falls on, alongside the gaze information
[22:38:54 CET] <Cracki> so I'm guessing lossless (or full res) isn't even needed
[22:40:29 CET] <JEEB> how much res you need is a separate thing
[22:40:47 CET] <JEEB> but generally if you're under time constraints having one pass of lossless at first makes sense, unless you have size constraints
[22:41:12 CET] <JEEB> so that you can always go back to the original if a re-encode is not good enough for any processing you have in mind
[22:41:17 CET] <Cracki> ^
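Concretely, hedged command lines for the two suggestions, assuming an X11 desktop as the capture source (other OSes use different grab devices):

    ffmpeg -f x11grab -video_size 2880x1800 -framerate 30 -i :0.0 -c:v ffv1 capture.mkv
    # or lossless x264 at the fastest preset, as JEEB suggests (-qp 0 = lossless):
    ffmpeg -f x11grab -video_size 2880x1800 -framerate 30 -i :0.0 -c:v libx264 -preset ultrafast -qp 0 capture.mkv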
[22:56:51 CET] <geri> furq: is that part of opencv?
[23:00:08 CET] <Cracki> you're mixing projects
[23:00:14 CET] <geri> i refer to X264
[23:00:22 CET] <geri> video codec
[23:06:00 CET] <geri> what is lossless x264 in ultrafast?
[23:06:15 CET] <geri> i'm trying to do it with c++
[23:06:26 CET] <geri> passing an unsigned char*
[23:06:43 CET] <geri> and not using the command line to call ffmpeg
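A hedged C sketch of what "lossless x264 in ultrafast" means through the libavcodec API, which is presumably what geri is after instead of the CLI; the helper name and parameters are made up for illustration:

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    /* Configure libx264 for lossless encoding at the fastest preset.
     * The raw unsigned char* screen buffer (e.g. BGRA) still has to be
     * converted to ctx->pix_fmt with libswscale (sws_scale) and wrapped in
     * an AVFrame before calling avcodec_send_frame(). */
    static AVCodecContext *open_lossless_x264(int width, int height, int fps)
    {
        AVCodec *codec = avcodec_find_encoder_by_name("libx264");
        if (!codec)
            return NULL;
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        ctx->width = width;
        ctx->height = height;
        ctx->time_base.num = 1;
        ctx->time_base.den = fps;
        ctx->pix_fmt = AV_PIX_FMT_YUV444P;         /* full chroma keeps it lossless */
        av_opt_set(ctx->priv_data, "preset", "ultrafast", 0);
        av_opt_set(ctx->priv_data, "crf", "0", 0); /* crf 0 = lossless in x264 */
        if (avcodec_open2(ctx, codec, NULL) < 0) {
            avcodec_free_context(&ctx);
            return NULL;
        }
        return ctx;
    }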
[23:46:57 CET] <ZeroWalker> if i want to livestream for, say, surveillance, what options do i have, is it RTP or?
[23:47:34 CET] <Cracki> live viewing, on demand viewing, ...?
[23:48:24 CET] <Cracki> what endpoints?
[23:49:51 CET] <ZeroWalker> on demand i guess, it would be accessible from 1 device at a time currently
[23:50:05 CET] <ZeroWalker> endpoints?
[23:50:42 CET] <ZeroWalker> the server would be an orange pi, and the device used to watch would be an android phone (though that might change, but let's go with that for now)
[23:55:00 CET] <furq> rtmp would probably work
[23:55:07 CET] <furq> although maybe the latency would be too high
[23:55:49 CET] <ZeroWalker> what kind of latency are we talking about?
[23:56:01 CET] <ZeroWalker> seconds, minutes?
[00:00:00 CET] --- Mon Oct 30 2017

