[Ffmpeg-devel-irc] ffmpeg.log.20170215

burek burek021 at gmail.com
Thu Feb 16 03:05:01 EET 2017


[00:00:19 CET] <vlad__> what is the easiest way to do this, given that I can control both the dump format and the ffmpeg command
[00:00:43 CET] <faLUCE> vlad__: then you don't want to use libav API ?
[00:01:32 CET] <vlad__> faLUCE: actually the program does have a codepath that uses libav to write to an avi video
[00:01:56 CET] <vlad__> streaming from that doesn't work since ffmpeg quits once it hits the end of the video
[00:02:36 CET] <faLUCE> vlad__: which stream type? http, rtp or what?
[00:03:06 CET] <vlad__> rtmp
[00:03:54 CET] <faLUCE> vlad__: https://ffmpeg.org/ffmpeg-protocols.html#rtmp
[00:05:40 CET] <vlad__> faLUCE: my issue isn't the actual streaming, that I can do (from a file)
[00:05:50 CET] <arog> usr/bin/ld: libavcodec/mqc.o: relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC
[00:05:58 CET] <arog> ./configure --prefix="/usr/local/" --pkg-config-flags="--static" --extra-cflags="-I/usr/local/include" --extra-ldflags="-L/usr/local/lib" --bindir="/usr/local/bin" --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libtheora --enable-libvorbis --enable-libx264 --enable-pic -fPIC --enable-nonfree --enable-shared
[00:05:59 CET] <arog> Unknown option "-fPIC".
[00:06:02 CET] <arog> y
[00:06:03 CET] <arog> gg
[00:06:06 CET] <arog> why can you not be satisfied?
[00:07:18 CET] <faLUCE> arog: you only have to provide --enable-pic
[00:07:41 CET] <faLUCE> (delete -fPIC from the configure command)
[00:08:02 CET] <vlad__> faLUCE: I'd prefer not to do anything too complicated on the C++ side
[00:08:21 CET] <vlad__> like messing around with the libav api
[00:08:26 CET] <vlad__> unless you think that's the best course of action
[00:08:38 CET] <faLUCE> vlad__: you only have to wrap the ffmpeg command, without calling libav directly
[00:08:40 CET] <arog> i did faLUCE, but that didn't work when i did make
[00:09:00 CET] <vlad__> I was hoping that there would be some dumb way to dump the frames in a way that ffmpeg can understand
[00:09:03 CET] <faLUCE> arog: try to disable ffserver in configure
[00:09:48 CET] <vlad__> I'm not sure what you mean by "wrapping" the command?
[00:10:23 CET] <faLUCE> arog: in addition, delete --enable-shared too
[00:10:30 CET] <arog> nah
[00:10:33 CET] <arog> i need to build with shared
[00:10:39 CET] <arog> since i am including just ffmpeg in my c++ app
[00:10:47 CET] <arog> and i dont want to link every single dependency library
[00:11:04 CET] <arog> i just want to include libavcodec
[00:11:23 CET] <faLUCE> arog: in your c++ application, you only have to execute the ffmpeg executable
[00:11:38 CET] <faLUCE> vlad__: in your c++ application, you only have to execute the ffmpeg executable
[00:12:19 CET] <faLUCE> arog: did you try to disable ffserver?
[00:12:23 CET] <vlad__> faLUCE: why execute it inside instead of outside (like I am now)?
[00:12:39 CET] <vlad__> would I execute it on each frame?
[00:13:05 CET] <faLUCE> vlad__: because it's easier... otherwise you have to call libav
[00:14:10 CET] <vlad__> faLUCE: ok... so in my DumpFrame() method, instead of writing a png I should call ffmpeg somehow?
[00:15:00 CET] <faLUCE> vlad__: it depends. If the ffmpeg command does all that you want, the answer is "yes". Otherwise 1) use another, proper ffmpeg command 2) (much harder) use libav
[00:17:23 CET] <arog> faLUCE:  im using libav :)
[00:18:01 CET] <vlad__> faLUCE: I dont follow. I don't know what ffmpeg command you want me to run, so I can't tell you if it does all I want
[00:18:08 CET] <faLUCE> arog: sorry, my previous answer was for vlad__
[00:18:14 CET] <arog> ah ok
[00:20:58 CET] <vlad__> faLUCE: nvm I think this answers my question: http://stackoverflow.com/questions/5825173/pipe-raw-opencv-images-to-ffmpeg
[00:22:02 CET] <vlad__> thanks for helping :)
[00:29:08 CET] <llogan> are you outputting rawvideo?
[00:31:30 CET] <arog> no god damn it!!! kepstin -- i installed ffmpeg with shared-enabled
[00:31:34 CET] <arog> and it is still failing for libz
[00:31:37 CET] <arog> /usr/local/lib/libavcodec.a(cscd.o): undefined reference to symbol 'uncompress'
[00:31:38 CET] <arog> /lib/x86_64-linux-gnu/libz.so.1: error adding symbols: DSO missing from command line
[00:32:08 CET] <arog> oh wait
[00:32:10 CET] <arog> heh
[00:32:22 CET] <vlad__> llogan: no, but I'd like to :)
[00:32:27 CET] <vlad__> llogan: do you know how to do it in c++?
[00:33:01 CET] <arog> i think cmake is defaulting to the static library instead of the shared library
[00:33:27 CET] <vlad__> llogan: I am editing an existing program that just dumps the frames as pngs, so I'd like to not introduce too much new stuff/dependencies
[00:36:06 CET] <arog> okay ill just build ffmpeg without static
[00:36:48 CET] <arog> vlad are you using opencv too?
[00:36:54 CET] <arog> I think I am doing something very similar to what you are doing
[00:37:08 CET] <arog> im trying to create a mp4 from a bunch of images that are being captured by a webcam (after processing)
[00:37:34 CET] <arog> the issue I am running into which you may run into as well is that ffmpeg by default uses a lot of CPU, so you may need to use nvenc instead
[00:37:45 CET] <vlad__> arog: I am not using opencv
[00:38:11 CET] <vlad__> and I'd rather avoid introducing opencv as a dependency
[00:38:44 CET] <arog> how are you capturing your webcam data then
[00:38:48 CET] <arog> or are you not capturing webcam
[00:38:51 CET] <arog> and just converting pngs
[00:38:54 CET] <vlad__> arog: won't be able to use nvenc since I don't have nvidia
[00:38:57 CET] <arog> :)
[00:38:59 CET] <arog> heh
[00:39:01 CET] <vlad__> the frames are not from a webcam
[00:39:10 CET] <arog> well anyways thats what im doing
[00:39:12 CET] <vlad__> they are from a game emulator
[00:39:15 CET] <arog> trying to at least
[00:39:42 CET] <arog> running into issues to find the nvidia encoder, but hopefully once i get this built + linked properly I should get the encoding done fairly easily
[00:39:50 CET] <arog> take a look at this example:
[00:39:58 CET] <arog> specifically video_encode_example()
[00:40:40 CET] <arog> it's not too difficult to use libavcodec if this example is to be believed
[00:41:35 CET] <vlad__> arog: what example?
[00:42:04 CET] <arog> oops
[00:42:07 CET] <arog> https://ffmpeg.org/doxygen/trunk/api-example_8c-source.html
[00:42:08 CET] <arog> here you go
[00:43:15 CET] <llogan> vlad__: if it's png it should be autodetected by ffmpeg, so all you should need is: ./vladapp - | ffmpeg -i -
[00:43:55 CET] <llogan> or "ffmpeg -i pipe:" or "ffmpeg -i pipe:0"
[00:45:47 CET] <arog> woohoo!!! it is finding nvenc_h264
[00:45:54 CET] <arog> onto the encoding part now, hope it isnt too difficult
[00:46:13 CET] <vlad__> llogan: vladapp is writing to png files, not stdout. I'd like to change that to a named pipe (assuming ffmpeg can handle named pipes)
[00:47:03 CET] <arog> can someone explain what bitrate means? how is that different than fps
[00:48:12 CET] <arog> and how can i figure out my bitrate given an image size + fps
[00:51:33 CET] <llogan> vlad__: that should work too. note that you'll have to use the -framerate input option if you don't want to use the default value of 25
[00:51:53 CET] <llogan> convenient timing
[00:53:47 CET] <llogan> vlad__: you missed my last message: "named pipe should work too. note that you'll have to use the -framerate input option if you don't want to use the default value of 25"
[00:54:52 CET] <vlad__> llogan: internet just got back...
[00:55:30 CET] <vlad__> did I miss anything else?
[00:55:39 CET] <llogan> ehh... no.
[00:59:28 CET] <vlad__> llogan: do you know if there's a simpler way to write the frame (without png)?
[01:03:39 CET] <vlad__> the frame data is already in a pretty digestible format
[01:04:23 CET] <vlad__> I have the AVFrame.{data, linesize, format, width, height}
[01:09:42 CET] <llogan> raw video is simpler
[01:13:48 CET] <vlad__> llogan: ok, so how do I write the frame? the png writing code is pretty long and complicated
[01:15:12 CET] <llogan> i don't know. never had to do that.
[01:15:44 CET] <vlad__> :(
[01:16:21 CET] <vlad__> I guess the question is: what image formats does ffmpeg support for input streaming?
[01:18:50 CET] <llogan> a better question is what does it not support. see "ffmpeg -formats"
[01:19:52 CET] <vlad__> llogan: and for any supported format, can ffmpeg take a stream of such images?
[01:22:08 CET] <vlad__> well I'll just try it the png way...
[01:26:20 CET] <llogan> vlad__: probably. you may have to help it out if it doesn't autodetect (using image2pipe input options -pixel_format, -framerate, -video_size, etc)
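
A minimal sketch of the "wrap the ffmpeg command" idea faLUCE raised, combined with llogan's pipe suggestion: spawn the ffmpeg executable from the C program and write raw frames to its stdin. The frame size, pixel format, frame rate, encoder settings and output name below are illustrative assumptions, not values from the discussion.

    #include <stdio.h>
    #include <string.h>

    /* Feed raw RGB24 frames to an ffmpeg child process over a pipe.
     * Raw frames carry no container, so -video_size, -pixel_format
     * and -framerate tell ffmpeg how to interpret the bytes on stdin. */
    int main(void)
    {
        FILE *ff = popen("ffmpeg -f rawvideo -pixel_format rgb24 "
                         "-video_size 640x480 -framerate 30 -i - "
                         "-c:v libx264 -pix_fmt yuv420p out.mp4", "w");
        if (!ff)
            return 1;

        unsigned char frame[640 * 480 * 3];            /* 3 bytes per pixel */
        for (int i = 0; i < 300; i++) {                /* ~10 seconds of frames */
            memset(frame, i & 0xff, sizeof(frame));    /* dummy frame content */
            fwrite(frame, 1, sizeof(frame), ff);
        }
        pclose(ff);                                    /* ffmpeg finalizes out.mp4 */
        return 0;
    }
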
[01:31:41 CET] <vlad__> well here's a hack that does what I want: make a fifo, pipe the pngs into the fifo with tail -f, and pipe into ffmpeg with cat
[01:31:47 CET] <vlad__> time to see if it works
[01:36:17 CET] <vlad__> actually, is there a way to cut the last step by telling ffmpeg where the pipe is?
[01:36:32 CET] <vlad__> or does ffmpeg only know about stdin
[01:36:47 CET] <JEEB> just set input to the pipe?
[01:37:05 CET] <JEEB> also I think there's the pipe protocol?
[01:37:37 CET] <JEEB> https://www.ffmpeg.org/ffmpeg-all.html#pipe
[01:37:38 CET] <JEEB> yup
[01:38:10 CET] <llogan> he didn't seem to like that idea previously
[01:39:38 CET] <vlad__> llogan: the problem wasn't the pipe, it was that I didn't know how to write the images
[01:42:31 CET] <vlad__> the hack is to use tail -f to see when there is a new image, and write it to the pipe
[01:55:52 CET] <xtina> furq: i am still fighting with pipe buffers and frame drops. i've found that it's my audio pipe dropping frames. my max pipe buffer size is 1MB, and my audio bitrate is only 24kbps. So it seems almost impossible that I could fill up the audio pipe buffer, as that would be 41 seconds of buffering?
[01:56:19 CET] <xtina> what do you think
[01:57:03 CET] <xtina> for anyone else: I'm streaming audio/video via ffmpeg to the internet. i'm experiencing issues losing audio frames. for both audio and video i'm writing and reading from named pipes.
[01:57:09 CET] <xtina> my pipe buffers are 1MB in size
[01:57:24 CET] <xtina> and yet i'm still overrunning arecord and losing audio frames
[01:57:48 CET] <xtina> even though on my stream, i only lose about 1 or 2 seconds of audio.
[02:17:33 CET] <thebombzen> ohey, lavc has a native opus encoder now
[02:17:40 CET] <thebombzen> is it useable or should I still use libopus for everything ever
[02:24:05 CET] <arog> hey
[02:24:17 CET] <arog> for AVFrame, is linesize the number of bytes for all channels per row?
[02:24:30 CET] <arog> so for example RGB24 would be 3 * width?
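
For the record, AVFrame.linesize[0] is the byte count of one row of that plane; for packed RGB24 it is at least 3 * width, but allocators may round it up for alignment, so it should be read from the frame rather than recomputed. A small sketch; the 1850x1110 size is just an example chosen to show possible padding.

    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>
    #include <stdio.h>

    int main(void)
    {
        AVFrame *f = av_frame_alloc();
        f->format = AV_PIX_FMT_RGB24;
        f->width  = 1850;               /* deliberately not a multiple of 32 */
        f->height = 1110;
        av_frame_get_buffer(f, 0);      /* 0 = choose alignment automatically */

        /* linesize[0] >= width * 3 for RGB24, possibly padded for SIMD */
        printf("width*3 = %d, linesize[0] = %d\n", f->width * 3, f->linesize[0]);

        av_frame_free(&f);
        return 0;
    }
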
[02:54:28 CET] <arog> how do i install libswscale
[03:03:01 CET] <llamapixel> sudo apt-get install libswscale would be my first guess
[03:06:01 CET] <arog> nah its there
[03:06:06 CET] <arog> i had a mistake in my cmake
[03:06:10 CET] <arog> i have a question about avformat
[03:06:19 CET] <arog> how do i specify it to use nvenc instead of the default x264?
[03:07:24 CET] <arog> oh i figured it out
[03:07:25 CET] <arog> never mind
[03:08:41 CET] <arog> [NULL @ 0x2443820] Unable to find a suitable output format for 'webcam.mp4'
[03:08:42 CET] <arog> fail to avformat_alloc_output_context2(webcam.mp4): ret=-22
[03:08:44 CET] <arog> any idea?
[03:27:39 CET] <thebombzen> arog: try explicitly setting the format
[03:27:58 CET] <thebombzen> given that you're programming it, there's really no reason NOT to force avformat to use mp4
[03:28:27 CET] <arog> i did that thebombzen
[03:28:37 CET] <arog> AVFormatContext* outctx = nullptr;
[03:28:38 CET] <arog>   AVOutputFormat* outputFormat = NULL;
[03:28:40 CET] <arog>   outputFormat = av_guess_format("mp4", NULL, NULL);
[03:28:42 CET] <thebombzen> well then why couldn't it find a suitable output format?
[03:28:42 CET] <arog>   ret = avformat_alloc_output_context2(&outctx, outputFormat, nullptr,
[03:28:44 CET] <arog>                                        file_name.c_str());
[03:28:47 CET] <arog> i have no idea
[03:29:04 CET] <thebombzen> well, you see, you made it guess the format
[03:29:06 CET] <thebombzen> why don't you just
[03:29:08 CET] <thebombzen> tell it mp4
[03:29:18 CET] <thebombzen> instead of saying "guess based on the mp4 filename"
[03:29:31 CET] <thebombzen> that sounds like a really awkward way of just telling it mp4
[03:29:55 CET] <arog> i did that too
[03:30:17 CET] <thebombzen> doesn't look like it from your error message
[03:30:23 CET] <arog> AVOutputFormat outputFormat; outputFormat.video_codec = AV_CODEC_ID_H264;
[03:30:31 CET] <thebombzen> you know what
[03:30:33 CET] <arog> is that all I have to do
[03:30:34 CET] <thebombzen> instead of pasting oneliners
[03:30:40 CET] <thebombzen> why don't you use a paste service
[03:30:44 CET] <arog> hold on
[03:30:51 CET] <thebombzen> because how else do we help you
[03:31:43 CET] <arog> http://pastebin.com/YRjQFGaN
[03:31:51 CET] <arog> you can see i commented the other section
[03:31:56 CET] <arog> i tried switching both of them
[03:32:49 CET] <arog> i wonder if I need to put more things in that avoutputformat struct
[03:33:01 CET] <thebombzen> at this point I can't help you because I'm not a C(++) programmer but now you've provided enough information for someone who is
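
For reference, a sketch of what thebombzen is getting at: name the muxer explicitly in avformat_alloc_output_context2() instead of guessing it, and make sure av_register_all() has run first (which, as arog finds later in this log, was the actual cause of the -22 error). The file name is arog's; the rest is illustrative.

    #include <libavformat/avformat.h>
    #include <stdio.h>

    int main(void)
    {
        av_register_all();   /* registers muxers; without it every lookup fails */

        AVFormatContext *outctx = NULL;
        /* "mp4" names the muxer directly, no filename-based guessing */
        int ret = avformat_alloc_output_context2(&outctx, NULL, "mp4", "webcam.mp4");
        if (ret < 0 || !outctx) {
            fprintf(stderr, "could not create mp4 output context: %d\n", ret);
            return 1;
        }
        avformat_free_context(outctx);
        return 0;
    }
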
[03:37:15 CET] <arog> home time
[03:37:17 CET] <arog> will look tomorrow
[05:41:26 CET] <xtina> Hi. I'm trying to understand multithreading in ffmpeg.
[05:41:48 CET] <xtina> I'm livestreaming from my Pi Zero with ffmpeg. sometimes it seems that ffmpeg lags, causing me to lose frames of video and/or audio
[05:42:16 CET] <xtina> I see that ffmpeg has a multi-thread param. Could this help me at all, given that my Zero is single-core?
[05:45:53 CET] <xtina> If not, is there any coding I can do on top of ffmpeg to prevent frame loss?
[07:14:22 CET] <thebombzen> is there any way to use -ss and -t in timebase units?
[07:14:31 CET] <thebombzen> rather than seconds?
[07:14:58 CET] <thebombzen> or do I have to divide by the timebase?
[07:25:37 CET] <vlad__> xtina: did you ever fix the fifo buffer size problem?
[07:25:47 CET] <vlad__> I think I'm about to run into it
[07:29:05 CET] <xtina> vlad__: not really
[09:43:43 CET] <stirner> I'm interested in transcoding audio using the FFmpeg C libraries. Where's the best place to start?
[09:43:55 CET] <stirner> I've tried in the past but I always get lost in the documentation
[10:00:02 CET] <vlad__> hey, using libav from c++, and I'm getting "Codec for stream 0 does not use global headers but container format requires global headers"
[10:46:37 CET] <piem> howdy. looking at ffmpeg.zeranoe.com dev packages, i can't seem to find .pc files for use with pkg-config. how would you cross compile a project depending on ffmpeg with mingw-w64?
[10:52:38 CET] <cry0> when running "filtering_audio.c", I get the error: "more samples than frame size (avcodec_encode_audio2)", although the afifo filter is in the chain. What can I do?
[10:53:11 CET] <durandal_1707> where is afifo in chain?
[10:59:02 CET] <cry0> my chain looks like this: "in": abuffer, buffer, abuffersink, buffersink, afifo, fifo, NULL
[15:04:48 CET] <Moonlightning> I'm trying to record the display with ffmpeg and x11grab. It works, but I get jpeg-esque artifacts in the output starting just a few frames in. Any ideas?
[15:04:53 CET] <Moonlightning> ffmpeg -hide_banner -f x11grab -s 1366x768 -i :0 foo.mp4  # this is my command line
[15:09:17 CET] <kepstin> Moonlightning: with that command line you're getting the default codec and settings for mp4, which is normally libx264 with crf 23. Yes, you will get noticable artifacts with that codec/setting.
[15:09:33 CET] <kepstin> solution: change codec settings, possibly pick a different codec.
[15:29:07 CET] <Moonlightning> kepstin: Thanks! A quick test with `-c:v ffvhuff` came out nice and clean. Do you know of any guides on picking and tuning codecs?
[15:29:16 CET] Action: Moonlightning has absolutely no idea... >.>;
[15:32:57 CET] <furq> your only viable choices for capturing are the lossless codecs (rawvideo, huffyuv, ffvhuff, ffv1) and x264
[15:33:13 CET] <furq> everything else is either crap quality or too slow
[15:33:20 CET] <DHE> if you don't specify a codec I think it defaults to mpeg2 at 1 megabit bitrate. so yeah that'd look like garbage
[15:33:30 CET] <furq> mp4 defaults to x264
[15:33:50 CET] <furq> although you should probably be using mpegts for this, in which case you'll need to specify
[15:34:33 CET] <furq> libx264rgb or 4:4:4 libx264 at a low-ish crf should look fine
[15:34:49 CET] <furq> 4:2:0 will look bad for general desktop capturing
[15:35:40 CET] <Moonlightning> I want lossy, right? I'd like decently small files; the upload speed is pretty slow here :P
[15:36:05 CET] <furq> well you can always reencode later if you go lossless
[15:36:27 CET] <Moonlightning> I also don't reaaally have enough disk space to run a recording for very long if it's huge
[15:36:53 CET] <furq> try -pix_fmt yuv444p -c:v libx264 -crf 18 foo.ts
[15:37:17 CET] <furq> although if these are going to youtube then they'll convert it to yuv420p anyway
[15:37:53 CET] <Moonlightning> They're not; I'll just be distributing them directly. I don't think nginx reencodes the videos that it serves. :D
[15:38:18 CET] <furq> well there's also a browser compatibility issue with yuv444p
[15:38:43 CET] <Moonlightning> There is? :(
[15:38:44 CET] <furq> i think newish desktop firefox and chrome can both play it now but i have no idea what safari or mobile browsers will make of it
[15:40:34 CET] <furq> also if you do need browser support then the codec decision is made for you
[15:40:46 CET] <furq> h264 is the only halfway decent thing that works everywhere
[15:45:14 CET] <Moonlightning> % ffmpeg -hide_banner -f x11grab -s 1366x768 -i :0 -pix_fmt yuv444p -c:v libx264 -crf 18 foo.ts
[15:45:14 CET] <Moonlightning> Unrecognized option 'crf'.
[15:45:25 CET] <Moonlightning> typo?
[15:45:48 CET] <furq> uh
[15:45:56 CET] <furq> what ffmpeg version is that
[15:46:10 CET] <Moonlightning> 2.8.10
[15:46:18 CET] <furq> huh
[15:46:21 CET] <furq> well that should work
[15:46:25 CET] <Moonlightning> built from source; want the whole config copypasta?
[15:46:36 CET] <kepstin> assuming you actually have libx264 enabled, that should work
[15:46:38 CET] <furq> does it say "--enable-libx264"
[15:46:40 CET] <furq> yeah.
[15:46:52 CET] <furq> although i thought -crf was mapped anyway
[15:46:57 CET] <Moonlightning> Nope; it's disabled.
[15:47:01 CET] <Moonlightning> welp, time to rebuild!
[15:47:04 CET] Action: Moonlightning pokes at the useflags
[15:47:07 CET] <furq> lol
[15:47:12 CET] <kepstin> the only codecs that use the '-crf' avoption are libx264 and libvpx, i think?
[15:47:12 CET] <furq> no wonder your video looked like shit then
[15:47:28 CET] <furq> idk what ffmpeg picks for mp4 if x264 isn't available
[15:47:31 CET] <furq> probably mpeg4
[15:47:50 CET] <furq> kepstin: x265 as well?
[15:47:55 CET] <kepstin> ah, right
[15:47:58 CET] Action: Moonlightning tries to remember how long ffmpeg took to emerge the first time
[15:48:09 CET] <kepstin> i keep forgetting about x265 since it's almost unusably slow ;)
[15:48:19 CET] <furq> although i thought any non-private codec option was global
[15:48:37 CET] <kepstin> I think it's just implemented as a private codec option separately on all of them?
[15:48:41 CET] <furq> maybe
[15:49:09 CET] <kepstin> the 'crf' name doesn't even make sense on libvpx, I guess it was chosen to be familiar to x264 users :/
[15:49:22 CET] <Moonlightning>  - - x264                   : Enable h264 encoding using x264
[15:49:26 CET] <Moonlightning> disabled? whaaaat? :p
[15:49:27 CET] <furq> Moonlightning: you might want to try without -pix_fmt yuv444p
[15:49:32 CET] <Moonlightning> Hmm?
[15:49:37 CET] <furq> since that probably wasn't the reason your video looked bad
[15:49:45 CET] <furq> i mean after rebuilding, obviously
[15:49:53 CET] <Moonlightning> yeah, gimme a minute to get that started
[15:50:14 CET] <furq> you probably also want to enable libfdk-aac
[15:50:17 CET] <furq> if there's an option for that
[15:50:24 CET] <furq> assuming you want sound, of course
[15:51:17 CET] <Moonlightning>  - - fdk                    : Use external fdk-aac library for AAC encoding
[15:51:42 CET] <Moonlightning> Anything else while I'm at it? ^.^
[15:52:14 CET] <furq> depends what you want really
[15:52:58 CET] <furq> you probably want gnutls and/or openssl if you're doing any streaming stuff
[15:54:12 CET] <Moonlightning> Actually...I am planning to stream using ffmpeg soon, yeah
[15:54:18 CET] <furq> https://www.johnvansickle.com/ffmpeg/
[15:54:22 CET] <furq> you might just want to use that tbh
[15:54:31 CET] <Moonlightning> 'd like to get simple recording working first though :p
[15:54:52 CET] <furq> that has more or less everything you'd want
[15:57:03 CET] <kepstin> hmm, if you're talking about use flags and have ffmpeg 2.8, you must be using gentoo (un)stable
[15:57:24 CET] <kepstin> might want to enable ~arch so you get newer ffmpeg
[15:57:55 CET] <Moonlightning> is 2.8.10 that old?
[15:58:04 CET] <furq> yeah i guess a gentoo user is never going to use a binary that was built on someone else's computer
[15:58:09 CET] <Moonlightning> X3
[15:58:34 CET] <furq> even if it makes no difference because ffmpeg and x264 detect cpu features at runtime
[15:59:21 CET] <furq> 2.8 isn't that old (freebsd was stuck on it until about a month ago) but 3.x has a lot of improvements
[15:59:43 CET] Action: Moonlightning hmms. Currently doesn't have unstable anything.
[15:59:58 CET] <Moonlightning> ...eww, fdk wants license changes.
[16:00:19 CET] <kepstin> fdk is gpl-incompatible, yeah. so an ffmpeg built with it is non-redistributable
[16:00:20 CET] <furq> one of the 3.x improvements is a builtin gpl-compatible aac encoder
[16:00:26 CET] <furq> which doesn't suck
[16:00:34 CET] <furq> there is one in 2.x but it's pretty bad
[16:02:40 CET] Action: Moonlightning looks at the changelog. Understands none of it.
[16:03:12 CET] <Moonlightning> I guess I'll jump to 3 for the aac encoder though?
[16:04:53 CET] <kepstin> fdk is still a better aac encoder, but the ffmpeg one is quite usable and it's at least better than using mp3 :)
[16:05:27 CET] <furq> or faac
[16:05:40 CET] <kepstin> or libvo-aacenc
[16:05:43 CET] <furq> which i seem to remember turned out to be gpl incompatible anyway
[16:10:46 CET] <Moonlightning> https://bugs.gentoo.org/show_bug.cgi?id=608868
[16:10:48 CET] <Moonlightning> oh dear
[16:12:49 CET] <pbos> $ ffmpeg -format rawvideo -s 1850x1110 -framerate 15 -i combined2x_short_1850_1110.yuv padded.yuv -> Codec AVOption format (Codec Format) specified for input file #0 (combined2x_short_1850_1110.yuv) is not a decoding option.
[16:12:54 CET] <pbos> am I wielding rawvideo wrong?
[16:13:02 CET] <pbos> I thought this would essentially be a stream copy
[16:13:05 CET] <furq> -c:v rawvideo
[16:13:11 CET] <pbos> thx
[16:13:18 CET] <kepstin> Moonlightning: 2.8 is still a supported branch, there's a 2.8.11 that probably has the relevant fixes?
[16:13:19 CET] <furq> although i don't think you need it there
[16:13:28 CET] <furq> it should autodetect from the filename
[16:13:42 CET] <Moonlightning> kepstin: ...wait, those vulnerabilities exist in 2.8?
[16:13:51 CET] <kepstin> Moonlightning: which is not available yet in gentoo stable because bureaucracy i guess
[16:14:26 CET] <Moonlightning> well, that settles it; I'm jumping to 3 :P
[16:14:29 CET] <kepstin> Moonlightning: a lot of the code doesn't change between releases, so many (but not all) security issues in newer versions also apply to older ones
[16:15:36 CET] <kepstin> oh, hey, they just pushed 3.2 to gentoo stable amd64
[16:15:49 CET] <kepstin> so you don't even have to unmask it
[16:16:04 CET] <kepstin> unless you're using a 32bit arch, in which case you should buy a new computer and reinstall 64bit
[16:17:38 CET] <furq> how would you reinstall an os on a new computer
[16:18:09 CET] <kepstin> i dunno, you might have moved the hard drive over?
[16:18:25 CET] <jarkko> how do i enable 10bit at compile time x265?
[16:18:57 CET] <furq> -DHIGH_BIT_DEPTH=ON
[16:20:02 CET] <jarkko> Unknown option "-DHIGH_BIT_DEPTH=ON"
[16:20:14 CET] <jarkko> i need more flags?
[16:20:31 CET] <furq> where are you specifying that
[16:20:38 CET] <jarkko> ./configure
[16:21:04 CET] <furq> that's an x265 compile option
[16:21:22 CET] <jarkko> how i compile it correctly then
[16:21:33 CET] <furq> rebuild x265 with -DHIGH_BIT_DEPTH=ON
[16:21:52 CET] <jarkko> i understood that part but
[16:22:02 CET] <pbos> Is there a vf pad mode that smears video at the edge rather than fill with a color?
[16:22:11 CET] <pbos> not sure if this is better for compression but
[16:23:00 CET] <kepstin> pbos: would probably be worse, at least if the edge aligns to a codec block. Flat color can be encoded as simple DC blocks.
[16:23:07 CET] <furq> yeah it'll be worse for compression
[16:23:21 CET] <pbos> edge lines do not align to codec block, I'm padding to codec block size (8ths)
[16:23:28 CET] <furq> you could probably do it with scale, boxblur and overlay
[16:23:35 CET] <Moonlightning> > source distro
[16:23:39 CET] <Moonlightning> > move the disk to a new computer
[16:23:44 CET] <Moonlightning> now /why/ would you do /that?/ :p
[16:24:05 CET] <Moonlightning> kepstin: /just?/ As in, in the last few minutes? I don't see it.
[16:24:09 CET] <kepstin> Moonlightning: I've only done it with storage disks on my raid, not with OS disk, so yeah :)
[16:24:21 CET] <kepstin> Moonlightning: shows up in https://packages.gentoo.org/packages/media-video/ffmpeg
[16:24:27 CET] Action: Moonlightning reloads
[16:24:32 CET] <Moonlightning> oh, so they did!
[16:25:03 CET] Action: kepstin doesn't use gentoo any more, those were the days...
[16:25:22 CET] <JEEB> hmm, does filter_complex in ffmpeg cli by chance end up resetting the language metadata?
[16:25:43 CET] <Moonlightning> This FraunhoferFDK license says /for Android/ an awful lot >.>
[16:25:54 CET] <JEEB> because it came out of the android OSS project
[16:26:08 CET] <JEEB> it was the second AAC encoder Google bought and used for android
[16:26:19 CET] <JEEB> first was libvo-aacenc which was... not too good
[16:27:59 CET] <Moonlightning> ...okay, so what am I missing about this FraunhoferFDK license? How come it's not @GPL-COMPATIBLE or @FREE?
[16:28:24 CET] <jarkko> how exactly i compile x265 again with that 10bit flag?
[16:28:38 CET] <kepstin> jarkko: when you run cmake, add that to the command line
[16:29:03 CET] <jarkko> so far i have only used make...
[16:29:32 CET] <kepstin> hmm, I thought they used cmake as the build system, does it ship with pregenerated makefiles or something?
[16:30:24 CET] <JEEB> it has cmake
[16:30:37 CET] <JEEB> cmake creates the Makefiles and the config
[16:30:42 CET] <JEEB> then you use make to build
[16:31:10 CET] <pbos> Moonlightning: Is it
[16:31:14 CET] <jarkko> so how do i do this? i have ffmpeg cloned into hard disk...cd into directory and then what?
[16:31:21 CET] <pbos> Is it not..?
[16:31:28 CET] <JEEB> FFmpeg is FFmpeg, libx265 is x265
[16:31:34 CET] <Moonlightning> pbos: https://lists.gt.net/gentoo/dev/257675#257675
[16:31:34 CET] <JEEB> you need to rebuild *x265*
[16:31:41 CET] <jarkko> yes but how exactly
[16:32:02 CET] <jarkko> i dont know where to put that flag
[16:32:08 CET] <kepstin> jarkko: first you go into the directory with the x265 source code, not the ffmpeg directory - that would be a good start
[16:32:08 CET] Action: Moonlightning just rebuilds without fdk for now
[16:32:16 CET] <pbos> Moonlightning: Is that dissimilar from patent restrictions imposed by openh264?
[16:32:24 CET] <jarkko> so i need to wget that too
[16:32:32 CET] <Moonlightning> pbos: apparently, because--
[16:32:33 CET] <pbos> (I am way not a lawyer)
[16:32:45 CET] <JEEB> `hg clone https://bitbucket.org/multicoreware/x265 && cd x265/build && cmake -DTHAT_DEFINE=1 ../src && make`
[16:32:54 CET] <Moonlightning> pbos: wait, what ffmpeg configure flag introduces a dependency on openh264?
[16:32:59 CET] <JEEB> I think there was a `build` directory in x265 already :P
[16:33:07 CET] <furq> Moonlightning: probably --enable-openh264
[16:33:22 CET] <kepstin> nah, I think openh264 is perfectly copyright license compatible with GPL (it's MIT or something?)
[16:33:23 CET] <JEEB> the wiki's FFmpeg build article probably does it better
[16:33:25 CET] <Moonlightning> pbos: never mind; there's an openh264 useflag, and it's disabled
[16:33:41 CET] <kepstin> the only issue is that openh264 doesn't include the patent grant unless you use cisco's prebuilt binaries
[16:33:50 CET] <furq> oh really
[16:33:53 CET] <Moonlightning> so I'm rebuilding without openh264 and without fdk, for now :p
[16:34:10 CET] <furq> you don't want openh264 anyway, but it's gpl compatible
[16:34:11 CET] <kepstin> no reason to use openh264 unless you need that patent grant; x264 is better in every other way
[16:34:32 CET] <pbos> kepstin: what's the real difference? is it that you can't distribute it vs you can't use it?
[16:35:13 CET] <pbos> or that you can't charge for the fdk?
[16:36:01 CET] <kepstin> you cannot distribute ffmpeg built with fdk, because the combination of licenses doesn't work, so you don't have permission to redistribute the combined work
[16:36:07 CET] <kepstin> patents don't come into that at all
[16:36:59 CET] <pbos> where's the incompatibility?
[16:37:23 CET] <kepstin> pbos: hire a lawyer to read both licenses if you want to know
[16:37:33 CET] <pbos> mostly curious
[16:38:16 CET] <Moonlightning> ffmpeg is never sold, is it?
[16:38:39 CET] <kepstin> looks like the deal is that the fdk license only allows distribution "as authorized by patent licenses", which is an additional restriction that's gpl-incompatible
[16:38:46 CET] <kepstin> but don't take that as legal advice ^^ :)
[16:38:56 CET] <pbos> never would :)
[16:39:07 CET] <kepstin> Moonlightning: nothing stopping you from selling ffmpeg, it's perfectly legal to do so if you follow the license terms.
[16:39:28 CET] <kepstin> (it's mostly sold by being included with some larger piece of software that's sold)
[16:39:48 CET] <Moonlightning> oh, wow, x264 built quickly!
[16:40:10 CET] <jarkko> JEEB: stuck again. i cloned the x265 stuff. what exactly i need to do next?
[16:40:36 CET] <kepstin> jarkko: follow the build instructions, which I believe are located in build/README.txt
[16:40:53 CET] <kepstin> jarkko: but add the extra option to the cmake line
[16:42:08 CET] <Moonlightning> What's -crf do?
[16:42:33 CET] <pbos> fixed qp, I guess
[16:43:20 CET] <kepstin> crf applies a magic number in x264's rate control algorithm to give you a vbr file with roughly constant visual quality throughout
[16:43:27 CET] <pbos> supposed to be constant quality (for some shoddy measure of quality)
[16:43:34 CET] <kepstin> not related to fixed qp at all (which uses -qp)
[16:44:00 CET] <pbos> how come the ranges for vpx and x264 are the codec's qp ranges?
[16:44:23 CET] <kepstin> that's just how the number is scaled (and vpx works differently, the vpx option shouldn't really be called crf...)
[16:44:32 CET] <pbos> thx
[16:45:06 CET] <Moonlightning> Aha, okay. ^.^
[16:46:08 CET] <jarkko> i think i got it now
[16:46:15 CET] <jarkko> wasnt so easy
[16:46:30 CET] <jarkko> i had to do cmake ../../source and add the flag
[16:47:07 CET] <kepstin> x264's psy optimizations are good enough that its measure of "visual quality" tends to be better than most other codecs.
[16:47:35 CET] <jarkko> now the next issue how do i use this new compiled thing?
[17:00:54 CET] <Moonlightning> oh, wow, ffmpeg finished building!
[17:01:13 CET] <Moonlightning> That was only twenty minutes!
[17:01:57 CET] <Moonlightning> ...I have to be going, though. I'll be along later to see about, uh
[17:02:35 CET] <Moonlightning> kepstin, furq: Thanks for all the help! ^.^
[17:03:48 CET] <jarkko> any suggestions on x265 flags
[17:03:55 CET] <jarkko> or do you use only defaults
[17:04:31 CET] <TikityTik> jarkko: which flags do you mean? ones like whether you want baseline?
[17:12:09 CET] <kuroro> furq: how come you're so nice and helpful :)
[17:13:15 CET] <furq> i'm just a beautiful person
[17:13:24 CET] <kuroro> haha
[17:13:48 CET] <kuroro> btw, what do you guys think of discord as an alternative to IRC?
[17:58:03 CET] <vans163> is there any benefit of 30 fps vs 29.997?
[17:58:28 CET] <vans163> I see some examples setting fps to 30000 (frame rate) / 1001 (frame density)
[17:59:08 CET] <vans163> (this is for an encoder)
[17:59:10 CET] <furq> density?
[17:59:30 CET] <vans163> yea, density or denominator, not sure if there's another term for it
[17:59:34 CET] <furq> 30000/1001 = 29.97
[18:00:03 CET] <vans163> ah density is the wrong term
[18:00:17 CET] <vans163> so yea then 29.97 encoder rate vs 30 encoder rate
[18:00:41 CET] <furq> 29.97 is the ntsc framerate
[18:01:01 CET] <furq> if your source isn't ntsc and you don't plan on broadcasting the video on tv then it doesn't matter
[18:01:22 CET] <vans163> furq: ahh i got it now why they set that.  thanks
[18:02:32 CET] <kepstin> the /1001 part is because of a stupid hack they did to add colour to ntsc, and it's still with us today in digital video when it's not needed at all :(
[18:02:45 CET] <bencoh> :)
[18:04:25 CET] <furq> stupid is a bit harsh
[18:04:32 CET] <furq> it is stupid that we're still using it 64 years later though
[18:05:02 CET] <kepstin> oh, alright, the hack itself was kind of clever
[18:05:21 CET] <kepstin> they managed to add colour while still having the signal be usable on a b&w tv
[18:08:12 CET] <furq> i would say "you should have just used pal" but then you'd have had to wait another 14 years
[18:13:45 CET] <bencoh> image quality'd have been a bit better though ;)
[19:12:27 CET] <gurki> im a bit confused. ffmpeg will tell me
[19:12:35 CET] <gurki> Could not find codec parameters for stream 0 (Video: h264, none(tv, bt709/unknown/unknown, progressive), 1920x818): unspecified pixel format
[19:12:49 CET] <gurki> when i try to convert some .mkv
[19:13:08 CET] <gurki> weird thing is: i created this file using ffmpeg's own split method, with the exact call being
[19:13:35 CET] <gurki> ffmpeg -loglevel error -i $filename -threads $numberofthreadsonhost -vcodec copy -f segment -segment_time $partlength in%04d.mkv
[19:13:51 CET] <gurki> now one could argue that the initial file is broken in some way but ... well. it doesnt appear to be
[19:13:57 CET] <gurki> i can even convert the initial file using ffmpeg
[19:14:08 CET] <gurki> any ideas how to solve/debug this?
[19:15:08 CET] <gurki> id take a strong guess that my way of splitting the file up is not the brightest idea i ever had
[19:15:25 CET] <vlad_> any ideas how to fix " Codec for stream 0 does not use global headers but container format requires global headers"?
[19:24:36 CET] <flux> vlad_, you need to set a flag to the codec. doc/examples/*.c has an example doing it.
[19:25:00 CET] <flux> well, I guess there could be codecs that don't support global headers, in that case I don't know.
[19:29:19 CET] <vlad_> flux: format is av_guess_format("flv", nullptr, nullptr)
[19:36:05 CET] <vlad_> flux: nvm "stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;" fixed it
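
For anyone hitting the same message: the usual idiom is to check the output format's flags before opening the encoder, shown here with the non-deprecated AV_CODEC_FLAG_GLOBAL_HEADER name. A sketch; ofmt_ctx and enc_ctx are assumed to come from the caller's existing setup.

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Containers such as FLV and MP4 want the codec extradata up front
     * ("global headers") rather than repeated in-band, so the encoder
     * must be told before avcodec_open2(). */
    static void apply_global_header_flag(const AVFormatContext *ofmt_ctx,
                                         AVCodecContext *enc_ctx)
    {
        if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
            enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }
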
[19:50:19 CET] <gurki> i fixed it. gurki cannot into his own bash scripts. my fault, not ffmpegs.
[21:08:29 CET] <arog> hello
[21:09:43 CET] <arog> http://pastebin.com/YRjQFGaN -- for some reason in my code it is unable to find a format for webcam.mp4
[21:09:46 CET] <arog> any idea whats going on
[21:09:49 CET] <arog> failing at line 22
[21:10:40 CET] <arog> [NULL @ 0x19a3820] Unable to find a suitable output format for 'webcam.mp4'
[21:12:42 CET] <arog> fixed it
[21:12:48 CET] <arog> i was only calling avcodec_register_all
[21:12:48 CET] <shincodex> Jeeb
[21:12:53 CET] <shincodex> Someone is like
[21:12:54 CET] <arog> i need to change it to av_register_all
[21:12:56 CET] <arog> :D
[21:13:25 CET] <shincodex> or any foo
[21:13:26 CET] <shincodex> int err = avformat_open_input(&formatContext, openPath.c_str(), inputFormat, &options);
[21:13:31 CET] <shincodex> some jack off
[21:13:37 CET] <shincodex> wants to cause that to break out
[21:13:40 CET] <shincodex> from another thread
[21:13:52 CET] <shincodex> Timeout 1 s not gud enough
[21:14:05 CET] <shincodex> i swear it uses non blocking
[21:14:16 CET] <shincodex> they want to abandon avformat open input from another thread
[21:14:18 CET] <shincodex> gracefully
[21:14:36 CET] <shincodex> Any ideas
[21:14:50 CET] <shincodex> like av_murder_lastrequest();
[21:18:36 CET] <shincodex> Nobody? anybody?
[21:18:38 CET] <shincodex> :(
[21:26:08 CET] <shincodex> Thanks
[21:26:11 CET] <shincodex> ff_check_interrupt
[21:26:16 CET] <shincodex> might be what i need to do
[21:26:28 CET] <shincodex> assign a call back and return 1? to stop internal operations
[21:26:30 CET] <shincodex> if 1 doesnt work then 0
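
ff_check_interrupt() is internal to libavformat; the public way to do what shincodex wants is the interrupt_callback field of AVFormatContext, which blocking calls such as avformat_open_input() poll. A sketch; the thread that flips abort_requested and the URL handling are left to the caller.

    #include <libavformat/avformat.h>

    /* Another thread sets this to 1 to abort a blocking open/read. */
    static volatile int abort_requested = 0;

    /* Returning nonzero tells libavformat to bail out of blocking I/O. */
    static int interrupt_cb(void *opaque)
    {
        (void)opaque;
        return abort_requested;
    }

    static int open_with_interrupt(AVFormatContext **fmt_ctx, const char *url)
    {
        *fmt_ctx = avformat_alloc_context();
        (*fmt_ctx)->interrupt_callback.callback = interrupt_cb;
        (*fmt_ctx)->interrupt_callback.opaque   = NULL;

        /* returns AVERROR_EXIT once the callback starts returning nonzero */
        return avformat_open_input(fmt_ctx, url, NULL, NULL);
    }
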
[21:47:00 CET] <arog> i am seeing a warning to not use stream->codec but instead stream->codecpar. is there an equivalent avcodec_get_context_defaults3 that works with AVCodecParameters instead of AVCodec?
[21:47:55 CET] <arog> [mp4 @ 0x1e53820] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
[21:52:48 CET] <vlad_> hello! I was wondering how to set the video preset with libav
[21:57:31 CET] <shincodex> formatContext->streams[vidStreamIdx]->codec->codec_id
[21:57:40 CET] <shincodex> dont have a problem using it
[21:57:57 CET] <shincodex> mjpeg, rtsp h264, still jpegs so far
[22:08:09 CET] <kepstin> vlad_: are you talking about the x264 'preset' option? that should be set in the options parameter to avcodec_open2 i think
[22:09:41 CET] <kepstin> (i think there's some other way to do it too, but I'm not that familiar with it)
[22:10:08 CET] <vlad_> kepstin: ok I'll look into it
[22:11:30 CET] <kepstin> like, I think you can set the AVOption directly on the AVCodecContext instead
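
A sketch of both routes kepstin mentions; "preset" is a private option of libx264, so it can go either on the codec context's priv_data or into the options dictionary passed to avcodec_open2(). The resolution, frame rate and the "veryfast" value are placeholders.

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    static AVCodecContext *open_x264_encoder(int width, int height)
    {
        AVCodec *codec = avcodec_find_encoder_by_name("libx264");
        if (!codec)
            return NULL;
        AVCodecContext *enc = avcodec_alloc_context3(codec);

        enc->width     = width;
        enc->height    = height;
        enc->pix_fmt   = AV_PIX_FMT_YUV420P;
        enc->time_base = (AVRational){1, 30};                  /* assumed 30 fps */

        av_opt_set(enc->priv_data, "preset", "veryfast", 0);   /* route 1 */

        AVDictionary *opts = NULL;
        av_dict_set(&opts, "preset", "veryfast", 0);           /* route 2 */
        if (avcodec_open2(enc, codec, &opts) < 0)
            avcodec_free_context(&enc);
        av_dict_free(&opts);   /* leftover entries = options not recognized */
        return enc;
    }
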
[22:34:11 CET] <arog> avcodec is deprecated, whats the alternative now to get the defaults from my encoder?
[22:34:59 CET] <DHE> the AVCodecContext thing inside an AVStream is deprecated. just don't use it, use your own AVCodecContext stored in your own code
[22:35:32 CET] <DHE> the idea is to separate the codec information needed by the codec itself, and the information that the container needs to fill in its own metadata
[22:36:12 CET] <arog> oh i see
[22:36:14 CET] <arog> I get it
[22:36:26 CET] <arog> right now i am doing
[22:36:35 CET] <arog> Oh wait i get it
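
A compressed sketch of what DHE describes: keep your own AVCodecContext (avcodec_alloc_context3() gives you that codec's defaults, which also covers arog's defaults question), open it, then copy its parameters into the stream's codecpar for the muxer. The codec choice and numbers are placeholders.

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Own encoder context + AVStream.codecpar instead of the
     * deprecated AVStream.codec. */
    static AVCodecContext *add_video_stream(AVFormatContext *oc, AVStream **out_st)
    {
        AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        AVCodecContext *enc = avcodec_alloc_context3(codec);  /* codec defaults */

        enc->width     = 1280;
        enc->height    = 720;
        enc->pix_fmt   = AV_PIX_FMT_YUV420P;
        enc->time_base = (AVRational){1, 30};
        if (oc->oformat->flags & AVFMT_GLOBALHEADER)
            enc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

        avcodec_open2(enc, codec, NULL);

        AVStream *st = avformat_new_stream(oc, NULL);
        st->time_base = enc->time_base;
        /* copy the opened encoder's parameters to the muxer-facing side */
        avcodec_parameters_from_context(st->codecpar, enc);

        *out_st = st;
        return enc;
    }
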
[22:55:56 CET] <vlad_> kepstin: damn now I'm getting "lookahead thread is already stopped"
[22:57:05 CET] <kepstin> vlad_: with the x264 encoder? that probably means that you gave it a null frame (which means end of input), then tried to give it another frame later.
[23:07:47 CET] <vlad_> kepstin: it looks like the current code is "handling delayed frames" by looping avcodec_encode_video2 until it sets the last argument to false (got packet)
[23:08:05 CET] <vlad_> btw the code I am modifying is here: https://github.com/dolphin-emu/dolphin/blob/master/Source/Core/VideoCommon/AVIDump.cpp
[23:08:44 CET] <vlad_> line 251
[23:09:52 CET] <kepstin> that's the correct way to flush frames when you're done encoding yeah, but after you do that you can't submit any more frames to the same encoder.
[23:09:58 CET] Action: kepstin is off
[23:23:59 CET] <vlad_> maybe I need AV_CODEC_CAP_DELAY?
[23:31:34 CET] <ranamat> i noticed with a video that i'm attempting to cut with ffmpeg, that there's missing keyframes (around 7 minutes worth) however there doesn't appear to be any loss of video or audio when playing with something like VLC... however i can't cut on a keyframe that doesn't exist
[23:31:58 CET] <ranamat> i tried forcing keyframes at durations and using x264opts with a keyint value of 1, but neither has worked
[23:32:01 CET] <ranamat> any suggestions?
[23:36:29 CET] <llogan> how do you know it has missing keyframes?
[23:36:51 CET] <faLUCE> hello. With snd_pcm_hw_params_set_period_size_near() (alsa) I set how many audio samples to read at each interrupt. Then I read all of them with snd_pcm_readi() (alsa too). This means that the buffer size must be a multiple of the value set with snd_pcm_hw_params_set_period_size_near(). Now, if I want to encode them, every encoder wants its own buffer of samples for each encode() call. So my question is: do I have to set the
[23:36:53 CET] <faLUCE> input samples' buffer size to a value that depends on the chosen encoder (so that the encoder's buffer size is a multiple of the input buffer size)? Or is there a value I can set the input's buffer size to regardless of the encoder I choose later? (I don't want to solve this by making copies of the samples)
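
One common way around that mismatch, not something suggested in this conversation, is to decouple the two sizes entirely: push whatever snd_pcm_readi() delivers into an AVAudioFifo and pull exactly frame_size samples per encode call. A sketch; the pre-allocated output frame and the sample format/channel count in the setup comment are assumptions.

    #include <libavutil/audio_fifo.h>
    #include <libavcodec/avcodec.h>

    /* Buffer arbitrary-sized ALSA reads; hand the encoder exactly
     * enc->frame_size samples at a time. "out" must be pre-allocated
     * with capacity >= enc->frame_size samples. */
    static int queue_and_encode(AVAudioFifo *fifo, AVCodecContext *enc,
                                AVFrame *out, uint8_t **alsa_data, int alsa_samples)
    {
        av_audio_fifo_write(fifo, (void **)alsa_data, alsa_samples);

        while (av_audio_fifo_size(fifo) >= enc->frame_size) {
            out->nb_samples = enc->frame_size;
            av_audio_fifo_read(fifo, (void **)out->data, enc->frame_size);
            avcodec_send_frame(enc, out);   /* packets drained elsewhere */
        }
        return 0;
    }

    /* setup, with placeholder format/channels:
     *   AVAudioFifo *fifo = av_audio_fifo_alloc(AV_SAMPLE_FMT_S16, 2, 4096);
     */
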
[23:37:07 CET] <ranamat> llogan: i dumped the keyframes for the video using ffprobe
[23:38:01 CET] <ranamat> then wrote a script that reads the output and dumps any potential gaps that it finds if there's not a keyframe every second
[23:38:23 CET] <vlad_> if avcodec_encode_video2 is deprecated, what should be used instead?
[23:39:07 CET] <faLUCE> vlad_: use avcodec_send_frame()/avcodec_receive_packet() instead
[23:39:19 CET] <llogan> ranamat: what's the ffprobe command you used?
[23:39:58 CET] <faLUCE> vlad_: http://ffmpeg.org/doxygen/3.2/group__lavc__encoding.html#ga2c08a4729f72f9bdac41b5533c4f2642
[23:42:23 CET] <vlad_> faLUCE: I see
[23:43:27 CET] <vlad_> what should I do with the received packet? currently my code loops avcodec_encode_video2 with null frame until no more packet is received, but this breaks with x264
[23:44:47 CET] <ranamat> llogan: ffprobe -i video.flv -select_streams v -show_entries frame=key_frame,pkt_pts_time -of csv=nk=1:p=0
[23:44:47 CET] <faLUCE> [23:43] <vlad_> what should I do with the received packet?  <--- this doesn't make sense... once you have encoded, it's up to you to choose what to do with the encoded packet
[23:52:16 CET] <vlad_> faLUCE: ok I see what the code is doing now. it does av_interleaved_write_frame on each packet, and then tries to avcodec_encode_video2 with a null frame in order to see if there are more packets. It's this second avcodec_encode_video2 that fails with x264
[23:55:41 CET] <vlad_> if I just break after the first av_interleaved_write_frame it seems to work
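
A sketch of the send/receive pattern faLUCE points to, including the flush: send each frame, drain all pending packets, and at end of input send NULL exactly once and drain until AVERROR_EOF. Feeding more frames after that flush is what produces the "lookahead thread is already stopped" error mentioned above. enc, ofmt_ctx and st are assumed to be already set up and opened.

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Encode one frame (or flush with frame == NULL) and mux the output. */
    static int encode_and_write(AVCodecContext *enc, AVFormatContext *ofmt_ctx,
                                AVStream *st, AVFrame *frame)
    {
        int ret = avcodec_send_frame(enc, frame);   /* NULL starts the flush */
        if (ret < 0)
            return ret;

        for (;;) {
            AVPacket pkt;
            av_init_packet(&pkt);
            pkt.data = NULL;
            pkt.size = 0;

            ret = avcodec_receive_packet(enc, &pkt);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return 0;                 /* needs more input / fully drained */
            if (ret < 0)
                return ret;

            av_packet_rescale_ts(&pkt, enc->time_base, st->time_base);
            pkt.stream_index = st->index;
            av_interleaved_write_frame(ofmt_ctx, &pkt);
            av_packet_unref(&pkt);
        }
    }
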
[23:58:22 CET] <llogan> ranamat: are you parsing the output with 'grep "1,"' or similar?
[00:00:00 CET] --- Thu Feb 16 2017


