[Ffmpeg-devel-irc] ffmpeg.log.20170412

burek burek021 at gmail.com
Thu Apr 13 03:05:02 EEST 2017


[02:32:38 CEST] <dystopia_> hello
[02:32:48 CEST] <dystopia_> im using ffmpeg to resize jpg's
[02:33:03 CEST] <dystopia_> ffmpeg -i in.jpg -sws_flags spline -s 1600:2263 out.jpg
[02:33:15 CEST] <dystopia_> it works but it compresses the image
[02:33:28 CEST] <dystopia_> Stream #0:0: Video: mjpeg, yuvj420p(pc), 1600x2263 [SAR 663059:659200 DAR 293:412], q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
[02:33:56 CEST] <dystopia_> images are all around 200kb after running it when the source images are around 16mb
[02:34:08 CEST] <dystopia_> is there any way to get it not to compress the image and just resize it
[02:59:31 CEST] <klaxa> dystopia_: by design that is not possible
[02:59:46 CEST] <klaxa> you can increase quality though
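[Editor's note: the mjpeg encoder defaults to a low bitrate target, which is why the resized output shrinks so much. Quality can be raised with `-q:v` (2–31 for mjpeg, lower is better); a sketch using the filenames from the command above:]

```shell
# Resize with spline scaling while keeping JPEG quality near-lossless.
# -q:v 2 is the highest mjpeg quality; the old command left the default bitrate.
ffmpeg -i in.jpg -vf "scale=1600:2263:flags=spline" -q:v 2 out.jpg
```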
[03:06:33 CEST] <some_user> Hi, I'm trying to config ffmpeg on windows, and getting: ERROR: opus not found using pkg-config
[03:06:45 CEST] <some_user> any idea what can I try? thanks!
[03:07:44 CEST] <some_user> I've tried following https://github.com/telegramdesktop/tdesktop/blob/dev/docs/building-msvc.md#ffmpeg
[03:11:24 CEST] <rjp421> using the latest zeranoe win64 static git build, can i fix or ignore the flooding of "[dshow @ 00000000003b9a20] real-time buffer [AVerMedia BDA Analog Capture Secondary] [video input] too full or near too full (136% of size: 3041280 [rtbufsize parameter])! frame dropped!"
[03:35:29 CEST] <rjp421> dshow list_options crashed ffmpeg on win7 x64 https://pastebin.com/raw/ugsYKcgG
[03:38:36 CEST] <rjp421> also list_devices will not show any audio only devices... and if i dont include the audio= device during list_options, i dont see the audio info (the last line, before crash)
[04:55:22 CEST] <thuann> Hi all I asked this question on StackOverflow earlier today http://stackoverflow.com/questions/43358943/ffmpeg-silencedetect-output-does-not-match-audacity-sound-finder
[04:55:33 CEST] <thuann> if someone could give me some pointers that'd be much appreciated
[05:21:15 CEST] <thebombzen> thuann: they probably have different algorithms
[05:21:22 CEST] <thebombzen> you shouldn't expect them to be exactly the same
[05:21:39 CEST] <thebombzen> plus, even if they have the same algorithm, the threshold is probably far lower
[06:07:09 CEST] <thuann> thebombzen: right but when i plot the audio waveform it seems like audacity marked the sound timestamp correctly while ffmpeg silencedetect seems a bit off
[06:08:39 CEST] <thebombzen> perhaps change the parameters? ffmpeg's audio filters aren't particularly amazing
[06:08:48 CEST] <thebombzen> it's usually better to use dedicated tools for audio filtering
[06:08:53 CEST] <thebombzen> rather than libavfilter
[06:11:42 CEST] <thuann> okie is there a tool that you would recommend
[06:11:46 CEST] <thuann> sorry im pretty new to this stuff
[06:14:28 CEST] <thebombzen> how about audacity
[06:14:31 CEST] <thebombzen> you said it worked
[06:45:06 CEST] <thuann> thebombzen: i need to be able to get these labels programmatically though that's why im trying ffmpeg
[07:24:28 CEST] <durandal_1707> thuann: silencedetect doesnt count from 0 apparently
[07:27:49 CEST] <durandal_1707> also make sure audacity is not resampling audio
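[Editor's note: both of silencedetect's parameters shift the reported ranges, so matching Audacity usually comes down to tuning them. A sketch (filename hypothetical):]

```shell
# Print silence start/end timestamps to stderr without writing any output.
# noise = amplitude threshold, d = minimum silence duration in seconds;
# adjust both until the ranges line up with Audacity's labels.
ffmpeg -i input.wav -af "silencedetect=noise=-50dB:d=0.5" -f null -
```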
[08:32:31 CEST] <Ursulo> im using ffmpeg to produce live hls streams, and i am getting a duration difference (declared vs real) on video and audio tracks
[08:49:19 CEST] <harry_> hi
[08:59:43 CEST] <harry_> anyone can help with a ffmpeg decklink output issue?
[11:44:09 CEST] <emiel_> Hey folks, I'm trying to pipe a series of png-data into a video and this works great with image2pipe, *however* I also want to explicitly set the pts/timing values per frame. I really want to use a pipe stream (eg otherwise ts_from_file could help). Is there any other way I could set timing values per frame when I have a stream of png images?
[11:45:19 CEST] <emiel_> Or perhaps a way to re-encode the index later to add the timings?
[11:48:41 CEST] <furq> i take it this isn't cfr
[11:53:54 CEST] <georgipopov> Hi all, i need some little help plz
[11:54:12 CEST] <georgipopov> i have firewire card on my linux lite machine
[11:54:15 CEST] <georgipopov> with 4 ports
[11:54:24 CEST] <georgipopov> but i can not use ffmpeg with it
[11:54:27 CEST] <georgipopov> any help ?
[12:05:29 CEST] <st-gourichon-fid> georgipopov, try to be more specific and detailed.  What do you imagine doing?  Capturing DV video from a firewire camcorder?
[12:06:36 CEST] <georgipopov> yes
[12:06:44 CEST] <georgipopov> relaying to tcp
[12:11:50 CEST] <georgipopov> ffmpeg -y -nostdin \
[12:11:50 CEST] <georgipopov> 	-f iec61883 \
[12:11:50 CEST] <georgipopov> 	-i auto \
[12:11:50 CEST] <georgipopov> 	-c:v copy -c:a copy \
[12:11:50 CEST] <georgipopov> 	-f matroska \
[12:11:51 CEST] <georgipopov> 	tcp://localhost:10000
[12:12:44 CEST] <georgipopov> i have this error : DV packet queue overrun, dropping. How can i fix this packet queue?
[12:13:43 CEST] <furq> presumably -dvbuffer
[12:13:51 CEST] <furq> !indev iec61883
[12:13:52 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-devices.html#iec61883
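[Editor's note: the iec61883 input device exposes a `dvbuffer` option (size of the buffer for incoming data, in frames). A sketch of the command pasted above with a larger buffer; the value 500 is an assumption to tune:]

```shell
ffmpeg -y -nostdin \
	-f iec61883 -dvbuffer 500 \
	-i auto \
	-c:v copy -c:a copy \
	-f matroska \
	tcp://localhost:10000
```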
[12:24:33 CEST] <emiel_> @furq what's cfr?
[12:24:42 CEST] <furq> constant framerate
[12:25:26 CEST] <emiel_> Ah, no. I'm capturing screenshots from chrome headless, they come with a "timeIndex" that's basically all over the place based on when the document is repainted
[12:26:41 CEST] <emiel_> I've also tried to use the concat plugin, but also doesn't really seem to work with pipes - or at least: a single pipe for all the frames.
[12:27:06 CEST] <furq> i doubt there's a way to do this with ffmpeg
[12:27:31 CEST] <furq> you might be able to postprocess without reencoding using something like l-smash's timelineeditor
[12:27:41 CEST] <emiel_> Looking at the source, it's probably one or two extra lines in the image2 module, but I'd really prefer not to maintain my own ffmpeg fork also :)
[12:28:32 CEST] <furq> how would you feed the pts values to ffmpeg in that case
[12:28:35 CEST] <furq> wallclock time?
[12:28:49 CEST] <emiel_> Yes, that would be acceptable to begin with I guess. But not ideal either
[12:29:08 CEST] <furq> i'm not sure how else you'd trivially patch it into ffmpeg
[12:29:55 CEST] <emiel_> Or maybe somehow pass it over the fd in one of the png headers
[12:30:56 CEST] <furq> well yeah you probably want to take a look at timelineeditor
[12:31:10 CEST] <emiel_> Is it this? http://l-smash.github.io/l-smash/
[12:31:12 CEST] <furq> yeah
[12:31:22 CEST] <furq> the docs aren't great but it looks like you should be able to put each frame pts into a timecode file and then remux
[12:33:23 CEST] <furq> fwiw there is already `-video_pts wallclock` for some input devices, so you could potentially get that merged upstream if you added it to image2pipe
[12:33:56 CEST] <furq> assuming it's not there already, the docs for *pipe don't seem to exist
[12:34:49 CEST] <emiel_> No, it's very much undocumented. It's in the same source files as image2 though
[12:36:29 CEST] <emiel_> Where did you find that video_pts option?
[12:36:35 CEST] <furq> !indev decklink
[12:36:35 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-devices.html#decklink
[12:36:45 CEST] <emiel_> Ah, it is ptr_source?
[12:36:48 CEST] <emiel_> pts*
[12:36:57 CEST] <furq> ?
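[Editor's note on the per-frame-timing question: if the frames can land on disk instead of coming through a pipe, the concat demuxer accepts a per-entry `duration`, which effectively sets each frame's pts. A sketch with hypothetical filenames and durations; note the documented quirk that the last image must be listed twice for its duration to apply:]

```shell
# frames.txt gives each png an explicit display duration (in seconds).
cat > frames.txt <<'EOF'
file 'frame0001.png'
duration 0.040
file 'frame0002.png'
duration 0.120
file 'frame0002.png'
EOF
# -vsync vfr keeps the variable timestamps instead of resampling to CFR.
ffmpeg -f concat -safe 0 -i frames.txt -vsync vfr -pix_fmt yuv420p out.mp4
```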
[14:53:51 CEST] <opus111> Hello.  Does anyone here know about MP4 files and avformat_open_input()?
[14:56:07 CEST] <opus111> I have an MP4 file that opens in my Browser, and the command-line FFMPEG works on it.  But when I try to open it with my code, it complains "[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fe26fb6bc00] moov atom not found" and then I get an error code "Invalid data found when processing input"
[14:56:50 CEST] <opus111> My code seems to work with MP3 and OGG files
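[Editor's note: "moov atom not found" from avformat_open_input usually means the demuxer could not read or seek far enough into the file — often a custom AVIOContext with a broken seek callback, or a truncated copy — rather than a bad file, since the CLI opens it fine. A minimal open-and-report sketch (path hypothetical):]

```c
#include <libavformat/avformat.h>
#include <stdio.h>

int main(void) {
    AVFormatContext *fmt = NULL;
    /* Let libavformat do its own I/O; a custom AVIOContext that cannot
     * seek is a common cause of "moov atom not found" on valid MP4s. */
    int ret = avformat_open_input(&fmt, "input.mp4", NULL, NULL);
    if (ret < 0) {
        char err[AV_ERROR_MAX_STRING_SIZE];
        av_strerror(ret, err, sizeof(err));
        fprintf(stderr, "open failed: %s\n", err);
        return 1;
    }
    if (avformat_find_stream_info(fmt, NULL) < 0)
        fprintf(stderr, "could not read stream info\n");
    av_dump_format(fmt, 0, "input.mp4", 0);
    avformat_close_input(&fmt);
    return 0;
}
```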
[14:57:26 CEST] <inflex> Has anyone tried reconstructing a mixed up sectors video (h264 in this case) by trying each potential sector/block left in a pool until one matches with minimum errors?  I've been considering this for JPEG reconstruction but tonight I now need it for h264 :(
[14:58:13 CEST] <inflex> ( something mangled my FDR/FAT inode chain tables, so now I have the video data, but scrambled )
[14:59:12 CEST] <cart_man> Hi everyone. Where can I get some info on what a frame looks like inside a .ASF file or .WMV file ? I need to build a converter in C++ that converts from .ASF to something else but first I need to know how and what a .ASF file consists of
[14:59:31 CEST] <cart_man> Or maybe just a website with info to understand the encoding of such files
[15:44:50 CEST] <klaxa> cart_man: maybe this will help https://ffmpeg.org/doxygen/trunk/remuxing_8c_source.html
[15:45:06 CEST] <klaxa> looks like it has recently been updated to use the new codecparameter api as well
[16:05:45 CEST] <d3bug> is it possible to encode to hevc without an nvidia card?  all I see for encoders is hevc_nvenc and nvenc_hevc...   regular hevc just tells you to use one of the others... is there a way to force it to use the non-gpu accelerated encoder?
[16:09:03 CEST] <d3bug> I vote we change IRC to IRI.... since nobody chats, it would make more sense to call it Internet Relay Idle...
[16:17:19 CEST] <jkqxz> d3bug:  libx265
[16:17:42 CEST] <d3bug> so you have to install libx265 and libx264 to do it without nvenc?
[16:20:06 CEST] <jkqxz> s/ and libx264// - libx264 has nothing to do with it.
[16:20:54 CEST] <jkqxz> And there are other usable H.265 encoders, x265 is just the most common software one.
[16:22:31 CEST] <d3bug> jkqxz: i asked about 264 as well because I noticed the only ones in ffmpeg for 264 are also the nvenc ones as well
[16:25:17 CEST] <klaxa> x264 and x265 provide library interfaces which ffmpeg can use
[16:26:04 CEST] <jkqxz> In your build, presumably?  There are lots of different H.264 encoder interfaces available - x264, openh264, omx, vaapi, nvenc, videotoolbox, qsv.
[16:26:10 CEST] <klaxa> mainstream distributions usually link ffmpeg at least against libx264
[16:30:22 CEST] <d3bug> jkqxz:  when configuring all I did was enable-nonfree, and it only gave me nvenc versions of both x265 and x264... oh and this was cloned from git btw... like 30min ago... so I guess (for whatever reason) they decided everyone has an nvidia card... lol
[16:31:03 CEST] <d3bug> I am building x265 now, then I'll do x264 and I guess I'll probably have to specify to enable those two specifically
[16:31:09 CEST] <furq> yes
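[Editor's note: for reference, the configure switches in question — x264 and x265 are not autodetected because of their licences, and both require --enable-gpl:]

```shell
./configure --enable-gpl --enable-libx264 --enable-libx265
```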
[16:32:53 CEST] <d3bug> jkqxz:  anyway, thanks for the help.  have a good one... back to compiling for me.
[16:32:54 CEST] <jkqxz> Yeah.  The autodetection of everything is silly like that.
[16:33:29 CEST] <furq> what a nice young man
[16:42:42 CEST] <d3bug> jkqxz:  I forgot to ask something... is there a simple way to enable compiling in ALL libraries that are set to NO by default instead of specifying all 30 or so of them with --enable switches?
[16:43:00 CEST] <d3bug> something like --enable-all-libs or something?
[16:44:40 CEST] <jkqxz> No.  Some subset of them are autodetected once the external dependency is present, though.
[16:47:38 CEST] <d3bug> jkqxz: now that well and truly sucks.  This is gonna take awhile... :S
[16:50:03 CEST] <furq> you really don't want every dependency
[16:52:06 CEST] <d3bug> furq:  yeah, I do... I want to run it static so it can be run with every feature no matter where that exe is.
[16:52:27 CEST] <DHE> and that's where the push back comes from. a lot of these things are not static link friendly.
[16:52:28 CEST] <d3bug> regardless of which machine I am using it on
[16:52:53 CEST] <furq> yeah that isn't going to work
[16:53:09 CEST] <d3bug> furq:  why?
[16:53:25 CEST] <furq> a lot of the libs conflict with each other, a lot of them are os/hardware specific, and a lot of them are just for ancient codecs that you shouldn't ever use for anything new
[16:53:50 CEST] <DHE> for example nvenc needs dynamic linking against the nVidia libraries. even with dlsym() support I think you still need a dynamic ffmpeg binary for that to work
[16:54:12 CEST] <DHE> external libs like x264 are usually static link safe, but hardware not so much
[16:54:15 CEST] <furq> and yeah static linking all external libs fucks with pretty much all the hardware encoding stuff
[16:54:36 CEST] <furq> maybe qsv still works
[16:54:51 CEST] <d3bug> what I am interested in specifically is the encoders and decoders, not hardware specific stuff.
[16:55:06 CEST] <furq> you mean like nvenc as opposed to nvenc
[16:55:19 CEST] <d3bug> and I think it was a terrible idea to turn off x265 and x264 by default because not everyone has NVIDIA
[16:55:38 CEST] <furq> nvenc was enabled for you because you have an nvidia card
[16:55:47 CEST] <d3bug> no... I don't
[16:55:54 CEST] <d3bug> I am AMD exclusively
[16:56:03 CEST] <d3bug> A10 processor and an RX480
[16:56:15 CEST] <jkqxz> nvenc is enabled for you because it is autodetected and depends only on the dynamic linker, so you basically always get it.  Kindof stupid, but meh.
[16:56:46 CEST] <d3bug> I don't care if it's enabled, what I care about is them taking out the others by default... that is stupid IMHO.
[16:56:59 CEST] <furq> oh does ffmpeg ship the nvenc headers now
[16:57:03 CEST] <furq> that is pretty dumb then
[16:57:08 CEST] <d3bug> yeah
[16:57:22 CEST] <jkqxz> libx264 is not autodetected because it has licence constraints.  You could probably argue for it being included automatically in --enable-gpl builds, because basically everyone wants it.
[16:57:26 CEST] <furq> normally the [autodetect] stuff looks for headers/libs on your systems
[16:57:30 CEST] <furq> -s
[16:57:44 CEST] <furq> but evidently nvenc looks for a header within the ffmpeg tree
[16:57:45 CEST] <d3bug> I just want to enable any encoders and decoders filters whatnot that don't depend on specific hardware.
[16:57:51 CEST] <furq> i'm not sure why it even still bothers to run a check
[16:58:14 CEST] <furq> also it's not like it'll take that much longer to enable them all
[16:58:14 CEST] <d3bug> I wish there was a configure switch for that instead of having to specify them manually... lol
[16:58:24 CEST] <furq> you'll still need to install or build all the libs
[16:58:42 CEST] <d3bug> yeah I know.
[16:58:56 CEST] <Threads> lib all the things
[16:59:02 CEST] <furq> there's like 80 not including the hardware stuff
[16:59:05 CEST] <d3bug> I'll static the ones that can be, and as a last resort disable the ones that can't be used that way
[16:59:11 CEST] <furq> i don't think there's any reason to have more than 10 or so
[16:59:36 CEST] <d3bug> I want to be able to encode or decode ANYTHING... it's just how I roll :P
[16:59:42 CEST] <furq> decoding anything is fine
[16:59:58 CEST] <furq> nearly every format/codec has a builtin decoder now
[17:00:22 CEST] <furq> and you really don't want to encode xvid in 2017
[17:00:42 CEST] <d3bug> hey now, I might be a masochist you know... :)
[17:01:18 CEST] <d3bug> I remember when xvid was the cat's meow...
[17:01:47 CEST] <d3bug> wasn't that long ago irl... but in digital time it was a century ago or so.
[17:02:12 CEST] <furq> then you've got stuff like gnutls/openssl, libshine/libmp3lame, libkvazaar/libx265, librtmp vs native rtmp + gcrypt/gmp, etc etc
[17:02:14 CEST] <Threads> idiots still use xvid in 2017
[17:02:20 CEST] <furq> i know
[17:02:26 CEST] <furq> i share a website with some of them
[17:02:44 CEST] <d3bug> libkvazaar sounds like some kind of scifi disease... lol
[17:02:54 CEST] <Threads> lol
[17:02:57 CEST] <furq> oh and libopenh264 vs libx264
[17:03:42 CEST] <d3bug> oooh, it's an open source hevc... how does it perform in comparison to x265 ?
[17:03:51 CEST] <furq> worse
[17:03:55 CEST] <d3bug> aaah
[17:04:01 CEST] <furq> x265 is also open source
[17:04:09 CEST] <d3bug> I would love to see some amd accel for 265
[17:04:47 CEST] <d3bug> how possible is that (with the newer hardware, like rx480 and such)
[17:04:48 CEST] <furq> i guess kvazaar is lgpl, but it's generally a worse encoder afaik
[17:04:56 CEST] <d3bug> aaah
[17:05:26 CEST] <furq> the amd hwaccel stuff isn't really there yet in ffmpeg afaik
[17:05:38 CEST] <Threads> amd is like the step child you will always have around you but never love
[17:05:47 CEST] <d3bug> oh I know, I just mean is there an amd equiv of cuda?
[17:06:06 CEST] <furq> there's an amd equivalent of nvenc but it's not supported well
[17:06:34 CEST] <furq> they have decode hardware but i have no idea if that's usable
[17:06:49 CEST] <furq> apparently polaris supports hevc and vp9
[17:07:03 CEST] <d3bug> Threads: oh I dunno, I love my rx480... I can do 4k (if I had a 4k monitor that is) for $279... as opposed to the 1080 which is a lot more $ that I don't have. :)
[17:07:21 CEST] <furq> hevc is a bit of a waste if you're not dealing with a lot of 1440p/4k
[17:07:54 CEST] <furq> hardly anyone uses it because of the insane license fee situation, and that's probably not going to change
[17:08:05 CEST] <d3bug> furq: true... I do have a 4k TV so I do use it for movies and such
[17:08:14 CEST] <furq> and the compression gains aren't really worth it for a home user
[17:08:42 CEST] <furq> unless you don't mind waiting 10x longer than x264 veryslow for like 25% size reduction
[17:08:50 CEST] <d3bug> furq: not unless you can encode better than realtime, it's just not worth it... you can get very nice results with 264.
[17:09:09 CEST] <d3bug> even in 4k
[17:09:20 CEST] <furq> well i mostly encode sd and occasionally 720 so hevc is a complete waste of time for me
[17:09:21 CEST] <jkqxz> The AMD decode hardware works fine with DXVA2, VAAPI or VDPAU.  The encode situation is ... not so good.  It kindof works a bit with VAAPI and H.264 only?
[17:09:22 CEST] <Threads> did test samples a while back between x26(4/5) and all i saved was something like 100mb which wasnt needed really
[17:09:31 CEST] <furq> if i dealt with a lot of 4k then maybe it'd be a different story
[17:09:41 CEST] <d3bug> if I  encode 4k on 264, I always do at least a 2pass encode to save every bit I can.
[17:09:54 CEST] <Threads> why not crf ?
[17:10:04 CEST] <furq> yeah 2pass isn't really useful unless you're targeting a specific filesize
[17:10:08 CEST] <furq> with x264 anyway
[17:10:27 CEST] <d3bug> it is to me
[17:10:40 CEST] <jkqxz> Because you want to save exactly the number of bits necessary to fit onto a CD or DVD, maybe...
[17:10:59 CEST] <furq> "unless you're targeting a specific filesize"
[17:11:03 CEST] <d3bug> I always try and keep it down (size) as much as possible because I love to have uncompressed or DTS Master audio which is big.
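[Editor's note: a sketch of the size-targeted two-pass x264 invocation being discussed (bitrate and filenames hypothetical; the bitrate would come from target size divided by duration):]

```shell
# Pass 1: analysis only — discard output, skip audio, write stats file.
ffmpeg -y -i in.mkv -c:v libx264 -b:v 8000k -pass 1 -an -f null /dev/null
# Pass 2: real encode using the pass-1 stats to hit the size target.
ffmpeg -i in.mkv -c:v libx264 -b:v 8000k -pass 2 -c:a copy out.mkv
```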
[17:12:51 CEST] <rjp421>  using the latest zeranoe win64 static git build, can i fix or ignore the flooding of "[dshow @ 00000000003b9a20] real-time buffer [AVerMedia BDA Analog Capture Secondary] [video input] too full or near too full (136% of size: 3041280 [rtbufsize parameter])! frame dropped!"
[17:13:17 CEST] <furq> increase -rtbufsize
[17:13:29 CEST] <furq> failing that, probably not
[17:13:50 CEST] <d3bug> ttyl.  thanks
[17:14:53 CEST] <rjp421> ty, what value, for 1080p/i 29.97fps?
[17:16:05 CEST] <rjp421> dshow list_options crashed ffmpeg on win7 x64 https://pastebin.com/raw/ugsYKcgG also list_devices will not show any audio only devices... and if i dont include the audio= device during list_options, i dont see the audio info (the last line, before crash)
[17:16:37 CEST] <furq> shrug
[17:16:45 CEST] <furq> presumably something bigger than 136% of 3041280
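[Editor's note: a rough sizing rule is that -rtbufsize should hold at least several raw frames. A sketch of the arithmetic for 1080p yuyv422 at ~30 fps; the half-second of headroom is an assumption to tune:]

```python
# One raw yuyv422 frame is width * height * 2 bytes.
width, height = 1920, 1080
bytes_per_pixel = 2            # yuyv422 packs 2 bytes per pixel
frame_size = width * height * bytes_per_pixel
fps = 29.97
headroom_seconds = 0.5         # assumed buffering headroom
rtbufsize = int(frame_size * fps * headroom_seconds)
print(frame_size)              # 4147200 bytes per frame
print(rtbufsize)               # 62145792 -> e.g. pass -rtbufsize 64M
```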
[17:19:15 CEST] <rjp421> it seems that i dont have any audio only devices, just a device with both video and audio
[17:21:52 CEST] <rjp421> so far i can only get the crossbar pins set correctly with ffmpeg.. obs seems to auto set them, and setting the pins in vlc capture options saw no audio stream..
[17:24:16 CEST] <rjp421> but fmle tells me both the video and audio devices are in use
[17:25:23 CEST] Action: rjp421 has more googling to do after win updates
[17:28:49 CEST] <rjp421> having this exact problem with amarectv (used to split the capture) https://forum.videohelp.com/threads/352699-Confused-with-AVerMedia but havent found the answer on google... not very familiar with dshow
[17:32:19 CEST] <rjp421> that particular post has no useful answers but the device and app+error are the same
[17:33:08 CEST] <rjp421> i dont suppose ffmpeg can show the tree of "filters" its using to input the devices?
[17:33:46 CEST] <rjp421> since ffmpeg can play it properly, ill know what works
[17:35:20 CEST] <BtbN> there is some tool that can show all filter graphs
[17:36:24 CEST] <rjp421> BtbN, ty do u know a name i can google for pls?
[17:36:34 CEST] <BtbN> no
[17:38:13 CEST] <rjp421> np ill shotgun search it with 100 phrases :p
[17:38:32 CEST] <rjp421> updates going slow...
[17:39:48 CEST] <rjp421> i need to know what exactly ffmpeg is using while its playing correctly
[18:26:52 CEST] <djk> I am trying make a time-lapse video out of stills taken every 15sec. The following works but it runs the stills too fast, compressing several hours down to a minute. What am I missing to slow down the transition between the stills?
[18:26:52 CEST] <djk> ffmpeg -f image2 -pattern_type glob -i 'wmstill*.jpg'  -framerate 30 -crf 35 -preset veryslow -c:v libx264 -pix_fmt yuv420p out2.mp4
[18:27:28 CEST] <c_14> move -framerate before -i
[18:27:40 CEST] <c_14> then just adjust
[18:27:50 CEST] <c_14> in your case you probably want -framerate 1
[18:27:58 CEST] <c_14> that'll have 1 frame every 15 seconds
[18:28:35 CEST] <furq> i guess you also want -r 30 as an output option
[18:28:45 CEST] <furq> but since i know this is x264 on an rpi, maybe not
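[Editor's note: putting c_14's and furq's suggestions together, the corrected invocation from the question above would look like:]

```shell
# -framerate before -i sets the input rate (each still shown for 1 second);
# -r 30 duplicates frames so the output is a standard 30 fps stream.
ffmpeg -framerate 1 -f image2 -pattern_type glob -i 'wmstill*.jpg' \
       -r 30 -c:v libx264 -crf 35 -preset veryslow -pix_fmt yuv420p out2.mp4
```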
[18:29:46 CEST] <c_14> Is it a pi?
[18:29:50 CEST] <c_14> Can a pi handle veryslow?
[18:30:15 CEST] <rjp421> with omx_h264?
[18:30:38 CEST] <furq> that should be fine
[18:31:07 CEST] <rjp421> actually i use veryslow with my pi cam... still havent changed from raspivid cmd to full ffmpeg :p
[18:31:37 CEST] <kerio> why do you want to use ffmpeg?
[18:31:41 CEST] <kerio> just to be less efficient?
[18:31:53 CEST] <kerio> raspivid works entirely on the raspberry pi videocore
[18:32:28 CEST] <vlt> It encodes?
[18:32:48 CEST] <kerio> it can output raw h264 at least
[18:33:04 CEST] <kerio> alternatively you can use ffmpeg but only to grab h264 output through the v4l2 interface to the camera
[18:33:24 CEST] <kerio> the important part is to not copy raw frames to and from the videocore
[18:33:34 CEST] <rjp421> kerio, thats what i was thinking when i decided not to bother switching yet... but then furq assured me the omx_h264 was just as good, so im just waiting for a chance to bs with it and try
[18:34:06 CEST] <kerio> h264_omx is good but if your video source comes from the videocore itself you are going to waste a lot of cpu time on mindlessly copying memory back and forth
[18:34:15 CEST] <kerio> also, "good"
[18:34:23 CEST] <kerio> all hardware h264 encoders suck :^)
[18:34:27 CEST] <rjp421> im streaming 720p at 24fps h264 to rtmp
[18:34:38 CEST] <rjp421> with almost no load on a pi2
[18:34:55 CEST] <rjp421> i mean like under 5% of one core
[18:35:06 CEST] <kerio> rjp421: i'm pretty sure you can just ffmpeg -f v4l2 -c:v h264 -i /dev/something -f flv -c:v copy rtmp://whatever
[18:35:36 CEST] <kerio> plus or minus some syntax
[18:35:45 CEST] <furq> does it really make a difference if your source is rawvideo
[18:36:02 CEST] <kerio> i assumed their source was the raspberry camera
[18:36:09 CEST] <furq> ffmpeg is definitely slower if your webcam is mjpeg because the mmal mjpeg decoder isn't in ffmpeg yet
[18:36:13 CEST] <kerio> if it's some other camera then no, it doesn't make sense
[18:37:10 CEST] <kerio> ffmpeg -f v4l2 -input_format h264 -i /dev/video0 -c:v copy -f flv rtmp://whatever
[18:37:16 CEST] <furq> i mean with the pi cam
[18:37:24 CEST] <furq> obviously with a usb camera it can't make a difference
[18:37:37 CEST] <kerio> furq: you can get the videocore to output h264 without having to juggle rawvideo around
[18:37:45 CEST] <rjp421> kerio, depends how you set the v4l device options, like from YUY2 or MJPG to H264 so its sending raw h264.. but idk how to further define profile/level (always 4.1?) and fps/bitrate/resolution of the input without raspivid and some v4l2-ctl cmds
[18:37:58 CEST] <furq> well yeah but you can do that with ffmpeg
[18:38:54 CEST] <djk> actually this is on a separate server with more capability. The stills are grabs from the rpi, which streams using HawkEye, which uses parts of mjpg-streamer.
[18:39:48 CEST] <djk> The idea is to post on website and maybe to facebook, youtube, etc.
[18:41:14 CEST] <rjp421> kerio, i havent tried that cmd but i will ty
[19:20:27 CEST] <MR-2> having trouble converting avi to mp4
[19:20:34 CEST] <MR-2> Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 160x120, 13253 kb/s, 42.89 fps, 42.89 tbr, 42.89 tbn, 42.89 tbc
[19:21:34 CEST] <mdavis> What's the trouble?
[19:21:58 CEST] <MR-2> output is cropped and speed up
[19:22:23 CEST] <MR-2> file size ends up being like 2% of the original file
[19:22:56 CEST] <mdavis> If you're converting from rawvideo to mp4 (h264?) I certainly hope so
[19:23:12 CEST] <mdavis> Speed up may be because your input is an unusual framerate
[19:23:37 CEST] <MR-2> ffprobe says it's 42.89
[19:23:44 CEST] <MR-2> which it probably isn't
[19:23:48 CEST] <MR-2> more like 24
[19:25:09 CEST] <mdavis> I wouldn't rely on ffprobe to determine the dimensions/framerate of rawvideo
[19:25:55 CEST] <mdavis> If you know the actual dimensions, you can use:
[19:26:34 CEST] <mdavis> ffmpeg -r 24 -s 640x480 -i input.avi
[19:26:52 CEST] <mdavis> I substituted a random size, but it should illustrate the idea
[19:28:05 CEST] <MR-2> hmm gonna try that
[19:28:15 CEST] <MR-2> tried playing it in VLC, says the index is broken
[19:28:16 CEST] <MR-2> :/
[19:28:30 CEST] <MR-2> the input that's
[19:29:59 CEST] <mdavis> I probably ought to know for sure, but I suspect that the avi container doesn't actually hold any information about the video size and framerate
[19:30:50 CEST] <mdavis> Without that, rawvideo is just a stream of pixel data, no way to know for sure how it's supposed to be arranged
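[Editor's note: one way to sanity-check guessed rawvideo dimensions is to see whether the payload divides evenly into whole frames. A sketch using the dimensions from the stream line above; the payload size is hypothetical:]

```python
# yuyv422 stores 2 bytes per pixel, so a 160x120 frame has a fixed size.
width, height = 160, 120
bytes_per_frame = width * height * 2
payload = 3_840_000            # hypothetical raw payload size in bytes
frames, leftover = divmod(payload, bytes_per_frame)
print(bytes_per_frame)         # 38400 bytes per frame
print(frames, leftover)        # a nonzero leftover suggests wrong dimensions
```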
[19:33:35 CEST] <MR-2_> thnx for the input:)
[19:35:42 CEST] <MR-2_> :)
[19:35:50 CEST] <mdavis> Sure thing
[19:37:03 CEST] <MR-2_> think the camera is outputting corrupt files
[20:18:53 CEST] <rjp421> MR-2, not sure if it matters but try forcing the output pixel format to yuv420p
[20:45:57 CEST] <mavi> Does anyone know if there is a point to lowering AQ from 1 to .6 for instance for lower bitrate films?
[20:46:23 CEST] <mavi> Lowering psyrd helped a lot. But not sure about aq. This is for x264 codec.
[21:10:51 CEST] <thebombzen> mavi: as you probably are aware AQ will cause x264 to allocate bits more toward highdetail areas and less toward lowdetail areas
[21:11:22 CEST] <thebombzen> more aggressive AQ is better if edge boundaries are more important
[21:11:45 CEST] <thebombzen> but higher AQ will cause more nonuniform quality if it's too high
[21:12:16 CEST] <thebombzen> lowering the AQ from 1 to .6 will cause the artifacts to be more uniformly distributed across the frame, which could look better, but could also look worse if the detail is more important
[21:12:40 CEST] <thebombzen> I think for liveaction film it's probably best to use -tune:v film
[21:13:55 CEST] <mavi> I actually don't use tune film, as it's ultra low bitrate, aka 250 kbps for 480p
[21:14:08 CEST] <thebombzen> tunes are *more* effective at lower bitrates
[21:14:16 CEST] <thebombzen> not less effective
[21:15:03 CEST] <mavi> From my test, yes but only up to a certain point. Like at crf 23 yes. but at crf 28, having more psy-rd and a psy trellis of :15, just increases artifacts.
[21:15:25 CEST] <thebombzen> that's probably because you're tuning it incorrectly
[21:15:26 CEST] <mavi> I ended up setting it to .5:00 for psy
[21:15:37 CEST] <thebombzen> the tunes are really good
[21:15:43 CEST] <mavi> how? using the film preset is easy..
[21:15:54 CEST] <thebombzen> if you're encoding liveaction film, I think -tune:v film is your best bet
[21:16:46 CEST] <thebombzen> beyond that, the minor tweaks you could make to the parameters are going to depend on the specific video you have
[21:16:49 CEST] <thebombzen> and there's no easy test
[21:17:08 CEST] <mavi> right, yea. I had to do around 20 tests to figure out optimal psy. Was hoping to avoid that for AQ
[21:17:35 CEST] <mavi> just leaving it at one lol
[21:17:40 CEST] <thebombzen> perhaps you shouldn't set it
[21:17:56 CEST] <mavi> Too hard to tell the difference of aq in my test setup
[21:18:07 CEST] <mavi> they look fairly similar.
[21:18:45 CEST] <thebombzen> the AQ setting is more useful for animation-style things
[21:18:53 CEST] <thebombzen> the default is good for most videos
[21:19:06 CEST] <mavi> yea it seems like it.
[21:19:29 CEST] <mavi> It's for the animation line retention I believe.
[21:20:15 CEST] <mavi> Alright cool, thanks!
[21:37:52 CEST] <rjp421> MR-2_, not sure if it matters but try forcing the output pixel format to yuv420p
[21:39:10 CEST] <furq> thebombzen: i wouldn't be surprised if film had a negative effect at such low bitrate
[21:39:47 CEST] <thebombzen> at an extremely-crazy-low bitrate, I can see -tune:v film having a negative effect
[21:39:58 CEST] <thebombzen> but even at "very low bitrates" I think it's good for real film
[21:40:34 CEST] <thebombzen> tune film decreases the deblocking, which could be a problem at crazy-low bitrates if the macroblocking gets out of hand
[21:41:12 CEST] <thebombzen> cause macroblocking artifacts are characteristic of low-bitrate content
[22:03:43 CEST] <kepstin> on the other hand, too much deblocking makes the video look really blurry, sometimes macroblocking artifacts give the impression of more detail even if it's not really there
[22:20:29 CEST] <MR-2_> gonna ask the source of the files about a few things because i'm assuming it has a resolution and framerate that it may or may not have had when being recorded
[22:20:50 CEST] <MR-2_> so gonna wait until i get an answer from the source
[22:21:35 CEST] <MR-2_> just got a bunch of random avi:s that are broken and some that aren't broken
[22:42:54 CEST] <rjp421> MR-2_, not sure if it matters but try forcing the output pixel format to yuv420p
[22:43:48 CEST] <rjp421> im not sure how to other than -vf "format=yuv420p"
[22:44:32 CEST] <rjp421> after the -i and before the output file
[22:54:48 CEST] <MR-2_> yeah have already tried that
[23:15:59 CEST] <blue_misfit> hey guys - I currently use map_channel to pull channels out of a file with one video track and one audio track containing 8 channels
[23:16:23 CEST] <blue_misfit> so I do like map_channel 0.1.0 -map_channel 0.1.1 to get the first 2 channels, which is great
[23:16:49 CEST] <blue_misfit> BUT this breaks when I feed it a file without a video track because it's now 0.0.0 and 0.0.1
[23:16:55 CEST] <blue_misfit> how can I make this adaptive?
[23:18:02 CEST] <klaxa> can you try 0.a.0 and 0.a.1 ?
[23:18:09 CEST] <klaxa> a for audio
[23:18:10 CEST] <blue_misfit> when using the amerge filter and selecting tracks this is quite easy - I just do [0:a:0] and [0:a:1]
[23:18:15 CEST] <blue_misfit> yeah I think I tried that
[23:18:17 CEST] <blue_misfit> one sec
[23:18:36 CEST] <blue_misfit> syntax error
[23:18:38 CEST] <blue_misfit> Syntax error, mapchan usage: [file.stream.channel|-1][:syncfile:syncstream]
[23:19:34 CEST] <klaxa> according to the documentation it should be input_file_id.stream_specifier.channel_id
[23:19:37 CEST] <klaxa> hmm
[23:19:48 CEST] <klaxa> maybe it was added just recently and your ffmpeg is too old
[23:19:53 CEST] <blue_misfit> damn
[23:19:59 CEST] <blue_misfit> mine is fairly new
[23:20:00 CEST] <blue_misfit> one sec
[23:20:14 CEST] <blue_misfit> trying latest zeranoe
[23:20:49 CEST] <blue_misfit> yeah same error
[23:21:29 CEST] <klaxa> if you can, you can do a two pass step by running ffprobe first and parsing its output to select the correct stream
[23:21:45 CEST] <klaxa> and then run ffmpeg
[23:21:58 CEST] <blue_misfit> indeed we kind of do this already to count the number of audio tracks and the number of channels
[23:22:05 CEST] <blue_misfit> problem is we don't take total tracks into consideration :)
[23:22:15 CEST] <blue_misfit> ok thanks I think we'll have to figure that one out
[23:23:15 CEST] <klaxa> well if you already collect that much metadata about the files, it shouldn't be too hard to find the missing few pieces :)
[23:26:24 CEST] <blue_misfit> indeed - just annoying and with lots of ripple effects :)
[23:26:35 CEST] <blue_misfit> looking into alternatives like using pan
[23:26:59 CEST] <blue_misfit> we use amerge in cases where we have 1 track per channel, which is so much easier
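[Editor's note: the stream-specifier-based alternative mentioned above can be done with the pan filter, which takes [0:a:0] and so avoids map_channel's absolute stream index. A sketch with hypothetical filenames:]

```shell
# Take the first two channels of the first audio stream, whatever its index,
# so the command works whether or not the file has a video track.
ffmpeg -i in.mov -filter_complex "[0:a:0]pan=stereo|c0=c0|c1=c1[out]" \
       -map "[out]" -c:a pcm_s16le out.wav
```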
[00:00:00 CEST] --- Thu Apr 13 2017

