burek021 at gmail.com
Mon Jan 9 03:05:01 EET 2017
[00:16:52 CET] <sleet> i have jpg captures of traffic cameras that i want to stitch together into a web-friendly video time-lapse at 2-3 fps
[00:17:05 CET] <sleet> i have it working fine, but the encoding time seems extremely slow
[00:19:25 CET] <sleet> for each cam i keep the last 150 jpgs, 328x295, 15-20 kB each
[00:19:57 CET] <sleet> im doing it kind of naive: ffmpeg -v0 -r2 -pattern_type glob -i <glob here> /out/foo.webm
[00:20:04 CET] <sleet> im looking at several minutes to encode each webm
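sleet's question never gets answered in the log. The slowness is most likely libvpx's default "good" deadline at cpu-used 0, which is very slow on small boxes. A hedged sketch of a faster encode (the glob path is a placeholder; `-deadline realtime` with a higher `-cpu-used` trades some quality for a large speed gain):

```shell
# Faster webm time-lapse encode: realtime deadline + higher cpu-used
# is much quicker than libvpx's default settings.
ffmpeg -v 0 -framerate 2 -pattern_type glob -i '/path/to/cam/*.jpg' \
    -c:v libvpx -deadline realtime -cpu-used 5 \
    /out/foo.webm
```

For 150 tiny JPEGs at 2 fps the quality loss should be negligible relative to the source images.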
[00:29:48 CET] <oink> Hi all! When installing 'ffmpeg' have got a msg: "ERROR: gnutls not found using pkg-config". But I have latest gnutls. What's wrong?
[00:38:21 CET] <DHE> oink: you need gnutls-devel (or whatever your distro calls it)
[00:44:14 CET] <oink> DHE: thnx
[00:45:50 CET] <oink> DHE: have slackware, but there is no 'gnutls-devel' in it
[00:59:41 CET] <DHE> oink: can't comment on slackware. I was thinking more like fedora and debian
[01:30:30 CET] <rjp421> after reading the above conversation, i have come up with: ffmpeg -loglevel info -f video4linux2 -s 320x240 -r 15 -fflags nobuffer -re -i /dev/video0 -vf "fps=24,scale=320:240,format=yuv420p" -c:v libx264 -profile:v baseline -level 13 -r 24 -g 48 -tune:v zerolatency -preset ultrafast -x264opts keyint=48 -an -sn -timecode `date '+%H:%M:%S.00'` -f flv 'rtmp://192.168.10.104/testplaylist/pi2cam0 flashver=FMLE/3.0\20(compatible;\20FMSc/1.0) live=1'
[01:33:43 CET] <rjp421> which of the resizing and setting fps can i get rid of, for best performance?
[01:35:11 CET] <rjp421> i have 5 usb webcams plus the onboard pi-cam in the pi2
[01:35:28 CET] <rjp421> *5 total, including the onboard
[01:35:56 CET] <rjp421> i need to keep the input bandwidth as low as possible
[01:38:12 CET] <furq> get rid of -r 24 for starters
[01:38:59 CET] <furq> there's not really much point setting -profile and -level
[01:39:16 CET] <furq> i assume you're aiming for the lowest possible amount of latency
[01:39:36 CET] <furq> zerolatency and preset ultrafast will increase the required bandwidth quite a lot
[01:41:55 CET] <DHE> zerolatency is meant to have output frames go out as quickly as possible when they hit the encoder. it also badly affects image quality and bitrate consistency. you didn't specify a bitrate or quality target so I think 1Mbit is the default
[01:42:21 CET] <furq> it defaults to crf 23
[01:42:37 CET] <DHE> hmm.. must have changed or I'm thinking of a different codec
[01:42:43 CET] <furq> but yeah don't use zerolatency unless you need <50ms or so of latency
[01:42:52 CET] <rjp421> furq, ty.. usb bandwidth or stream ?
[01:43:00 CET] <DHE> stream
[01:43:10 CET] <DHE> zerolatency is more for videoconferencing software or the like
[01:43:24 CET] <furq> that's probably the worst cargo-cult option for streaming
[01:43:36 CET] <furq> i see that all the time when people are streaming to twitch or some other service that introduces 5+ seconds of latency
[01:43:55 CET] <furq> so the 500ms or whatever you gain from enabling it is not worth how badly it fucks the image quality
[01:44:10 CET] <furq> probably way less than 500ms with -preset ultrafast
[01:44:47 CET] <furq> but yeah if you can spare the cpu time and you're not aiming for <1s latency you should try to at least use -preset faster
[01:48:14 CET] <rjp421> i havent implemented it yet but i wanted to stream from ffmpeg to an old chat i built and just recently resurrected http://imgur.com/a/MQbxN using adobe media server
[01:49:14 CET] <furq> also get rid of -re
[01:50:43 CET] <furq> and -vf fps=24
[01:51:27 CET] <furq> -vf fps and -r are the same and they'll just duplicate frames
[01:51:34 CET] <furq> which is just a waste of bandwidth if you don't specifically need 24fps
[01:52:15 CET] <rjp421> ty, i was wondering about that
[01:53:35 CET] <rjp421> furq, is it better to set the new fps in one place or the other? i need to bring the low cam fps to 24 for h264 then eventually hls
[01:54:01 CET] <furq> i don't see why you'd need 24fps for this
[01:54:09 CET] <rjp421> hls is done by the media server, and likes 24fps with gop of 48
[01:54:11 CET] <furq> but they do the same thing so pick one or the other
[01:54:22 CET] <furq> also you should probably be using nginx-rtmp
[01:54:52 CET] <rjp421> https://media.whohacked.me
[01:55:56 CET] <rjp421> im definitely overblowing it, but it works for now
[02:01:31 CET] <rjp421> does this look right? should i drop x264opts or -g? -f video4linux2 -fflags nobuffer -i /dev/video1 -vf "scale=320:240,format=yuv420p" -c:v libx264 -profile:v baseline -level 13 -r 24 -g 48 -preset ultrafast -x264opts keyint=48
[02:01:59 CET] <rjp421> i kept -r 24, and took it out of -vf
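Pulling furq's suggestions together, a trimmed version of the capture command might look like the sketch below (untested on the pi; the device node and RTMP URL are taken from the original command). `-g 48` already implies `keyint=48`, so the separate `-x264opts` is redundant, and `-profile`/`-level`/`-re`/zerolatency are dropped per the advice above:

```shell
# Consolidated sketch: single -r 24 (no duplicate fps filter),
# no -re, no zerolatency, preset faster if the CPU allows it.
ffmpeg -f video4linux2 -i /dev/video0 \
    -vf "scale=320:240,format=yuv420p" \
    -c:v libx264 -preset faster -r 24 -g 48 -an \
    -f flv 'rtmp://192.168.10.104/testplaylist/pi2cam0 live=1'
```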
[02:03:21 CET] <rjp421> the input to video1 is 512x244 at some unknown fps.. i have to LD_PRELOAD v4l1compat.so to even get ffmpeg to stream it (so far)
[02:03:48 CET] <rjp421> an old 3com homeconnect usb webcam
[02:12:57 CET] <rjp421> which, using with -fflags nobuffer causes a kernel panic...
[02:13:17 CET] <sleet> woo!
[02:13:50 CET] <sleet> are you running as root?
[02:14:21 CET] <rjp421> sleet, are you harvesting traffic cams or do you work for odot? dont have to say, jw :)
[02:14:29 CET] <rjp421> sleet, yes, running as root
[02:14:56 CET] <sleet> im harvesting cams because tripcheck.com and/or ODOT doesnt seem to offer historical data/video
[02:15:21 CET] <sleet> we're having a snow week here, town's shut down due to unseasonable cold ice/snow
[02:16:05 CET] <rjp421> ah, cool :D
[02:16:24 CET] <sleet> looks like odot only offers cams via tripcheck, and they update every 5 minutes
[02:16:57 CET] <sleet> i have a cron to borrow them from the site then assemble the video
[02:17:09 CET] <sleet> im just disappointed with how long it takes to encode to webm
[02:29:47 CET] <dave0x6d> How can I see which codecs have hardware acceleration support in ffmpeg on my system?
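dave0x6d's question also goes unanswered in the log. For reference, ffmpeg can list the hardware acceleration methods compiled into the build, and hardware-backed codecs usually carry the API name in the codec name:

```shell
# List hwaccel methods available in this ffmpeg build
ffmpeg -hide_banner -hwaccels

# Hardware encoders/decoders are named after the API they use
# (vaapi, nvenc, qsv, omx, ...), so grep the encoder list:
ffmpeg -hide_banner -encoders | grep -Ei 'vaapi|nvenc|qsv|omx'
```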
[03:40:55 CET] <athan> Hi everyone. Can I use `ffmpeg` to stream video footage, such that if my machine dies, the file is not considered "corrupt"?
[03:41:39 CET] <athan> or does the process need to exit cleanly for the file to be saved?
[03:42:19 CET] <DHE> use a file format that tolerates cutoffs. mpegts is well supported and meets that criteria, but lacks a keyframe index which makes seeking error-prone.
[03:42:43 CET] <DHE> seeking will be imprecise and may land on a non-keyframe resulting in glitched video for a moment, but otherwise works
[03:42:46 CET] <athan> That's fine with me :) thank you dearly DHE
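DHE's suggestion as a command sketch. The input URL is a placeholder; the point is the mpegts muxer, which has no trailer or global index to finalize, so everything written before a crash stays decodable:

```shell
# Record to MPEG-TS: an unclean exit (crash, power loss) still
# leaves a playable file up to the cutoff point.
ffmpeg -i rtsp://camera.example/stream -c copy -f mpegts recording.ts
```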
[03:59:51 CET] <sleet> so anybody got an ideas on how to speed up my webm encoding?
[04:02:07 CET] <BotoX> i had to https://i.imgur.com/Fjwswbz.jpg
[04:04:39 CET] <athan> what program would you use to actually convert and encode DHE? avconv?
[04:17:17 CET] <athan> Do you guys take bitcoin donations? I don't see it on the page :(
[04:18:42 CET] <rjp421> is there a preferred or "best" option for aac en/decoding? to use in ./configure
[04:23:05 CET] <rjp421> libfaac libaacplus libvo-aacenc libfaac libfdk-aac
[04:37:21 CET] <rjp421> libfdk-aac is the only one that does both it seems, but if i choose another for encoding, how do i specify it in the ffmpeg cmd? ive been using '-c:a aac -strict -2 -ar 44100 -ac 1 -ab 48k -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0"'
[04:42:26 CET] <DHE> fdkaac is probably best
[04:42:33 CET] <DHE> but it's not free
[04:43:17 CET] <rjp421> as in it costs money? or not oss?
[04:44:08 CET] <DHE> open source, but license incompatible with GPL and stuff
[04:46:25 CET] <rjp421> DHE, ah ty. can/should i still use ' --enable-nonfree --enable-gpl --enable-version3' with --enable-libfdk-aac?
[04:55:40 CET] <DHE> if it works, sure. the resulting program just becomes undistributable
[05:05:26 CET] <rjp421> cool ty
[05:17:32 CET] <furq> rjp421: fwiw aac (builtin) is second best and the others are all pretty much trash
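To answer rjp421's earlier question about selecting the encoder on the command line: the encoder is picked by name with `-c:a`. A sketch with hypothetical in/out filenames (with a build configured `--enable-libfdk-aac` the fdk encoder appears as `libfdk_aac`; the native `aac` encoder no longer needs `-strict -2` in current builds):

```shell
# fdk-aac (best quality per the discussion above)
ffmpeg -i in.wav -c:a libfdk_aac -b:a 48k -ar 44100 -ac 1 out.m4a

# built-in encoder (second best per furq)
ffmpeg -i in.wav -c:a aac -b:a 48k out.m4a
```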
[05:20:27 CET] <rjp421> http://www.tmplab.org/wiki/index.php/Streaming_Video_With_RaspberryPi#FFMPEG_compilation
[05:20:34 CET] <rjp421> furq, ty
[05:23:35 CET] <rjp421> https://www.patreon.com/posts/creating-virtual-7108778 says to use x11grab which i see is legacy and disabled by default, and xcb is used? does that change the cmd line to capture?
[05:24:14 CET] <rjp421> they show: ffmpeg -f x11grab -r 15 -s 1920x1080 -i :0.0+0,0 -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f v4l2 /dev/video1
[05:24:21 CET] <furq> http://ffmpeg.org/ffmpeg-devices.html#x11grab
[05:24:31 CET] <rjp421> ty, looking
[05:27:00 CET] <rjp421> furq, there it says it will be detected, while ./configure --help (with git version) just says "--enable-x11grab" with a default of [no], not autodetect like xcb
[05:27:35 CET] <rjp421> --enable-x11grab enable X11 grabbing (legacy) [no]
[05:27:50 CET] <rjp421> --enable-libxcb enable X11 grabbing using XCB [autodetect]
[05:29:06 CET] <rjp421> that howto shows the old avconv syntax with vcodec, so im not sure to trust it with the latest build
[05:30:37 CET] <debianuser> rjp421: I think "-f x11grab" _command_ uses "xcb" backend if available, but can use "x11" backend if enabled manually, i.e. there's no separate "xcbgrab" command - it's a single "X11 grabbing" command that works either "using XCB" or "using XGetImage" (or XShmGetImage).
[05:31:15 CET] <rjp421> debianuser, awesome, ty! ill leave it alone then :)
[05:32:33 CET] Action: debianuser doesn't remember what's the difference between two backends, I think X[Shm]GetImage uses more CPU on xorg side and less CPU on ffmpeg side, while xcb backend needs more CPU on ffmpeg side, but less CPU for xorg, so it's up to you which one suits you better
[05:33:13 CET] <debianuser> (I think x11 backend gave me higher fps than xcb grab, but I'm not sure, haven't tested that for a while)
[05:34:14 CET] <furq> yeah they're just two backends for the same input device
[05:34:28 CET] <furq> xlib is legacy so i assume it performs worse
[05:38:03 CET] <rjp421> im booting the pi to console atm, but plan to run realvnc in virtual-service mode.. hoping ill be able to capture it
[05:40:51 CET] <rjp421> ill be scaling it to unreadable levels, more of a live thumbnail
[05:45:40 CET] <debianuser> rjp421: if you mainly need it for remote access and not for capturing - consider using something like NX, Xpra or winswitch.org :)
[06:19:01 CET] <rjp421> build failed on libvpx? http://pastebin.com/raw/PSsD1Spt with "./configure --enable-nonfree --enable-gpl --enable-version3 --disable-ffplay --enable-fontconfig --enable-gray --enable-libass --enable-libfreetype --enable-libfribidi --enable-gnutls --enable-libv4l2 --enable-mmal --enable-vfp --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-librtmp --enable-libspeex --enable-libsoxr --enable-libmp3lame --enable-libfdk-aac --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvo-aacenc --enable-libvo-amrwbenc"
[06:23:20 CET] <furq> if this is running on an rpi then pretty much all of those libs are a waste of time
[06:26:05 CET] <furq> you're certainly not going to be using x264, x265 or libvpx, vo-aacenc is worse than fdk, librtmp doesn't do anything that ffmpeg's builtin rtmp support doesn't handle afaik
[06:26:20 CET] <furq> fontconfig/freetype/fribidi/libass are only useful for subtitles
[06:26:30 CET] <furq> and you probably want --enable-omx-rpi to use the rpi's builtin h264 encoder
[06:27:02 CET] <rjp421> ok, ty
[06:28:24 CET] <rjp421> do i want enable-vfp for the pi? isnt that the floating point acceleration
[06:28:40 CET] <furq> it'll be enabled by default
[06:28:42 CET] <furq> same with neon
[06:28:47 CET] <rjp421> pi2/3, not 1/0
[06:28:51 CET] <rjp421> ok ty
[06:29:04 CET] <furq> the pi 0/1 have an fpu fwiw
[06:29:23 CET] <furq> the armhf/armel thing is just because debian compile their armhf packages for armv7
[06:30:08 CET] <furq> but yeah optimisations are all detected at runtime, you can only disable them at configure time
[06:32:30 CET] <rjp421> Unknown option "--enable-omx-rpi".
[06:32:50 CET] <furq> https://github.com/FFmpeg/FFmpeg/blob/master/configure#L308
[06:32:56 CET] <furq> it's been there for a while
[06:33:05 CET] <rjp421> i dont see anything in configure --help with omx either
[06:34:04 CET] <furq> what version
[06:34:13 CET] <rjp421> im using the git clone
[06:34:16 CET] <furq> weird
[06:34:22 CET] <furq> i built with that like five days ago
[06:34:24 CET] <rjp421> from an hr or so ago
[06:35:25 CET] <rjp421> hmm, its an old clone. ill rm it and get it again
[06:36:26 CET] <rjp421> i did 'git fetch && git remote prune origin' to update, not sure thats correct
[06:37:43 CET] <furq> git pull should do it
[06:38:27 CET] <furq> i normally just grab the nightly tarball or clone with --depth=1
[06:38:30 CET] <furq> it takes forever otherwise
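The git workflow furq describes, spelled out (the URL is ffmpeg's official git mirror):

```shell
# Shallow clone: full history isn't needed just to build ffmpeg,
# and --depth=1 avoids the very slow full-history download.
git clone --depth=1 https://git.ffmpeg.org/ffmpeg.git
cd ffmpeg

# Updating an existing clone later:
git pull
```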
[06:53:43 CET] <rjp421> furq, ty again, it worked after cloning again
[07:47:41 CET] <JeffATL> no reason why two instances of ffmpeg can't be using the same file as input at once, right?
[09:12:06 CET] <ZexaronS> Any word on whether TS concating works now ? ... or if I remember correctly
[09:23:52 CET] <markvandenborre> ZexaronS: works for me, but it is a bit tricky to get right...
[09:26:48 CET] <ZexaronS> well I must have always had the syntax wrong
[09:26:58 CET] <ZexaronS> I'm not sure now if it's MP4 or TS ... forgot a bit
[09:28:16 CET] <furq> ts concat should always have worked
[09:28:19 CET] <furq> mp4 is trickier
[09:28:57 CET] <kerio> mp4 concat shouldn't... work
[09:28:58 CET] <kerio> should it
[09:29:07 CET] <furq> it should work with the demuxer
[09:29:09 CET] <furq> it doesn't always though
[09:29:59 CET] <kerio> ok but
[09:30:07 CET] <kerio> two concatenated mp4s are not a valid mp4
[09:30:08 CET] <kerio> are they
[09:30:32 CET] <kerio> whereas mpegts is designed to be concatenated
[09:33:22 CET] <furq> no and yes
[09:34:19 CET] <furq> you can't concat mp4 (or most other formats) with the protocol, that's what the demuxer is for
[09:34:25 CET] <furq> the demuxer has always been a bit flaky though
[09:34:33 CET] <kerio> can ffmpeg output fragmented isobmff?
[09:34:38 CET] <kerio> and can it do it "live"?
[09:34:41 CET] <ZexaronS> what does it mean "with demuxer"
[09:34:43 CET] <furq> yes and probably
[09:34:57 CET] <furq> ZexaronS: https://trac.ffmpeg.org/wiki/Concatenate#demuxer
[09:35:17 CET] <furq> the protocol is pretty much the same as doing cat 1.ts 2.ts > 3.ts
[09:35:25 CET] <furq> which is possible with ts but not with most other formats
[09:35:51 CET] <furq> the demuxer demuxes the streams and concats those
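The two lossless concat routes being discussed, as command sketches (filenames are placeholders; the list-file syntax is the concat *demuxer*, the `concat:` URL is the *protocol* that only works for formats like mpegts that tolerate raw byte-level concatenation):

```shell
# Demuxer: list.txt contains one "file" directive per line, e.g.
#   file 'part1.ts'
#   file 'part2.ts'
ffmpeg -f concat -safe 0 -i list.txt -c copy joined.ts

# Protocol: equivalent to cat part1.ts part2.ts > joined.ts,
# plus a remux pass
ffmpeg -i 'concat:part1.ts|part2.ts' -c copy joined.ts
```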
[09:35:53 CET] <ZexaronS> I have heard about this and used it and It worked but it's been like 6 months the last time i touched transcoding
[09:36:09 CET] <furq> i wouldn't rely on anything but the filter, which obviously requires transcoding
[09:36:17 CET] <ZexaronS> I remember there's like 3 ways to concat, one of them via a list file
[09:36:21 CET] <furq> yeah that's the demuxer
[09:36:39 CET] <furq> if you can't transcode then mpegts fragments are your best bet afaik
[09:37:13 CET] <furq> but i see a lot of people in here doing that who have issues with concatenating aac audio
[09:37:26 CET] <ZexaronS> well, if I have to transcode then there is no point using concat in ffmpeg, then I would just load it up into Vegas or Premiere , the point of this is to preserve quality, otherwise it's useless
[09:39:03 CET] <ZexaronS> I have TS AVC files with MPEG 2 audio
[09:39:25 CET] <furq> well yeah the protocol and the demuxer should both work
[09:39:30 CET] <furq> the key word there is "should"
[09:39:34 CET] <ZexaronS> I finished my old dvd collection restoration so most of the stuff going forward will be in this
[09:39:48 CET] <furq> give them both a try and hope for the best i guess
[09:39:56 CET] <furq> look out for audio pops at join points
[09:40:29 CET] <ZexaronS> What is special about the TS concatenation bit? will most PC programs seamlessly support it as 1 file? this is not meant to be used on media players
[09:40:42 CET] <ZexaronS> or can i see this concatenation indicator in mediainfo?
[09:41:27 CET] <ZexaronS> if the pop is not loud it shouldn't be an issue
[09:41:49 CET] <ZexaronS> since this type of editing will be at scenes, it won't be in the middle of a scene
[09:41:59 CET] <ZexaronS> i mean at milestones
[09:43:09 CET] <furq> mpegts is a streaming format so it doesn't have global headers
[09:43:32 CET] <furq> it has per-packet headers, so you should in theory just be able to concat fragments together with anything
[09:43:45 CET] <dongs> < furq> the protocol is pretty much the same as doing cat 1.ts 2.ts > 3.ts
[09:43:59 CET] <dongs> sure you can cat 2 ts together but unless they're same codec/etc i wouldnt expect much interesting to happen
[09:44:04 CET] <furq> well duh
[09:45:24 CET] <ZexaronS> furq, can I detect the separation, so I don't trim the parts in the middle of a chunk and rather at the end of it or the beginning of the next
[09:45:54 CET] <ZexaronS> that's why that common video glitch happens when it's all blocky for a few moments?
[09:46:19 CET] <furq> that shouldn't matter
[09:46:33 CET] <furq> you can only cut at keyframes if you're not reencoding, and if you are reencoding then it makes no difference
[09:47:04 CET] <furq> i don't think there's any way to tell where the joins are in a concatenated file
[09:48:30 CET] <ZexaronS> oh ... wait, so it automatically cuts at nearest keyframe ignoring the precise -ss -to commands ? that's a bit of a big deal
[09:48:43 CET] <furq> yes
[09:48:57 CET] <ZexaronS> not a problem, but if I don't know about it it is
[09:49:05 CET] <furq> well you'd get broken fragments otherwise
[09:49:23 CET] <bencoh> 10:47 < furq> i don't think there's any way to tell where the joins are in a concatenated file
[09:49:42 CET] <bencoh> if you're talking about mpeg-ts and no-reencoding, then you can probably tell
[09:49:56 CET] <ZexaronS> yes, I rather have it a bit inaccurate but clean and not broken, but jeez how I didn't knew about this before
[09:49:57 CET] <bencoh> by checking vbv status / non-compliance
[09:50:14 CET] <ZexaronS> with ffmpeg or probe?
[09:50:24 CET] <bencoh> (I mean, assuming everything else was done properly)
[09:50:35 CET] <furq> is there a way to do it improperly with mpegts
[09:50:55 CET] <ZexaronS> i'd like to know about this, i would then stick with ts until the whole thing is finished, maybe recode it to MP4 once it's all done, if necessary but I don't see why right now
[09:51:04 CET] <bencoh> improper mpeg-ts remuxing? ffmpeg mpegts mux is already broken ;P
[09:51:23 CET] <bencoh> and mostly non-compliant
[09:51:27 CET] <furq> fun
[09:51:42 CET] <bencoh> it "works", but still
[09:52:22 CET] <furq> it's been a long time since i muxed anything but mkv or nut
[09:52:47 CET] <bencoh> mkv is usually a safer choice when you have full controle over player, yeah
[09:53:21 CET] <bencoh> and a pretty universal/agnostic container, too :)
[09:55:20 CET] <kerio> bencoh: nut ftw
[09:59:43 CET] <ZexaronS> bencoh, so now you tell me it's non-compliant ?
[09:59:56 CET] <ZexaronS> isn't this a bit of a too standard thing to be left like that
[10:00:35 CET] <ZexaronS> what about trimming and concating the freaking raw AVC ?
[10:00:45 CET] <ZexaronS> why do I have to deal with containers anyway
[10:01:02 CET] <furq> that's what the demuxer is for
[10:01:55 CET] <bencoh> demuxer does the job already
[10:02:35 CET] <bencoh> but no matter what, if we're talking about a supposedly mpeg tstd compliant stream, you will most probably break compliance by splitting/concating it
[10:02:46 CET] <bencoh> even with a compliant mux
[10:02:48 CET] <ZexaronS> it sounds like a special thing but it's implemented as part of the mylist thing, who would have ever figured that one out. so I simply pick a raw video format without a container ?
[10:03:07 CET] <bencoh> you probably don't care about tstd though
[10:03:38 CET] <ZexaronS> cause i'll have like 5 parts which I have to then concat, only then I need a container, and only then I can transcode from 1080i to 720p
[10:03:42 CET] <bencoh> (since ffmpeg itself already doesn't care much about tstd)
[10:03:58 CET] <furq> if you're transcoding at the end then you might as well just use the concat filter
[10:03:59 CET] <ZexaronS> it's a DVB file
[10:04:17 CET] <furq> that's the least flaky of all the ffmpeg concat methods in my experience
[10:04:34 CET] <furq> you'll need a newish ffmpeg though (3.0+ iirc)
[10:04:45 CET] <ZexaronS> the point is, I don't want to waste time recoding what I will cut later out
[10:05:03 CET] <ZexaronS> I have a smaller file and only recode that
[10:05:25 CET] <furq> what are you using to cut parts out
[10:05:27 CET] <ZexaronS> rather have*
[10:05:39 CET] <ZexaronS> well the ffmpeg ss and to right ?
[10:05:53 CET] <ZexaronS> vcodec and acodec copy
[10:05:58 CET] <ZexaronS> I don't know any other method
[10:06:16 CET] <furq> you can cut with filters
[10:06:26 CET] <furq> if you know the frame ranges you want then you could do the whole lot in one command
[10:06:35 CET] <furq> -vf trim, select etc
[10:08:16 CET] <furq> or you could just cut the parts you want from the fragments and then concat those with the filter
[10:08:37 CET] <ZexaronS> ah so I just specify split points, and it automatically outputs the segments before and after each split point ?
[10:09:05 CET] <ZexaronS> Well that's exactly what I was trying to do
[10:09:15 CET] <furq> oh nvm you have one file that you want to cut parts from
[10:09:28 CET] <furq> then yeah you can do all that with -vf select
[10:10:25 CET] <ZexaronS> so you're saying I don't need the demuxer now ... I have like 20 files, yes there can be similar cutpoints in them, but i don't think they're close enough to do them all in one go
[10:11:29 CET] <furq> if you already have multiple files then cut the parts you want with -ss and -to and then join them with the concat filter when encoding
[10:11:46 CET] <ZexaronS> or do you mean that the command cuts out the unneeded stuff and concats the rest automatically, without having to run another concat command separately ?
[10:11:58 CET] <furq> i figured you had one file, you were going to cut fragments out and then join them up
[10:12:18 CET] <ZexaronS> no no, these are all separate files, they won't get concated, im only talking about concating the parts of one file
[10:12:44 CET] <furq> so you have one file that you want to cut parts out of, multiple times?
[10:13:06 CET] <furq> use select and setpts then
[10:14:04 CET] <ZexaronS> yes, like 4 to 5 usable parts should come out of each one, around 4 parts will be discarded, so I guess it's around 6-8 parts in total if that matters ...
[10:14:08 CET] <furq> something like -vf "select=between(t\,10\,20)+between(t\,30\,40),setpts=PTS-STARTPTS"
[10:14:30 CET] <furq> will give you 00:10 to 00:20 and 00:30 to 00:40 as one continuous output
[10:14:37 CET] <ZexaronS> there are multiple source files, but this is only a coincidence, let's just say I have only 1 file, since I'll just do one at a time
[10:14:53 CET] <furq> or use between(n, 10, 20) for frames 10 to 20 if you need that level of precision
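furq's filter only handles video; if the file carries audio, a matching aselect/asetpts is needed, and the timestamps have to be regenerated rather than just shifted (after `select` drops frames, the survivors keep their original PTS, so `N/FRAME_RATE/TB` renumbers them contiguously). A sketch with hypothetical filenames:

```shell
# Keep 00:10-00:20 and 00:30-00:40 as one continuous output.
# Requires re-encoding, since frames are being dropped mid-stream.
ffmpeg -i in.ts \
  -vf "select='between(t,10,20)+between(t,30,40)',setpts=N/FRAME_RATE/TB" \
  -af "aselect='between(t,10,20)+between(t,30,40)',asetpts=N/SR/TB" \
  out.mp4
```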
[10:16:31 CET] <ZexaronS> Wow that really should speed things up, i'll try it thanks
[10:59:27 CET] <ZexaronS> well actually it's a bit different, non HD channels are in MPEG2 and AC-3
[10:59:41 CET] <ZexaronS> hopefully it will still work
[18:55:55 CET] <thebombzen> hmm, I'm having an issue with the mjpeg encoder. it seems to be giving an absurdly high bitrate
[18:56:54 CET] <kerio> >mjpeg
[18:57:52 CET] <thebombzen> https://hastebin.com/guheponogi.pas
[18:58:21 CET] <kerio> why pascal
[18:58:26 CET] <thebombzen> autodetection
[18:58:39 CET] <thebombzen> it's just the pastebin of the full output and command
[18:58:48 CET] <kerio> idk if you can set the bitrate with mjpeg like that
[18:59:04 CET] <thebombzen> really? cause mjpeg is just jpeg
[18:59:18 CET] <thebombzen> if you can't, can you set the jpeg quality (0-100)
[18:59:19 CET] <BtbN> mjpeg is literally just a stream of jpeg images. It does not support targeting a bitrate. And it's expected to be quite large.
[18:59:31 CET] <thebombzen> in that case can you set the jpeg quality
[18:59:56 CET] <kerio> try -q:v
[19:00:01 CET] <kerio> or some mjpeg-specific option
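kerio's suggestion spelled out: mjpeg has no rate control, so quality is set per frame with `-q:v` on ffmpeg's 2 (best) to 31 (worst) scale. The device and destination below are placeholders:

```shell
# Lower -q:v = better quality and higher bitrate; raise it to
# shrink the stream at the cost of visible jpeg artifacts.
ffmpeg -f v4l2 -i /dev/video0 -c:v mjpeg -q:v 8 \
    -f mpegts udp://receiver.example:1234
```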
[19:00:17 CET] <thebombzen> the issue is I'm trying to send a low-latency stream video over a very unstable connection
[19:00:31 CET] <thebombzen> we figured mjpeg-in-mpegts was the best option
[19:00:45 CET] <DHE> .. is that even supported?
[19:00:54 CET] <thebombzen> avformat says it is
[19:01:05 CET] <thebombzen> and that's all that matters because we have avformat on both ends
[19:01:16 CET] <kerio> thebombzen: hold on
[19:01:21 CET] <kerio> so you're controlling both ends
[19:01:33 CET] <kerio> and you decided to not go for h264
[19:01:33 CET] <thebombzen> yea
[19:01:38 CET] <kerio> y tho
[19:01:47 CET] <thebombzen> yea we wanted it to have very low latency
[19:01:50 CET] <thebombzen> which means intra-only
[19:02:00 CET] <thebombzen> we also wanted it to be able to randomly pick up after a loss
[19:02:11 CET] <thebombzen> afaik h264 doesn't do that
[19:02:16 CET] <thebombzen> and if it does I don't know the options to get it to do that
[19:02:17 CET] <DHE> you can specify -g 1 for that
[19:02:27 CET] <thebombzen> what does -g do?
[19:02:29 CET] <DHE> still, for birate reasons you may want to use something like -g 5
[19:02:40 CET] <DHE> Group Of Pictures setting. A group of pictures always begins with an Intra frame
[19:02:46 CET] <thebombzen> ah
[19:02:55 CET] <thebombzen> but doesn't setting -g 5 increase the latency to five frames?
[19:03:11 CET] <DHE> random start time, yes
[19:03:30 CET] <thebombzen> cause we're going for sub-100 ms and 5 frames out of 30 is a ton of latency
[19:03:31 CET] <DHE> but will likely improve bitrates dramatically
[19:03:46 CET] <thebombzen> well to improve the bitrate we'll just reduce the qp
[19:03:55 CET] <DHE> you mean increase
[19:04:00 CET] <thebombzen> yes
[19:04:05 CET] <DHE> qp=0 is lossless, qp=51 is a blurry smudge
[19:04:12 CET] <thebombzen> yes I misspoke
[19:04:27 CET] <thebombzen> the most important factors are very low latency and resilience to random large chunks of packet loss
[19:04:36 CET] <thebombzen> if that's true, x264 has -tune zerolatency
[19:04:49 CET] <thebombzen> but what about the resilience to large chunks of packet loss
[19:05:55 CET] <JEEB> low gop lengths or periodic intra refresh
[19:07:12 CET] <DHE> if you run the stream over TCP, packet loss should be a non-issue
[19:07:17 CET] <JEEB> very low latency doesn't require all-intra, unless you consider a small delay in being able to show an image part of the delay. which isn't really latency to be honest :P
[19:07:45 CET] <JEEB> because as soon as you get that first image it's at low latency, you're not staying behind
[19:08:11 CET] <DHE> all intra frames is intended if you're, say, sending to a multicast group and expect players to tune in by joining the group...
[19:08:23 CET] <DHE> since you don't know what point they'll join it at
[19:08:41 CET] <JEEB> well even then you usually can take some time for them to "tune in", if it's within a second or so
[19:08:47 CET] <JEEB> as long as they don't get stuck behind too much
[19:09:04 CET] <JEEB> because start delay and actual latency are two different things
[19:09:08 CET] <thebombzen> so the application is we have a camera on a rover
[19:09:12 CET] <DHE> and you need stream metadata as well. for mpegts you need the PAT and PMT spammed as well
[19:09:16 CET] <thebombzen> and we need the latency to be low enough for us to drive the rover
[19:09:20 CET] <thebombzen> ideally sub-100ms
[19:09:26 CET] <JEEB> yes, that's easily accomplished
[19:09:34 CET] <thebombzen> by what settings?
[19:09:36 CET] <JEEB> not sure if with ffmpeg.c but with libx264 it can be
[19:10:02 CET] <JEEB> well, you start off with the zerolatency tuning in libx264 itself
[19:10:09 CET] <thebombzen> well ideally i'd like to accomplish it with ffmpeg.c
[19:10:11 CET] <JEEB> and make sure you minimize the rest of your chain's latency
[19:10:25 CET] <JEEB> ffmpeg.c is a mess, it might be possible but most definitely not designed for it
[19:10:39 CET] <thebombzen> well the webcam we have spits out mjpeg via v4l2
[19:10:59 CET] <thebombzen> so I'm not really sure how else to feed it to x264
[19:11:05 CET] <DHE> and the bitrate of the mjpeg is too high to be useful as-is
[19:11:07 CET] <JEEB> basically if you're using lavf/lavc there's all the buffering :P
[19:11:23 CET] <JEEB> so you have to make sure that is minimized as much as possible
[19:11:30 CET] <JEEB> and at that point the latency of the encoder is the least of your issues
[19:11:34 CET] <JEEB> whatever encoder you pick
[19:11:39 CET] <thebombzen> maybe that's true but if our camera gives us mjpeg via v4l2
[19:11:48 CET] <thebombzen> and I don't want to use lav* then how do I get it to x264
[19:12:10 CET] <JEEB> I don't say don't use lavf/lavc, I just say that you might have fun trying to tune the params with ffmpeg.c
[19:12:30 CET] <JEEB> also you might be able to skip lavf with mjpeg input
[19:12:30 CET] <thebombzen> well what do I need besides -fflags +nobuffer -avioflags +direct
[19:12:47 CET] <JEEB> unfortunately I have no effing idea, I've never optimized lavf/lavc for that scenario
[19:13:03 CET] <JEEB> you'd have to calculate your current latency
[19:13:29 CET] <JEEB> and see how much extra latency is OK to hit your target and then either move on to the encoder optimization, or to optimize it even more on the input side
[19:13:48 CET] <thebombzen> it's pretty bad rn. I'm testing it by using x11grab_xcb to grab the screen and then just piping the output to ffplay. but that might be because x11grab_xcb sucks
[19:14:32 CET] <thebombzen> I won't really be able to tell until I get down to campus today and try with the actual hardware. I figured I'd test the software first but I'm sort of hitting a roadblock
[19:15:47 CET] <kerio> thebombzen: slice-based threading, progressive intra refresh
[19:16:06 CET] <JEEB> yes, that's what tune zerolatency and then periodic intra refresh would do
[19:16:15 CET] <JEEB> which would be my recommendation with libx264 as well
[19:16:17 CET] <kerio> you can have literally one frame in the pipeline at a time
[19:16:39 CET] <JEEB> or even a slice :P
[19:16:54 CET] <JEEB> not... really, though :P
[19:18:31 CET] <thebombzen> kerio isn't that what -tune zerolatency and -g X do, where 0 < X <= 5?
[19:18:37 CET] <JEEB> no
[19:18:38 CET] <kerio> no
[19:18:43 CET] <JEEB> you need to specifically enable periodic intra refresh
[19:18:49 CET] <thebombzen> then how do I do the thing you just said I should do
[19:18:59 CET] <JEEB> -x264-params libx264thing=yes
[19:19:09 CET] <thebombzen> well okay
[19:19:12 CET] <kerio> progressive intra refresh will not have I-frames at all
[19:19:18 CET] <kerio> only P-frames
[19:19:25 CET] <thebombzen> wait what is progressive intra-refresh then
[19:19:31 CET] <kerio> super magic
[19:19:33 CET] <JEEB> you have parts of the image intra
[19:19:35 CET] <thebombzen> how do you not have I-frames. I'm somewhat ignorant here
[19:19:44 CET] <JEEB> and after N frames you get a full image and can decode the rest of the stream
[19:19:52 CET] <DHE> intra refresh spreads the massive bitrate spike of an I frame into 5 (?) P-frames which will refresh the image in 1/5 increments
[19:20:04 CET] <JEEB> ^
[19:20:13 CET] <DHE> so if you receive all 5 of these P frames cleanly, you effectively have a good start point for decoding
[19:20:14 CET] <JEEB> and you don't have to have specific intra only frames for random access
[19:20:34 CET] <thebombzen> so instead of having an I-frame every 5 frames, you have 1/5 of every frame by intra-only blocks?
[19:20:43 CET] <JEEB> simplified, yes
[19:20:46 CET] <thebombzen> that's really clever lol
[19:21:18 CET] <thebombzen> what is the practical effect of that, other than making the bitrate more constant?
[19:21:21 CET] <JEEB> also as I noted before, start delay is different from your stream's latency. you can wait for a second for a full image to be decoded, but after that your latency is still the same as if you could decode the first :P
[19:21:27 CET] <JEEB> thebombzen: no need for actual intra frames
[19:21:37 CET] <DHE> making the bitrate more consistent is exactly its purpose
[19:21:49 CET] <DHE> certain streaming use cases will benefit from it
[19:21:55 CET] <JEEB> any client at any point needs to decode N frames and it's "in" the stream
[19:22:16 CET] <thebombzen> so what is the practical difference between "start delay" and "latency"
[19:22:33 CET] <thebombzen> cause I know that one is how long it takes to start up and one is the response time
[19:22:41 CET] <thebombzen> but like I've never actually seen these numbers be different
[19:22:55 CET] <DHE> start delay is how long you wait before the video starts playing. latency is the delay between pressing a button on your little rover and seeing it move on your video
[19:22:57 CET] <JEEB> thebombzen: start delay is the time it takes from you starting playback to getting an image, and the latency is actual latency between what you're seeing on screen and what is actual
[19:23:13 CET] <thebombzen> I mean I figured that out, but I've never seen those numbers be different
[19:23:22 CET] <JEEB> then you've never optimized your stuff
[19:23:28 CET] <thebombzen> correct
[19:23:33 CET] <thebombzen> I'm trying to figure out how to do that now
[19:23:37 CET] <JEEB> because the first can be 10 seconds or whatever, but still your actual latency isn't 10 seconds
[19:24:04 CET] <thebombzen> but if the start delay is 10 seconds, what happens to the first 10 seconds of video
[19:24:06 CET] <thebombzen> is it lost?
[19:24:11 CET] <DHE> yes
[19:24:13 CET] <kerio> rip
[19:24:16 CET] <thebombzen> okay
[19:24:19 CET] <thebombzen> that's fine I just need to know
[19:24:24 CET] <kerio> start delay is not going to be 10 seconds tho
[19:24:33 CET] <JEEB> yeah, that was an extreme example
[19:24:38 CET] <kerio> well ok actually
[19:24:47 CET] <kerio> if it scans a column of pixels
[19:24:53 CET] <thebombzen> and what is the x264-params for progressive intra refresh?
[19:24:54 CET] <kerio> and it's 1920 wide
[19:25:21 CET] <JEEB> thebombzen: that stuff is just passed on to libx264 internally so it's found in x264's documentation/headers
[19:25:44 CET] <JEEB> or common/common.c
[19:25:49 CET] <thebombzen> well I know that but I was hoping someone here would know off the top of their head so I don't have to go parse through x264's documentation/headers and common/common.c
[19:26:00 CET] <JEEB> OPT("intra-refresh")
[19:26:01 CET] <JEEB> p->b_intra_refresh = atobool(value);
[19:26:07 CET] <JEEB> there you fucking go :P
[19:26:18 CET] <thebombzen> well I'm sorry I don't know C
[19:26:31 CET] <thebombzen> and thank you :)
[19:27:08 CET] <JEEB> anyways, if you don't know C then I'm just going to buzz off because trying to do low latency a) with ffmpeg.c and b) without you being able to optimize your player side because you can't do C
[19:27:14 CET] <JEEB> is gonna be a mess
[19:27:23 CET] <DHE> sadly the best documentation for x264 seems to be x264's --fullhelp
[19:27:39 CET] <thebombzen> I was planning on using mpv --untimed
[19:27:48 CET] <thebombzen> which is the new version of mplayer's -benchmark
[19:28:05 CET] <JEEB> you will have to write code to first of all know your latency
[19:28:05 CET] <thebombzen> although ffplay -probesize 32 has been not bad
[19:28:14 CET] <thebombzen> true
[19:28:21 CET] <thebombzen> our goal tho isn't to time the latency and get data on it
[19:28:23 CET] <JEEB> if you have no hard numbers you don't know which part of your flow is working well enough
[19:28:34 CET] <thebombzen> well you overestimate the size of this project
[19:28:47 CET] <thebombzen> the guy driving the rover is going to be standing there and tell me if the latency is good enough or not lol
[19:29:07 CET] <thebombzen> this is an engineering team at my uni. we have like 10 people.
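The player-side flags mentioned in this thread (mpv --untimed, ffplay -probesize 32) can be combined into a low-latency receiver sketch. The stream URL is an assumption for illustration; only the flags come from the discussion plus ffplay's standard low-delay options:

```shell
# Receive a live stream with minimal buffering on the player side.
# -fflags nobuffer disables input buffering, -flags low_delay asks the
# decoder to emit frames as soon as possible, and a tiny -probesize /
# -analyzeduration shrink ffplay's startup probing delay.
PLAY_CMD="ffplay -fflags nobuffer -flags low_delay \
  -probesize 32 -analyzeduration 0 \
  udp://192.0.2.10:1234"
echo "$PLAY_CMD"
```

As JEEB notes, these shrink start delay; steady-state latency is dominated by the encoder and network path, not the probing flags.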
[19:39:05 CET] <thebombzen> also am I supposed to use -x264-params intra-refresh=yes with -g 5?
[19:39:11 CET] <thebombzen> or does -g 5 override the intra refresh?
[19:43:37 CET] <DHE> it would make intra-refresh run every 5 frames...
[19:50:42 CET] <kerio> does intrarefresh even have a setting like that
[19:51:19 CET] <DHE> it's just a boolean. so "-x264-params intra-refresh" should be sufficient
[19:51:53 CET] <kerio> i thought intra-refresh was just its own thing
[19:51:58 CET] <kerio> and how much time it took depended on other factors
[19:52:23 CET] <DHE> there's a difference between how many frames intra-refresh requires and how often it runs
[19:56:51 CET] <thebombzen> well if that's true
[19:57:03 CET] <thebombzen> how should I tell it to require 5 frames but be "always on"
[19:57:16 CET] <thebombzen> or does it not really matter how many frames it requires
[19:58:53 CET] <kerio> DHE: doesn't it have to always run tho
[19:58:57 CET] <kerio> for the framerate to be consistent
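Putting the thread together, a minimal sender sketch: intra-refresh enabled via -x264-params (as JEEB's OPT("intra-refresh") snippet shows, it is a boolean), with -g setting the length of the refresh cycle. The device path, UDP target, and exact preset values are assumptions, not from the log:

```shell
# Low-latency H.264 over MPEG-TS using periodic intra refresh instead of
# IDR keyframes. With intra-refresh=1 there are no I-frames; -g 5 means
# the intra-coded column sweeps the whole picture every 5 frames, so a
# joining decoder needs roughly one cycle before it has a full image.
ENCODE_CMD="ffmpeg -f v4l2 -i /dev/video0 \
  -c:v libx264 -preset ultrafast -tune zerolatency \
  -x264-params intra-refresh=1 -g 5 \
  -an -f mpegts udp://192.0.2.10:1234"
echo "$ENCODE_CMD"
```

This answers the -g question above: -g does not override intra refresh; it controls how many frames one refresh pass spans, which trades joining delay against per-frame bitrate spikes.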
[20:05:51 CET] <DHE> so I ran into a possible bug. -vf yadif when using a non-interlaced video is affecting the video's timestamps
[20:07:19 CET] <thebombzen> it should just pass it through right? that sounds like a bug
[21:45:05 CET] <thebombzen_> hmm so I got to campus and now I'm getting a strange video4linux2 error
[21:45:06 CET] <thebombzen_> https://hastebin.com/iwekoqisuy.pas
[21:45:29 CET] <thebombzen_> [video4linux2,v4l2 @ 0x55715b140ea0] Dequeued v4l2 buffer contains 1702875 bytes, but 8294400 were expected. Flags: 0x00012001.
[21:46:47 CET] <thebombzen_> so I did a bit of research and apparently this bug existed in the past. a trac ticket #4030 was opened but it was closed 20 months ago
[21:46:56 CET] <thebombzen_> open -> closed, resolution -> fixed
[21:47:08 CET] <thebombzen_> but I'm still getting the error. Is this a regression or am I doing something wrong?
[21:47:24 CET] <thebombzen_> see: https://trac.ffmpeg.org/ticket/4030
[22:43:34 CET] <rjp421> i get that error with some webcams, i have to LD_PRELOAD v4l1compat.so
[22:45:25 CET] <rjp421> which i also need for v4l2-ctl to list and set working pixelformat etc
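rjp421's workaround can be sketched as follows. The shim path varies by distro (this one is an assumption); the idea is that libv4l's v4l1compat.so translates legacy V4L1 ioctls that some webcam drivers still need:

```shell
# Preload the v4l1 compatibility shim before running ffmpeg or v4l2-ctl;
# some webcams dequeue short buffers without it.
V4L_SHIM=/usr/lib/libv4l/v4l1compat.so   # assumed path; check your distro
CAPTURE_CMD="LD_PRELOAD=$V4L_SHIM ffmpeg -f v4l2 -i /dev/video0 -t 5 -f null -"
echo "$CAPTURE_CMD"
```

Running v4l2-ctl --list-formats under the same LD_PRELOAD is a quick way to verify which pixel formats the shim exposes before re-trying the capture.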
[00:00:00 CET] --- Mon Jan 9 2017