[Ffmpeg-devel-irc] ffmpeg.log.20190319

burek burek021 at gmail.com
Wed Mar 20 03:05:02 EET 2019


[00:13:24 CET] <bindi> I have two videos i'd like to combine side-by-side and I'm trying to avoid using a "movie maker", but can I alternate between the audio? as in, play audio from video1 0-5secs, then video2 5-10 secs, and so on
[01:49:33 CET] <friki> bindi: https://stackoverflow.com/questions/35349935/ffmpeg-crop-with-side-by-side-merge like that?
[01:50:33 CET] <bindi> I managed the side by side part with google but the audio alternating I could not find. But combined audio worked just fine for this use case
[01:51:10 CET] <friki> like the diagonal example
[01:51:33 CET] <friki> probably you can find a function based on time that fits
[02:00:44 CET] <friki> bindi: ffmpeg -f lavfi -i color=color=red -t 30 red.mp4 && ffmpeg -f lavfi -i color=color=blue -t 30 blue.mp4 && ffmpeg -i red.mp4 -i blue.mp4 -filter_complex '[1:v][0:v]blend=all_expr=if(gt(T\,9)\,B\,A)' fcb.mp4
[02:01:52 CET] <friki> [...] \,A\,B)' fcb.mp4 # probably better :-)
[02:10:49 CET] <friki> bindi: I think now here is the complete solution, switching video every 10 seconds: ffmpeg -f lavfi -i color=color=red -t 60 red.mp4 && ffmpeg -f lavfi -i color=color=blue -t 60 blue.mp4 && ffmpeg -i red.mp4 -i blue.mp4 -filter_complex "[1:v][0:v]blend=all_expr=if(gt(mod(T\,20) \,10)\,A\,B)" fcb.mp4
[02:16:17 CET] <bindi> friki: works, but the audio isn't alternated with it :P
[02:16:26 CET] <bindi> and my original question was to have them side-by-side and alternate the audio only
[02:16:45 CET] <bindi> however I was satisfied with the one I managed to create which is just side by side and combined audio :D
[02:16:53 CET] <bindi> or rather, I am :P
[02:53:54 CET] <friki> oh, sorry. a similar effect with audio should be possible with the audio pan filter
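[ed. note: a minimal sketch of the side-by-side + alternating-audio idea, using volume expressions and amix rather than pan; the file names and the 10-second period are assumptions, and both inputs must share a height for hstack:]
    ffmpeg -i left.mp4 -i right.mp4 -filter_complex \
      "[0:v][1:v]hstack[v];[0:a]volume='if(lt(mod(t,20),10),1,0)':eval=frame[a0];[1:a]volume='if(gte(mod(t,20),10),1,0)':eval=frame[a1];[a0][a1]amix=inputs=2[a]" \
      -map "[v]" -map "[a]" out.mp4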
[02:58:45 CET] <xxxmaker> anybody here who is video/camera/photo   expert ?
[03:05:35 CET] <TheSashm_> is there anyone that can properly explain qmin and qmax when dealing with mpeg2??
[03:05:38 CET] <TheSashm_> so confused...
[03:30:55 CET] <dongs> xxxmaker: i thought we already determined this yesterday. the one with shit colors, same camera as the other one?
[03:31:18 CET] <xxxmaker> it should be, but i wasn't at the scene
[03:31:46 CET] <dongs> really low saturation, maybe someone didn't set it up properly for low-light shooting
[03:32:43 CET] <xxxmaker> dongs join #photogeeks
[03:33:08 CET] <dongs> nah i got shit to do man, the point here is you arent gonna be able to fix that video, so you might as well just reshoot
[03:33:13 CET] <dongs> you can't fix that shit in post
[04:48:02 CET] <xxxmaker> what can i do to make video encoding faster with ffmpeg?
[04:51:21 CET] <dongs> use nvenc
[04:51:57 CET] <xxxmaker> how do i do that
[04:52:15 CET] <xxxmaker> is that totally separate program?
[04:52:33 CET] <dongs> no, it's the nvenc encode pipeline in ffmpeg
[04:52:53 CET] <dongs> i.e. -c:v h264_nvenc -preset slow -profile:v high -level 4.1 -b:v 9M
[04:53:26 CET] <xxxmaker> okay
[04:53:32 CET] <dongs> on a consumer nvidia card you can encode 2 streams at ~120fps each, on quadro you can do like 4 streams+ at 60fps
[04:53:36 CET] <xxxmaker> what is the downside/negative  of using nvenc
[04:53:40 CET] <dongs> none
[04:54:04 CET] <dongs> it uses the hardware h264/h265 encoder that's on every ~recent nvidia card
[04:54:23 CET] <dongs> basically same thing thats in phones/etc that does video encoding/decoding by hardware instead of CPU
[04:54:25 CET] <xxxmaker> so if i use  nvenc,  does it use any CPU?
[04:54:36 CET] <dongs> nope, barely any CPU
[04:54:40 CET] <xxxmaker> i see
[04:54:57 CET] <dongs> just whatever is needed to copy to/from the GPU and any overhead of parsing input + applying filters (if any)
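[ed. note: a sketch of a fully GPU-side pipeline that avoids most of that copying, assuming an ffmpeg build with nvdec/cuda support and hypothetical file names:]
    # decode on the GPU, keep frames in GPU memory, encode with nvenc
    ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
      -c:v h264_nvenc -preset slow -b:v 9M output.mp4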
[04:55:14 CET] <kepstin> well, the downside is that you can attain better compression ratios for a given quality if you do cpu encoding - the hardware encoder isn't as good as a cpu encoder
[04:55:17 CET] <dongs> the above nvenc line for me when I'm encoding 1080p stuff runs at 240fps
[04:55:21 CET] <kepstin> but it sure is fast
[04:55:21 CET] <xxxmaker> i have gt 1030
[04:55:27 CET] <dongs> works
[04:55:30 CET] <kepstin> gt 1030 does not have a hardware encoder
[04:55:38 CET] <dongs> https://developer.nvidia.com/video-encode-decode-gpu-support-matrix
[04:55:42 CET] <dongs> does it not?
[04:55:50 CET] <dongs> oh lul
[04:55:52 CET] <dongs> right on top of that page
[04:55:56 CET] <dongs> with NO everywhere
[04:55:56 CET] <kepstin> nope, minimum is gtx 1050 for the encoder
[04:56:26 CET] <dongs> geez that's terrible, well okay. pick up a cheap 1050 then.
[04:57:01 CET] <xxxmaker> dongs kepstin is claiming the hardware encoder isn't as good as a cpu encoder
[04:57:04 CET] <kepstin> or stick with software encoding and assuming you're using x264 just use a faster preset
[04:57:14 CET] <kepstin> which will make it go faster, but not be as good
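[ed. note: for example, a faster-preset software encode might look like this; the preset and crf values are illustrative:]
    ffmpeg -i input.mkv -c:v libx264 -preset veryfast -crf 23 -c:a copy output.mkv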
[04:57:50 CET] <dongs> unless you're authoring bd quality stuff and hand-tuning encoding parameters i think nvenc is perfectly fine for modern media consumption
[04:57:54 CET] <kepstin> the benefits of the software encoder kind of go away if you try to speed it up to match the hardware encoder.
[04:59:04 CET] <xxxmaker> if i encode a raw video with the cpu encoder and encode the same raw video with the nvidia encoder, are you able to tell which is which?
[04:59:54 CET] <kepstin> xxxmaker: if they're both encoded with settings that result in the same quality, you can't tell visually - but the filesize from the nvidia encoder will probably be bigger, depending on settings used.
[05:00:24 CET] <xxxmaker> kepstin  what if i made the filesize of both of them equal
[05:00:33 CET] <xxxmaker> which one would have better quality
[05:00:58 CET] <kepstin> then assuming you were using x264 with a preset that makes it run slower than the hardware encoder, it'll probably look better - although this depends on video content.
[05:01:26 CET] <kepstin> if you set the output filesize sufficiently large, you won't be able to tell them apart
[05:01:59 CET] <kepstin> differences are bigger when files are being compressed smaller/lower quality
[05:02:03 CET] <xxxmaker> dongs  240 fps ?  wow  are you serious?
[05:02:27 CET] <xxxmaker> cpu encoder does  3 fps for  1080p
[05:03:07 CET] <kepstin> depending on use case, hardware encoders definitely can be the best option.
[05:04:22 CET] <xxxmaker> kepstin  what fps would you get if you do  1080p video using hevc  using  nvenc
[05:04:39 CET] <kepstin> xxxmaker: see the nvidia documentation, they provide specs
[05:05:23 CET] <kepstin> well, i thought they did. lets see if i can find them
[05:06:21 CET] <xxxmaker> kepstin i see YES  for gt 1030  on the second part:  https://developer.nvidia.com/video-encode-decode-gpu-support-matrix
[05:06:33 CET] <kepstin> it has a hardware *decoder*
[05:06:38 CET] <kepstin> but not an *encoder*
[05:06:47 CET] <xxxmaker> i see
[05:06:50 CET] <kepstin> it's a consumer chip meant for media consumption :)
[05:07:35 CET] <xxxmaker> is it possible to use  both  cpu and gpu at the same time to speed up the encoding (if you have the right nvdia hardware)
[05:07:44 CET] <xxxmaker> or is that not possible?
[05:07:59 CET] <kepstin> xxxmaker: yes, by encoding two videos at once - one on the cpu, the other on the gpu :)
[05:08:21 CET] <xxxmaker> what about for one video
[05:08:31 CET] <xxxmaker> can it be hybrid?
[05:08:42 CET] <kepstin> looks like nvidia claims around 120-150fps hevc 1080p 4:2:0 8bit encoding, although they specify that using a quadro card running 4 or 5 concurrent 30fps streams
[05:08:44 CET] <kepstin> no
[05:08:52 CET] <kepstin> the hardware encoder is an encoder on its own
[05:09:04 CET] <kepstin> there's nothing that can be broken out to software
[05:09:19 CET] <kepstin> (it would probably slow it down if there was, since stuff would have to be copied in/out of gpu ram)
[05:10:03 CET] <kepstin> consumer geforce cards are capped at encoding either 1 or 2 concurrent streams, even tho the hardware is capable of more.
[05:10:16 CET] <xxxmaker> it says gtx 1050 does not support  b frame;   is b frame important?
[05:11:32 CET] <kepstin> b frame is required for modern codecs to achieve the compression ratios that people claim they can do
[05:11:57 CET] <xxxmaker> kepstin so it's very important
[05:12:07 CET] <xxxmaker> especially for people who care about file size
[05:12:34 CET] <kepstin> i don't know for sure, but i suspect that the hevc encoder on pascal might not actually provide better results than the h264 encoder in terms of quality/bit
[05:12:46 CET] <kepstin> since the h264 encoder can use b-frames
[05:13:23 CET] <kepstin> be interesting for someone to test that
[05:13:27 CET] <xxxmaker> it says GeForce RTX 2080 supports b-frames but i am guessing GeForce RTX 2080 is super expensive
[05:13:31 CET] <kepstin> (maybe someone has and i just haven't seen it)
[05:14:11 CET] <kepstin> all turing cards should, a gtx 1660 would do it and that's around $230usd iirc?
[05:14:29 CET] Action: cards passed a turing test
[05:15:34 CET] <xxxmaker> cards what gpu do you have
[05:15:45 CET] <kepstin> i assume they'll come out with a tu117 based gtx 1650 at some point, which would be the cheapest card with turing's nvenc.
[05:15:46 CET] <cards> nothing of note.
[05:16:04 CET] <cards> i'm not a cutting edge fan
[05:16:35 CET] <xxxmaker> what is better turing or volta
[05:18:10 CET] <kepstin> xxxmaker: they do different things. but turing is *newer* and has a newer hardware encoder revision, so for encoder purposes turing is probably better for most use cases.
[05:18:26 CET] <xxxmaker> kepstin okay
[05:18:48 CET] <xxxmaker> kepstin what gpu do you have?
[05:19:56 CET] <kepstin> i've got a bunch of intel integrated stuff (intel actually has an ok hardware encoder too), a radeon rx 560 (which has a pretty bad hardware encoder with iffy drivers), some radeon vega integrated stuff that i haven't really tested yet, and a gt 1030 :)
[05:20:49 CET] <kepstin> i mostly do software encoding, fits my use cases best (picked up a ryzen chip with lots of cores for that)
[05:21:21 CET] <kepstin> at work, i do cpu encoding on amazon spot instances mostly :)
[05:21:31 CET] <xxxmaker> does amd GPU support  nvenc too?
[05:21:43 CET] <kepstin> hint: the nv stands for nvidia
[05:22:26 CET] <xxxmaker> does amd GPU support  hardware encoding too?
[05:22:27 CET] <kepstin> "nvenc" is the name of nvidia's hardware encoder, which uses proprietary drivers & api to access. so nvidia only
[05:23:12 CET] <kepstin> amd gpus have hardware encoders, accessible via different apis, e.g. vaapi on linux. their encoders tend to not be as good as nvidia's
[05:23:48 CET] <kepstin> intel's igpus have a hardware encoder named 'quicksync' which is generally considered ok, you might even already have one of those
[05:24:28 CET] <kepstin> (quicksync is sometimes disabled if you have an external gpu added, depending on motherboard settings - and xeon chips with no igpu don't have quicksync at all)
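[ed. note: for comparison, a typical VAAPI hardware encode on linux looks something like the following; the render node path varies per system and the bitrate is a placeholder:]
    ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
      -vf format=nv12,hwupload -c:v h264_vaapi -b:v 5M output.mp4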
[05:24:46 CET] <xxxmaker> i have i7 3770
[05:25:00 CET] <xxxmaker> with geforce gt 1030
[05:25:40 CET] <kepstin> hmm. ivy bridge is kinda old at this point. i'd have to check references, but I think it can encode h264 ok.
[05:26:04 CET] <xxxmaker> i am getting 2 fps
[05:26:11 CET] <xxxmaker> 2-3 fps
[05:27:56 CET] <kepstin> that number is meaningless without knowing what encoder settings and filtering you're using.
[05:28:28 CET] <xxxmaker> x265 3.0:[Windows][GCC 7.1.0][64 bit] 8bit+10bit+12bit
[05:28:28 CET] <xxxmaker> Encoding settings              : cpuid=1049583 / frame-threads=3 / numa-pools=8 / wpp / no-pmode / no-pme / no-psnr / no-ssim / log-level=2 / input-csp=1 / input-res=1600x900 / interlace=0 / total-frames=0 / level-idc=0 / high-tier=1 / uhd-bd=0 / ref=4 / no-allow-non-conformance / no-repeat-headers / annexb / no-aud / no-hrd / info / hash=0 / no-temporal-layers / open-gop / min-keyint=24 /
[05:28:28 CET] <xxxmaker> keyint=240 / gop-lookahead=0 / bframes=4 / b-adapt=2 / b-pyramid / bframe-bias=0 / rc-lookahead=25 / lookahead-slices=4 / scenecut=40 / radl=0 / no-splice / no-intra-refresh / ctu=64 / min-cu-size=8 / rect / no-amp / max-tu-size=32 / tu-inter-depth=1 / tu-intra-depth=1 / limit-tu=0 / rdoq-level=2 / dynamic-rd=0.00 / no-ssim-rd / signhide / no-tskip / nr-intra=0 / nr-inter=0 / no-constrained-intra
[05:28:28 CET] <xxxmaker> / no-strong-intra-smoothing / max-merge=3 / limit-refs=3 / limit-modes / me=3 / subme=3 / merange=57 / temporal-mvp / weightp / no-weightb / no-analyze-src-pics / deblock=0:0 / sao / no-sao-non-deblock / rd=4 / no-early-skip / rskip / no-fast-intra / no-tskip-fast / no-cu-lossless / no-b-intra / no-splitrd-skip / rdpenalty=0 / psy-rd=2.00 / psy-rdoq=1.00 / no-rd-refine / no-lossless / cbqpoffs=0
[05:28:28 CET] <xxxmaker> / crqpoffs=0 / rc=crf / crf=22.0 / qcomp=0.60 / qpstep=4 / stats-write=0 / stats-read=0 / ipratio=1.40 / pbratio=1.30 / aq-mode=2 / aq-strength=1.00 / cutree / zone-count=0 / no-strict-cbr / qg-size=32 / no-rc-grain / qpmax=69 / qpmin=0 / no-const-vbv / sar=1 / overscan=0 / videoformat=5 / range=0 / colorprim=1 / transfer=1 / colormatrix=1 / chromaloc=0 / display-window=0 / max-cll=0,0 /
[05:28:29 CET] <xxxmaker> min-luma=0 / max-luma=255 / log2-max-poc-lsb=8 / vui-timing-info / vui-hrd-info / slices=1 / no-opt-qp-pps / no-opt-ref-list-length-pps / no-multi-pass-opt-rps / scenecut-bias=0.05 / no-opt-cu-delta-qp / no-aq-motion / no-hdr / no-hdr-opt / no-dhdr10-opt / no-idr-recovery-sei / analysis-reuse-level=5 / scale-factor=0 / refine-intra=0 / refine-inter=0 / refine-mv=0 / refine-ctu-distortion=0 /
[05:28:29 CET] <xxxmaker> no-limit-sao / ctu-info=0 / no-lowpass-dct / refine-analysis-type=0 / copy-pic=1 / max-ausize-factor=1.0 / no-dynamic-refine / no-single-sei / no-hevc-aq / qp-adaptation-range=1.00
[05:28:30 CET] <xxxmaker> Default                        : Yes
[05:28:38 CET] <kepstin> and don't spam the irc channel
[05:28:47 CET] <xxxmaker> sorry didn't realize it was this long
[05:29:06 CET] <kepstin> i don't care about x264's internal settings, just the ffmpeg command line you're using
[05:29:20 CET] <kepstin> i can't read that mess :/
[05:29:21 CET] <xxxmaker> x265
[05:29:29 CET] <kepstin> oh, hevc, lol
[05:29:31 CET] <kepstin> that's why it's slow
[05:29:35 CET] <kepstin> use x264
[05:30:08 CET] <kepstin> hevc software encoders are slow, hevc hardware encoders are fast but barely better than the h264 encoders :/
[05:30:32 CET] <kepstin> (although now that turing added b-frame support, that might have changed a bit)
[05:30:50 CET] <xxxmaker> you mean turing didn't support b-frame before?
[05:31:23 CET] <kepstin> you read that chart - pascal (the previous nvidia generation - 10XX cards) doesn't do b frames in hevc.
[05:32:06 CET] <xxxmaker> correct
[05:32:56 CET] <kepstin> turing is so new that i haven't seen any comparisons on the efficiency of the hevc encoding, but i suspect it has improved compared to pascal.
[05:33:32 CET] <xxxmaker> when did turing come out
[05:34:39 CET] <kepstin> late 2018/early 2019 depending on card model
[05:34:46 CET] <kepstin> the extremely high end stuff came out first
[05:36:18 CET] <xxxmaker> wow didn't realize turing is so new
[05:37:02 CET] <kepstin> like, the gtx 1660 (non-ti) was announced just last week.
[05:37:28 CET] <xxxmaker> is it safe to assume  youtube uses  GPU-hardware encoding
[05:37:44 CET] <kepstin> youtube almost certainly uses software (cpu) encoding
[05:38:06 CET] <xxxmaker> kepstin why do you say that
[05:38:50 CET] <kepstin> google has a lot of cpus, and not a lot of gpus. they encode a lot of videos using vp9, which doesn't really have a competitive hardware encoder out yet (they're probably using libvpx)
[05:39:33 CET] <xxxmaker> youtube started using av1
[05:39:39 CET] <xxxmaker> as well
[05:39:46 CET] <xxxmaker> which is supposed to be even better
[05:40:03 CET] <kepstin> only google has enough cpu power available to encode any meaningful amount of av1 video
[05:40:17 CET] <kepstin> if you think x265 is slow, well, av1 is a lot slower :)
[05:40:35 CET] <kepstin> (at least with libaom, the primarily google-developed reference encoder)
[05:40:54 CET] <xxxmaker> does ffmpeg support av1 encoding?
[05:41:14 CET] <kepstin> can't remember. last i checked, libaom api wasn't stable enough yet.
[05:42:20 CET] <kepstin> (i should probably say that libaom dev is "google led", i don't know who the main developers on it actually are, and there's a lot of people involved)
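[ed. note: ffmpeg builds of this era did ship an experimental libaom wrapper; assuming a build with libaom enabled, something like this should work, albeit very slowly:]
    ffmpeg -i input.mp4 -c:v libaom-av1 -crf 30 -b:v 0 -strict experimental output.mkv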
[05:42:46 CET] <xxxmaker> wonder if there is freenode channel for libaom
[05:48:10 CET] <xxxmaker> has anybody here used  nvenc  for video encoding?
[07:47:01 CET] <dongs> xxxmaker: thats all i use
[07:47:04 CET] <dongs> i even pasted you the command line
[07:47:09 CET] <dongs> but you need a less shitty GPU
[07:47:49 CET] <xxxmaker> dongs what GPU do you have
[07:48:01 CET] <dongs> quadro P2000
[07:48:13 CET] <dongs> the encoding hardware is the same in all pascal cards. it's just artificially limited by the driver
[07:48:19 CET] <dongs> so quality etc is all same across the board.
[07:49:28 CET] <xxxmaker> i see
[07:49:59 CET] <xxxmaker> can you paste me the metadata that the video file shows after using nvenc
[07:51:13 CET] <dongs> you mean like mediainfo?
[07:51:57 CET] <dongs> http://bcas.tv/paste/results/IuWk8j47.html
[07:52:57 CET] <dongs> http://bcas.tv/paste/results/QhN3kv74.html
[07:52:59 CET] <dongs> some other shit
[07:54:34 CET] <xxxmaker> i see
[07:54:39 CET] <xxxmaker> you don't get metadata like this?   Encoding settings              : cpuid=1049583 / frame-threads=3 / numa-pools=8 / wpp / no-pmode / no-pme / no-psnr / no-ssim / log-level=2
[07:54:50 CET] <dongs> no cuz there aren't any settings with nvenc
[07:55:03 CET] <dongs>  < xxxmaker> dongs  240 fps ?  wow  are you serious?
[07:55:07 CET] <dongs> yes serious
[07:55:11 CET] <dongs> its hardware encoder
[07:57:53 CET] <dongs> xxxmaker: frame= 1905 fps=150 q=12.0 size=   74922kB time=00:01:03.46 bitrate=9671.0kbits/s dup=0 drop=1 speed=5.01x
[07:57:59 CET] <dongs> this is encode w/deinterlace filter
[07:58:03 CET] <dongs> so its slowing shit down a bit
[07:58:16 CET] <xxxmaker> isn't deinterlacing done by CPU though?
[07:58:16 CET] <dongs> ffmpeg -hwaccel dxva2 -threads 1 -i video.mkv -filter:v yadif -c:v h264_nvenc -preset slow -profile:v high -level 4.1 -b:v 10M output.mkv
[07:58:19 CET] <dongs> yes
[07:58:26 CET] <dongs> thats why encode fps is lower
[07:59:09 CET] <dongs> frame= 1872 fps=218 q=12.0 size=   75185kB time=00:01:02.36 bitrate=9876.3kbits/s dup=0 drop=1 speed=7.26x
[07:59:13 CET] <dongs> same source without filter
[07:59:33 CET] <xxxmaker> i see
[07:59:50 CET] <dongs> anyway, still way better than 3 fps haha.
[08:00:04 CET] <xxxmaker> try using h265 nvenc
[08:00:17 CET] <dongs> yes, it should be the same
[08:00:22 CET] <furq> nvdec/cuvid does deinterlacing fyi
[08:00:25 CET] <furq> supposedly it's pretty good
[08:00:35 CET] <dongs> furq, yeah im sure. i was just too lazy to look it up.
[08:00:48 CET] <furq> -deint adaptive -c:v h264_cuvid -i ...
[08:00:49 CET] <xxxmaker> but i was told pascal doesn't do  b-frames
[08:00:50 CET] <dongs> i only had that one interlaced source from an old mpeg2 camera
[08:00:59 CET] <dongs> furq: nice, saved
[08:01:14 CET] <furq> or -deint bob but i guess adaptive is better
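[ed. note: filling out furq's fragment into a full command along the lines of dongs' earlier one; file names and bitrate are placeholders:]
    ffmpeg -deint adaptive -c:v h264_cuvid -i video.mkv \
      -c:v h264_nvenc -preset slow -profile:v high -level 4.1 -b:v 10M output.mkv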
[08:01:57 CET] <xxxmaker> i hate  3 fps  encoding
[08:02:01 CET] <xxxmaker> too slow
[08:03:19 CET] Action: JEEB still remembers doing encodes circa 2006 with an amd laptop (lol)
[08:03:26 CET] <JEEB> 0.6fps or so
[08:03:52 CET] <JEEB> so x264 f.ex. is fast now on any modern cpu by comparison
[08:05:35 CET] <JEEB> last I tested on my desktop I could go all 'tard with preset placebo with I think 720p at 8fps
[08:05:41 CET] <JEEB> (4790k)
[08:06:31 CET] <JEEB> and on various boxes doing 1080p25 with preset veryslow was very muchos realtime
[08:08:49 CET] <JEEB> hw encoders are for low latency and speed without cpu usage, anyways. so for example I would utilize my nvidia's encoder when doing lossless captures of gaming or whatever
[10:48:21 CET] <Dragas> Yo! I'm trying to cross compile for android but I keep getting the following error: `ndk-bundle/toolchains/arm-linux-androideabi-4.9/prebuilt/darwin-x86_64/bin/arm-linux-androideabi-gcc is unable to create an executable file.` This is my config.log that's provided at ffbuild
[10:48:22 CET] <Dragas> https://pastebin.com/r9XGSdNb
[10:49:38 CET] <Dragas> And I'm using the following script to cross compile it: https://pastebin.com/2UPe390g
[10:49:42 CET] <Dragas> What am I doing wrong here?
[10:51:47 CET] <furq> ./configure: line 968: /Users/mantas/Library/Android/sdk/ndk-bundle/toolchains/arm-linux-androideabi-4.9/prebuilt/darwin-x86_64/bin/arm-linux-androideabi-gcc: No such file or directory
[10:51:53 CET] <furq> maybe this has something to do with it
[10:54:15 CET] <Dragas> Hmm. You're right. It doesn't include gcc
[11:01:32 CET] <net|> anyone know how to fix mjpeg errors ?
[11:01:53 CET] <net|> [video4linux2,v4l2 @ 0x56511c3f99e0] Cannot find a proper format for codec 'mjpeg' (id 8), pixel format 'none' (id -1)
[11:02:45 CET] <net|> ./configure says mjpeg was included
[11:02:54 CET] <furq> set -pix_fmt before -i
[11:03:46 CET] <net|> awesome thanks much
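[ed. note: a minimal sketch of a v4l2 mjpeg capture; note that for compressed formats like mjpeg the v4l2 option is -input_format (while -pixel_format selects raw pixel formats), and the device path and sizes are hypothetical:]
    ffmpeg -f v4l2 -input_format mjpeg -framerate 30 -video_size 1280x720 \
      -i /dev/video0 -c:v libx264 output.mkv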
[11:08:50 CET] <net|> seems to be working, http://wiki.webmproject.org/adaptive-streaming/instructions-to-do-webm-live-streaming-via-dash  this command keeps spitting out chunks of files.. i just want a single webm file to access via a website, is that doable?
[11:09:20 CET] <net|> makes a lot of chk files
[11:11:24 CET] <net|> also seems to use all my cpu
[11:13:24 CET] <net|> i had too many threads
[11:30:57 CET] <nadermx_> Hi all, I'm trying to run a command that pipes two outputs into ffmpeg, but the program seems to only half receive one of them and then just hangs
[11:33:11 CET] <nadermx_> { command1 -o; command2 -o; } | ffmpeg -i pipe:0 -i pipe:1 -c:v copy -c:a aac -strict experimental output.mp4
[13:21:42 CET] <Abdullah> I'm using this command but not getting a good result in the output video, even though I tried to increase the framerate. ffmpeg -video_size 1600x900 -framerate 30 -f x11grab -i :0.0 -f pulse -ac 2 -i default output.mkv
[13:22:03 CET] <Abdullah> tried with 60 framerate but still no good result.
[13:27:38 CET] <MatthewAllan93> Hey :), I am trying to encode x265 with Opus, using this part of the command for audio "-c:a libopus -b:a 128k -vbr on -ac 2". But when I check the output file in the Mediainfo program, I see that it isn't using the 128kb vbr and is instead showing the bitrate of the input video. Any help would be appreciated :).
[13:43:11 CET] <lethyr> can ffmpeg concatenate a video file and standard input?
[13:43:51 CET] <lethyr> to avoid XY problems I'll write up a description of my desired end result
[13:45:23 CET] <lethyr> I have a script that checks for live video streams on a cron job and downloads those live streams. a second script later uploads them to a YouTube channel for backup. I'd like to add an intro to the files before they are uploaded to YouTube and concatenating two files together takes a very long time. I was hoping to do it on the fly instead.
[13:58:08 CET] <lethyr> I have tried:
[13:58:10 CET] <lethyr> cat input_test.mp4 | ffmpeg -i input_test.mp4 -i - -filter_complex "[0:v] [0:a] [1:v] [1:a] concat=n=2:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" output_test.mp4
[14:00:01 CET] <lethyr> this results in multiple errors: https://pastebin.com/raw/0qYHUXVa
[14:02:47 CET] <lethyr> the concat demuxer will not accept standard input as a valid entry
[14:11:11 CET] <mickeyl> Hey. I'm working on decoding a H264 stream sent by an IP cam. This IP cam does not send SPS and PPS NALUs, yet FFmpeg manages to gather the correct parameters by looking at the slices. Perhaps someone has some insight into how that works. (At the end of the day, I'm trying to initialize the hardware decoder on iOS which absolutely needs the SPS and PPS for proper format initialization)
[14:19:37 CET] <BtbN> They have to be somewhere.
[14:22:56 CET] <furq> MatthewAllan93: opus vbr is actual proper vbr, not abr
[14:23:21 CET] <furq> -b:a 128k won't attempt to hit 128kbps, it's just a quality level
[14:23:31 CET] <brejeiro> kepstin: I'm using Windows, and was trying to avoid changing the cursor system-wide. I'd like to only change in the recording... But, if this is not possible, I'll just change it for the whole system.
[14:23:34 CET] <furq> also i wouldn't necessarily trust mediainfo to give the right bitrate there anyway
[14:23:41 CET] <brejeiro> Thanks for the answer!
[14:23:43 CET] <furq> especially if this is mkv
[14:24:48 CET] <kepstin> brejeiro: the way the screen capture on windows works, you could actually edit the gdigrab.c file to change the cursor image used, but you can't change it without editing the ffmpeg code.
[14:25:13 CET] <furq> lethyr: that command won't work because the input is an mp4
[14:25:24 CET] <furq> as a rule you can't read or write mp4s from or to pipes
[14:25:28 CET] <MatthewAllan93> furq: Ah ok, thanks for your answer. It was just because I'm used to FFmpeg's 128kb vbr encodes landing around that bitrate, not at 320kb for example.
[14:25:50 CET] <furq> if it's showing as 320k then either this is incredibly pathological audio or mediainfo is just reporting it wrong
[14:25:53 CET] <lethyr> @furq thanks
[14:25:53 CET] <brejeiro> Hmmm, got it. As I'm deploying it via chocolatey to hundreds of computers, I think it will be easier to just change the current cursor. Not a big deal, anyway!
[14:25:55 CET] <furq> probably the second one
[14:26:16 CET] <lethyr> I can use mpegts files which is what I'm experimenting with now if that'll get me in the right direction
[14:26:29 CET] <furq> well if nothing else ts will give you a more useful error
[14:26:39 CET] <furq> i feel like that should work though
[14:26:46 CET] <MatthewAllan93> furq: I thought that but wanted to make sure, thanks for your answer anyway :).
[14:27:43 CET] <furq> MatthewAllan93: you can demux the audio stream if you want to check the actual bitrate
[14:27:52 CET] <furq> -i foo.mkv -map 0:a bar.opus
[14:27:59 CET] <furq> er
[14:28:01 CET] <furq> -i foo.mkv -map 0:a -c copy bar.opus
[14:29:30 CET] <brejeiro> Another thing I was wondering: is there a way to get the current frame/exact time of a recording? I'm recording some stuff on my desktop, and I need the precise moment that an event occurred. As the recording starts a few moments before I manage to do things, if I try to use the times as my system sees them, I'll get close to a second or two of delay, which is not desirable. Is there any way I could "ask" ffmpeg what's the actual time?
[14:30:46 CET] <furq> brejeiro: yes but it depends what you want to do with it
[14:31:21 CET] <furq> e.g. some filters have ways to use the wallclock time
[14:31:21 CET] <brejeiro> furq: I'd like to break the video in chapters. Or, if it is not possible, just getting the time is enough
[14:31:58 CET] <furq> how are you starting the recording
[14:34:03 CET] <brejeiro> via command line...
[14:34:19 CET] <brejeiro> It runs in the background. Do you need the full command line?
[14:34:34 CET] <furq> no but if it's a script then you could presumably just get the wallclock time in there
[14:34:43 CET] <brejeiro> ffmpeg -f gdigrab -i desktop -vcodec libx264 -y -force_fps  -framerate 30 myvideo.mp4
[14:35:31 CET] <brejeiro> Well, it is. The problem is that from the point I send the command, to the point that it actually starts recording, I lose a couple of seconds...
[14:35:41 CET] <brejeiro> I'm actually doing that...
[14:36:44 CET] <lethyr> streamlink -o - <url> best | ffmpeg -i mixer.ts -i - -filter_complex "[0:v] [0:a] [1:v] [1:a] concat=n=2:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" output_test.ts
[14:36:45 CET] <furq> oh right the filename is a few seconds early
[14:36:53 CET] <furq> there is a -strftime option but it only works for certain muxers
[14:36:54 CET] <lethyr> this complains about the resolutions not matching
[14:37:54 CET] <furq> apparently -strftime only works for hls/segment/image2 muxers
[14:38:28 CET] <furq> -strftime 1 "%Y%m%d%H%M%S.mp4" if you happen to be using any of those
[14:38:59 CET] <brejeiro> I'll give them a try! Thanks!
[14:40:02 CET] <lethyr> I see this discusses the error I'm getting https://stackoverflow.com/questions/37327163/ffmpeg-input-link-in1v0-parameters-size-640x640-sar-169-do-not-match-the
[14:40:19 CET] <lethyr> however their issue is the sample aspect ratio doesn't match. mine is that the size (resolution) doesn't match.
[14:40:35 CET] <lethyr> can I modify the best answer there to solve my problem?
[14:40:36 CET] <furq> lethyr: you'd need to scale it before concat
[14:40:51 CET] <furq> you can do pretty much the same thing as that SO answer, just replace setdar with scale
[14:41:04 CET] <furq> or scale2ref rather
[14:42:31 CET] <furq> -i intro.ts -i - -lavfi "scale2ref[tmp];[tmp][1:v]concat=n=2:v=1:a=1" out.ts
[14:44:51 CET] <lethyr> "Stream specifier ':v' in filtergraph description scale2ref[tmp];[tmp][1:v]concat=n=2:v=1:a=1 matches no streams."
[14:45:05 CET] <lethyr> I'm not familiar enough with filters to know what that means
[14:48:03 CET] <raytiley> Is there a reason release_request_pad api is different on a audiomixer vs compositor? https://www.irccloud.com/pastebin/qykYyNbZ/api_difference.rs
[14:48:59 CET] <raytiley> nevermind... see my typo
[14:58:34 CET] <lethyr> streamlink -o - https://www.youtube.com/c/mtcdood/live best | ffmpeg -i mixer.ts -i - -lavfi "[1:v][0:v]scale2ref[wm][base];[wm]setsar=1[wm];[base][wm]concat=2" output.flv
[14:58:43 CET] <lethyr> this command appears to work. do you see any problems with it?
[15:00:41 CET] <furq> it looks fine but that's scaling the stream to the size of your intro video
[15:00:47 CET] <furq> i assume you want it the other way round
[15:00:57 CET] <furq> although if it's for youtube then maybe not
[15:04:18 CET] <lethyr> hmm.
[15:04:35 CET] <lethyr> nah I want it to scale the intro to the stream
[15:05:10 CET] <furq> just get rid of [1:v][0:v] then
[15:05:34 CET] <furq> or swap them around, but either should work
[15:12:03 CET] <lethyr> I get messages like "Buffer queue overflow, dropping.= 889.8kbits/s speed=0.744x"
[15:12:19 CET] <lethyr> it looks like I might be able to fix that with advice from https://stackoverflow.com/questions/39574032/ffmpeg-error-buffer-queue-overflow-dropping-when-merging-two-videos-with-del
[15:34:43 CET] <lethyr> I notice that that command results in the intro's audio being overlapped with the stream, but the intro's video is not concatenated
[15:35:43 CET] <kepstin> lethyr: the concat filter by default only does the video streams, you have to provide additional options to set the number of audio streams
[15:36:08 CET] <lethyr> ah I didn't realize that
[15:39:31 CET] <lethyr> regarding the advice given about which input is being scaled to which, simply removing or reversing the two portions of the filter does not perform as expected
[15:39:49 CET] <faLUCE> I don't understand why the matroska muxer, at least in the H264 case, wants so many params from the AVCodecContext. For example: if I don't also set the bitrate on the muxer, the stream can't be muxed
[15:39:50 CET] <lethyr> doing either results in the intro's video simply not being concatenated to the beginning of the stream
[15:40:09 CET] <furq> oh right
[15:40:19 CET] <furq> lethyr: scale2ref only produces one output iirc
[15:40:31 CET] <furq> so get rid of [base] and replace [base][wm] with [0:v][wm]
[15:42:53 CET] <lethyr> that doesn't have any effect on the result
[15:43:13 CET] <lethyr> I should point out that I've started outputting in .flv for these tests by the way
[15:43:29 CET] <lethyr> ultimately I'll need something YouTube will happily accept as an upload
[15:44:18 CET] <lethyr> that particular command won't work with flv because "at most one video stream is supported in flv"
[15:44:18 CET] <kepstin> youtube happily accepts basically anything
[15:44:46 CET] <lethyr> I'll check a mpegts file real quick, for some reason I thought they wouldn't take them
[15:45:16 CET] <kepstin> furq: lethyr: scale2ref does actually produce two outputs
[15:47:25 CET] <kepstin> the first input to scale2ref is the video to be scaled, the second input is the reference video. The first output is the newly scaled video, the second output is a pass-through of the reference video.
[15:48:06 CET] <furq> fun
[15:48:10 CET] <furq> not sure what's going on there then
[15:49:58 CET] <lethyr> YouTube will happily accept mpegts
[15:50:02 CET] <lethyr> that simplifies one thing
[15:52:06 CET] <lethyr> https://pastebin.com/raw/1DP3kxXY
[15:52:18 CET] <lethyr> here are two commands and their results
[15:53:32 CET] <lethyr> actually the stream looks bad in both results
[15:57:27 CET] <kepstin> lethyr: of course the video looks bad, you haven't set an output codec or options
[15:57:36 CET] <kepstin> it's probably using terrible mpeg2 settings or something
[15:57:49 CET] <lethyr> aha
[15:58:13 CET] <lethyr> I hoped it'd use the settings of the standard input
[15:58:25 CET] <lethyr> but glad to understand why it looks like that
[15:59:09 CET] <kepstin> there's no way to know what settings the input was encoded with in general, and even if you're using the same settings and encoder, you'll lose quality due to generation loss with lossy codecs.
[15:59:26 CET] <lethyr> hmm
[15:59:45 CET] <kepstin> lethyr: anyways, your first command in that paste looks roughly correct, except that you haven't set the concat filter to concat the audio streams
[16:00:08 CET] <lethyr> yeah
[16:07:23 CET] <lethyr> I've been trying to find someone using both concat with audio and scale2ref at once but I can't find any examples
[16:07:48 CET] <kepstin> the scale2ref usage doesn't change how concat works...
[16:08:24 CET] <kepstin> concat in this case would be "[firstvideo][firstaudio][secondvideo][secondaudio]concat=n=2:v=1:a=1"
[16:09:27 CET] <lethyr> ffmpeg -i intro.ts -i - -lavfi "[0:v][0:a][1:v]scale2ref[1:a][wm][base];[wm]setsar=1[wm];[base][wm]concat=n=2:v=1:a=1" output.ts
[16:09:48 CET] <lethyr> this does not work because I don't know where to put the audio in reference to the scaling and aspect ratio synching
[16:09:57 CET] <kepstin> lethyr: why are you adding an audio input to scale2ref? that filter can't take audio
[16:10:03 CET] <kepstin> you need to pass the audio to the concat filter
[16:10:15 CET] <lethyr> you seem to be under the impression I know what I'm doing
[16:10:21 CET] <lethyr> I'm not sure what gave you that idea
[16:11:51 CET] <lethyr> moving [1:a] before scale2ref yields the same result
[16:11:55 CET] <kepstin> assuming your first example from your paste has the video in the correct order, you'd want "[0:v][1:v]scale2ref[wm][base];[base][1:a][wm][0:a]concat=n=2:v=1:a=1"
[16:13:43 CET] <kepstin> (actually, how does that even work? i'd think your command should give an error since you're using the same link name as an input and an output on one filter)
[16:16:49 CET] <lethyr> that still doesn't concatenate the video
[16:17:10 CET] <lethyr> I need the intro with its audio to play, then the stream with its audio to play
[16:17:58 CET] <kepstin> ah, i guess the order is wrong then
[16:18:17 CET] <kepstin> change it to '[wm][0:a][base][1:a]' before the concat filter then
[16:18:22 CET] <Dragas> In what cases can a stream's codec information be missing?
[16:18:59 CET] <kepstin> Dragas: no idea, need more context.
[16:18:59 CET] <lethyr> that modification plays the intro's audio over the stream's video
[16:19:26 CET] <Dragas> With ffprobe and ffmpeg tools my rtsp stream information is reported just fine, but when I try to set it up with AVStream, the reported codec is NULL
[16:19:51 CET] <kepstin> lethyr: oh, huh, still wrong. there's only so many permutations, it should be easy to fix that with a little logical thinking and problem solving.
[16:20:10 CET] <lethyr> yeah
[16:20:29 CET] <lethyr> I do want to bring up these messages I'm seeing though: "[Parsed_concat_2 @ 0x55bdb1953e60] Buffer queue overflow, dropping.=14807.6kbits/s speed=0.474x"
[16:20:46 CET] <Dragas> https://i.imgur.com/3QaIoz9.png
[16:20:51 CET] <Dragas> For example
[16:20:54 CET] <kepstin> lethyr: those messages are happening because the wrong video and audio streams are paired, they'll go away once you fix that
[16:21:00 CET] <lethyr> ah ok
[16:21:16 CET] <lethyr> I just wanted to make sure it wasn't "your machine is too slow to do this, don't waste more time"
[16:23:55 CET] <lethyr> [wm] here is the intro after it's been scaled right?
[16:24:07 CET] <lethyr> and [base] is just the standard input being passed to ffmpeg?
[16:24:23 CET] <kepstin> the first input to scale2ref is scaled and sent to the first output
[16:24:32 CET] <kepstin> the second input to scale2ref is passed through to the second output
[16:27:36 CET] <kepstin> Dragas: hard to say without seeing your code. some information might not be populated until you call avformat_find_stream_info, but I'm not familiar enough with rtsp to know if that's the issue.
[16:27:39 CET] <lethyr> and "[wm]setsar=1[wm]" is just taking [wm], applying an aspect ratio of 1, and passing the output to the same variable name?
[16:28:01 CET] <kepstin> lethyr: no, since you can't have an input and output with the same name. I have no idea what that's doing.
[16:28:11 CET] <kepstin> maybe nothing? i'd expect it to be an error, but... ?
[16:28:19 CET] <lethyr> makes sense to try removing it then
[16:28:54 CET] <Dragas> Hold on, i'll post how I open up my stream.
[16:29:03 CET] <kepstin> the scale2ref should be taking care of making sure sar matches, i think? if not, you might have to put it back, but make sure you use different name for the output (and update the concat filter to use the new name in the input)
[16:30:14 CET] <lethyr> I admit I didn't understand what any of this was supposed to do an hour ago but it *looks* like it should work
[16:34:48 CET] <lethyr> even the command that I documented as concatenating the video doesn't do it anymore
[16:35:04 CET] <Dragas> kepstin: https://pastebin.com/e20NgqTm
[16:35:18 CET] <Dragas> I'm using the following snippet to open up stream information
[16:35:22 CET] <Dragas> the stream, rather
[16:38:11 CET] <lethyr> ffmpeg -y -i intro.ts -i - -filter_complex "[0:v] [0:a] [1:v] [1:a] concat=n=2:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" output.ts
[16:38:52 CET] <lethyr> this will work if the resolutions are the same
[16:41:36 CET] <lethyr> can I work from here instead of starting completely over with a new command?
[16:42:03 CET] <kepstin> lethyr: yeah, that would be where you start from.
[16:42:07 CET] <lethyr> I tried this but I'm jumbling up the scale and the concat
[16:42:09 CET] <lethyr> ffmpeg -y -i intro.ts -i - -filter_complex "[0:v][1:v]scale2ref[vid0][vid1];[vid0][vid1][0:a][1:a]concat=n=2:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" output.ts
[16:42:42 CET] <faLUCE> do you know if there is a function for obtaining the header of a muxer? I'm forced to use av_write_frame() for the first packet but then I don't know how to separate the header part from the muxed frame part....
[16:42:44 CET] <kepstin> lethyr: you changed the order of the inputs to the concat - it's first video, first audio, second video, second audio
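[ed. note: applying kepstin's ordering fix to lethyr's command above would presumably give:]
    ffmpeg -y -i intro.ts -i - -filter_complex \
      "[0:v][1:v]scale2ref[vid0][vid1];[vid0][0:a][vid1][1:a]concat=n=2:v=1:a=1[v][a]" \
      -map "[v]" -map "[a]" output.ts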
[16:44:06 CET] <kepstin> faLUCE: why do you think you want that info? at that point, the muxer output is an opaque bytestream.
[16:44:49 CET] <faLUCE> kepstin: I have to put the header at the beginning of each http stream that I produce with the same muxer
[16:45:26 CET] <kepstin> faLUCE: you should really just run a separate muxer for each stream.
[16:45:32 CET] <faLUCE> kepstin: I mean, I send muxed data to a custom streamer, made without ffmpeg, and I need the header at the beginning
[16:45:34 CET] <lethyr> thanks @kepstin that works perfectly except for the poor quality which you already mentioned the reason for
[16:46:33 CET] <faLUCE> kepstin: that's not possible
[16:46:47 CET] <kepstin> lethyr: yeah, to fix the quality start by adding "-c:v libx264" to your output to get a reasonable video codec, then see https://trac.ffmpeg.org/wiki/Encode/H.264 for details about options.
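[ed. note: putting the corrected filtergraph and reasonable codec options together; the preset and crf values are illustrative, see the wiki page above for tuning:]
    ffmpeg -y -i intro.ts -i - -filter_complex \
      "[0:v][1:v]scale2ref[vid0][vid1];[vid0][0:a][vid1][1:a]concat=n=2:v=1:a=1[v][a]" \
      -map "[v]" -map "[a]" -c:v libx264 -preset veryfast -crf 23 -c:a aac output.ts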
[16:48:25 CET] <kepstin> faLUCE: what container are you using?
[16:48:48 CET] <faLUCE> kepstin: matroska
[16:49:35 CET] <faLUCE> kepstin: I wonder if there is a way to pick it out of the opaque priv_data
[16:50:02 CET] <kepstin> faLUCE: but yeah, no way to do this in general. the closest thing to a "proper" way would be to parse the muxer output to find the header, but imo you should really use separate muxers.
[16:51:01 CET] <kepstin> faLUCE: i assume this is a live streaming application where you're continuously encoding video, but whenever someone connects to http they get the stream starting at the time they connected?
[16:51:18 CET] <faLUCE> kepstin: exactly
[16:52:01 CET] <faLUCE> kepstin: I used a bad hack: I muxed a dummy packet (without data) soon after write_header
[16:52:18 CET] <faLUCE> so I got the header from that, but it's not a proper way
[16:52:24 CET] <kepstin> faLUCE: yeah, i'd do that by creating a new muxer when someone connects, and start muxing encoded video packets at the point where they connected (but note that you will probably need to cache the codec initialization data to feed to the new muxer)
[16:53:32 CET] <faLUCE> kepstin: this is overkill
[16:53:47 CET] <lethyr> ... why is ffmpeg taking two 1080p videos and outputting a 240p video?
[16:53:49 CET] <lethyr> that's why it looks so bad
[16:54:09 CET] <kepstin> lethyr: oh, right, you need to set some options on the scale2ref filter :)
[16:54:19 CET] <lethyr> ok I'll look it up
[16:55:15 CET] <kepstin> i'm surprised it's not the default, but "scale2ref=w=iw:h=ih" should do it
[16:55:35 CET] <kepstin> hmm, but that's weird, that shouldn't be scaling your other input video at all
[16:55:59 CET] <kepstin> lethyr: i suspect your second input isn't actually 1080p like you think it is
[16:56:07 CET] <kepstin> lethyr: ffmpeg's output will tell you
[16:56:09 CET] <lethyr> LOL
[16:56:16 CET] <kepstin> (if you can't read it, please pastebin the entire output)
[16:56:19 CET] <lethyr> remember when I said it doesn't work when the resolution changed
[16:56:37 CET] <lethyr> I specifically started grabbing the lowest possible resolution to make sure it worked with different resolutions
[16:56:44 CET] <faLUCE> it's very strange that libavcodec doesn't have at least an opaque field where one could get that header
[16:57:25 CET] <kepstin> faLUCE: libavcodec has nothing to do with it, and libavformat produces opaque muxed output, you're supposed to demux it again if you want data from it.
[16:58:26 CET] <kepstin> breaking apart and mashing together output after muxing requires that you know how to parse the muxed output
[16:58:39 CET] <kepstin> (it's not even necessarily possible in all formats)
[16:59:27 CET] <Dragas> kepstin: well apparently the issue was caused by av_register_all, which is deprecated
[16:59:51 CET] <kepstin> Dragas: shouldn't cause an issue. with old ffmpeg that's required or nothing works, with new ffmpeg it does nothing.
[17:00:02 CET] <Dragas> hmm
[17:00:14 CET] <Dragas> to be honest it's the only thing i removed and it started working
[17:03:22 CET] <faLUCE> kepstin: well, I could demux, then. but in which way? with av_read_frame() I could get the packet again, but how could I get the header?
[17:03:45 CET] <kepstin> faLUCE: you can't, ffmpeg is not designed to do this.
[17:03:53 CET] <lethyr> "frame= 1012 fps= 21 q=0.0 size=   12456kB time=00:00:16.86 bitrate=6050.4kbits/s speed=0.352x" since this is live stdin what are the ramifications of ffmpeg not being able to process the video at or above 1.0x?
[17:04:06 CET] <lethyr> is it using memory, losing the video data, something else?
[17:04:35 CET] <kepstin> lethyr: eventually the pipe buffer leading into the ffmpeg will fill, and then it will cause whatever application is sending video to block
[17:04:46 CET] <kepstin> what happens then depends on the application sending stuff to ffmpeg
[17:05:13 CET] <faLUCE> there should be a way
[17:05:41 CET] <kepstin> faLUCE: yes, it's possible. using your own knowledge of how matroska is constructed, you can identify the header in the muxer output bytestream and save it your self.
[17:06:16 CET] <lethyr> lol this machine can't even keep up with a CRF of 51
[17:06:35 CET] <lethyr> I guess I wasted 3 hours figuring out how to do this on the fly only to learn that I can't do this on the fly
[17:07:53 CET] <kepstin> lethyr: are you using a raspberry pi or something? :/
[17:08:13 CET] <faLUCE> kepstin: I wonder if there is a way to av_write_frame() a dummy packet
[17:08:13 CET] <kepstin> note that crf doesn't make a big difference on encoding speed - change the "-preset" option to do that
[17:08:20 CET] <faLUCE> a safe way, I mean
[17:08:49 CET] <lethyr> no this is a VPS from OVH
[17:09:10 CET] <kepstin> faLUCE: why can't you use the avformat_write_header() anyways?
[17:10:03 CET] <kepstin> it might work well enough for your use case if you're lucky, although it depends on the muxer implementation (some don't write anything until you write a frame)
[17:10:13 CET] <faLUCE> kepstin: because in order to produce the header, AFAIK/IIRC it needs to call av_write_frame() with a new packet
[17:10:56 CET] <kepstin> faLUCE: anyways, ffmpeg is not designed to do this, and there's no ffmpeg api to do this. if you really want to mess around with the output bytestream after it's muxed, then you have to mess around with the output bytestream after it's muxed by yourself.
[17:11:51 CET] <faLUCE> I understood that ffmpeg is not designed to do that, but I wonder if there is a way to send a "dummy packet" to the muxer, safely
[17:11:56 CET] <faLUCE> so I can get the header
[17:12:13 CET] <kepstin> faLUCE: if there's a way to send a dummy packet safely, then the muxer would be free to ignore it and do nothing.
[17:12:54 CET] <faLUCE> the muxer would ignore the packet, but not what precedes the packet
[17:13:06 CET] <kepstin> faLUCE: the "proper" way to do what you want with ffmpeg is to use separate muxers for each output stream. If you want to hack up the bytestream from one muxer, you have to do that yourself.
[17:13:24 CET] <kepstin> faLUCE: nothing preceeds the packet tho
[17:13:35 CET] <kepstin> faLUCE: the muxer generates the header and writes it whenever it feels like doing that
[17:14:15 CET] <kepstin> the way it works is "avpacket -> muxer -> opaque bytestream of muxed data"
[17:15:27 CET] <faLUCE> kepstin: ok. do you know if the muxer also safely discards non-keyframes right after the header?
[17:15:33 CET] <faLUCE> (h264)
[17:16:09 CET] <kepstin> faLUCE: you should discard them yourself before muxing
[17:16:27 CET] <faLUCE> I see.
[17:17:16 CET] <lethyr> with a CRF of 28 and the ultrafast preset my bitrate has slowly increased and just passed 9K
[17:17:45 CET] <lethyr> as a result I started at 1.4x and have dropped to 0.7x
[17:17:47 CET] <faLUCE> anyway, there should be some hack for obtaining the header using some deep function of libavcodec... instead of parsing the data
[17:17:56 CET] <lethyr> how can I rein in ffmpeg so it behaves sanely
[17:17:56 CET] <kepstin> lethyr: bitrate has nothing to do with speed.
[17:18:13 CET] <kepstin> lethyr: many vps systems have cpu limits, where if you use a lot of cpu for a while, they start throttling you
[17:18:22 CET] <kepstin> lethyr: recommend you get a better server.
[17:18:26 CET] <lethyr> bitrate does have to do with speed. as the bitrate has gone up the speed has gone down.
[17:18:35 CET] <kepstin> lethyr: probably coincidence
[17:18:40 CET] <lethyr> lol
[17:18:53 CET] <lethyr> you think it's a coincidence that processing literally more data might take longer
[17:19:15 CET] <kepstin> "ultrafast" mode turns off much of the processing that depends on output video size
[17:19:54 CET] <lethyr> looks like it stabilized at 10.8K
[17:20:01 CET] <kepstin> ("cabac", an arithmetic encoder run over the output bitstream, can cause h264 encoding to be slower at higher bitrates, but cabac turns off with ultrafast mode)
[17:20:20 CET] <lethyr> the source video is nowhere near that.
[17:20:30 CET] <lethyr> it makes no sense for it to be doing that
[17:20:49 CET] <kepstin> lethyr: it's being encoded from scratch (raw video), the source video has nothing to do with it
[17:22:13 CET] <kepstin> lethyr: for your use case, you might actually want to run a separate ffmpeg command to attempt to encode the intro with matching settings to the real video, save that to a file, then do a plain file-level concatenation of the newly scaled intro video and the pass through video
[17:22:23 CET] <kepstin> lethyr: rather than re-encode the entire thing
[17:22:58 CET] <lethyr> the reason I was trying to do it on the fly was because I didn't want to use 50 gigs of disk space to process files
[17:23:01 CET] <kepstin> lethyr: that'll require more coding on your end, or pre-generating a bunch of intro videos for different settings.
[17:23:08 CET] <kepstin> you only need to store the intro videos...
[17:27:27 CET] <kepstin> lethyr: to do this, you'd just pre-run some ffmpeg commands like "ffmpeg -i intro.ts -s 320x240 -c:v libx264 intro-240.ts" to make the various intro video sizes, then replace the ffmpeg command you're doing now with "cat intro-240.ts - >output.ts"
[17:28:05 CET] <kepstin> you'd have to look at the exact specs of the input video - it might need some extra options to set a specific audio codec and maybe certain stream ids.
[17:30:29 CET] <lethyr> I've been up for 34 hours. I'll decide if I really want to deal with it after some sleep.
[17:51:44 CET] <ncouloute> Is there an easy way to parse the -vf "showinfo" data in windows? I'm having to rely on it because ffprobe/ffms2 are providing me with different frame+timestamps than ffmpeg showinfo. I'm guessing it has something to do with the built-in deinterlacing that ffmpeg is doing. Although when I specify my own deinterlacer I'm unable to get it to maintain the timestamps. Is there a way to deinterlace and maintain the timings of
[17:51:45 CET] <ncouloute> the frames for vfr interlaced content? Any idea what is going on? Would I be better off learning and using the ffmpeg api to get this info in an easier to use format?
[18:04:49 CET] <JEEB> ncouloute: ffmpeg.c by default pokes the timestamps
[18:04:52 CET] <JEEB> you can see it with -debug_ts
[18:05:16 CET] <JEEB> you can try and minimize ffmpeg.c's pokings with -vsync passthrough -copyts
[18:05:41 CET] <JEEB> and yes, depending on your use case I would recommend utilizing either ffms2's or FFmpeg's API
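[ed. note: a minimal sketch of dumping timestamps with ffmpeg.c's adjustments minimized, per JEEB's flags; the input name is a placeholder:]
    ffmpeg -debug_ts -i input.mkv -vf showinfo -vsync passthrough -copyts -f null -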
[18:13:32 CET] <brimestone> I'm doing a -filter_complex [0]scale=480:-1:flags=fast_bilinear,split=2[h264][dnx36] -map [h264] <output codec and path> and I think I'm missing the audio...
[18:13:50 CET] <brimestone> How do I copy the audio over?
[18:13:58 CET] <DHE> -map 0:a  (typically)
[18:14:57 CET] <brimestone> Let me try that.. thanks
[18:14:59 CET] <DHE> so I imagine something like: ffmpeg -i $INPUTFILE -filter_complex $THAT_FILTER -map [h264] -map 0:a $H264_OPTS h264.mp4 -map [dnx36] -map 0:a $DNX36_OPTS dnx36_output
[18:20:05 CET] <brimestone> Seems to have worked... thanks
[18:30:22 CET] <ncouloute> Use Case: Trying to find a given frame location from the original file in the output file after it's been converted. You would think it would be at the same timestamp as it was originally, but apparently after it's been read by ffmpeg the timestamps shift somewhat for whatever reason. So I'm stuck with getting the showinfo output and trying to parse that. Maybe it's easier to get that info using the api? Sounds like -copyts
[18:30:22 CET] <ncouloute> might help, going to try that. Can't use vsync passthrough because I'm actually trying to change the file to cfr at 60000/1001.
[18:32:44 CET] <JEEB> you can do that in the filter chain, though?
[18:33:04 CET] <JEEB> as opposed to -r 60000/1001 after input on the ffmpeg.c cli :P
[18:33:23 CET] <brimestone> Looks like -map 0:a errors out if the source input doesn't have audio..
[18:33:26 CET] <JEEB> yes
[18:33:39 CET] <JEEB> there's a way to say "map, if available" but normal map requires you to have the streams
[18:33:52 CET] <JEEB> https://www.ffmpeg.org/ffmpeg-all.html
[18:33:56 CET] <JEEB> might have been a question mark or something
[18:34:03 CET] <JEEB> ctrl+F'ing "-map" in that should bring something up
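[ed. note: it is indeed a question mark - a trailing "?" makes the mapping optional, e.g. (input name hypothetical):]
    ffmpeg -i input.mov -filter_complex "[0]scale=480:-1:flags=fast_bilinear[h264]" \
      -map "[h264]" -map 0:a? -c:a copy output.mp4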
[18:40:05 CET] <brimestone> Like an Optional in SwiftLang :) I like it!
[21:09:15 CET] <furq> so i'm reading about this google stadia thing, and
[21:09:17 CET] <furq> Stadia is perfectly playable and presentable here, but it's clear that there is a noticeable visual hit when the encoder - which is a bespoke Google creation, and not a part of the AMD GPU - is presented with a more detail-rich, fast-moving scene.
[21:09:24 CET] <furq> anyone have any speculation as to what this "bespoke Google creation" is
[21:10:05 CET] <JEEB> something something gaikai 2010-2011 something something points at the clouds
[21:10:24 CET] <furq> i'm going to guess it's some kind of vp9 asic, but who knows
[21:10:39 CET] <furq> also yeah obviously the console will fail for the same reason all the others did
[21:11:05 CET] <JEEB> gaikai actually got bought proper and not "we just want the patents please"
[21:11:14 CET] <JEEB> (which is what happened with onlive three years later)
[21:11:18 CET] <furq> yeah onlive is ps live now
[21:11:21 CET] <JEEB> no
[21:11:33 CET] <furq> not the service but sony acquired the company and then launched ps live
[21:11:35 CET] <JEEB> gaikai is what became ps live and onlive just had its patents bought
[21:11:39 CET] <furq> oh really
[21:11:41 CET] <JEEB> yes
[21:11:47 CET] <JEEB> I just learned of it as well and laughed out loud
[21:11:58 CET] <JEEB> because onlive was the one that was marketing stuff while gaikai did tech
[21:12:10 CET] <furq> one of my friends went to some gamers conference thing where they launched onlive in the uk
[21:12:11 CET] <JEEB> sony bought gaikai in 2012 (as the whole team)
[21:12:29 CET] <furq> he had been dragged along by his friend and his friend got a free demo console and then immediately went and paid full MSRP for three more
[21:12:48 CET] <JEEB> and according to wikipedia onlive saw its closure after its patents got bought by sony in 2015
[21:12:54 CET] <furq> meanwhile my friend got his free one home and found he couldn't change the region from US-West so he was getting 300ms input lag on everything
[21:13:28 CET] <JEEB> gaikai also seemed to have a much more realistic business position. they were doing promotions and time limited demos
[21:13:45 CET] <JEEB> instead of trying to sell/rent games over video to people
[21:14:19 CET] <JEEB> anyways, gaikai is why we have intra refresh and reference frame invalidation in x264
[21:14:37 CET] <furq> anyway this article claims it's 25mbit 1080p30 (no 60fps support at all) and it still looks blocky in complex scenes
[21:14:48 CET] <shibboleth> isn't this like doing "web-based MS Office replacement"
[21:15:11 CET] <shibboleth> doing heavy stuff in js in browser will be slower than native code
[21:15:11 CET] <furq> no because google docs doesn't become unusable due to input lag
[21:15:34 CET] <furq> or stream your document to you as a video with a too-low bitrate
[21:15:39 CET] <shibboleth> as will gaming "by vnc/rdp/whatever"
[21:15:44 CET] <tdr> JEEB, i met with them on-site about a year before they were acquired.  some very talented folks there.
[21:16:02 CET] <furq> the code execution has nothing to do with it, it's just running native code on a dedicated remote machine
[21:16:04 CET] <JEEB> :)
[21:16:12 CET] <furq> the problem is getting the video output to you in a reasonable time at a reasonable quality
[21:16:18 CET] <furq> that's what killed all the other companies that tried this
[21:16:37 CET] <furq> none of them had anything like google's infra or resources, of course
[21:17:03 CET] <furq> but the next problem is home internet connections, which still aren't good enough for a lot of people
[21:17:10 CET] <shibboleth> wifi
[21:17:13 CET] <furq> that as well
[21:17:28 CET] <furq> anyway i got sidetracked here, i just wanted to talk about google's bespoke video encoder
[21:18:08 CET] <JEEB> no idea what it is, but it's most likely some fast enough implementation of a standard codec hopefully in low latency yet compressing mode
[21:18:13 CET] <JEEB> like periodic intra refresh
[21:18:27 CET] <furq> i assume it's an asic just because of the way DF phrased it
[21:18:42 CET] <furq> if it's a software encoder that isn't "we made libvpx not suck ass" then i'm going to be unhappy about it
[21:19:01 CET] <furq> and if it's an entirely new codec then wtf
[21:19:10 CET] <JEEB> unlikely a new codec
[21:19:28 CET] <JEEB> if they used VP9 they might have tried to hack something like intra refresh into it
[21:19:40 CET] <furq> it'll probably turn out to be a patched x264 or something
[21:19:45 CET] <JEEB> although realistically yes
[21:19:49 CET] <JEEB> x264 or so
[21:19:56 CET] <JEEB> because you need users to decode the video
[21:20:26 CET] <shibboleth> and somehow overcome the latency of wifi+dsl/docsis
[21:20:29 CET] <furq> yeah there's no actual box to decode it, it just goes straight to a device
[21:20:34 CET] <shibboleth> anyone seen "valley of the boom"?
[21:20:36 CET] <furq> so it probably has to be h264 just for compat
[21:21:07 CET] <furq> i sort of wonder if they made a bespoke asic for that
[21:21:25 CET] <furq> or if the guy giving this interview is just full of marketing shit
[21:22:23 CET] <furq> i don't suppose it matters, they're probably never going to release it
[21:28:23 CET] <Mavrik> Google kinda uses VP9 a lot these days tho
[21:28:52 CET] <Mavrik> Even on devices which have H.264 HW decoders and no VP9 decoder
[21:30:03 CET] <furq> yeah i only thought vp9 because they're so invested in it and doubtless they want to avoid fees
[21:30:14 CET] <furq> i just don't know if it's practical
[21:30:17 CET] <JEEB> anyways, we'll see when the bits start flying :P
[21:31:28 CET] <Mavrik> Did anyone manage to take a closer look when they were running the Project Stream preview?
[21:32:05 CET] <furq> all the hands-on stuff i've seen hasn't mentioned the codec
[21:32:26 CET] <furq> which makes me suspect it is h264, otherwise google would probably be bragging about it
[21:45:45 CET] <ElePHPhant> Can ffmpeg be used to remove all metadata from an OGG? I tried multiple commands from shitty web articles but none of them actually worked.
[21:46:16 CET] <ElePHPhant> The purpose is just for my own use, to hide the title from myself, in order to avoid spoilers.
[21:46:26 CET] <ElePHPhant> It's not to steal people's work, if that's what you suspect.
[21:47:55 CET] <M6HZ> Hello, I would like to know if it is possible to precisely cut a movie with -ss and -t without re-encoding. I always have an error of about 1 sec when I use them in conjunction with -c copy. Here is what the man page states for -ss: «in most formats it is not possible to seek exactly [...] When transcoding and -accurate_seek is enabled [...] this extra segment [...] will be decoded and discarded. When doing stream copy or when -noaccurate_seek is used, it will be preserved.»
[21:49:42 CET] <pink_mist> M6HZ: no, you need to re-encode to get exact cutting
[21:50:30 CET] <ElePHPhant> That re-encoding thing has destroyed so many video files...
[21:50:40 CET] <ElePHPhant> I wasn't even aware that video editors did this for the longest time.
[21:50:56 CET] <M6HZ> pink_mist: Alright, do you know precisely why this is not possible?
[21:50:58 CET] <ElePHPhant> I assumed that "obviously", if I just cut away parts, it would not re-encode anything, but simply remove those portions.
[21:51:56 CET] <pink_mist> M6HZ: because of the different types of frames; without re-encoding you can only cut on a certain type of frame, which only comes around occasionally. the subsequent frames rely on that frame as their basis
[21:52:07 CET] <pzich> M6HZ: basically, videos aren't just stored as a series of pictures, but a few pictures (key frames) and updates on those images
[21:52:31 CET] <pzich> M6HZ: so if you cut just after that key frame, there's nothing to base the next few images on.
[21:53:13 CET] <M6HZ> pink_mist, pzich : Alright, really interesting!
[21:53:18 CET] <ElePHPhant> ... except that it already has that data?
[21:53:37 CET] <pzich> well it did, but you're trying to cut it off!
[21:53:49 CET] <ElePHPhant> So re-encode those little segments?
[21:54:00 CET] <pink_mist> but then you're not copying
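To make the trade-off concrete, a minimal sketch (timestamps and filenames hypothetical):

    # stream copy: fast and lossless, but the cut snaps to the nearest keyframe
    ffmpeg -ss 00:01:23 -i in.mp4 -t 30 -c copy out_copy.mp4
    # re-encode: frame-accurate cut, at the cost of recompressing the video
    ffmpeg -ss 00:01:23 -i in.mp4 -t 30 -c:v libx264 -c:a aac out_exact.mp4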
[21:54:12 CET] <ElePHPhant> Any idea about my own question?
[21:54:31 CET] <pink_mist> I have no clue about ogg
[21:54:55 CET] <pzich> yeah I've googled around to remove this kind of stuff off of MKVs or MP4s and it worked, but I don't work with OGG much
[21:57:25 CET] <furq> ElePHPhant: did you try -map_metadata -1
[21:58:21 CET] <furq> also there are tools which will only reencode the gop you need for exact cutting, but ffmpeg isn't one of them
[21:58:48 CET] <furq> and by "there are tools" i mean people have posted their own github links in here and i don't remember what they are
[21:59:31 CET] <furq> you can somewhat do it yourself with ffmpeg with a bit of manual labour
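The manual-labour version is roughly: re-encode only the span from the cut point to the next keyframe, stream-copy the rest, and splice the pieces with the concat demuxer. A sketch, assuming a hypothetical cut point at 00:01:23 with the next keyframe at 00:01:26 (findable with ffprobe), and noting that the re-encoded head must end up with codec parameters matching the tail for the copy concat to be valid:

    # 1. re-encode just the head, from the cut point up to the next keyframe
    ffmpeg -ss 00:01:23 -i in.mp4 -t 3 -c:v libx264 -c:a aac head.mp4
    # 2. stream-copy everything from that keyframe onward
    ffmpeg -ss 00:01:26 -i in.mp4 -c copy tail.mp4
    # 3. splice the two pieces with the concat demuxer
    printf "file 'head.mp4'\nfile 'tail.mp4'\n" > list.txt
    ffmpeg -f concat -safe 0 -i list.txt -c copy out.mp4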
[22:02:59 CET] <ElePHPhant> furq: ?
[22:03:08 CET] <ElePHPhant> What's the command?
[22:03:17 CET] <furq> -i foo.ogg -map_metadata -1 -c copy bar.ogg
[22:21:20 CET] <ElePHPhant> furq: I shall try it now.
[22:23:12 CET] <ElePHPhant> furq: Ah. Works. Thanks.
[22:23:50 CET] <ElePHPhant> I just wish it were more straight-forward, like: -i foo.ogg --remove-all-metadata -o bar.ogg
[22:23:56 CET] <ElePHPhant> But now that I know the command, I can use it.
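For reference, the full invocation, plus a variant for containers that keep per-stream tags (OGG stores Vorbis comments per stream; that the :s output specifier blanks all streams is an assumption based on the -map_metadata documentation):

    # blank the global metadata, copying the streams untouched
    ffmpeg -i foo.ogg -map_metadata -1 -c copy bar.ogg
    # also blank per-stream metadata, where the tags may actually live
    ffmpeg -i foo.ogg -map_metadata -1 -map_metadata:s -1 -c copy bar.ogg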
[22:55:56 CET] <net|> http://www.netpipe.ca/paste/paste.php?id=25
[22:58:13 CET] <Hello71> no.
[22:59:57 CET] <faLUCE> kepstin (and others): regarding the past discussion, I'm seeing that the formats which have a .write_header function do output the header into the muxer's avio context. Then, given that the avio context can be flushed (avio_flush()), the header of the muxer can be obtained without hacks....
[23:03:29 CET] <ElePHPhant> After all these years of dealing with video files and crap, I still have no clue what "mux" or "muxing" means.
[23:04:27 CET] <JEEB> think of it like this
[23:04:41 CET] <JEEB> 1. you have the I/O layer (reading writing, be it files or network)
[23:04:57 CET] <JEEB> 2. you have the DEmuxing (demultiplexing) phase
[23:05:16 CET] <JEEB> (reads the data coming from I/O and puts the stuff into streams and packets)
[23:05:48 CET] <JEEB> 3. you have the decoding (decompression of compressed packets into raw frames and everything that comes with it)
[23:06:02 CET] <JEEB> 4. you have possibly filtering (frames to filtered frames)
[23:06:19 CET] <JEEB> 5. you have encoding (compression of frames into packets with compressed data)
[23:06:39 CET] <JEEB> 6. you have muxing (multiplexing), which puts packets into streams and into some sort of format
[23:06:52 CET] <JEEB> 7. you have I/O layer again
[23:07:14 CET] <JEEB> so from just a bunch of bytes to raw audio/video/subtitle frames and back
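In command-line terms the stages map onto a single invocation roughly like this (hypothetical filenames):

    # 1-3: open in.mkv, demux it into packets, decode packets to raw frames
    # 4:   filter the frames (here: scale)
    # 5-7: encode the frames, mux the packets into MP4, write out.mp4
    ffmpeg -i in.mkv -vf scale=1280:720 -c:v libx264 -c:a aac out.mp4
    # with stream copy, the decode/filter/encode stages in the middle are skipped:
    ffmpeg -i in.mkv -c copy out.mp4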
[23:07:43 CET] <faLUCE> JEEB: I know that
[23:08:11 CET] <JEEB> yup, and ElePHPhant just noted that he had no clue about what mux or muxing is :P
[23:08:26 CET] <faLUCE> JEEB: ah, the answer was for ElePHPhant
[23:08:28 CET] <faLUCE> :-)
[23:09:22 CET] <faLUCE> IMHO the "mux" term is much clearer than "encoding"/"multiplexing" etc.
[23:09:48 CET] <faLUCE> in fact, the muxers (IMHO) should be called, for example,  mpegtsmux/demux.c
[23:09:53 CET] <faLUCE> not enc/dec
[00:00:00 CET] --- Wed Mar 20 2019

