[Ffmpeg-devel-irc] ffmpeg.log.20190221

burek burek021 at gmail.com
Fri Feb 22 03:05:01 EET 2019


[02:04:33 CET] <faLUCE> Hello, does  av_write_trailer() return immediately (at least for MPEGTS and MATROSKA) or does it have to calculate indexes?
[02:05:34 CET] <faLUCE> (I mean: if the recorded file is big, does av_write_trailer() require some time in order to finalize?
[02:05:37 CET] <faLUCE> )
[02:18:54 CET] <DHE> mpegts doesn't have an index and usually finishes immediately. other file formats will vary
[02:20:57 CET] <faLUCE> DHE: what about matroxa ?
[02:21:08 CET] <faLUCE> matroska
[02:25:07 CET] <DHE> can't answer what I don't know
[02:25:16 CET] <faLUCE> ok
[04:23:03 CET] <kepstin> matroska does have to write out an index at the end, yeah
[04:23:24 CET] <kepstin> should be pretty quick, since i think it builds the index as it goes, it's just a write out at the end.
[04:24:11 CET] <kepstin> the slowest format is mp4 with "-movflags faststart", which rewrites the entire file when closing it.
[04:56:09 CET] <Hello71> what happens if you do -moov_size -movflags faststart
[05:19:31 CET] <kepstin> i'm actually not sure what happens when both options are used. with just moov_size it reserves some space at the start of the file, and if the moov fits there it will seek back and write it at the start instead of the end.
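The trade-off being discussed can be sketched as two commands. This is a hedged sketch, not something from the log: input.mp4 is a hypothetical existing file, and the reserve size passed to -moov_size is a guess (both -movflags +faststart and -moov_size are mov/mp4 muxer options):

```shell
# Rewrite-on-close: the muxer writes the whole file, then relocates the
# moov atom to the front when the file is finalized (slow for big files).
ffmpeg -i input.mp4 -c copy -movflags +faststart fast.mp4

# Reserve-space variant: pre-allocate room for the moov at the start;
# if the index fits, the muxer seeks back instead of rewriting the file.
ffmpeg -i input.mp4 -c copy -moov_size 1000000 reserved.mp4
```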
[12:11:36 CET] <ossifrage> Any ideas how to get the minimal latency out of ffplay when playing an h.264 elementary stream
[12:12:18 CET] <ossifrage> This: .... | ffplay -f h264 -probesize 32 -sync ext -flags low_delay -framedrop -       seems to accumulate latency
[13:27:34 CET] <faLUCE> kepstin: thanks. I use a single thread, then I have to write the trailer quickly
[14:29:26 CET] <fling> Is it a bad idea to use ffmpeg on numa hardware?
[14:29:53 CET] <fling> I'm getting really bad artifacts when capturing h264 from a v4l device on kgpe-d16.
[14:30:03 CET] <fling> Can't I fix this easily? Should I run ffmpeg in qemu instead?
[14:33:30 CET] <iive> fling, ffmpeg is mostly single threaded, with some codecs and filters using more threads
[14:34:21 CET] <iive> it is not very plausible that the numa would be causing problems.
[14:34:55 CET] <DHE> how is ffmpeg in qemu going to perform BETTER than bare metal?
[14:36:00 CET] Last message repeated 1 time(s).
[14:36:05 CET] <DHE> running ffmpeg on a single numa node may increase performance a bit. numactl can help here. but I'm not expecting it to be the difference between life and death
[14:36:40 CET] <fling> DHE: I'm reading this -> https://www.coreboot.org/Board:asus/kgpe-d16#MCM.2FNUMA_notes_-_Read_if_you_play_video_games
[14:36:45 CET] <fling> DHE: >> The correct way to do this is to create a VM with properly pinned CPU's including iothread/emulator with all of the RAM on one node which is the same one that your interrupts for assigned devices such as graphics usb etc are being processed on.
[14:36:58 CET] <fling> Looks like there is some numa magic…
[14:38:04 CET] <DHE> numactl --hardware   # and view the topology
[14:38:16 CET] <iive> it talks about speed
[14:38:41 CET] <iive> getting corruption means you are having major flaw.
[14:38:43 CET] <DHE> which COULD be a problem for realtime encoding, but usually that's pretty easy to spot in ffmpeg
[14:39:00 CET] <iive> fling, what encoder are you using?
[14:39:46 CET] <DHE> PS1="(node0)$PS1" numactl --physcpubind=0 --preferred=0 $SHELL
[14:39:55 CET] <fling> iive: ffplay -f v4l2 -input_format h264 -video_size 1920x1080 -framerate 30 -i /dev/video0
[14:40:05 CET] <DHE> or something like that to get a node-0 shell. note that exceeding the memory capacity of node-0 will spill into node 1, etc
[14:41:18 CET] <iive> fling, so you're not using any encoder, you get the h264 stream from the device?
[14:41:32 CET] <iive> does it play correctly if you store it as file first?
[14:43:21 CET] <fling> DHE: https://paste.pound-python.org/show/c7z5P9bXGVBCimPn2Ad0/
[14:43:36 CET] <fling> iive: yes, will try to put it in a file…
[14:44:15 CET] <DHE> Ohhh... 4 nodes...
[14:44:43 CET] <DHE> only 8 cores/threads? is this a threadripper 1950x?
[14:45:34 CET] <fling> DHE: kgpe-d16 :D
[14:46:38 CET] <fling> iive: saving to a file now and watching it with mpv
[14:46:55 CET] <fling> iive: it is sometimes needed to wait for a minute or two for the artifacts to appear
[14:48:00 CET] <DHE> costs aren't as bad as I thought they'd be. my dual xeons are usually 11 and 21, you have 10 and 16
[14:49:35 CET] <fling> this does not help btw -> `numactl --physcpubind=0 --preferred=0`
[14:50:00 CET] <fling> iive: I see no artifacts playing from the file for some reason
[14:50:20 CET] <fling> iive: is it a bug in ffplay?
[14:50:53 CET] <iive> donno, try playing the file with ffplay
[14:51:30 CET] <fling> I will start new recording and will put ffplay and mpv in parallel
[14:51:36 CET] <fling> I think it is just luck :P
[14:53:37 CET] <iive> is it cable/tv capture or hardware encoder on camera on analog input?
[14:53:53 CET] <iive> the thing you get the stream from?
[14:54:27 CET] <fling> iive: c920 camera
[14:56:27 CET] <fling> Ok, looks like I can't reproduce with a file.
[14:56:37 CET] <fling> Will try a pipe now :P
[14:58:31 CET] <fling> iive: reproduced piping to ffplay
[15:01:02 CET] <fling> Can't reproduce with mpv reading from pipe.
[15:02:25 CET] <fling> Ok, visible with mpv too but less severe for some reason.
[15:02:40 CET] <fling> So should be numa related then as happens only on this box.
[15:02:50 CET] <fling> DHE: costs?
[15:04:31 CET] <amosbird> hi
[15:04:34 CET] <DHE> fling: the values in that 4x4 table
[15:04:44 CET] <amosbird> I'm trying to use ffmpeg to build a delayed screenshotter
[15:04:45 CET] <amosbird> ffmpeg -f x11grab -show_region 1 -s "$W"x"$H" -i $ffmpeg_display+$X,$Y -framerate 1 -vframes 1 -f singlejpeg - | xclip -selection clipboard -t image/jpeg
[15:05:13 CET] <amosbird> how can I make ffmpeg hold the region for given seconds and only takes the last frame as the singlejpeg output?
[15:05:44 CET] <amosbird> can I also make sure it doesn't consume any cpu cycles in the delay period?
[15:06:36 CET] <fling> amosbird: use -ss "$delay_seconds" before -i
[15:07:09 CET] <amosbird> fling: thanks
[15:07:20 CET] <amosbird> does that work as I expected?
[15:07:25 CET] <fling> amosbird: or sleep $delay_seconds ; ffmpeg…
[15:07:29 CET] <fling> amosbird: I don't know.
[15:07:32 CET] <amosbird> sleep won't work
[15:07:37 CET] <amosbird> as it doesn't hold the region
[15:07:44 CET] <amosbird> or else I don't need ffmpeg
[15:07:59 CET] <fling> I don't understand the hold thing
[15:08:30 CET] <amosbird> yeah, it works as expected
[15:08:32 CET] <amosbird> thank you fling
[15:08:44 CET] <fling> amosbird: yw! what is hold for anyway?
[15:08:56 CET] <amosbird> holding the area border
[15:09:04 CET] <amosbird> -show_region 1
[15:09:19 CET] <amosbird> more like showing
[15:09:22 CET] <fling> I need to try this myself…
[15:09:44 CET] <amosbird> it consumes 0.23 cpu
[15:09:50 CET] <amosbird> acceptable
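amosbird's working result can be sketched as one command. This is an assumed reconstruction: W, H, X, Y and $ffmpeg_display are taken to be set as in the original one-liner, the delay value is hypothetical, and -framerate is moved before -i where it acts as an x11grab input option:

```shell
# Show the region border for $delay seconds, then grab the final frame.
# -ss before -i discards captured frames (while still drawing the region
# outline) until the delay has elapsed; -vframes 1 keeps one frame.
delay=5
ffmpeg -f x11grab -show_region 1 -framerate 1 -s "${W}x${H}" \
       -ss "$delay" -i "$ffmpeg_display+$X,$Y" \
       -vframes 1 -f singlejpeg - |
  xclip -selection clipboard -t image/jpeg
```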
[15:10:01 CET] <fling> I don't see it in manpage
[15:11:41 CET] <amosbird> it would be nice that ffmpeg can show a count down of delayed seconds ...
[15:12:37 CET] <fling> amosbird: you could add a second audio stream with countdown as input and your speakers as output.
[15:12:49 CET] <amosbird> fling: I don't follow
[15:12:59 CET] <amosbird> audio...
[15:13:01 CET] <amosbird> ok..
[15:13:13 CET] <amosbird> visual count down is way better
[15:15:00 CET] <fling> amosbird: you could add a second video stream with countdown as input and a pipe to mpv as output :P
[15:18:22 CET] <fling> iive: ok pipe helps and I'm getting less artifacts
[15:18:35 CET] <fling> iive: could be related to the slow gpu!
[15:19:06 CET] <iive> gpu?
[15:20:01 CET] <iive> most likely reason for artifacts is if packets from the input stream are lost or dropped
[15:20:09 CET] <iive> try without the framerate thing.
[15:21:18 CET] <amosbird> fling: could you show me how to achieve that?
[15:22:55 CET] <fling> amosbird: ffmpeg -i countdown.nut -c copy -f nut - | mpv -- -
[15:23:05 CET] <fling> amosbird: mix it with your ffmpeg command
[15:23:15 CET] <amosbird> hmm,
[15:23:23 CET] <amosbird> how can I mix that into the region of my command?
[15:23:34 CET] <fling> amosbird: https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs
[15:23:48 CET] <amosbird> countdown.nut: No such file or directory
[15:23:54 CET] <amosbird> I thought that's a thing....
[15:24:35 CET] <fling> amosbird: you need to record a countdown video.
[15:27:17 CET] <fling> iive: x11 performance is really bad
[15:27:24 CET] <amosbird> this doesn't work as expected
[15:27:28 CET] <fling> iive: I can't get artifacts when running ffplay in a small window
[15:27:45 CET] <amosbird> in fact I need a popup count down window
[15:27:53 CET] <amosbird> do you know anything similar to that?
[15:27:55 CET] <iive> fling, that's really strange
[15:28:23 CET] <fling> iive: slow xorg slows down cpus
[15:28:29 CET] <iive> what the artifacts look like ?
[15:28:42 CET] <fling> like missing parts in h264
[15:29:36 CET] <iive> do you have xvideo working? `xvinfo` should display a lot of info (don't need details)
[15:30:39 CET] <fling> X-Video Extension version 2.2 \n screen #0 \n  no adaptors present
[15:31:18 CET] <iive> so you have no hardware acceleration on the video card.
[15:31:35 CET] <fling> Yes.
[15:31:42 CET] <iive> what is the gpu?
[15:32:12 CET] <fling> iive: VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 10)
[15:33:29 CET] <fling> I see no artifacts with ffplay running at 1/6 of screen size! And not because my eyes are bad :D
[15:35:23 CET] <fling> DHE: ^ looks like I figured it out. I hope this slow X is not causing any issues with my storage :P
[15:35:31 CET] <fling> I need to stop using this gpu probably.
[15:36:06 CET] <iive> don't worry, you are not using it :|
[15:36:14 CET] <DHE> that's not a GPU. that's a VGA port that the IPMI controller can hijack as well
[15:36:27 CET] <fling> There is also this one VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 760] (rev a1)
[15:36:41 CET] <DHE> that one is at least presentable, if slightly old
[15:36:48 CET] <fling> but it has no d-sub port, I need another monitor…
[15:37:04 CET] <DHE> called it
[15:40:15 CET] <fling> Why is it called aspeed? haha
[15:40:37 CET] <iive> i think in latin "a" means "anti"
[15:40:42 CET] <fling> I'm only getting ~2 fps with full screen mpv
[15:40:52 CET] <fling> oh anti speed, sounds right.
[15:40:59 CET] <iive> :)
[15:41:30 CET] <iive> but honestly, s3virge had video overlay...
[15:42:00 CET] <fling> …or I could install s3virge!!
[15:42:20 CET] <iive> make sure you have 4MB :D
[15:42:29 CET] <fling> No, I could not, the pike slot prevents me from using a single pci slot on this board, need a riser.
[15:42:59 CET] <fling> Time to buy a better screen anyway.
[15:43:11 CET] <iive> 1920*1080*1,5 ~= 3MB
[15:43:51 CET] <iive> the slowdown probably comes from swscale conversion of yuv to rgb colorspace
[15:44:13 CET] <iive> it might be done by a single thread and become a bottleneck, causing dropped packets
[15:54:47 CET] <fling> iive: also this -> `ffmpeg -f v4l2 -input_format h264 -video_size 1920x1080 -framerate 30 -i /dev/video0 -c copy -f nut /tmp/test.nut` gives me 15 fps on kgpe-d16 and 24 fps on x200
[15:55:06 CET] <fling> I should compare ffmpeg versions
[15:56:00 CET] <iive> maybe move input parameters before -i ?
[15:56:13 CET] <iive> oh, they are
[15:56:30 CET] <iive> this should literally do no processing at all.
[15:56:41 CET] <fling> kgpe-d16 one has libv4l disabled
[15:56:44 CET] <fling> need to enable it
[15:57:02 CET] <iive> isn't the cam usb ?
[15:57:12 CET] <iive> doesn't the cam do the encoding?
[15:57:29 CET] <fling> iive: it does.
[15:57:54 CET] <iive> then it should just pass the encoded bitstream through the usb and that's it.
[15:58:10 CET] <iive> even P2 should be able to handle it.
[15:58:10 CET] <fling> iive: but ffmpeg sends it the parameters first
[15:58:32 CET] <fling> This one is not a performance issue.
[15:58:45 CET] <fling> It is the format issue. I'm getting wrong fps…
[15:58:52 CET] <iive> oh
[15:58:53 CET] <iive> ok
[15:59:12 CET] <fling> probably because of disabled v4l, because on another box it runs better ;P
[16:00:39 CET] <fling> iive: v4l2-ctl says the device is capable of 30fps btw
[16:00:57 CET] <fling> 24 is good too but still
[16:02:02 CET] <fling> rebuilding did not help…
[16:34:41 CET] <AutismeGutten> i have a bunch of video files with the naming scheme: IMG_<number>.MOV. I want to resize and rename these files. The command ffmpeg -i IMG_<number>.MOV -s 1280x720 output.mkv works for individual files but i want to mass convert them with the naming scheme <number>.mkv for instance
[16:35:26 CET] <kepstin> AutismeGutten: write a shell script that runs ffmpeg multiple times in a loop. ffmpeg has no built-in batch operation support.
[16:35:56 CET] <AutismeGutten> alright thanks
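The loop kepstin suggests can be sketched as follows (filenames per the stated IMG_<number>.MOV scheme; the renaming relies only on shell parameter expansion):

```shell
# Convert every IMG_<number>.MOV in the current directory to <number>.mkv,
# resized to 1280x720, reusing the single-file command from above.
for f in IMG_*.MOV; do
  [ -e "$f" ] || continue   # skip the literal glob when nothing matches
  n=${f#IMG_}               # IMG_0042.MOV -> 0042.MOV
  n=${n%.MOV}.mkv           # 0042.MOV -> 0042.mkv
  ffmpeg -i "$f" -s 1280x720 "$n"
done
```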
[16:38:54 CET] <sparrowsword> how on earth do i get this line of code into python? "ffmpeg -f dshow -i video="Virtual-Camera" -preset ultrafast -vcodec libx264 -tune zerolatency -b 900k -f mpegts udp://10.0.0.193:8081"
[16:40:04 CET] <kepstin> sparrowsword: https://docs.python.org/3/library/subprocess.html
[16:40:43 CET] <sparrowsword> kepstin: right, but i need my python code to interact with the video/stream
[16:40:54 CET] <kepstin> what do you mean by "interact with"?
[16:40:59 CET] <sparrowsword> kepstin: im trying to use my webcam for a local stream
[16:41:53 CET] <sparrowsword> kepstin: the goal is to use image recognition on my local stream, and my image classifier is in python
[16:42:45 CET] <kepstin> sparrowsword: ok, so what you want to do is use ffmpeg as a way to read from the webcam and provide raw, decoded image frames to python?
[16:43:11 CET] <sparrowsword> kepstin: yes, because the latency is basically none
[16:43:38 CET] <kepstin> in that case, you'd probably want to build an ffmpeg cli that outputs raw video in your desired pixel format to stdout, and then run that command with Popen so you can read the frames into python.
[16:44:17 CET] <sparrowsword> kepstin: like the line above?
[16:44:41 CET] <kepstin> the example you gave streams video to an external server or application, that's quite different
[16:45:35 CET] <sparrowsword> hmm ok
[16:45:36 CET] <kepstin> something like "ffmpeg -f dshow -i video="Virtual-Camera" -pix_fmt yuv420p -f rawvideo -" is the command you want (don't run that in a terminal, that command outputs raw binary frame data to stdout)
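For the Popen reader on the Python side, the read size per frame follows from the pixel format. A sketch of the arithmetic (1920x1080 is an assumed capture size, not something stated in the log):

```shell
# yuv420p stores 1.5 bytes per pixel: a full-size Y plane plus
# quarter-size U and V planes.
W=1920 H=1080
FRAME_BYTES=$(( W * H * 3 / 2 ))
echo "$FRAME_BYTES"   # 3110400 for 1920x1080

# The consumer then reads ffmpeg's stdout in FRAME_BYTES-sized chunks,
# one chunk per decoded frame, from the rawvideo command above.
```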
[16:47:07 CET] <fling> iive: the problem was in camera controls getting saved in camera itself
[16:47:20 CET] <fling> iive: solved with v4l2-ctl
[16:47:29 CET] <iive> \o/
[16:49:40 CET] <sparrowsword> kepstin: so i dont actually have a webcam on my computer... i do however have one on my phone, and i am using ip-webcam to locally stream, in this case do i use the line i posted?
[16:54:40 CET] <kepstin> sparrowsword: replace the input stuff on that command line with whatever you need to get the video input... i don't know what you need.
[16:54:55 CET] <sparrowsword> kepstin: nvm.. i think i remember... forgot to use requests...
[16:55:01 CET] <sparrowsword> not a ffmpeg issue
[17:35:30 CET] <sparrowsword> has anyone used ipwebcam with ffmpeg?
[18:39:30 CET] <another> hmm.. anyone knows why ffv1 in mkv is muxed with codec id V_MS/VFW/FOURCC instead of V_FFV1 ?
[19:19:07 CET] <amosbird> hmm, does ffmpeg provide a countdown clock to show the starting ?
[19:19:11 CET] <amosbird> -ss
[19:27:05 CET] <amosbird> can I make ffmpeg -f x11grab  grab keyboard to quit early?
[19:41:30 CET] <kepstin> another: as far as I know, matroska doesn't define a native method for muxing ffv1? so it's done via the fourcc compat.
[19:41:51 CET] <JEEB> huh?
[19:41:58 CET] <JEEB> pretty sure there was a native ID
[19:42:33 CET] <JEEB> libavformat/matroska.c:    {"V_FFV1"           , AV_CODEC_ID_FFV1},
[19:43:06 CET] <kepstin> if there is, it's not on https://www.matroska.org/technical/specs/codecid/index.html or http://haali.su/mkv/codecs.pdf and I haven't been able to find any specs on how to do the muxing :/
[19:43:28 CET] <kepstin> then again, those are also missing opus
[19:43:32 CET] <JEEB> they're standardizing FFV1 and FLAC in matroska for archival so probably cellar?
[19:43:33 CET] <kepstin> so they're obviously out of date
[19:43:49 CET] <JEEB> and yes, the listing on matroska.org definitely is lacking and the haali one was never updated
[19:43:57 CET] <JEEB> also the HEVC one is only the matroska mailing list I think
[19:45:20 CET] <another> found the commit responsible: https://github.com/FFmpeg/FFmpeg/commit/9ae762da7e256aa4d3b645c614fcd1959e1cbb8d
[19:46:38 CET] <another> associated ticket: https://trac.ffmpeg.org/ticket/6206
[19:46:55 CET] <kepstin> ah, thanks for the pointer to cellar. There it is: https://tools.ietf.org/html/draft-ietf-cellar-codec-01
[19:47:25 CET] <kepstin> that draft has ffv1, but is still missing opus, heh
[19:48:38 CET] <JEEB> another: ok, so nobody cared to implement the CodecPrivate for writing and thus ffv1 was special cased
[19:48:44 CET] <JEEB> :psyduck:
[21:11:12 CET] <ossifrage> Anyone know how to get the lowest latency playback out of h.264 elementary streams with ffplay? '-sync ext -flags low_delay' seems to accumulate latency (it grew to ~1minute)
[21:11:50 CET] <ossifrage> If I specify a frame rate, it starts out at ~1s but that seems to drift, just slowly
[21:14:54 CET] <ossifrage> Actually with '-framerate 30', the latency starts out at 416ms
[21:15:29 CET] <JEEB> the libraries definitely let you do low latency, but none of them are configured to be such by default
[21:15:36 CET] <JEEB> since low latency is a rather specific use case
[21:15:57 CET] <JEEB> basically you start with setting the video decoding threading to slice threads instead of frame threads first
[21:16:11 CET] <ossifrage> JEEB, I'm only using ffplay for playback, the video is coming from a hardware encoder
[21:16:21 CET] <JEEB> since you get a delay of THREAD_COUNT at the very minimum
[21:16:28 CET] <JEEB> (for frame threaded decoding)
[21:16:55 CET] <JEEB> with slice threads of course it's just slices of the same image being threaded
[21:17:10 CET] <ossifrage> The 416ms is acceptable, but it seems to slowly accumulate latency and the hardware encoder only produces 30fps when there is enough light
[21:17:13 CET] <JEEB> ossifrage: same goes for ffplay but I think that goes without saying :P
[21:17:34 CET] <JEEB> also if your camera is VFR you definitely want to output stuff with timestamps
[21:18:03 CET] <JEEB> if you want to try other things out, wm4 did implement this sort of stuff https://mpv.io/manual/master/#low-latency-playback
[21:18:07 CET] <ossifrage> I'm not so worried about the threading or decoding in slices, I just want to minimize all the bitstream level buffering
[21:18:26 CET] <JEEB> well threading with frame threads adds latency
[21:18:27 CET] <JEEB> :V
[21:18:28 CET] <ossifrage> JEEB, my next step is to use libavformat to add fmp4
[21:19:11 CET] <JEEB> for lavf you will have to set buffers etc
[21:19:48 CET] <JEEB> but I've never optimized my stuff for this so good luck and have fun. I just know it's possible.
[21:21:31 CET] <JEEB> but I recommend you make your own API client for your use case
[21:24:12 CET] <ossifrage> JEEB, thanks, I didn't know about mpv, I always just use ffplay when dealing with elementary streams
[21:26:10 CET] <ossifrage> I only just got the camera SOC to start spitting out frames early this morning, so having it playback with <1s latency is great
[21:26:36 CET] <ossifrage> I'm not putting any effort to lower the latency on the encode side, I want the best video quality I can get out of the encoder for the bits
[21:27:01 CET] <JEEB> if you're doing the encoding then I do recommend you put the stream into something that has timestamps
[21:27:15 CET] <JEEB> so the reading side doesn't have to guesswork them
[21:27:49 CET] <ossifrage> Yeah I have PTS in the metadata, but right now I'm just dumping out the elementary packs and none of the systems stuff
[21:29:02 CET] <ossifrage> The encoder tracks the PTS from the point it captures the images from the sensor, so I can calculate the delay inside the hardware
[21:31:01 CET] <ossifrage> Right now I'm just dumping the video to stdout and piping it over ssh to mpv
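The low-latency knobs scattered through this exchange, collected into one hedged sketch. camera_cmd stands in for whatever produces the raw H.264 stream (here it was piped over ssh); the ffplay flags come from the messages above, with -analyzeduration 0 and -fflags nobuffer added as commonly paired options, and --profile=low-latency is the mpv feature JEEB linked:

```shell
# ffplay: shrink input probing, avoid extra buffering, drop late frames.
camera_cmd | ffplay -f h264 -probesize 32 -analyzeduration 0 \
    -fflags nobuffer -flags low_delay -framedrop -sync ext -

# mpv: the built-in profile sets the equivalent demuxer/decoder options.
camera_cmd | mpv --profile=low-latency --untimed -
```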
[00:00:00 CET] --- Fri Feb 22 2019


More information about the Ffmpeg-devel-irc mailing list