[Ffmpeg-devel-irc] ffmpeg.log.20160914

burek burek021 at gmail.com
Thu Sep 15 03:05:01 EEST 2016


[00:46:35 CEST] <infinito> hi guys, I'm segmenting with the HLS segmenter, and videos on Android start, freeze, and then continue playing normally. I've tested everything but I can't find what's wrong
[00:52:36 CEST] <Mateon1> Hi, I'm having trouble with a couple of corrupted streams. While screen recording, my power cut out and the video had invalid headers, but I used a program to extract the raw streams from the broken mp4. I got an .aac and an .h264 file. Both play fine in ffplay, but VLC cannot display the video properly (and ffmpeg throws a bunch of errors while combining the streams).
[00:52:50 CEST] <Mateon1> "
[00:52:53 CEST] <Mateon1> Oops
[00:54:29 CEST] <Mateon1> "Error while decoding stream #0:0: Invalid data found while processing input", followed by various errors: Number of bands (24) exceeds limit (4) [and a few more, like (43) exceeds (21), ...]; decode_band_types: Input buffer exhausted before END element found, and "ms_present = 3 is reserved"
[00:54:54 CEST] <Mateon1> I'm sure I missed a few, but the end result is VLC (and pretty much anything else but ffplay) can't play the file properly
[00:55:06 CEST] <Mateon1> Is it possible to fix the video stream?
[00:56:14 CEST] <furq> what did you use to recover the streams
[00:56:36 CEST] <furq> also in future you should use flv or mpegts to avoid this sort of problem
[00:57:09 CEST] <Mateon1> I used a commandline tool, let me find it in my downloads
[00:57:32 CEST] <furq> i don't really have any suggestions other than trying other tools
[00:57:47 CEST] <Mateon1> I used this tool: http://slydiman.me/eng/mmedia/recover_mp4.htm
[00:57:53 CEST] <furq> http://vcg.isti.cnr.it/~ponchio/untrunc.php
[00:57:55 CEST] <furq> you could maybe try that
[00:58:12 CEST] <furq> those are the only two free tools i know of
[00:58:22 CEST] <Mateon1> Hm, thanks for the suggestion
[00:59:13 CEST] <furq> https://github.com/ponchio/untrunc
[00:59:16 CEST] <furq> apparently that's more up to date
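Before (or alongside) the repair tools, it can be worth remuxing the recovered elementary streams into a fresh container and letting ffmpeg write clean headers; a minimal sketch, assuming the recovered files are named video.h264 and audio.aac and the recording was 30 fps:

```shell
# Raw elementary streams carry no container timestamps, so -r tells ffmpeg
# what frame rate to assume for the raw H.264 input; -c copy remuxes without
# re-encoding. Filenames and the frame rate are placeholders.
ffmpeg -r 30 -i video.h264 -i audio.aac -c copy remuxed.mkv
```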
[00:59:20 CEST] <votz> If libopus can be included in ffmpeg, is it recommended to do so? How stable/featureful is ffmpeg's built-in opus decoder? Is it faster or slower than libopus?
[00:59:33 CEST] <furq> there was no builtin opus decoder last i checked
[00:59:41 CEST] <votz> furq: https://www.ffmpeg.org/ffmpeg-codecs.html#libopus
[00:59:51 CEST] <votz> ./configure --list-decoders also lists 'opus' and 'libopus'
[00:59:56 CEST] <furq> fun
[00:59:58 CEST] <furq> that must be new
[01:01:10 CEST] <votz> furq: Who should I point my questions at about the built-in opus decoder vs libopus?
[01:01:50 CEST] <furq> shrug
[01:01:56 CEST] <furq> just hope one of the devs in here sees it
[01:04:46 CEST] <Mateon1> furq: Unfortunately, I don't have a Linux machine handy, and I can't find libav in the MSYS repository
[01:05:00 CEST] <furq> it should work with the ffmpeg libs
[01:05:13 CEST] <Mateon1> I _could_ build it myself, but previous experiences doing linuxy things on Windows were quite painful
[01:06:47 CEST] <Mateon1> I'm not quite sure I have ffmpeg libraries, or just the binary
[01:14:23 CEST] <Mateon1> Can't compile on Windows due to lack of the <endian.h> header; doesn't look like I can fix it, unfortunately
[01:15:55 CEST] <TD-Linux> votz, I'm not sure which one is faster. you still need libopus for encoding
[01:16:20 CEST] <TD-Linux> also I think the built in encoder was missing a few features like fec
[01:16:23 CEST] <TD-Linux> *decoder
[01:18:22 CEST] <votz> TD-Linux: Huh. I'll benchmark the two and see which is faster
[01:18:35 CEST] <votz> Good to know about the built-in decoder's lack of some features, though. Thanks for that.
[10:22:19 CEST] <tontonth> hello
[10:25:05 CEST] <tontonth> is there a way to overlay the frame number with ffplay ?
[15:01:20 CEST] <mosb3rg> rtmp://127.0.0.1:1935/contained/atestfeed: Input/output error is what I'm seeing when I try to connect to the Nimble RTMP interface. What specifically does this error mean: that my syntax is outright wrong, or that the server connection isn't present?
[16:26:09 CEST] <bencoh> hey there ... has anyone tried https://github.com/toots/shine ?
[16:28:32 CEST] <bencoh> ah, looks like we already have libavcodec/libshine
[17:01:03 CEST] <IchGucksLive> hi, the video conversion from ogv to mp4 is so slow, I barely hit 25 frames per second
[17:01:06 CEST] <IchGucksLive> ffmpeg -threads 2 -i in.ogv -acodec libfdk_aac -ab 128k -vcodec libx264 -preset fast -crf 25 -r 25 out.mp4
[17:01:14 CEST] <IchGucksLive> is there a hint on my line
[17:01:38 CEST] <BtbN> 25 fps seems fine for 2 threads on fast.
[17:03:22 CEST] <IchGucksLive> what is the difference on slow
[17:03:32 CEST] <IchGucksLive> does it give me better quality
[17:03:49 CEST] <IchGucksLive> but then higher filesize
[17:03:51 CEST] <BtbN> it trades quality vs. used CPU time.
[17:04:01 CEST] <BtbN> or in crf/cqp mode, bitrate
[17:04:02 CEST] <IchGucksLive> ok
[17:04:26 CEST] <IchGucksLive> so i got to live with the timing
[17:04:57 CEST] <BtbN> why limit it to two threads?
[17:11:46 CEST] <IchGucksLive> only 2 in the PC
[17:11:58 CEST] <IchGucksLive> -threads 0 will use all
[17:12:28 CEST] <IchGucksLive> ok uploaded https://www.youtube.com/watch?v=PYQKyv9-GTc
[17:13:50 CEST] <furq> if this is just for youtube then you shouldn't convert it at all
[17:14:36 CEST] <IchGucksLive> the ogv is 10 times bigger
[17:14:44 CEST] <DHE> BtbN: wouldn't that be 2 threads for decode only?
[17:14:45 CEST] <IchGucksLive> and web traffic is limited
[17:15:03 CEST] <BtbN> I think threads is a global option.
[17:15:51 CEST] <IchGucksLive> the line gives an HD result in the best time, with the best quality and compression, as I found out
[17:16:21 CEST] <BtbN> well, lower the preset and you will get better quality
[17:16:22 CEST] <IchGucksLive> ok thanks
[17:16:36 CEST] <BtbN> or lower the crf value, and it will also look better
[17:16:39 CEST] <BtbN> but increase in size
[17:17:04 CEST] <IchGucksLive> i will check with preset slow and crf 22
[17:17:17 CEST] <IchGucksLive> i think 22 is standard
[17:18:24 CEST] <IchGucksLive> oh, that goes down to only 15fps
[17:20:32 CEST] <IchGucksLive> filesize for 3 min is 21MB instead of 12 before
[17:20:50 CEST] <IchGucksLive> i will stay with the line I presented
[17:20:56 CEST] <IchGucksLive> good to go
[17:21:05 CEST] <IchGucksLive> thanks for the advice
[18:26:12 CEST] <Gear360> I've been trying to generate PGM map files to use with FFmpeg RemapFilter and I'm having troubles. I'm trying to generate these for use with photos taken from a Samsung Gear 360. The Samsung Gear 360 camera takes the photos at a resolution of 7776x3888 pixels. Remap filter documentation here https://trac.ffmpeg.org/wiki/RemapFilter
[18:32:19 CEST] <Gear360> whenever I generate using "-x xmap_samsung_gear_7776x3888.pgm -y ymap_samsung_gear_7776x3888.pgm  -h 3888 -w 7776 -r 3888 -c 3888 -m front --verbose" the resulting pgm files dont map correctly when using them with ffmpeg
[18:36:10 CEST] <Gear360> Is it possible to get help here? I've even tried using the -c samsung_gear_360 flag, but I get a "Camera mode samsung_gear_360 not implemented" error, as the projection.c file provided on the site isn't complete. How likely would I be to get an email response from fsluiter if I sent him an email?
[18:46:35 CEST] <SouLShocK> furq cheers for the tip about nginx rtmp! that seems to be able to support my idea :)
[19:13:10 CEST] <mosb3rg> hey folks, so I'm running an until/do/done sh script, but I keep hitting situations where it still stops and I'm not around to restart it. I obviously need better error handling. How do I ensure that it always continues to loop, even when the feed comes to a natural end and doesn't report a failure?
[19:15:00 CEST] <mosb3rg> does anyone have a better idea or script that they are using, and wouldnt mind sharing :)
[19:16:43 CEST] <Gear360> not very responsive here
[19:17:50 CEST] <mosb3rg> ya i know but sometimes if i leave the irc open a while someone notices and shares some details.
[19:18:14 CEST] <Gear360> that's what I'm hoping for
[19:20:19 CEST] <DHE> well, you can wrap the whole thing in an infinite loop... while true; do until ... do ...    done ; sleep 60 ; done
[19:20:33 CEST] <DHE> or find out why it's dying. do you need nohup?
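DHE's wrap-it-in-a-loop idea can be sketched like this (the ffmpeg invocation is a placeholder, and the sketch bounds the restarts so it terminates; use `while true` for an unbounded supervisor):

```shell
# Supervisor sketch: re-run the capture command whenever it exits,
# sleeping between attempts.
restarts=0
max_restarts=3            # demo bound; use 'while true; do ...; done' to loop forever
while [ "$restarts" -lt "$max_restarts" ]; do
    false                 # placeholder for the real command, e.g. ffmpeg -i "$URL" -c copy out.flv
    echo "command exited with status $?; restarting" >&2
    restarts=$((restarts + 1))
    sleep 1               # use e.g. 60 in production
done
```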
[19:25:39 CEST] <mosb3rg> thanks for the response dhe you have been helpful lately
[19:26:29 CEST] <Gear360> Anybody familiar with generating pgm files for the remap filter?
[19:29:20 CEST] <vans163> is there a way to improve the quality of mpeg1?
[19:29:28 CEST] <vans163> output command is -f mpeg1video -vf "crop=iw-mod(iw\,2):ih-mod(ih\,2)" -b 0.  input is 24bit RGB pixels
[19:29:33 CEST] <vans163> the quality is terrible
[19:29:52 CEST] <DHE> vans163: specify a bitrate, otherwise you only get 1 meg by default. -b:v 1.5M
[19:30:06 CEST] <mosb3rg> dhe
[19:30:07 CEST] <mosb3rg> #!/bin/bash
[19:30:07 CEST] <mosb3rg> while true ; do until ; do ; done ; sleep 60 ; done
[19:30:25 CEST] <mosb3rg> and between true and ; for example would be the entire ffmpeg command
[19:30:35 CEST] <mosb3rg> and between until and ;
[19:30:38 CEST] <vans163> DHE: would it be the case that if I spec like 5M, it can run at 1-2M? but will cap at 5M?
[19:31:01 CEST] <mosb3rg> and do and ;  thats all then the rest remain just like that, and it should infinite loop ?
[19:31:20 CEST] <DHE> vans163: if you want caps, you'll have to use vbv mode. specify -minrate:v, -maxrate:v, and -bufsize:v. -b:v is traditionally equal to maxrate
[19:31:39 CEST] <vans163> DHE: ah ty, I will play with it
[19:31:40 CEST] <DHE> with -b:v alone you get a best effort target bitrate
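The two rate-control modes DHE describes look roughly like this (input file and numbers are illustrative):

```shell
# Best-effort average bitrate only: the encoder targets 1.5 Mbps overall
# but may exceed it locally.
ffmpeg -i in.mp4 -c:v mpeg1video -b:v 1.5M out1.mpg

# VBV-constrained: local bitrate is capped by maxrate, smoothed over a
# bufsize-sized window; b:v is traditionally set equal to maxrate.
ffmpeg -i in.mp4 -c:v mpeg1video -b:v 5M -maxrate:v 5M -bufsize:v 2M out2.mpg
```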
[19:32:16 CEST] <DHE> mpeg1, huh? going for video CD format or something?
[19:32:41 CEST] <purplex88> whats ffmpeg
[19:32:46 CEST] <purplex88> for
[19:33:15 CEST] <vans163> DHE: haha, im playing with low latency streaming, and there is a mpeg library for decoding realtime for JS
[19:34:03 CEST] <vans163> DHE: I was having trouble getting the x264 library to work with ffmpeg, so trying the mpeg one for now. I tried an LZ4 approach, where I just LZ4 the 24bit RGB values and draw them in chrome on a canvas. But the damn thing starts allocating 100MB/s and chrome slows to a crawl
[19:34:03 CEST] <DHE> vans163: okay... because if you're targetting a VCD or other specific hardware, you can simply run "-target vcd" to preset a bunch of options
[19:35:13 CEST] <DHE> really.. ffmpeg and x264 get along great.
[19:35:32 CEST] <vans163> DHE: I mean the JS Broadway.js frame by frame decoding library.  It wants a MP4 only
[19:35:45 CEST] <vans163> DHE: But FFMPEG cant write mp4 to a pipe or unix socket
[19:36:25 CEST] <furq> i thought you said fragmented mp4 was working
[19:37:06 CEST] <vans163> furq: Let me test again, but I recall after piping in over 100 frames it did not output anything
[19:37:24 CEST] <vans163> furq: Maybe I overlooked something and created the pipe wrong
[19:38:04 CEST] <furq> https://github.com/mbebenita/Broadway/wiki/Real-World-Uses
[19:38:07 CEST] <furq> beautiful
[19:38:15 CEST] <vans163> giving it -f ismv
[19:38:21 CEST] <vans163> instead of -f mp4
[19:38:31 CEST] <furq> ismv and fragmented mp4 are two different formats
[19:38:58 CEST] <vans163> I have noted.  <furq> `-f mp4 -movflags frag_keyframe+empty_moov` for fragmented mp4, `-f mpegts` for ts
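The fragmented-mp4 variant furq quoted would look something like this when writing to stdout (the input is a placeholder):

```shell
# Plain -f mp4 needs a seekable output (the moov atom is rewritten at the
# end), so piping it fails; frag_keyframe+empty_moov writes a streamable,
# fragmented mp4 instead.
ffmpeg -i input.h264 -c:v copy \
    -f mp4 -movflags frag_keyframe+empty_moov pipe:1 > out.mp4
```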
[19:39:17 CEST] <vans163> not sure what ts is
[19:39:23 CEST] <vans163> time series?
[19:39:27 CEST] <furq> transport stream
[19:39:51 CEST] <furq> "The decoder expects an .mp4 file"
[19:40:05 CEST] <furq> this makes me think that neither of those will work
[19:40:16 CEST] <furq> and also that this isn't very good
[19:40:28 CEST] <mosb3rg> also i keep running into this error which is causing one of the scripts to crash:
[19:40:30 CEST] <mosb3rg> [flv @ 0x3c68ae0] Failed to update header with correct duration.ate=4462.0kbits/s speed=0.999x
[19:40:30 CEST] <mosb3rg> [flv @ 0x3c68ae0] Failed to update header with correct filesize.
[19:41:01 CEST] <mosb3rg> is this because its droppin below 1x and causing it to fail ?
[19:42:01 CEST] <vans163> furq: Yea, I did a dev time assessment, LZ4 was quickest to implement, mpeg1 second quickest (doing now), and if the performance of that is not enough, then I guess will have to take 2 approaches
[19:42:25 CEST] <vans163> First would be to try Broadway. Second would be to write a standalone app, that would be able to give me full control of decoding
[19:43:06 CEST] <vans163> I really want it to work inbrowser though for easy accessibility
[19:43:29 CEST] <vans163> but if MPEG works inbrowser, that would be enough for me.  I want it easy to administer Virtual Machines
[19:43:45 CEST] <furq> i mean i don't want to encourage javascript video decoding, but
[19:43:45 CEST] <vans163> So mpeg, low quality at 10-20fps is fine for my use case to work inbrowser
[19:43:48 CEST] <furq> https://github.com/brion/ogv.js
[19:43:55 CEST] <furq> that seems like it might work for streaming
[19:45:39 CEST] <furq> if this is for screen capturing then mpeg1video is probably useless because it only does yuv420p
[19:45:48 CEST] <furq> theora at least supports 4:4:4
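For screen content where subsampled chroma ruins text, keeping 4:4:4 with x264 looks like this (a sketch; the decoder must support the High 4:4:4 Predictive profile, which browsers generally do not):

```shell
# Keep full chroma resolution for text-heavy screen captures.
ffmpeg -i capture.mkv -c:v libx264 -pix_fmt yuv444p -crf 18 out444.mkv
```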
[19:46:36 CEST] <vans163> furq: yuv420p I think is fine, AFAIK its what Nvidias NvEnc uses to stream games to the Nvidia Shield
[19:46:42 CEST] <vans163> as well as Steams
[19:47:11 CEST] <furq> coloured text is pretty much unreadable in 4:2:0
[19:47:15 CEST] <vans163> dont quote me on that
[19:47:48 CEST] <furq> it's probably less noticeable on the shield because it's a small high-dpi screen
[19:48:41 CEST] <vans163> i have the pixel format, but its on a separate unplugged disk
[19:48:51 CEST] <vans163> need to boot off it to check what the code says
[19:49:27 CEST] <furq> you could just test capture and view it in a real player and see if it looks any good
[19:49:53 CEST] <vans163> https://developer.nvidia.com/nvidia-video-codec-sdk    HEVC 4:4:4 encoding *   So they say 4:4:4 for HEVC is new feature
[19:51:01 CEST] <vans163> "Capability to encode YUV 4:2:0 sequence and generate a H.264 bit stream."
[19:51:02 CEST] <phed> I think I am doing something very silly, so I have to be sure. Does anyone know the layout of AV_PIX_FMT_YUV422P10LE , is it padded at all?
[19:51:21 CEST] <phed> uhm ignore the AV_PIX_FMT - silly copy&paste
[19:51:24 CEST] <vans163> Higher end cards have "Capability to encode YUV 4:4:4 sequence and generate a H.264 bit stream"
[19:51:30 CEST] <vans163> sorry.. z.z
[19:53:19 CEST] <vans163> "Resolution/Format: 1920x1080/YUV4:2:0, 8 bit "
[19:53:42 CEST] <vans163> So they assume yuv420p is used for low latency streaming
[19:53:55 CEST] <vans163> http://developer.download.nvidia.com/designworks/video-codec-sdk/secure/7.0/NVENC_DA-06209-001_v08.pdf?autho=1473875719_94fd42ab28ad97bc9500ac4ebd7c1f02&file=NVENC_DA-06209-001_v08.pdf
[19:54:26 CEST] <ossifrage> what is the hex value in a line: [h264 @ 0x7f64347a3600] error while decoding MB 52 55, bytestream -5
[19:55:00 CEST] <ossifrage> Is it a PTS? A pointer to some internal buffer?
[19:55:33 CEST] <DHE> presentation timestamp
[19:55:46 CEST] <DHE> no, it's a memory pointer
[19:56:02 CEST] <ossifrage> DHE a PTS would be useful, a pointer not so much
[19:56:03 CEST] <DHE> if you have several encoders going, you can use that to differentiate their outputs
[19:56:30 CEST] <DHE> or if you're using the API, you can find out what piece of code is generating your logs
[19:57:10 CEST] <ossifrage> It isn't a per stream value, but it does look like it cycles around
[19:57:40 CEST] <DHE> it'll be otherwise random, but in a single ffmpeg session the same encoder, decoder, filter, muxer, or whatever will keep using the same value as it runs
[19:58:10 CEST] <DHE> so if you have 2 audio streams and keep getting audio decoder errors, but always the same hex code, you know it's only 1 stream damaged and not both
[19:58:15 CEST] <ossifrage> It was from ffplay, and it is one of 16 different values
[19:58:31 CEST] <ossifrage> I'm playing back a raw h.264 elementary stream
[19:58:41 CEST] <ossifrage> (from a fifo)
[20:02:09 CEST] <DHE> well, that simplifies tracking down the source of the errors
[20:32:41 CEST] <ossifrage> pwd
[20:33:13 CEST] <ossifrage> Doh, this window manager really is buggy.
[20:48:15 CEST] <usernameialwaysf> I try using "avcodec_send_packet" and "avcodec_receive_frame" but I got the error
[20:48:18 CEST] <usernameialwaysf> "No start code is found." "Error splitting the input into NAL units."  with my log message  "Error on decoding video" Invalid data found when processing input
[20:48:28 CEST] <usernameialwaysf> my source is at http://pastebin.com/uQy8hjz4
[20:49:11 CEST] <usernameialwaysf> but I have no idea how to fix it. Could you guy please help me?
[20:50:12 CEST] <vans163> Is there a command to pass ffmpeg to have it not output anything except whats going to pipe:1 (stdout)?
[20:50:17 CEST] <vans163> like no information or bitrate, etc
[20:50:34 CEST] <furq> -v quiet
[20:50:55 CEST] <furq> although you probably actually want -v fatal or -v error
[20:50:56 CEST] <vans163> i think iv finally got something working. stdin input and stdout output from inside erlang
[20:51:03 CEST] <vans163> no need to mess with unix sockets or pipes
[20:51:14 CEST] <vans163> furq: ty will test now
[22:23:21 CEST] <vans163> Is it just me or does mpeg1 delay a few frames when it gets input?
[22:23:56 CEST] <vans163> Using mpeg1 I seem to have like 100-200ms more latency than just LZ4ing the 24bit RGB pixels and drawing them on a JS canvas
[22:24:41 CEST] <vans163> gonna try setting the bitrate way up
[22:24:44 CEST] <durandal_170> how to reproduce it?
[22:25:36 CEST] <vans163> durandal_170: This is the command im using: ffmpeg -y -f image2pipe -vcodec ppm -framerate 30 -i pipe:0 -f mpeg1video -vf "crop=iw-mod(iw\,2):ih-mod(ih\,2)" -b 0 pipe:1
[22:25:57 CEST] <vans163> so basically PPM files get piped in, and mpeg1 gets piped out piece by piece as its rendered
[22:34:23 CEST] <vans163> it might be that the encoding time is much longer
[22:34:28 CEST] <vans163> than just lz4ing
[22:34:34 CEST] <vans163> and that is what im noticing
[22:34:35 CEST] <kepstin> hmm, mpeg1 doesn't have b-frames, so I wouldn't have expected any delay inherent to the codec
[22:34:46 CEST] <kepstin> the decoding time in js is probably a lot slower
[22:35:04 CEST] <vans163> lz4ing takes 5~ms for a 1280x1024x24bpp 3~MB file
[22:35:10 CEST] <kepstin> you might also be hitting buffering issues in the pipes, depends exactly how you're handling that
[22:35:40 CEST] <kepstin> (ffmpeg uses blocking reads and writes on both ends, and OS pipe buffers are generally pretty small)
[22:35:41 CEST] <vans163> kepstin: perhaps, its all going over localhost.  the lz4 was using similar cpu usage in the browser to the mpeg lib im using
[22:36:02 CEST] <vans163> kepstin: as soon as msg goes to stdout, I receive it. Im using erlang
[22:36:09 CEST] <vans163> so i get chunks
[22:36:27 CEST] <vans163> problem with LZ4 was, it was allocating bytes out of control
[22:36:36 CEST] <vans163> and chrome would lock up after 10 seconds, going out of memory
[22:36:49 CEST] <vans163> (the lz4 library in javascript)
[22:37:21 CEST] <kepstin> huh, that's kind of weird. I would have expected there to be an asm.js implementation of lz4 decoding, which uses fixed buffers.
[22:37:28 CEST] <vans163> kepstin: is there a way to get ffmpeg to tell you the encoding time it took for the frame? As I see its using 33% of 1 core at a constant 30fps stream.
[22:37:56 CEST] <vans163> kepstin: Maybe im doing something wrong but I am also taking the return of the LZ4 result and creating a new imageData with it to draw on canvas
[22:38:02 CEST] <vans163> maybe I need to optimize that part
[22:38:40 CEST] <kepstin> yeah, chrome has issues when you go through a lot of imagedatas. you want to re-use them to avoid it kicking in the GC too much
[22:40:41 CEST] <vans163> kepstin: Would you know perhaps how to fix up a small block of code in JS? 10-15 lines? It takes the ArrayBuffer, LZ4 decodes it, makes image data, draws to canvas
[22:40:46 CEST] <vans163> (i can gist it)
[22:41:14 CEST] <kepstin> vans163: hmm, I might have a couple minutes right now to take a look. no guarantees :)
[22:44:27 CEST] <kepstin> oh, right, I forgot that the ImageData() objects were read-only. Now I'm forgetting what the best way to handle this was; it's been a while since I've looked into it
[22:45:31 CEST] <kepstin> er, wait, no, the property is readonly but the array itself isn't iirc
[22:48:25 CEST] <kepstin> but yeah, you should basically first create a single imagedata with getImageData at the correct size, then in a loop [ set all the pixels in that imagedata, then call putImageData to draw the updated image on the canvas ]
[22:48:38 CEST] <vans163> kepstin: yea I think you can draw the actual pixels line by line, but im not sure if thats more expensive. I tried an approach iterating over the array by uint32's
[22:48:44 CEST] <vans163> sizeof(uint32)
[22:48:50 CEST] <vans163> and it took like 200ms
[22:49:34 CEST] <kepstin> ideally, you'd be decoding from the lz4 directly into the imagedata's Uint8ClampedArray
[22:50:47 CEST] <vans163> kepstin: https://gist.github.com/anonymous/33e0e7aaa18a49f375d355511882bd87
[22:51:12 CEST] <vans163> kepstin: there must be a way to reuse the image data, something leads me to believe each time you make a new one, itl realloc the mem
[22:52:29 CEST] <kepstin> yes. You want to decode the lz4 directly into the imageData.data (which is a writable Uint8ClampedArray)
[23:13:09 CEST] <vans163> kepstin: ahh let me try that
[23:23:44 CEST] <agrathwohl1> Don't people generally suggest not doing work like video processing on Erlang?
[23:24:25 CEST] <agrathwohl1> Erlang is pretty slow, much slower than C.
[23:24:31 CEST] <vans163> agrathwohl1: yea erlang would be terrible for video processing :P
[23:24:44 CEST] <vans163> ffmpeg is doing the video processing, which is written in C or C++, didn't check
[23:24:56 CEST] <vans163> and erlang can call C code as if native
[23:25:03 CEST] <agrathwohl1> Ah! I see. TIL
[23:25:49 CEST] <vans163> in my case im just spawning ffmpeg, but for some other things like converting 24bpp RGB to RGBA 32bpp, erlang takes 200ms!, but I use erlang to call a small C lib I wrote compiled with -O2 and it takes 3-4ms
[23:26:16 CEST] <vans163> so you kind of get the speed of C + the fault tolerance and concurrency of erlang
[23:29:24 CEST] <vans163> so encoding PPMs at 30 FPS to mpeg1 is taking about 33% of 1 of my 3.1 ghz cores. I wonder if theres a way to get this down to 10%?
[23:29:34 CEST] <vans163> perhaps change the input to raw?
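Switching the input to raw frames, as suggested, skips the per-frame PPM parsing; a sketch, assuming the 1280x1024 RGB24 geometry mentioned earlier:

```shell
# rawvideo input has no per-frame headers, so size, pixel format, and rate
# must all be specified on the command line.
ffmpeg -f rawvideo -pixel_format rgb24 -video_size 1280x1024 -framerate 30 \
    -i pipe:0 -f mpeg1video -b:v 2M pipe:1 > out.m1v
```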
[23:31:12 CEST] <Mavrik> get rid of mpeg1
[23:31:20 CEST] <Mavrik> It's utterly obsolete codec noone cares about.
[23:31:24 CEST] <Mavrik> And hence noone optimizes.
[23:31:45 CEST] <Mavrik> Use x264 with a veryfast profile.
[23:32:14 CEST] <vans163> Mavrik: I think its leading to that. Testing x264 is next on my list, its just more complex to find a client that can decode it in Javascript land
[23:32:25 CEST] <Mavrik> hmm?
[23:32:31 CEST] <Mavrik> H.264 is pretty much universally supported?
[23:32:41 CEST] <Mavrik> And hardware accelerated on all platforms?
[23:33:01 CEST] <vans163> I mean, to have minimal latency, no prebuffering.  Some of the hardware accel (like OpenMAX on the raspi) only works if the video can be buffered
[23:33:15 CEST] <vans163> non-bufferable x264 it cant hardware accel
[23:34:19 CEST] <vans163> I want to transmit a desktop remotely.  the best solution right now, if you dont count bandwidth and Chrome going out of memory, is LZ4ing raw RGB 24bpp
[23:34:29 CEST] <vans163> The latency has to be minimal
[23:34:45 CEST] <vans163> But LZ4ing takes a ton of bandwidth, especially if there is a lot of entropy in the pixels
[23:35:02 CEST] <Mavrik> So I-frame only fastdecode H.264?
[23:35:17 CEST] <Mavrik> With a fast AVX optimized decoder in C?
[23:35:25 CEST] <vans163> x264 is the best solution, but making it work on Chrome browser at minimal latency looks challenging  https://github.com/mbebenita/Broadway
[23:35:49 CEST] <vans163> Mavrik: Yea.. I think I'll end up just ditching the effort to make it work inbrowser, and just make a separate remote viewer app
[23:35:52 CEST] <Mavrik> Why are you trying to even decode it in something as horrible as JavaScript if you need such incredible performance.
[23:35:56 CEST] <Mavrik> Use NaCL
[23:36:07 CEST] <Mavrik> :)
[23:36:15 CEST] <vans163> Mavrik: So its effortless to use, so users dont need to download a 3rd party program.
[23:36:22 CEST] <vans163> but just navigate and go
[23:36:45 CEST] <vans163> also if it works in browser, it'll work on Mac, Linux, Windows
[23:36:49 CEST] <vans163> No need for 3 separate versions
[23:37:02 CEST] <vans163> guess I could write it in a managed language
[23:37:05 CEST] <Mavrik> Isn't NaCl pretty much built for that?
[23:37:17 CEST] <vans163> NaCL is a crypto library? https://en.wikipedia.org/wiki/NaCl_(software)
[23:37:22 CEST] <furq> native client
[23:37:49 CEST] <Mavrik> https://developer.chrome.com/native-client
[23:38:03 CEST] <vans163> Oh damn
[23:38:10 CEST] <vans163> let me read about this more
[23:38:23 CEST] <furq> it's chrome and opera only
[23:38:25 CEST] <Mavrik> It's pretty much what powers ChromeOS
[23:38:27 CEST] <furq> before you get too excited
[23:38:29 CEST] <kepstin> vans163: for your use case, you probably want to use webrtc to stream the video (which should let you do vp8/9 and maybe h264 in rtp)
[23:38:53 CEST] <kepstin> but that makes the server side a bit more complicated (need RTP with DTLS, ICE, etc)
[23:39:34 CEST] <Mavrik> He wants 0 latency :)
[23:40:34 CEST] <durandal_170> Buy quantum CPU
[23:40:47 CEST] <vans163> kepstin, Mavrik: Apparently WebRTC might work, I looked at it today, but my concern is if the x264 low latency optimized stream can be decoded by the video tag / media api
[23:41:40 CEST] <kepstin> vans163: up to the browser, but I know firefox can (it's limited to baseline decode, but you can do that with x264 just fine), and I think chrome can, but it might be behind a flag?
[23:41:57 CEST] <vans163> kepstin: and yea the overhead on the server side is dreadful, I tried to implement WebRTC once, didn't get past the STUN/ICE
[23:42:07 CEST] <Mavrik> They'll offload that to hardware anyway and then you'll add latency.
[23:42:17 CEST] <Mavrik> If you need such incredible low latency then having that chain won't work.
[23:42:44 CEST] <Mavrik> If you decide that having 30ms of latency is fine, use WebRTC and save tons of time :)
[23:42:54 CEST] <Mavrik> And battery. And compatibility issues. :)
[23:43:02 CEST] <furq> it really doesn't sound like you need <1 frame of latency
[23:43:05 CEST] <vans163> furq: chrome and opera only is fine for my use case.  Mavrik: Let me look at WebRTC again then.
[23:43:20 CEST] <vans163> furq: I think up to 30ms decoding latency per frame with no prebuffer is fine for my use case
[23:43:53 CEST] <vans163> most of the examples I've seen show that 15ms~ decode is average
[23:43:55 CEST] <furq> you can probably at least double that
[23:44:07 CEST] <furq> especially if this is 30fps video
[23:44:22 CEST] <vans163> yea 30fps for now, plans for 60fps 1080p, but 30fps 1080p is more than enough
[23:44:37 CEST] <vans163> (to have a usable, snappy usage experience)
[23:45:04 CEST] <Mavrik> For that you can go up to 200 or 300ms roundtrip :)
[23:45:09 CEST] <furq> i'd be very surprised if anyone would notice four frames of latency at 60fps
[23:45:18 CEST] <furq> unless you're doing street fighter netplay or something
[23:45:31 CEST] <vans163> 60fps is yea, better for videos / games
[23:45:37 CEST] <vans163> less cutting
[23:45:43 CEST] <vans163> and tearing
[23:46:21 CEST] <vans163> Mavrik: I got the encode down to 5-15ms, (using Nvidia NvEnc)
[23:46:42 CEST] <vans163> Mavrik: then its just latency + transfer time + hardware overheads + decode
[23:47:16 CEST] <vans163> so the problem is decode now :P
[23:47:49 CEST] <vans163> Thats capture + encode
[23:48:01 CEST] <vans163> capture takes <1ms which is damn impressive using nvidia apis
[23:48:19 CEST] <furq> also yeah h264 tuned for zero latency will be baseline profile, so anything should be able to decode it
[23:48:22 CEST] <furq> depending on the bitrate obv
[23:48:49 CEST] <furq> but you'll probably run out of internet bandwidth before that becomes an issue
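A zero-latency, baseline-profile x264 stream along the lines furq describes might be set up like this (capture source and geometry are placeholders):

```shell
# -tune zerolatency disables lookahead and B-frames so each frame can be
# emitted as soon as it is encoded; baseline profile keeps the stream
# decodable by constrained decoders such as openh264.
ffmpeg -f rawvideo -pixel_format rgb24 -video_size 1280x1024 -framerate 30 \
    -i pipe:0 -c:v libx264 -preset veryfast -tune zerolatency \
    -profile:v baseline -f mpegts pipe:1
```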
[23:48:50 CEST] <vans163> furq, Mavrik: I think WebRTC should be a good avenue to revisit then
[23:49:06 CEST] <vans163> furq: Afaik the streams are pretty small, I was getting 5-10mbps on 1280x1024 30fps
[23:49:14 CEST] <kepstin> you might have to check whether nvenc is giving you baseline, but if it's 1-in-1-out, it probably is.
[23:49:16 CEST] <vans163> I think 60fps 1080p would cap out at 50mbps
[23:49:29 CEST] <kepstin> firefox needs baseline, because they use cisco's 'openh264' decoder :/
[23:50:35 CEST] <vans163> yea the problem with the nvidia approach is you need grid cards which cost a fortune.  So I was looking into just using FFMpeg on the CPU
[23:50:36 CEST] <furq> do you mean for webrtc or in general
[23:50:45 CEST] <kepstin> for webrtc specifically
[23:50:49 CEST] <furq> oh
[23:50:51 CEST] <furq> that's dumb
[23:51:07 CEST] <kepstin> for non-webrtc video, it'll use the OS decoders, which can usually do high or better
[23:51:11 CEST] <furq> yeah
[23:51:27 CEST] <kepstin> (but on some linux - e.g. fedora - you'll get no OS decoder at all, of course)
[23:51:44 CEST] <furq> doesn't it just use libavcodec on linux
[23:52:01 CEST] <kepstin> it actually uses gstreamer to get a decoder on linux
[23:52:03 CEST] <vans163> This issue seems related to WebRTC and NvEnc  https://bugs.chromium.org/p/webrtc/issues/detail?id=5652
[23:52:27 CEST] <vans163> Maybe NaCl would have to be used
[23:52:35 CEST] <kepstin> which on most linux distros will pull in the plugin from "gst-libav", which is the ffmpeg wrapper.
[23:52:41 CEST] <furq> vans163: that's for encoding
[23:52:58 CEST] <kepstin> (as opposed to "gst-ffmpeg", which is the libav wrapper)
[23:53:11 CEST] <vans163> That last comment
[23:53:12 CEST] <kepstin> (it's a confusing historical thing ;)
[23:53:39 CEST] <vans163> Says that while VideoDecodeAccelerator is supported in Chrome, its not used for WebRTC media
[23:53:46 CEST] <vans163> If someone wants, they can put in the commit
[23:53:48 CEST] <furq> confusing historical things on *nix? never
[23:54:01 CEST] <vans163> For some reason it goes to say, they dont plan to support it
[23:55:07 CEST] <kepstin> I'd really have expected the chrome mobile team to try to get hwaccel stuff into webrtc, just for battery life reasons :/
[23:55:38 CEST] <kepstin> that "(on desktop)" parenthetical on comment 7 is a bit telling :)
[23:55:58 CEST] <furq> yeah this is talking about standalone webrtc
[23:56:16 CEST] <kepstin> hmm, right, so it's probably implemented in chrome but not in the standalone codebase :/
[23:56:28 CEST] <furq> it doesn't say anything about whether hardware decode is implemented in chrome webrtc
[23:56:32 CEST] <furq> which one would hope it is because otherwise wtf
[23:56:49 CEST] <furq> with that said i don't see how it matters
[23:56:56 CEST] <furq> it's not your fault if someone drains their battery
[23:57:14 CEST] <vans163> looks like its possible https://groups.google.com/forum/#!topic/discuss-webrtc/8DC2iF0eP6s.  But mileage may vary
[23:57:23 CEST] <kepstin> I'm pretty sure it is - at least in the most recent versions - for <video> tag stuff (they made a big deal about the power improvements), but it's unclear about whether it's also in webrtc
[23:57:33 CEST] <vans163> Someone at the bottom of that said they got NvEncoded video to render on WebRTC
[23:58:01 CEST] <vans163> kepstin: simple video tag may work now.. I should try with chunked encoding
[23:58:07 CEST] <vans163> *chunked transfer or whatever
[23:58:49 CEST] <kepstin> simple video tag using media source extensions should be able to do live video, but the latency's gonna be pretty bad
[23:59:13 CEST] <furq> also this is almost certainly going to consume less memory than decompressing 6MB of rgb24 pixels 30 times per second
[23:59:14 CEST] <vans163> kepstin: I wonder why, if the video stream does not have anything related to prebuffering..
[23:59:17 CEST] <furq> s/memory/battery/
[23:59:41 CEST] <vans163> kepstin: the video tag should play it as soon as it gets frames no?
[23:59:45 CEST] <furq> no
[23:59:49 CEST] <furq> it'll play as soon as its cache is full
[00:00:00 CEST] <furq> afaik you have no control over how big the cache is
[00:00:00 CEST] --- Thu Sep 15 2016


More information about the Ffmpeg-devel-irc mailing list