[Ffmpeg-devel-irc] ffmpeg.log.20150515

burek burek021 at gmail.com
Sat May 16 02:05:01 CEST 2015


[01:55:21 CEST] <squeegily> Are you supposed to use -preset and -tune together? Like -preset veryslow -tune animation
[02:04:24 CEST] <klaxa> you can do that if you want
[02:04:42 CEST] <dAnjou> tias (try it and see)
[02:05:08 CEST] <klaxa> afaik they tune different aspects of encoding
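As klaxa says, the two options are orthogonal: -preset picks a speed/compression trade-off, -tune biases x264's psy settings toward a content type, so combining them is normal. A minimal sketch (file names are placeholders):

```shell
# -preset veryslow: slower encode, better compression efficiency.
# -tune animation: psy tuning for flat-shaded, high-contrast content.
# input.mp4 / out.mp4 are placeholder file names.
ffmpeg -i input.mp4 -c:v libx264 -preset veryslow -tune animation out.mp4
```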
[06:00:02 CEST] <mozzarella> guys
[06:01:12 CEST] <mozzarella> I want to take a number of snapshots of a video, each snapshot spaced apart equally, can I do that with ffmpeg?
[09:05:49 CEST] <grepper> mozzarella: you could take a look at the 'select' filter, or just alter the framerate with -vf fps
[09:06:07 CEST] <grepper> iiuc
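A sketch of the fps-filter route grepper suggests, pulling one frame every 10 seconds (the interval and file names are placeholders):

```shell
# fps=1/10 passes one frame per 10 seconds of input;
# each surviving frame is written out as a numbered PNG.
ffmpeg -i input.mp4 -vf fps=1/10 snap_%03d.png
```

To get exactly N evenly spaced shots, divide the clip duration by N and use that as the interval.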
[09:08:49 CEST] <grepper> bbiab
[10:11:18 CEST] <mattfoo> i am currently working on a project that requires extremely low latency video streaming that will be displayed on an android 4.0.3 device. after trying many options i've been getting the best results with using flowplayer (flash) on the android. i'm getting fairly low latencies which is great but there's one hiccup. when i use pix_fmt = yuv420p, playback on a macbook pro is under 100ms but much, much worse on the android. if i use
[10:11:19 CEST] <mattfoo> pix_fmt = yuv422p, there is about 100-200ms added to the delay when playing back on a desktop but the playback on the android is reduced a bit. the problem is it isn't reduced enough. getting about 500ms to the android but i really need under 200ms. looking for any suggestions
[10:16:19 CEST] <mattfoo> note, this is over wifi. i'm using libx264, vbv-maxrate=1500, vbv-bufsize=200, -tune zerolatency, -preset ultrafast, -r 30, -vf scale=960:540, -pix_fmt yuv422p, -f flv rtmp://localhost/live/feed1
[10:18:10 CEST] <Mavrik> instead of zerolatency, try fastdecode
[10:18:10 CEST] <mattfo0> also on a side note i'm using nginx w/rtmp module for the media server
[10:18:15 CEST] <Mavrik> since it's obvious that your latency comes from HW decoder
[10:18:23 CEST] <Mavrik> not the encoder side
[10:20:46 CEST] <mattfo0> that's actually much slower
[10:21:04 CEST] <mattfo0> on both the desktop (trivial) but also on the android
[10:21:59 CEST] <mattfo0> is there any reason why yuv420p seems to be so much faster encoding than yuv422p?
[10:22:42 CEST] <mattfo0> the android can decode the yuv422p much faster than yuv420p but since the encoding takes longer it doesn't really seem to help anything
[10:26:47 CEST] <mattfo0> when i encode @ yuv420p, decoding on a laptop is insanely close to realtime.. ~100ms, perfect for my needs... but the decoding on the android is literally closer to 3 seconds
[10:29:29 CEST] <mattfo0> when i encode @ yuv422p, decoding on a laptop is a bit slower.. closer to 500ms. decoding on the android, however, is much faster than yuv420p clocking in at about 600ms
[10:32:20 CEST] <mattfo0> the nature of this project requires a latency of 200ms or less on the android so i'm really looking for any suggestions whatsoever
[10:32:48 CEST] <mattfo0> it's over WiFI with plenty of bandwidth, the flash player i'm using is leveraging the hardware decoder
[10:56:50 CEST] <mattfo0> should also be noted that the laptop and android are both using the same flash player and are both on wifi
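For context, the flags listed above assemble into roughly this invocation (the capture input is an assumption, since the source device was never stated; the rtmp URL matches the nginx-rtmp setup mentioned):

```shell
# Reconstruction of the discussed pipeline; /dev/video0 is a guessed input.
ffmpeg -f v4l2 -i /dev/video0 \
  -c:v libx264 -preset ultrafast -tune zerolatency \
  -x264-params vbv-maxrate=1500:vbv-bufsize=200 \
  -r 30 -vf scale=960:540 -pix_fmt yuv422p \
  -f flv rtmp://localhost/live/feed1
```

Worth noting: 4:2:2 H.264 requires the High 4:2:2 profile, which most hardware decoders do not implement, so one of the two playback paths is likely falling back to software decoding, which would explain the asymmetric latencies.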
[13:28:17 CEST] <alexvf> hi all, ayone knows how can i detect a discontinuity in a DVB subtitle stream when AVPacket.duration is 0?
[13:54:56 CEST] <samskiter> hi, i understand ffmpeg has support for HW decoding of h.264, but is this the case on iOS? iOS 8 (finally) exposed the HW decoding to developers but I'm unsure if ffmpeg has support for that…
[14:11:36 CEST] <alexvf> samskiter: i guess the support for HW decoding in ffmpeg you are talking about is through VDPAU
[14:12:06 CEST] <alexvf> samskiter: so if you have a VDPAU implementation in iOS 8 ...
[14:12:52 CEST] <samskiter> alexvf: i believe it is via videotoolbox.framework
[14:13:16 CEST] <samskiter> which was made available in ios8 (previously only was available in mac osx)
[14:13:44 CEST] <samskiter> alexvf: this patch seems to make some mention of video toolbox http://ffmpeg.org/pipermail/ffmpeg-devel/2012-September/130717.html but it is a little old...
[14:16:22 CEST] <alexvf> samskiter: i see .. then i cannot help you much, i don't know anything about that
[14:17:07 CEST] <samskiter> alexvf: ok. thanks though. will lurk here a little :) …
[14:17:11 CEST] <alexvf> samskiter: hope anyone can help you
[14:17:20 CEST] <alexvf> :)
[14:20:00 CEST] <samskiter> alexvf: hopefully. maybe you could help me a little in my understanding (i may be asking in the wrong place here). what is the relationship between libav and ffmpeg? does ffmpeg use libav? the reason i ask is because gstreamer has some mention of support for HW accel (here: https://coaxion.net/blog/2014/09/gstreamer-with-hardware-video-codecs-on-ios/) and i understand it makes quite a lot of use of libav…
[14:20:35 CEST] <samskiter> and so my understanding is that the HW accel is actually coming from libav
[14:22:15 CEST] <alexvf> samskiter: well, libav is a fork of ffmpeg ... up to a point they are pretty similar
[14:22:41 CEST] <alexvf> samskiter: but i don't know if they are fully interchangeable now
[14:23:29 CEST] <samskiter> alexvf: interesting, im using this library at the minute: https://github.com/durfu/DFURTSPPlayer which says it uses ffmpeg, but contains libav files, so the author is a little confused between the two?
[14:25:24 CEST] <alexvf> samskiter: i don't think so, but ffmpeg libraries are called libavformat, libavcodec and so on
[14:26:34 CEST] <alexvf> samskiter: the relation between ffmpeg and libav has been a little tortuous, you can search for it in google :)
[14:27:20 CEST] <samskiter> i was just thinking that more important than documenting open source libraries is documenting their politics ;)
[15:42:41 CEST] <samskiter> hi, asked this earlier but no one could help…
[15:42:44 CEST] <samskiter> i understand ffmpeg has support for HW decoding of h.264, but is this the case on iOS? iOS 8 (finally) exposed the HW decoding to developers but I'm unsure if ffmpeg has support for that…
[15:54:53 CEST] <Mavrik> samskiter, no, videotoolbox is not supported.
[15:55:14 CEST] <samskiter> thanks Mavrik. it's not supported on mac osx either?
[15:55:28 CEST] <Mavrik> on OS X there's VDA interface which is.
[15:55:37 CEST] <samskiter> ah, i see
[16:46:04 CEST] <mozzarella> grepper: I want pictures, not a video, is that what it does?
[16:49:10 CEST] <GT_> Hello guys, I'm having a problem with ffmpeg on CentOS 6.6. When I try to open more files I get a strange "Resource temporarily unavailable" error. Did anybody have a similar issue? And if yes, what was the problem? I tried the same thing on two other servers and it runs fine; I can't figure out what the problem is. Log: https://gist.github.com/anonymous/d1b60bcdcbc6afb7ccb0#file-gistfile1-txt-L82
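"Resource temporarily unavailable" is EAGAIN, which often points at a per-user resource limit (open files, or processes/threads) rather than actual memory pressure; that is an assumption, not a diagnosis from the gist. Comparing limits between the failing box and the working ones is a cheap first check:

```shell
# Per-process open-file limit and per-user process/thread limit;
# a low value on the failing CentOS box would explain EAGAIN once
# enough ffmpeg instances (and their threads) are running.
ulimit -n   # max open file descriptors
ulimit -u   # max user processes/threads
```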
[16:57:53 CEST] <grepper> mozzarella: sure, what ever you want to output.  http://tinyurl.com/no488h4   (ffmpeg.org)
[17:19:20 CEST] <WhiteBunny> Hello all
[17:19:27 CEST] <WhiteBunny> Does anyone here know how I can get these icons -> http://i.imgur.com/kSUsnRH.png
[18:19:40 CEST] <edoceo> I'm trying to make a grid of a bunch of videos+the audio.  When I merge I get loads of this error
[18:19:43 CEST] <edoceo> Error while decoding stream #2:0: Cannot allocate memory
[18:20:09 CEST] <edoceo> It switches between the stream (0,1,2,3) - the origin streams are Opus audio codec
[18:21:25 CEST] <edoceo> Here is a paste of the command and it's output http://edoceo.io/paste?p=P9HRaY
[18:26:38 CEST] <__jack__> edoceo: not enough memory
[18:32:58 CEST] <edoceo> But top and htop show that there is plenty of free RAM
[18:35:38 CEST] <edoceo> I've tried on multiple computers, some with 4G and some with 16G  - all report the same error
[18:38:51 CEST] <__jack__> 64b systems right ?
[19:00:31 CEST] <edoceo> Yea, 64bits
[19:01:07 CEST] <edoceo> I thought maybe Opus was the issue, so I transcoded to mpegts first, then tried the grid and it still failed
[19:01:21 CEST] <edoceo> Transcoding each source individually however did not result in a decode memory error
[19:02:05 CEST] <edoceo> To me this feels like some other error, which then bubbles up as ENOMEM or something
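The failing command itself is only in the paste, so as an assumption about its general shape: a side-by-side grid with merged audio is usually built with the hstack/vstack filters plus amix, e.g. for two inputs (file names are placeholders; the log's originals were webm/Opus):

```shell
# Two equal-height inputs side by side, audio tracks mixed down.
# in1.mp4 / in2.mp4 / grid.mp4 are placeholder names.
ffmpeg -i in1.mp4 -i in2.mp4 \
  -filter_complex "[0:v][1:v]hstack=inputs=2[v];[0:a][1:a]amix=inputs=2[a]" \
  -map "[v]" -map "[a]" grid.mp4
```

If ENOMEM persists even on machines with ample free memory, edoceo's own hunch is plausible: decoders sometimes report internal failures on a damaged or unexpected stream as ENOMEM.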
[19:55:33 CEST] <jbmcg> when trying to trim a video using -ss and -t, is there a way to subtract from the duration, or a constant that represents the duration of the input video? I'm just trying to get only the last 5 seconds of the input video in my output video without having to use ffprobe or something to read in the duration
[19:56:18 CEST] <c_14> no
[19:56:20 CEST] <c_14> not without scripting
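The scripting c_14 alludes to can be a one-liner with ffprobe feeding -ss (ffmpeg later gained -sseof for exactly this case, but that postdates this exchange):

```shell
# Ask ffprobe for the container duration, then seek to duration - 5.
# input.mp4 / last5.mp4 are placeholder names.
dur=$(ffprobe -v error -show_entries format=duration \
      -of default=noprint_wrappers=1:nokey=1 input.mp4)
ffmpeg -ss "$(awk "BEGIN{print $dur-5}")" -i input.mp4 -c copy last5.mp4
```

With -c copy the cut snaps back to the previous keyframe, so the output can start slightly earlier than 5 seconds from the end; re-encode instead of stream-copying if the cut must be exact.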
[21:08:13 CEST] <hydalgo> exit
[21:08:16 CEST] <hydalgo> /quit
[21:08:26 CEST] <houdini> hey guys, how can i zoom in on the left corner of a video
[21:13:52 CEST] <c_14> scale/crop or crop/scale
[21:15:26 CEST] <houdini> crop/scale
[21:17:26 CEST] <houdini> i want to just see http://prntscr.com/75lbc1 red box of video
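A sketch of the crop-then-scale route c_14 suggests, assuming the red box is the top-left quarter of the frame (the screenshot is no longer available, so the region is a guess):

```shell
# crop=iw/2:ih/2:0:0 keeps the top-left quarter (x=0, y=0);
# in the following scale, iw/ih refer to the cropped size,
# so 2*iw:2*ih blows it back up to the original dimensions.
ffmpeg -i input.mp4 -vf "crop=iw/2:ih/2:0:0,scale=2*iw:2*ih" zoomed.mp4
```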
[00:00:00 CEST] --- Sat May 16 2015


More information about the Ffmpeg-devel-irc mailing list