[Ffmpeg-devel-irc] ffmpeg.log.20160809

burek burek021 at gmail.com
Wed Aug 10 03:05:01 EEST 2016


[00:05:47 CEST] <bray90820> llogan: Thanks
[00:10:00 CEST] <bray90820> licson: I ran it with a WMA2 file in the directory but it said "*.wmv: No such file or directory"
[00:10:54 CEST] <llogan> did you navigate to the directory first before running the command. did you modify the command at all? are there any *.wmv files in the directory?
[00:13:18 CEST] <bray90820> licson: I did
[00:13:18 CEST] <bray90820> http://pastebin.com/raw/Xu8VXmhm
[00:13:34 CEST] <bray90820> I did not modify it
[00:13:51 CEST] <bray90820> And all the files are AVI
[00:15:19 CEST] <bray90820> licson: I had to change WMA to AVI
[00:15:25 CEST] <bray90820> I got it working now thanks
[00:15:43 CEST] <bray90820> But Could I make it check any file type?
[00:16:06 CEST] <ozette> complete output of mkv > m3u8 : https://paste.fedoraproject.org/404572/06945361/
[00:17:09 CEST] <llogan> bray90820: that's more of a bash question than ffprobe: for f in *;
[00:17:33 CEST] <bray90820> Alright
[00:17:34 CEST] <bray90820> Thanks
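A minimal sketch of the loop llogan is hinting at, assuming a bash-style shell and that ffprobe just needs the filename (the -show_entries field is illustrative):
    for f in *; do
        ffprobe -v error -show_entries format=duration "$f"
    done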
[00:18:41 CEST] <ozette> my question.. why does the playlist start at out194.ts
[00:20:36 CEST] <ozette> it's as if out.m3u8 is overwritten each 5 segments
[00:20:56 CEST] <ozette> and finally i end up with a playlist which starts at the last 5
[01:12:03 CEST] <llogan> ozette: -hls_list_size default is 5.
[01:12:13 CEST] <llogan> add "-hls_list_size 0"
[01:12:58 CEST] <llogan> http://ffmpeg.org/ffmpeg-formats.html#hls-1
[01:22:42 CEST] <ozette> llogan: ah, thanks, now you say.. i recall reading about that some time ago
[01:22:57 CEST] <ozette> have to wonder why the default is 5 though
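For reference, a hedged example of the kind of HLS command being discussed, keeping every segment in the playlist; filenames, codecs and segment length are placeholders:
    ffmpeg -i input.mkv -c:v libx264 -c:a aac -f hls -hls_time 10 -hls_list_size 0 out.m3u8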
[02:03:48 CEST] <jya> is there a way with ffmpeg to remux a mp4 and either add a proper edit list or rework the pts so that the first sample starts at 0 ?
[02:05:14 CEST] <c_14> the second should be done automatically
[02:07:15 CEST] <c_14> though you can try explicitly passing -start_at_zero or whatever it was called
[02:08:25 CEST] <llogan> -avoid_negative_ts make_zero
[02:21:22 CEST] <jya> c_14: I already have a mp4, whatever google used to generate those samples, the first sample is either 0.083333 or 0.0666667
[02:21:39 CEST] <jya> I'm trying to make them all start at the same time (don't really mind if it isn't 0).
[02:23:09 CEST] <jya> c_14: tried ffmpeg -i test.mp4 -start_at_zero -vcodec copy -acodec copy test2.mp4, this doesn't appear to change the first cts
[02:23:38 CEST] <c_14> add -avoid_negative_ts make_zero
[02:24:09 CEST] <jya> weird argument for a mp4 :)
[02:25:58 CEST] <jya> c_14: nope.. still starting at a non-0 value (it has an audio track which does start at 0 if that makes a difference)
[02:26:16 CEST] <c_14> it does
[02:26:27 CEST] <c_14> ffmpeg shifts all tracks equally
[02:26:43 CEST] <c_14> So if the tracks don't start at the same time it'll shift the one that starts first to 0
[02:27:02 CEST] <jya> hmmm.. any way to force it ? don't mind if a/v sync is slightly off
[02:27:49 CEST] <jya> the other issue is that my original samples are fragmented mp4, and I'd like to keep them that way...
[02:28:16 CEST] <c_14> fragmented how? there's a bunch of fragmenting options for the mp4 muxer
[02:28:46 CEST] <c_14> https://ffmpeg.org/ffmpeg-formats.html#mov_002c-mp4_002c-ismv
[02:28:58 CEST] <jya> c_14: it's those files there: https://github.com/w3c/web-platform-tests/tree/master/media-source/mp4
[02:30:17 CEST] <jya> i'm trying to fix the files because right now, the MSE w3c test checks that when you add those files, the buffered range (the intersection of all tracks) starts at 0. but this is incorrect as audio starts at 0 but video at a different time. The test works because blink incorrectly uses dts in place of pts
[02:31:09 CEST] <c_14> There's frag_keyframe, frag_duration and frag_size. You'll have to figure out which of those is right for the fragmentation (maybe check with boxdumper or something)
[02:31:48 CEST] <jya> will do thanks. those options above still do not work right.
[02:31:51 CEST] <c_14> As to the offset, either find the offset for the files and then seek to it (discarding part of the audio) or extract the audio and video to separate files (so the timestamps start at 0 for both) and then combine them again
[02:32:49 CEST] <jya> by default this file https://github.com/w3c/web-platform-tests/blob/master/media-source/mp4/test-v-128k-320x240-24fps-8kfr.mp4 has a first sample of 0.16667 (according to ffprobe); after doing ffmpeg -i test-v-128k-320x240-24fps-8kfr.mp4 -start_at_zero -avoid_negative_ts make_zero -vcodec copy -acodec copy test2.mp4, the first sample has a time of 0.083008
[02:36:38 CEST] <c_14> Getting rid of the avoid_negative_ts option seems to make it start at 0.0000
[02:44:21 CEST] <jya> c_14: ah yes indeed.. awesome
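One way to spell out c_14's split-and-recombine suggestion from above, so each track's timestamps restart at zero before remuxing (filenames are placeholders, not the actual test files):
    ffmpeg -i input.mp4 -map 0:v -c copy video_only.mp4
    ffmpeg -i input.mp4 -map 0:a -c copy audio_only.mp4
    ffmpeg -i video_only.mp4 -i audio_only.mp4 -map 0:v -map 1:a -c copy combined.mp4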
[03:39:34 CEST] <KDDLB> hm
[03:39:48 CEST] <KDDLB> how do I change (not convert) the framerate on a video file?
[03:40:23 CEST] <KDDLB> e.g.: I have a 240FPS video, but it's actually 30FPS
[03:43:07 CEST] <relaxed> KDDLB: man ffmpeg|less +/-r\\[
[03:45:26 CEST] <KDDLB> there we go
[03:45:32 CEST] <KDDLB> I need to reencode it, it looks like
[03:45:38 CEST] <KDDLB> I did -c:v copy and it didn't work
[03:45:57 CEST] <relaxed> whole command?
[03:46:18 CEST] <KDDLB> ffmpeg -r 240 -i Downloads/VID_20160808_213636.mp4 video.mp4
[03:46:40 CEST] <KDDLB> I tried with & ffmpeg -r 240 -i Downloads/VID_20160808_213636.mp4 -c:v copy video.mp4
[03:52:54 CEST] <darkliight> Hi, just starting to play with ffmpeg ... when I 'ffprobe -i bluray:.', I'm told there are '7 usable playlists', and then it continues by choosing the longest playlist .... is there a way to just have ffprobe spit back a list of usable playlists and stop after that?
[03:53:28 CEST] <relaxed> KDDLB: you may have to re-encode
[03:53:39 CEST] <KDDLB> re-encoding works
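A sketch of the kind of re-encode being discussed (the -r before -i reinterprets the input rate; the filename and target rate are placeholders, and audio sync is left to the defaults so it may need separate handling):
    ffmpeg -r 30 -i input.mp4 -c:v libx264 output.mp4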
[06:01:43 CEST] <budric> hi, can anyone shed some light on how to get ffmpeg to stream video to kodi?  Do I need to startup ffserver and do ffmpeg -> ffserver -> kodi, or is there an easier way?
[08:34:12 CEST] <the_cuckoo> hi - was wondering if there's any high level documentation concerning avfilter? I've had some success with it, but i'm having a problem with yadif/send_field and a progressive input... can paste code if anyone wants to review it (based on the ffmpeg sample), but docs would be fine
[09:17:19 CEST] <ahoo> it is possible to change the framerate of a video without re-encoding it?
[09:17:40 CEST] <ahoo> i wouldn't have thought that was possible.
[09:18:13 CEST] <ahoo> KDDLB: i don't think you can.
[09:18:45 CEST] <furq> sure you can
[09:18:48 CEST] <furq> not with ffmpeg though
[09:19:24 CEST] <furq> dedicated mp4/mkv muxers (mp4box, l-smash, mkvtoolnix) can do it with h264 streams
[09:19:36 CEST] <furq> it probably works with other formats/codecs but i've never had cause to find out
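Hedged examples of the container-level change furq is describing, written from memory of MP4Box and mkvmerge; the exact option syntax may differ between versions, so check their docs:
    MP4Box -add input.h264:fps=30 -new output.mp4
    mkvmerge --default-duration 0:30fps -o output.mkv input.mkv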
[09:19:51 CEST] <ahoo> so the framerate can be specified in the container and the stream is then played back at that rate?
[09:20:15 CEST] <furq> something like that
[09:20:15 CEST] <ahoo> oh never mind.
[09:20:26 CEST] <Mavrik> There's no specified framerate for H.264
[09:20:34 CEST] <Mavrik> Video has to be retimestamped and audio adjusted as well.
[09:20:38 CEST] <furq> ^
[09:20:50 CEST] <furq> if it was as simple as setting a metadata flag then ffmpeg would be able to do it
[09:22:15 CEST] <ahoo> so
[09:22:23 CEST] <ahoo> it isn't possible without re-encoding.
[09:22:30 CEST] <furq> yes it is
[09:22:38 CEST] <ahoo> we just concluded it is not
[09:22:42 CEST] <ahoo> didn't we?
[09:22:42 CEST] <furq> no we didn't
[09:22:45 CEST] <ahoo> ok
[09:22:50 CEST] <ahoo> i see :3
[09:23:29 CEST] <ahoo> maybe different wording helps
[09:23:49 CEST] <ahoo> how about "it isn't possible without re-processing the streams"
[09:24:05 CEST] <furq> i guess
[09:24:21 CEST] <furq> but the resulting video will be untouched (other than the playback speed)
[09:24:47 CEST] <ahoo> oh
[09:24:51 CEST] <ahoo> is that the case?
[09:25:00 CEST] <furq> yeah
[09:25:01 CEST] <ahoo> then i understand. it is.
[09:25:06 CEST] <furq> you just need to rewrite the timestamps
[09:25:17 CEST] <ahoo> and the timestamps are in the container?
[09:25:26 CEST] <furq> that's my understanding of it
[09:25:28 CEST] <furq> with h.264 at least
[09:25:29 CEST] <ahoo> i see
[09:26:25 CEST] <furq> you will probably need to reencode the audio though
[09:28:24 CEST] <ahoo> hehe, or tolerate the reverb
[09:28:43 CEST] <ahoo> phasing/
[09:40:34 CEST] <durandal_1707> the_cuckoo: could you pastebin code?
[09:41:03 CEST] <the_cuckoo> durandal_1707: sure - gimme a couple of minutes while i rip out the specifics
[09:42:05 CEST] <durandal_1707> also what's actual problem?
[09:42:31 CEST] <the_cuckoo> when the input is progressive, i don't get anything out until i flush - and then it's only the last frame
[09:42:53 CEST] <the_cuckoo> when the input is interlaced, i get the correct result - twice the output
[09:46:37 CEST] <the_cuckoo> durandal_1707: http://pastebin.ca/3679427
[09:48:03 CEST] <the_cuckoo> the diagnostic on line 151 shows that all frames are presented and the one on line 130 shows that i get EAGAIN repeatedly - until i flush, then i get a frame
[09:48:17 CEST] <the_cuckoo> (being the last one)
[09:49:03 CEST] <the_cuckoo> i can work around it specifically for the yadif filter, but this is more of a generic filter which wraps any avfilter chain
[09:55:08 CEST] <the_cuckoo> furq: no, you don't need to re-encode the audio - but otherwise, yes, video has to be
[09:56:43 CEST] <the_cuckoo> audio isn't aligned to frame rate - the number of samples per audio frame is normally something like 1024 (aac) or 1152 (mp2/mp3), or it can be variable (vorbis)
[09:57:28 CEST] <durandal_1707> the_cuckoo: did you try to set option to deint only interlaced?
[09:58:27 CEST] <the_cuckoo> durandal_1707: yup - result[ L"mode" ] = L"send_field"; result[ L"parity" ] = L"auto"; result[ L"deint" ] = L"interlaced";
[09:59:26 CEST] <the_cuckoo> ffmpeg doesn't have the problem - just my wrapper
[09:59:49 CEST] <durandal_1707> you set pts?
[10:00:21 CEST] <the_cuckoo> av_frame->pts = frame->get_position(); - line 166
[10:01:05 CEST] <the_cuckoo> they're 0 based and increment in steps of 1 (being the ratio of the frame rate)
[10:01:50 CEST] <the_cuckoo> timebase stuff being specified on line 32
[10:28:06 CEST] <durandal_1707> the_cuckoo: you used code from examples directory as template?
[10:28:44 CEST] <the_cuckoo> yup - some of it is lifted verbatim and is pretty much unchanged (should find the comments match iirc)
[10:32:46 CEST] <durandal_1707> the example works here just fine with yadif; also, your pastebin only shows a call to get a frame when flushing
[10:33:15 CEST] <the_cuckoo> ah - may have missed something in the cutting
[10:34:04 CEST] <the_cuckoo> hmm - yeah - can add that to the paste if i can edit - checking
[10:35:03 CEST] <the_cuckoo> hmmph - couldn't edit - http://pastebin.ca/3679490
[10:35:30 CEST] <the_cuckoo> added the main entry point to the class - do_fetch - at line 179
[10:35:56 CEST] <the_cuckoo> it exposes more internal plumbing details which may seem confusing (but are largely irrelevant)
[10:37:24 CEST] <the_cuckoo> (the code is lgpl btw - haven't pushed it to a public repo yet, but will do shortly)
[10:38:54 CEST] <the_cuckoo> the image is extracted on line 252
[10:40:13 CEST] <durandal_1707> I only see that you don't handle eagain from filtergraph
[10:40:40 CEST] <durandal_1707> in that case you need to feed next frame
[10:40:58 CEST] <the_cuckoo> we do - line 259
[10:41:08 CEST] <durandal_1707> and try again to receive frame
[10:41:19 CEST] <the_cuckoo> yup - all in the loop
[10:42:07 CEST] <Batman_> Hello, can anyone help me with this problem: http://stackoverflow.com/questions/38827978/ffmeg-dumping-a-rtsp-streaming-works-on-windows-but-not-in-ubuntu ? Thank you!
[10:42:54 CEST] <durandal_1707> yes but I nowhere see that you feed graph with another frame
[10:44:05 CEST] <the_cuckoo> ah :) - right - that's in the callback... line 139 - this is automatically invoked from the fetch on line 238
[10:45:02 CEST] <the_cuckoo> was trying to hide some of the more confusing details there :) - the routing is complicated because the frame holds audio and video (we need to split the audio up to the resultant frame rate)
[10:46:03 CEST] <the_cuckoo> the frame being the frame_type_ptr - not the avframe of course
[10:52:09 CEST] <durandal_1707> in the loop at line 230 you must ensure that buffersrc will receive a frame when buffersink returns EAGAIN
[10:53:36 CEST] <durandal_1707> using callback is confusing, add logs to see when you give and receive AVframe
[10:55:14 CEST] <the_cuckoo> yeah - i know - it's a little bit warped :) - i couldn't find a better way to approach it - but for sure, the frames are delivered - diagnostic on line 151 shows that during the collection of frame 0, the entire input is presented
[10:56:04 CEST] <the_cuckoo> the do/while on 230 doesn't restrict the number of frames given to you guys at all
[10:56:23 CEST] <the_cuckoo> it just keeps on going until eof is received
[10:58:46 CEST] <the_cuckoo> it's curious - does exactly the right thing with an interlaced input
[11:00:40 CEST] <durandal_1707> what happens with all progressive input?
[11:01:49 CEST] <the_cuckoo> well, just the problem i'm facing - the loop pulls through every frame from the input during the first frame - hits eof - flushes the context - receives the last progressive image
[11:02:19 CEST] <the_cuckoo> doesn't matter how long the input is
[11:02:45 CEST] <the_cuckoo> send_frame does the right thing afaict - passthrough of the input
[11:04:05 CEST] <the_cuckoo> oh - curious...
[11:04:15 CEST] <the_cuckoo> the result of that is 50 fps
[11:04:59 CEST] <the_cuckoo> that can't be right...
[11:06:16 CEST] <the_cuckoo> fps_out_ = av_buffersink_get_frame_rate( buffersink_ctx_ ); <- line 98
[11:06:48 CEST] <the_cuckoo> oh - we can't know that there can we?
[11:07:23 CEST] <the_cuckoo> you haven't seen anything from the input yet - you don't know if it's interlaced or progressive..
[11:09:34 CEST] <the_cuckoo> we do of course :)
[11:10:46 CEST] <the_cuckoo> umm - i wonder - perhaps the quick and dirty hack of detecting the progressive/yadif combination is the best route here - ie: just set it up to completely bypass your stuff
[11:11:04 CEST] <the_cuckoo> it breaks some stuff for us, but nothing which is critical at this point
[11:13:34 CEST] <the_cuckoo> still - doesn't explain why this works: ffmpeg -y -v 0 -i out.ts -vf yadif=send_field:auto:interlaced out2.ts (where out.ts is a 250 frame progressive input at 25fps - out2.ts is also a 250 frame progressive output at 25 fps)
[11:20:51 CEST] <the_cuckoo> is there some kind of reconfiguration going on internally in this case which we can detect?
[11:40:11 CEST] <the_cuckoo> durandal_1707: have to say - i really appreciated that you tried to help me out here :) - thinking i've zoned in on the problem - an incorrect solution will suffice for the time being (hopefully) - would still love to get to the bottom of it though :)
[11:45:33 CEST] <durandal_1707> the_cuckoo: you can just not feed it with progressive frames
[11:49:38 CEST] <the_cuckoo> yeah - but like i said - it's not a specific wrapper for yadif :) - it's supposed to expose all your video filters
[11:50:38 CEST] <the_cuckoo> that makes it a tad trickier to filter out yadif from the avfilter graph set up
[11:50:50 CEST] <the_cuckoo> (and it could get it wrong)
[11:54:00 CEST] <the_cuckoo> but to be clear, that will be my path ahead for the time being
[12:06:52 CEST] <the_cuckoo> Batman_: hmm - [rtsp @ 0x1f18340] UDP timeout, retrying with TCP would seem to be the problem
[12:08:10 CEST] <the_cuckoo> i'd probably test with the udp:// protocol myself (thus bypassing a lot of stuff in the process), but it certainly looks curious
[12:09:35 CEST] <the_cuckoo> https://ffmpeg.org/pipermail/ffmpeg-user/2014-June/022086.html <- may help?
[12:11:54 CEST] <Spring> Does libvpx have any options specific to it that aren't supported by h.264?
[12:16:41 CEST] <Spring> found that when ffplay's -loop is set to 0 and the trim is short (e.g. a few seconds long), playback slows down on the second loop. Is this normal?
[12:16:43 CEST] <the_cuckoo> Spring: yes - loads - ffmpeg -h full | less and search for libvpx
[12:17:17 CEST] <Spring> the_cuckoo, thanks for the tip
[12:19:17 CEST] <Spring> hmm, nothing returns for libvpx in the full output
[12:19:43 CEST] <Spring> oh, probably because the buffer isn't long enough :p
[12:20:36 CEST] <the_cuckoo> :D - hence less
[12:20:41 CEST] <the_cuckoo> or more
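A slightly more targeted way to get the same listing, assuming a reasonably recent ffmpeg build:
    ffmpeg -h encoder=libvpx
    ffmpeg -h encoder=libvpx-vp9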
[12:30:32 CEST] <Spring> it's strange, as sometimes ffplay loops without slow down while other times not. For example I can reliably reproduce the issue when looping a portion of a clip near the beginning
[12:30:56 CEST] <Spring> but looping later in the clip the slow down isn't present
[12:36:47 CEST] <Spring> here's a captured WebM of the bug: https://a.uguu.se/pzzEqQOFmL88_ffplayloopslowdownbug%5BTrailers-UmbrellasofCherbourg%2Cusing%275%27to%278%27trim%5D.webm
[12:37:23 CEST] <Spring> excuse the long filename, but it shows the issue. Loop from '5' seconds to '8' seconds of the trailer.
[12:43:04 CEST] <Spring> pastebin of the ffplay command: http://pastebin.com/Fc0bSAXR
[13:21:08 CEST] <omegaenigma> anyone familiar with mixed resolution video overlay opacity settings?
[13:43:28 CEST] <Batman_> the_cuckoo: Thanks for the tip, I have to restart my PC, I'll try what you pointed
[13:46:27 CEST] <xeche_> Hello people. Anyone around to help me make sense of using hevc_qsv from command line? ffmpeg -codecs reports hevc_qsv available both for encoding and decoding, but actually using it gives me an error "[hevc_qsv @ 02659ac0] Could not load the requested plugin: 2fca99749fdb49aeb121a5b63ef568f7"
[13:46:58 CEST] <xeche_> I'm running it on an i7 Skylake, so I don't think there should be anything wrong in the way of hardware support.
[13:49:05 CEST] <BtbN> Are you on Linux?
[13:49:23 CEST] <xeche_> BtbN: Windows. :)
[13:50:33 CEST] <xeche_> I'm looking in the IntelSWTools folder, which comes with the Intel Media SDK installer. It has a folder with various DLLs, as well as a couple of subfolders that look very much like they're named after plugin UIDs
[13:51:57 CEST] <xeche_> However, the UID from the above error message is not among those folders. So am I correct in assuming that ffmpeg being prebuilt with qsv support plus the Intel SDK DLLs is not enough to enable the QSV support?
[13:52:49 CEST] <BtbN> Must be some driver issue with Intel, yes. No idea how to deal with it, Intel Drivers and especially QSV are a giant mess.
[13:53:20 CEST] <BtbN> Are you sure your hardware even supports hevc encoding?
[13:55:55 CEST] <xeche_> BtbN: According to Intel's documentation, yes. Looks like QSV is supported on any Intel CPU newer than the 2nd or 3rd generation
[13:55:58 CEST] <xeche_> That's before Haswell.
[13:56:07 CEST] <BtbN> QSV, yes.
[13:56:10 CEST] <BtbN> But not HEVC.
[13:56:34 CEST] <BtbN> Try encoding h264 instead, if it works, you know what's the issue.
[13:56:44 CEST] <xeche_> Alright then. :)
[13:56:46 CEST] <xeche_> Thanks.
[13:57:08 CEST] <well0ne> hey
[13:57:45 CEST] <well0ne> i'm trying to concatenate h264+mp3 streams (single episodes) to send them to an rtmp server; all codec settings are the same, and they are properly encoded
[13:58:08 CEST] <well0ne> but when i'm streaming them and they change to the next file i always get "Non-monotonous DTS in output stream" when the file is starting
[13:58:24 CEST] <well0ne> as i said i properly encoded them ( i think so)
[13:58:28 CEST] <well0ne> how can i prevent that
[13:59:13 CEST] <xeche_> BtbN: Thanks for the input, sir. You are indeed right. h264_qsv did work.
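For reference, a minimal sketch of the kind of h264_qsv test that worked here; the bitrate and filenames are placeholders, and the build must include libmfx/QSV support:
    ffmpeg -i input.mp4 -c:v h264_qsv -b:v 4M output.mp4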
[14:00:46 CEST] <BtbN> well0ne, of course you get a non-monotonous DTS. You just append an entirely new stream, whose timestamps start from the beginning again.
[14:01:23 CEST] <well0ne> okay, sure makes sense...
[14:01:49 CEST] <well0ne> is there any option to fix it, to make sure the timestamps are regenerated?
[14:02:05 CEST] <well0ne> or is a re-encoding needed for that
[14:02:13 CEST] <BtbN> not without transcoding.
[14:02:18 CEST] <well0ne> damn
[14:02:43 CEST] <BtbN> It shouldn't be too much of a problem though, or does anything cease working because of it?
[14:03:36 CEST] <well0ne> no, it seems fine to me; i had some sync issues before, but now that my framerates, codecs and attributes are all the same it seems to work. i'm just getting those messages when changing to the next file
[14:04:43 CEST] <xeche_> BtbN: Looking up some more documentation, Skylake CPUs should support both encoding and decoding in hardware. However, it does specify HEVC 8-bit decode / encode.
[14:04:52 CEST] <well0ne> but im confused, i stream series
[14:05:05 CEST] <well0ne> i have a series, where i just copy the streams to the rtmp server
[14:05:08 CEST] <well0ne> same codecs
[14:05:13 CEST] <xeche_> BtbN: I'm not entirely sure what 8-bit refers to in this context. Bits per plane?
[14:05:16 CEST] <well0ne> but i dont get the dts messages there
[14:06:35 CEST] <well0ne> http://pastebin.com/btuGyTdu
[14:06:39 CEST] <well0ne> have a look here
[14:06:57 CEST] <well0ne> the codecs and attributes should be the same, so i don't know what causes this
[14:09:44 CEST] <well0ne> i thought maybe this is happening because of different input lengths of audio/video, but i used -shortest
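A sketch of one common way to feed a sequence of files to an RTMP server with the concat demuxer; the URL and filenames are placeholders, and this alone does not remove the DTS warnings, since each input still restarts its timestamps:
    # list.txt contains lines like:
    #   file 'episode1.mp4'
    #   file 'episode2.mp4'
    ffmpeg -re -f concat -safe 0 -i list.txt -c copy -f flv rtmp://example.com/live/stream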
[15:05:45 CEST] <k_sze> I just tried to strip the audio stream from a video recorded with iPhone 6s Plus.
[15:06:26 CEST] <k_sze> I used `-map 0 -map -0:a:0 -copy_unknown`, because there are two streams besides the video and audio streams, of unknown codec.
[15:07:00 CEST] <k_sze> But I still got this image: "[mov @ 0x7fb17d811800] Unknown hldr_type for mebx / 0x7862656D, writing dummy values"
[15:07:54 CEST] <k_sze> Does anybody know what that's about?
[15:19:49 CEST] <the_cuckoo> why are you trying to preserve the other streams? isn't it enough to just use ffmpeg -i input.mp4 -acodec copy output.aac
[15:20:04 CEST] <furq> he's trying to remove the audio stream
[15:20:38 CEST] <the_cuckoo> k - then -vodec copy -an output.mp4
[15:20:46 CEST] <k_sze> s/image/message/
[15:21:42 CEST] <the_cuckoo> s/vodec/vcodec/
[15:26:27 CEST] <furq> mebx appears to be some kind of nonstandard apple extension (if you can imagine such a thing)
[15:26:33 CEST] <furq> you can probably ignore it
[15:31:18 CEST] <the_cuckoo> https://developer.apple.com/library/mac/documentation/QuickTime/QTFF/QTFFChap3/qtff3.html <- gets a mention here - Timed Metadata Sample Description
[15:31:55 CEST] <the_cuckoo> not clarifying it much for me though
[15:34:32 CEST] <ritsuka> that track contains the gps location recorded with the video
[15:34:45 CEST] <the_cuckoo> ah - cool
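Spelling out k_sze's approach in full (filenames are placeholders, stream copy is assumed, and the mebx warning may still appear since the muxer doesn't know that handler type):
    ffmpeg -i input.mov -map 0 -map -0:a:0 -c copy -copy_unknown output.mov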
[15:39:20 CEST] <lethalwp> hello,  i was trying to play an hevc video with mpv, but it didn't want to hardware accelerate it. After some digging, it's ffmpeg that doesn't hwaccel YUV420P10LE; but it does for YUV420P.  I've recompiled by adding the 10LE and it seems to work fine for me.  Is there a reason hevc 10bit hwaccel isn't enabled?
[15:52:12 CEST] <rmoorelxr> is there such thing as an INFO frame? or is that for sure some third party weirdness i'm encountering
[16:00:44 CEST] <jkqxz> Infoframes are a thing in CEA-861 (and therefore HDMI).  What is the context?
[16:05:33 CEST] <rmoorelxr> receiving "raw h264 stream" that has I, P frames, and INFO frames
[16:05:45 CEST] <rmoorelxr> getting data and frame defs from third party
[16:06:04 CEST] <rmoorelxr> when I try to stream/write a file the result looks scrambled
[16:07:02 CEST] <rmoorelxr> I get frame timestamps with the byte array but I don't know how to manually turn them into useable video
[16:07:30 CEST] <rmoorelxr> when I use provided libraries I can write to a file and it works, but I'm trying to avoid going to disk
[16:09:47 CEST] <jkqxz> That sounds like some third-party weirdness.  What do the data look like?  (An annex B bytestream, for example.)
[16:15:18 CEST] <rmoorelxr> INFO PPPPPPPPPP INFO PPPP I PPPPP INFO etc
[16:16:31 CEST] <xeche_> Hello again, sirs (maybe madames). Has anyone used av_image_alloc with PIX_FMT_QSV before?
[16:18:37 CEST] <jkqxz> xeche_:  Don't.  Make an AVHWFramesContext and allocate from that.  (See ffmpeg_qsv.c.)
[16:20:28 CEST] <xeche_> jkqxz: Thank you. Does this go for both encoding and decoding?
[16:20:31 CEST] <jkqxz> Or actually not that either, because it hasn't merged from libav yet.  You have to make the surfaces yourself and then put them in the frames manually (setting buf[0] and data[3]).
[16:21:18 CEST] <xeche_> jkqxz: Alright, so the utility functions for working with encoder input may or may not be the right thing to use for QSV ?
[16:21:42 CEST] <xeche_> I'm using a nightly build from about a week ago, so it's version 3.1.1 I believe.
[16:22:24 CEST] <jkqxz> Yes.  Hardware surfaces are all special and require weird handling.  The hwcontext stuff makes that more consistent, but it is not merged to ffmpeg for libmfx/qsv.
[16:23:27 CEST] <xeche_> jkqxz: I see. I can handle the manual memory management, I think, but the sws_scale functionality might be a bit of a hassle.
[16:25:27 CEST] <jkqxz> swscale has no support at all for hardware surfaces.  You may be able to memory map them and make it work, but that's all up to you to do.
[16:27:51 CEST] <rmoorelxr> corrupted macroblocks in my garbage vid: http://pastebin.com/5Z42Q8rj
[16:30:36 CEST] <the_cuckoo> durandal_1707: (after some distractions) i've got the passthrough stuff working just fine :)
[16:32:07 CEST] <the_cuckoo> one other question regarding the avfilter api - if i want to key frame an effect (like say, have the video rotate for the first or last 25 frames), how would i achieve that? could i somehow specify the rotation for each frame or is there another way to do it?
[16:33:16 CEST] <durandal_1707> if video dimensions change, you generally can't
[16:33:33 CEST] <the_cuckoo> input video will remain the same
[16:33:59 CEST] <the_cuckoo> the output - that can change - i'll composite it on to a fixed background if necessary
[16:34:09 CEST] <durandal_1707> you will need to recreate filtergraph to be safe
[16:34:55 CEST] <the_cuckoo> k - so you can't set up animatable keys in the avfilter graph spec either?
[16:35:22 CEST] <durandal_1707> but if size is same there are commands and enable thing option
[16:35:54 CEST] <kepstin> some filters support changing (some) parameters at runtime via messages, and some let you use expressions in parameters (rotate does, for example) - and you can write animations via the expression syntax
[16:36:18 CEST] <the_cuckoo> kepstin: ah - both sound good
[16:37:13 CEST] Action: kepstin notes that the rotate filter has fixed output size, and will clip the frame if you don't pick an output size big enough to hold the rotated image.
[16:37:15 CEST] <the_cuckoo> could i be lazy and ask for an example  of the expression thing? (i'll chase the messages thing in the api)
[16:37:34 CEST] <the_cuckoo> yeah - thought it did that
[16:37:41 CEST] <furq> the_cuckoo: http://ffmpeg.org/ffmpeg-utils.html#Expression-Evaluation
[16:37:55 CEST] <kepstin> there's also some examples on the rotate filter specifically: https://www.ffmpeg.org/ffmpeg-filters.html#rotate
[16:38:06 CEST] <the_cuckoo> excellent - thanks guys
[16:38:21 CEST] Action: kepstin once made a video that just tilted back and forth every couple seconds, almost made him seasick
[16:38:32 CEST] <the_cuckoo> :)
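A rough sketch of the expression-based animation kepstin is describing, using the rotate filter's per-frame variable n: rotate linearly over the first 25 frames, then hold at 90 degrees (values are purely illustrative, and as noted above the output size should be chosen large enough to avoid clipping):
    ffmpeg -i input.mp4 -vf "rotate='if(lt(n,25),n*PI/2/25,PI/2)'" -c:a copy output.mp4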
[16:41:25 CEST] <Spring> can anyone reproduce this? http://pastebin.com/Fc0bSAXR
[16:43:32 CEST] <Spring> (ffplay playback slowdown after the first loop) for me this has affected every video I've tried. I've written down various start/end time combos that display the slowdown, and those that don't between 5-20 seconds.
[16:43:38 CEST] <durandal_1707> Spring: what happens?
[16:43:59 CEST] <Spring> durandal_1707, https://a.uguu.se/pzzEqQOFmL88_ffplayloopslowdownbug%5BTrailers-UmbrellasofCherbourg%2Cusing%275%27to%278%27trim%5D.webm
[16:44:06 CEST] <Spring> ^ video capture of issue
[16:50:00 CEST] <durandal_1707> Spring: so it happens only with loop option?
[16:51:05 CEST] <Spring> durandal_1707, yes, it doesn't display the slow down upon the first playback
[16:53:17 CEST] <durandal_1707> probably some kind of bug, you could use loop filter as alternative
[16:53:40 CEST] <durandal_1707> for short durations :)
[16:56:47 CEST] <xeche_> jkqxz: So how would I go about creating the surfaces. I mean what does the FFMPEG API actually do for me, where HW acceleration is concerned. At this point it seems like almost nothing.
[16:57:10 CEST] <Spring> durandal_1707, does the loop filter require setting its own start point? Just using loop=100 doesn't appear to do anything
[16:58:32 CEST] <durandal_1707> start point is frame number, should be 0
[16:59:33 CEST] <the_cuckoo> thanks kepstin and durandal_1707 - yeah - that does the rotation thing nicely in my wrapper - nice that it correctly handles scrubbing/reverse play too
[17:01:01 CEST] <durandal_1707> Spring: duration is in frames too, set it to some big enough number
[17:01:22 CEST] <durandal_1707> I mean size
[17:01:56 CEST] Action: the_cuckoo will need to look into the messages thing tomorrow
[17:01:59 CEST] <xeche_> jkqxz: I guess a more fruitful endeavor would be to actually find some example of how QSV / HEVC can be used with FFMpeg.
[17:02:03 CEST] <jkqxz> xeche_:  Yes, lavc is a pretty thin wrapper here.  The useful part is that the AVPacket side of it is in common with lavf, so you can use the encoded data there.  The AVFrame side is not currently very useful, though there is more stuff around for it pending merge (lavfi support, notably).
[17:02:33 CEST] <xeche_> jkqxz: Alright. So am I right in thinking this is why I need the libmfx wrapper?
[17:02:47 CEST] <xeche_> i.e. i need to work with Intel specific structs, types, etc
[17:03:13 CEST] <xeche_> http://ffmpeg.org/pipermail/ffmpeg-cvslog/2015-March/088245.html <- closest thing I've found to an example
[17:03:15 CEST] <Spring> durandal_1707, thanks a bunch, got it working now. BTW what did you mean 'for short durations', the loop filter isn't ideal for longer loops?
[17:03:19 CEST] <jkqxz> To deal with the frames in hardware surfaces, yes.
[17:03:45 CEST] <kepstin> Spring: it buffers the entire section of video to loop as raw video in ram, so yeah.
[17:03:47 CEST] <durandal_1707> Spring: it keeps everything in memory
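A sketch of the loop-filter alternative for previewing a short section, following durandal_1707's hints; 3 seconds at 25 fps is 75 frames, size is in frames, and that many raw frames are held in RAM (all values are placeholders):
    ffplay -vf "trim=start=5:end=8,setpts=PTS-STARTPTS,loop=loop=10:size=75:start=0" input.mp4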
[17:05:08 CEST] <Spring> hmm, may need to test on some high bitrate 1440p footage to make sure this is going to work.
[17:06:15 CEST] <jA_cOp> Hi. I'm using -map_metadata and the ffmetadata file format to add some tags to an OGG/Opus file, and it looks like a single "KEY=VALUE" string is capped at 1023 (perhaps followed by a single byte null terminator to make 1024) bytes? Does anyone know if this is really the case?
[17:06:22 CEST] <Spring> almost a GB, welp
[17:06:49 CEST] <jkqxz> xeche_:  Not sure about the H.265 support.  You need to do something with a special plugin for it (for licensing reasons, maybe?), but I don't know how that works at all.
[17:07:01 CEST] <jA_cOp> I'm trying to add album art in the METADATA_BLOCK_PICTURE format so this limit is a real blocker :(
[17:07:37 CEST] <xeche_> jkqxz: Okay. Thanks for all the help. I'll try to figure something out. :)
[17:07:57 CEST] <xeche_> If anybody knows of an example on how to work with QSV and FFMpeg, I'd be very grateful if you let me know.
[17:09:24 CEST] <jA_cOp> just found this: https://trac.ffmpeg.org/ticket/4833 I guess that's really the case then...
[17:09:29 CEST] <Spring> 1.3GB for 10 loops :/
[17:09:38 CEST] <Spring> I'm fine but I wonder about others
[17:09:56 CEST] <furq> jA_cOp: why not use something like mutagen
[17:10:07 CEST] <furq> using ffmetadata doesn't seem very robust in general
[17:10:36 CEST] <Spring> does the bug tracker get much attention for things like this? May end up submitting it anyway but wondering if it would realistically be noticed.
[17:11:16 CEST] <jkqxz> xeche_:  Use vaapi instead :P  (The libmfx implementation in ffmpeg being named "qsv" is unhelpful, there are other ways to access the quick sync hardware.)
[17:11:46 CEST] <furq> jkqxz: i'm pretty sure they said they were on windows
[17:12:22 CEST] <jkqxz> Hence the ":P".
[17:12:40 CEST] <jA_cOp> furq, the Python library? I was hoping to keep everything in one shell script (and one standalone CLI program to generate the METADATA_BLOCK_PICTURE format for a given file), but thanks, there are few CLI utilities that support adding metadata to OGG/Vorbis so I might use that! So far I've only found kid3-cli to work
[17:13:28 CEST] <durandal_1707> Spring: number of loops doesn't really matters, size does
[17:14:28 CEST] <durandal_1707> report bug anyway, I might look at it
[17:14:30 CEST] <furq> jA_cOp: ogg flac isn't widely supported from what i can tell
[17:14:39 CEST] <furq> not even libflac supports adding metadata to ogg flac
[17:14:48 CEST] <furq> mutagen supports it though which is nice
[17:15:23 CEST] <jA_cOp> Sorry, I meant OGG/Opus the second time too, but yeah, the situation is pretty crappy for Opus as well!
[17:15:42 CEST] <kepstin> the mutagen api is pretty nice for writing tiny little python scripts to do things, but they don't really provide a useful command-line tool :/
[17:15:58 CEST] <furq> oh nvm you said opus, not flac
[17:17:14 CEST] <xeche_> well, at least I can create a QSV hardware context and assign it to the AVCodecContext
[17:17:41 CEST] <xeche_> not that it gets me much further, but there's an av_qsv_alloc_context in the libmfx stuff, somewhere. If anyone's interested.
[17:23:08 CEST] <Spring> 'Submission rejected as potential spam' :D
[17:23:24 CEST] <Spring> 90.82% probability I'm a bot lol
[17:26:56 CEST] <Spring> tried three times to register. Is there any way around this?
[17:27:27 CEST] <Spring> on the trac.ffmpeg.org that is
[17:28:35 CEST] <durandal_1707> Spring: hmm, with stream_loop the only way to do this is in 2 passes
[17:28:47 CEST] <durandal_1707> with ffmpeg
[17:29:37 CEST] <durandal_1707> You cut the relevant part with trim and then use stream_loop on the resulting file
[17:29:55 CEST] <Spring> was using the loop only for the ffplay preview, rather than the encode, but thanks for the heads up
[17:30:11 CEST] <durandal_1707> lavfi doesn't support seeking right now :(
[17:41:35 CEST] <Spring> also is stream_loop documented any place? couldn't find it in the filters docs
[17:42:01 CEST] <durandal_1707> it's an ffmpeg input option
[17:42:28 CEST] <Spring> oh, right of course
[18:20:17 CEST] <Spring> is there any benefit to lower CRF values if the max bitrate is set lower than what the CRF alone would produce? Or should they both be set to roughly the same ballpark quality?
[18:22:43 CEST] <Spring> as I see examples of this in places where the CRF will be far lower than the max bitrate, which is confusing
[18:25:06 CEST] <peg> api usage question: I want to decode from PCM_ALAW and then encode into MP3; the problem is that the frame_size is unknown. How can one do it? What size chunks of raw data should I give the decoder?
[18:26:33 CEST] <peg> superdump: any idea ?
[18:50:02 CEST] <the_cuckoo> peg: i would have thought the number of samples for mp3 encoding would be known (1152 isn't it?) - when decoding, you just need to fill up a buffer, as soon as you have enough samples to encode, you take them out of the buffer and squirt them into the encoder
[19:11:10 CEST] <peg> first I have to fill an AVPacket with data to decode (from PCM_ALAW).
[19:11:36 CEST] <peg> and the question is what the size of this packet should be
[19:11:59 CEST] <kepstin> Spring: with x264, for the same final file size, a crf encode and a 2-pass vbr encode will be the same quality. 1-pass vbr mode is quite a bit different, and using vbv controls (max bitrate, buffer size) changes things a bit.
[19:12:57 CEST] <kepstin> Spring: the use of crf with vbv controls is a bit weird, but it's mostly useful when you're encoding many files and some will be smaller, some larger, but you want to make sure they have a max limit on bitrate.
[19:14:42 CEST] <kepstin> (are you looking at libvpx encoding? it's a bit different there, but what I said above still mostly applies)
[19:16:40 CEST] <kepstin> with libvpx doing vp8, you're required to specify bitrate with crf mode, and it's treated as a "maximum" value, and iirc using 2-pass is recommended even when using crf mode.
[19:19:06 CEST] <Spring> yeah, was coming from vpx usage
[19:19:23 CEST] <Spring> I know 2 pass is practically a necessity there
[19:20:15 CEST] <Spring> but wondered if it made any real difference to have say CRF 8 with max bitrate of 10M if CRF 8 with no cap is like 32M bitrate
[19:21:30 CEST] <kepstin> in that case it should be identical to doing a vbr encode at 10M (there might be some local differences if parts of the video are harder/easier to encode)
[19:22:05 CEST] <kepstin> the idea of libvpx's crf mode is "make files that have this bitrate, but make the files smaller if they don't need that much bitrate to reach this min quality level"
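Hedged examples of the two modes kepstin contrasts above, with placeholder numbers: x264's CRF with a VBV cap, and libvpx's constrained-quality mode where -b:v acts as the ceiling:
    ffmpeg -i in.mp4 -c:v libx264 -crf 18 -maxrate 10M -bufsize 20M out.mp4
    ffmpeg -i in.mp4 -c:v libvpx -crf 10 -b:v 10M out.webm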
[19:22:43 CEST] <Spring> is there some ballpark table of equivalent CRF value bitrates?
[19:23:12 CEST] <kepstin> i dunno, maybe on average, but it's really completely dependent on video content
[19:23:27 CEST] <kepstin> so you have to test on a representative sample of the videos you plan to encode
[19:23:39 CEST] <Spring> I see, best to experiment I suppose.
[19:23:51 CEST] <kepstin> (this mode is really best used as a "set it and forget it" option when encoding many videos with the same settings)
[19:24:19 CEST] <kepstin> it was designed for use encoding youtube videos, really :)
[19:51:17 CEST] <kirked> in a video with a 4:3 SAR & 4:3 FAR, and 16:9 DAR, what's the proper command line parameter to extract a frame using DAR instead of FAR? (frame image is currently squished)
[19:51:25 CEST] <the_cuckoo> peg: iirc, you get the duration/number of samples from the avframe which is returned
[19:51:55 CEST] <peg> the_cuckoo: how ?
[19:52:25 CEST] <the_cuckoo> av_frame_->nb_samples
[19:53:47 CEST] <the_cuckoo> you'll also need to know the number of channels, the type of the audio (pcm16 presumably - 2 bytes) and the interleaving
[20:07:11 CEST] <the_cuckoo> interleaving is either packed or planar (packed for a stereo source is like the first sample is the first sample of the first channel [call it c0s0], next is c1s0, then c0s1, c1s1, etc - while planar is like c0s0, c0s1, .... c1s0, c1s1, ... etc - if that makes sense :))
[20:08:36 CEST] <the_cuckoo> channels is derived from av_frame->layout (there's a set of enums for mono, stereo, 2.1 etc) and you'll need to map those to number of channels
[20:09:16 CEST] <Cryp71c> Not familiar w/ ffmpeg, working with some legacy code. Transcoding a video file using this set of options: ` -i /mnt/media/temp/af0c37c4777847699c4591bedbd8ba01/b2898100fb034dd88045a3c731f4e054 -vf "transpose=2, format=yuv420p" -metadata:s:v rotate=0 -codec:v libx264 -f mp4 -y /mnt/media/temp/af0c37c4777847699c4591bedbd8ba01/d6cc50b786594e0687869e6d2af87d5f`
[20:09:53 CEST] <Cryp71c> Works alright for small videos, but larger videos that are ~10 minutes long and ~250 mb are taking upwards of 50 minutes to transcode on our servers, seems a bit high?
[20:10:33 CEST] <rmoorelxr> its bc of your long pathnames
[20:12:32 CEST] <jkqxz> You haven't supplied any options (like bitrate or quality) to the libx264 encode there.  I've no idea what the defaults are, but they probably aren't what you want.
[20:15:21 CEST] <the_cuckoo> Cryp71c: i'd suggest -threads auto
[20:15:40 CEST] <the_cuckoo> but there is probably more tuning you can do
[20:16:28 CEST] <bencoh> or just change the preset ;)
[20:16:55 CEST] <bencoh> default for x264 is medium iirc (not sure though) ... which can be quite slow
[20:17:31 CEST] <furq> -threads auto is the default anyway
[20:17:41 CEST] <the_cuckoo> ah - k :)
[20:18:11 CEST] <the_cuckoo> hmm - nah - it says it's 1
[20:18:53 CEST] <furq> it's the default for x264
[20:19:00 CEST] <the_cuckoo> ah - gotcha
[20:24:34 CEST] <s00b4u> Hi, I am trying to use FATE but am not able to
[20:25:13 CEST] <s00b4u> When I try this command: make fate SAMPLES=fate-suite/
[20:25:27 CEST] <s00b4u> I get error: "make: *** No rule to make target `fate'.  Stop."
[20:25:50 CEST] <s00b4u> Can anyone suggest possible reasons?
[20:26:12 CEST] <durandal_1707> from what directory?
[20:26:31 CEST] <s00b4u> I tried from top level source directory
[20:26:34 CEST] <s00b4u> ffmpeg
[20:27:05 CEST] <s00b4u> then also from ffmpeg_sources
[20:27:10 CEST] <s00b4u> same error
[20:27:16 CEST] <durandal_1707> and how you got source?
[20:27:30 CEST] <s00b4u> clone from Github
[20:27:33 CEST] <the_cuckoo> have you run configure already?
[20:27:43 CEST] <s00b4u> yes, installation is fine
[20:27:51 CEST] <s00b4u> I can transcode videos
[20:28:35 CEST] <s00b4u> I think I should mention about the environment on which I am using FFMPEG
[20:28:47 CEST] <sofakng> does anybody know what is required to cross-compile d3d11va support on linux?
[20:29:07 CEST] <s00b4u> I am using it on "Bash on Ubuntu on Windows"
[20:29:18 CEST] <durandal_1707> omg
[20:29:36 CEST] <s00b4u> ??
[20:30:23 CEST] <the_cuckoo> i was wondering how well that would work myself :) - kinda cool that you got an ffmpeg build from it
[20:32:01 CEST] <durandal_1707> try just : make fate
[20:32:01 CEST] <s00b4u> Yes, this is a kind of new platform
[20:32:25 CEST] <s00b4u> I just tried.. but same error
[20:32:48 CEST] <the_cuckoo> grep fate Makefile
[20:32:49 CEST] <s00b4u> It actually feels like "real" linux
[20:33:44 CEST] <the_cuckoo> it's sandboxed isn't it? can you start native programs from it?
[20:34:21 CEST] <s00b4u> It's a bash shell running on top of the windows kernel
[20:34:44 CEST] <s00b4u> so, the bash system calls are being translated according to windows kernel
[20:35:03 CEST] <durandal_1707> well something is obviously wrong
[20:35:41 CEST] <s00b4u> When I try "grep fate makefile", I get "No such directory or file"
[20:35:51 CEST] <the_cuckoo> upper case M
[20:36:41 CEST] <s00b4u> with upper case also
[20:36:44 CEST] <s00b4u> same issue
[20:37:21 CEST] <the_cuckoo> guessing you probably need to run configure again - and if that file's not there, then you're in the wrong directory
[20:37:36 CEST] <s00b4u> ok, I will try that now
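For reference, the usual FATE workflow from a configured build directory looks roughly like this; the fate targets only exist once configure has generated the build's Makefile machinery:
    ./configure
    make fate-rsync SAMPLES=fate-suite/
    make fate SAMPLES=fate-suite/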
[20:37:46 CEST] <Spring> does drawtext require setting a font? It says the default is 'Sans' but I don't have a font by that name installed.
[20:38:46 CEST] <bencoh> "so, the bash system calls are being translated according to windows kernel" err
[20:40:23 CEST] <Spring> nvm, thought it was the path.
[20:42:58 CEST] <s00b4u> @bencoh: https://blogs.msdn.microsoft.com/wsl/2016/04/22/windows-subsystem-for-linux-overview/
[20:43:36 CEST] <s00b4u> What I mentioned earlier was like fitting an elephant in a cup
[20:44:12 CEST] <s00b4u> but in one line, its actually what i mentioned earlier
[20:44:32 CEST] <s00b4u> I quote from the blog: " By placing unmodified Linux binaries in Pico processes we enable Linux system calls to be directed into the Windows kernel."
[20:58:32 CEST] <kepstin> Spring: i'm pretty sure the drawtext stuff uses fontconfig, where 'Sans' is set up as an alias that'll pick some reasonable sans-serif font that you have installed.
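A minimal drawtext sketch assuming a build with fontconfig enabled; without fontconfig, point fontfile= at a .ttf path instead (text, size and position are placeholders):
    ffmpeg -i in.mp4 -vf "drawtext=text='hello':font=Sans:fontsize=48:x=10:y=10" out.mp4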
[21:11:29 CEST] <Cryp71c> jkqxz, regarding your suggestion on picking values for bitrate or quality for libx264, this page seems to have info on what I need? https://trac.ffmpeg.org/wiki/Encode/H.264
[21:11:35 CEST] <Cryp71c> wanted to make sure before I read the whole thing
[21:11:59 CEST] <furq> you can probably stop reading at the end of the crf section
[21:12:03 CEST] <Cryp71c> bencoh, when you had mentioned preset earlier were you referring to some of the default values that are mentioned on that page ^ ?
[21:12:28 CEST] <furq> Cryp71c: http://dev.beandog.org/x264_preset_reference.html
[21:13:54 CEST] <the_cuckoo> furq: re: threads - wouldn't -threads auto before the -i also apply to the input decode? (i don't use ffmpeg command line that often - more interested in the libs - but i thought that was how it worked anyway) - could benefit Cryp71c if the bottleneck is in the decode?
[21:14:34 CEST] <Cryp71c> furq, thanks, are crf / two pass abr mutually exclusive? The phrasing suggests they are two possible rate control modes and you would typically choose between them?
[21:14:40 CEST] <furq> i don't think multithreaded decoding works with h264
[21:14:46 CEST] <furq> Cryp71c: yes
[21:14:53 CEST] <furq> and two-pass will naturally take twice as long
[21:15:03 CEST] <furq> for little or no benefit with x264
[21:15:35 CEST] <Cryp71c> Which one is used by default? I'm comparing each of the two examples with the command this legacy code is using and can't tell?
[21:15:41 CEST] <Cryp71c> (cant tell which one we're using)
[21:15:46 CEST] <furq> the defaults are -crf 23 -preset medium
[21:15:54 CEST] <furq> which are usually pretty reasonable
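So Cryp71c's command with an explicit speed/quality trade-off spelled out might look something like this; the preset and CRF values are illustrative, and a faster preset trades some compression efficiency for encode speed:
    ffmpeg -i input.mp4 -vf "transpose=2,format=yuv420p" -metadata:s:v rotate=0 -c:v libx264 -preset faster -crf 23 -f mp4 -y output.mp4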
[21:16:02 CEST] <the_cuckoo> furq: yeah - threads work with h264
[21:16:28 CEST] <Cryp71c> Does 10 minutes / 250 Mb taking 50 minutes to transcode sound typical?
[21:16:33 CEST] <furq> no
[21:16:43 CEST] <furq> what cpu is that
[21:16:46 CEST] <Cryp71c> 1 sec
[21:16:59 CEST] <kepstin> Cryp71c: without knowing anything about what computer you're using to encode, can't say anything about whether that's typical.
[21:17:00 CEST] <furq> also what resolution is the input video
[21:19:00 CEST] <furq> kepstin: i would say that's pretty atypical for something that was described as a server
[21:19:09 CEST] <furq> unless the input is 4k or something
[21:19:40 CEST] <kepstin> hey, "server" doesn't mean much, it could be a 10yr old sparc or something ;)
[21:19:50 CEST] <Cryp71c> This is on an Azure VM, getting the specs from our devops guy
[21:19:55 CEST] <furq> 10 year old sparcs are pretty atypical though
[21:20:05 CEST] <furq> at least i hope so
[21:20:20 CEST] <kepstin> hmm, VM. So you're probably dealing with low core count and maybe shared scheduling :/
[21:20:23 CEST] <Cryp71c> and the video is 1920x1080, 30fps
[21:20:29 CEST] <furq> is the cpu contended
[21:20:39 CEST] <furq> that will obviously slow things down a lot
[21:21:57 CEST] <Cryp71c> No its not supposed to be on this tier of VM hosting
[21:25:16 CEST] <jookiyaya> BIG NEWS: amd destroys intel http://imgur.com/a/EJlNF
[21:25:52 CEST] <rmoorelxr> is that safe to click on
[21:26:06 CEST] <furq> christ not this again
[21:26:47 CEST] <kepstin> rmoorelxr: unless you'll get in trouble at work for having some meaningless graphs on your screen, yeah.
[21:27:47 CEST] <furq> rmoorelxr: it turns out a cpu with hashing extensions is faster at sha1 hashing than some cpus without hashing extensions
[21:27:54 CEST] <furq> imagine that
[21:27:56 CEST] <rmoorelxr> lol
[21:28:49 CEST] <jookiyaya> rmoorelxr yes it is safe
[21:28:51 CEST] <jookiyaya> it's just a picture
[21:29:38 CEST] <furq> you might as well link a list of cpus ranked by TDP with the 9590 at the top
[21:29:40 CEST] <kepstin> as far as I can tell, there's no DRM keys hidden in that picture, so the cops can't pick you up for DMCA violations for downloading it.
[21:30:16 CEST] <furq> i know someone who owns a 9590 ;_;
[21:30:38 CEST] <furq> the crying face is inaccurate because it's really funny
[21:31:23 CEST] <furq> he bought a 9590 and crossfire 7990s and then bought a chinese 500W psu to run it all
[21:31:33 CEST] <furq> and then had the audacity to complain when it exploded
[21:31:56 CEST] <rmoorelxr> haha and those 500W are really just aspirational
[21:43:08 CEST] <iive> jookiyaya: i asked you to stop with that bullshit.
[21:43:55 CEST] <iive> furq: actually somebody claimed that the test is doctored, because there are intel cpus with the sha1 extension and their speed there looks like pure software calculation.
[21:44:22 CEST] <furq> that wouldn't surprise me
[21:44:45 CEST] <furq> although even if it is remarkably fast at sha1, who cares
[21:44:45 CEST] <jookiyaya> iive who said that
[21:44:52 CEST] <iive> yeh, and that was told to jookiyaya the first time he posted it.
[21:45:08 CEST] <iive> and he had posted same link at least 2 times since.
[21:45:36 CEST] <iive> so, i'm tempted to ban him as spammer.
[21:45:52 CEST] <jookiyaya> iive technically it's a different link
[21:46:09 CEST] <jookiyaya> i changed the image site
[21:46:12 CEST] <furq> iive: i think he just told you to do it
[21:48:20 CEST] <iive> 2016-08-07 14:55:04 (+0300)<jkqxz>      jookiyaya:  Zen adds the hardware instructions for hashing functions, so obviously it can do them at close to the memory line rate.  (Skylake and Goldmont have them too - presumably that benchmark is deliberately not using them on the 6700K to make the results more amusingly misleading.)
[21:48:22 CEST] <jkqxz> I thought that was rigged initially, but on further investigation Skylake actually doesn't have the SHA extensions.  Apollo Lake (Goldmont, the low power core for cheap devices) will have them very soon, though, and should therefore come at or close to the top of that table.
[21:54:14 CEST] <iive> ok, good to know.
[22:03:50 CEST] <jookiyaya> they should add an extension so it does x264/x265 encoding better
[22:04:16 CEST] <jookiyaya> or is that too hard to do
[22:32:03 CEST] <kepstin> jookiyaya: there was some stuff in avx2 that helped x264 a bit, but I dunno about x265
[22:32:15 CEST] <jookiyaya> oh
[22:32:17 CEST] <jookiyaya> what is avx2
[22:32:41 CEST] <kepstin> of course, the zen 'apu' (integrated graphics) processors will probably include a hardware h265 encoder, but that's completely distinct from x265.
[22:33:38 CEST] <jookiyaya> then what software would it help with
[22:33:54 CEST] <jookiyaya> if it doesn't assist the x265
[22:40:26 CEST] <furq> kepstin: summit ridge isn't an apu
[22:40:50 CEST] <furq> it is also a really wonky name
[22:41:15 CEST] <kepstin> sure, but they are gonna release an apu with zen cores at some point.
[22:41:46 CEST] <furq> i preferred it when they named their uarches after racetracks
[22:42:00 CEST] <kepstin> i liked it back when they named them after horse breeds.
[22:43:11 CEST] <furq> i liked it when you could unlock the multiplier with a pencil
[22:43:47 CEST] <furq> instead of unlocking the multiplier by paying intel an extra £100
[23:26:20 CEST] <bencc> how to choose a good cpu for transcoding vp8 to h264?
[23:26:31 CEST] <bencc> are there features to check?
[23:27:10 CEST] <bencc> is it important to have a graphics card or on-board gpu?
[23:44:56 CEST] <kepstin> bencc: basically no graphics cards support decoding vp8, so you need cpu for that. And in most cases, using a cpu encoder for h264 will provide better results than a gpu/dedicated hardware encoder
[23:45:27 CEST] <kepstin> so the general recommendation is to use a reasonably modern intel chip, ideally with multiple cores, and don't worry about gpu at all.
[23:46:38 CEST] <kepstin> hyperthreading seems to help with x264, fwiw.
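Putting that advice into a command, a hedged vp8-to-h264 sketch: CPU decode of vp8 and x264 on the CPU for encode, with placeholder preset, CRF and audio settings:
    ffmpeg -i input.webm -c:v libx264 -preset veryfast -crf 23 -c:a aac -b:a 128k output.mp4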
[00:00:00 CEST] --- Wed Aug 10 2016

