burek021 at gmail.com
Thu Jun 14 03:05:02 EEST 2018
[00:10:42 CEST] <kepstin> it's the faststart option that makes ffmpeg rebuild the file a second time
[00:11:50 CEST] <kepstin> (it's not reading the inputs twice - it reads them once writing to a temp file, then reads the temp file and rewrites it)
[00:31:22 CEST] <AyrA> @kepstin Windows Resource Monitor says otherwise. I don't mean moving the moov atom, which is done at the end; I'm fully aware that takes additional time. I'm talking about input reads.
[00:32:18 CEST] <AyrA> According to resmon it reads the input file completely while not writing to the destination at all, then suddenly ffmpeg starts writing and reads the entire source again.
[00:32:53 CEST] <nicolas17> and then it does the same for the next input file?
[00:32:58 CEST] <AyrA> yes
[00:33:15 CEST] <kepstin> It should not be reading the whole file during initial probe, that's very strange
[00:33:16 CEST] <AyrA> it stops counting frames in the console and then eventually writes them.
[00:33:33 CEST] <AyrA> I'm only experiencing this when using the concat filter
[00:33:49 CEST] <AyrA> just "-c copy" a single file is not doing this
[00:34:04 CEST] <AyrA> The input files are from a GoPro if that helps
[00:34:38 CEST] <kepstin> That's concat format, not filter. Still very strange.
[00:35:15 CEST] <AyrA> Here is the info of one of the input files: https://pastebin.com/ruHwxUZq
[00:35:53 CEST] <AyrA> could it have something to do with this line that is printed for every input file: "[mov,mp4,m4a,3gp,3g2,mj2 @ 000000000357f640] Auto-inserting h264_mp4toannexb bitstream filter"
[00:37:46 CEST] <nicolas17> the concat format will probe each file to get its duration
[00:37:58 CEST] <nicolas17> but it shouldn't need to read the whole file for that
[00:38:04 CEST] <nicolas17> does ffprobe read the whole file?
[00:38:49 CEST] <kepstin> Those are normal mp4 not fragmented it looks like, so seeking to end to find length should be fast
[00:38:50 CEST] <AyrA> "ffprobe.exe -i T:\Media\Gopro\2018-06-06\Test\GH020048.MP4" takes about 0.5 seconds so I assume no it doesn't.
[00:39:37 CEST] <AyrA> By the way I just checked, the "Auto-inserting [...] bitstream filter" line is not printed when copying a single file, only when concatenating
[00:39:43 CEST] <nicolas17> yeah
[00:39:48 CEST] <nicolas17> see docs for the concat format
[00:39:56 CEST] <nicolas17> auto_convert
[00:40:03 CEST] <nicolas17> "If set to 1, try to perform automatic conversions on packet data to make the streams concatenable. Currently, the only conversion is adding the h264_mp4toannexb bitstream filter to H.264 streams in MP4 format. This is necessary in particular if there are resolution changes."
[00:40:19 CEST] <transhuman> kepstin, interesting enough the same ffmpeg script runs on windows with no stuttering of output!
[00:40:40 CEST] <nicolas17> maybe you can try -f concat -safe 0 -auto_convert 0
[00:41:41 CEST] <AyrA> does this argument need to go before or after the input file?
[00:41:53 CEST] <nicolas17> those are input options, so before the -i
[00:41:56 CEST] <nicolas17> see where you have -safe now :)
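A minimal sketch of the suggested invocation, with hypothetical file names (the points being made above are the concat list format and the placement of input options before the -i they modify):

```shell
# Hypothetical list file for the concat demuxer; paths are placeholders.
cat > list.txt <<'EOF'
file 'GH010048.MP4'
file 'GH020048.MP4'
EOF

# -f concat, -safe 0 and -auto_convert 0 are input options,
# so they must come before the -i they apply to:
echo ffmpeg -f concat -safe 0 -auto_convert 0 -i list.txt -c copy joined.mp4
```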
[00:42:56 CEST] <nicolas17> another possibility is that it's trying too hard to find info for the unknown streams (timecode, meta)
[00:43:51 CEST] <AyrA> I doubt it. It does print some information about not being able to figure out the format of these streams but this is only done once per process not per file so I assume it skips over it
[00:50:51 CEST] <AyrA> I tried the auto_convert option but that doesn't help. Output is playable but the input is still processed twice.
[00:51:42 CEST] <AyrA> I looked at the network traffic (input is on a networked drive) and when ffmpeg does the first pass where it doesn't write anything, it only reads at half the speed compared to when it actually starts processing the file
[00:53:09 CEST] <nicolas17> hmm
[00:53:31 CEST] <nicolas17> Windows mounted drive? SMB?
[00:53:36 CEST] <AyrA> yes
[00:53:44 CEST] <AyrA> Connected via gbit ethernet
[00:54:05 CEST] <AyrA> Throughput is usually around 80-100 MB/s
[00:54:06 CEST] <nicolas17> I thought maybe ffmpeg is trying to read only a piece of the file but the network filesystem reads it whole, but that can't be it with SMB
[00:55:24 CEST] <AyrA> I want to add here that there is almost 0 CPU usage of ffmpeg during this process.
[00:56:01 CEST] <AyrA> Could it be that it does some very weird seeking?
[00:59:12 CEST] <AyrA> It looks like ffmpeg processes a few frames before reading the file blindly. The console progress line always reports 51 processed frames before it stops rendering frames
[01:02:52 CEST] <AyrA> I played around a bit and this command shows similar network traffic patterns: "ffprobe.exe -i T:\Media\Gopro\2018-06-06\Test\GH020048.MP4 -count_frames"
[01:03:14 CEST] <nicolas17> you mean that ffprobe reads the file twice?
[01:04:09 CEST] <furq> are you sure it's reading the whole thing
[01:04:14 CEST] <AyrA> Not sure, but it shows the same network usage while the CPU usage stays at almost 0. Difference is that ffprobe suddenly drops from about 20MB/s to 6MB/s
[01:04:24 CEST] <furq> if those mp4s aren't faststart then it'll have to seek to the end and then back to the start
[01:04:32 CEST] <furq> so maybe fseek over smb doesn't work properly or something
[01:05:00 CEST] <AyrA> @furq Seeking should work. I can play the mp4 files directly in VLC without any issues or wait times.
[01:05:23 CEST] <furq> is this a recent ffmpeg
[01:05:32 CEST] <AyrA> Just downloaded it a few hours ago
[01:05:38 CEST] <furq> weird
[01:05:59 CEST] <furq> lavf is definitely smart enough to not read the entire file over actual network protocols
[01:06:18 CEST] <nicolas17> yeah but this is a local file as far as lavf is concerned
[01:06:22 CEST] <furq> i assume it does the same for local files (or files that are presented as local) but i've never had cause to investigate
[01:06:53 CEST] <AyrA> "ffplay.exe T:\Media\Gopro\2018-06-06\Test\GH010048.MP4" <-- This works flawlessly too, apart from the fact that it can't open my audio device for some reason
[01:07:23 CEST] <furq> yeah ffplay isn't very good
[01:07:31 CEST] <nicolas17> if you suddenly seek near the end of the video in ffplay, it's quick?
[01:07:36 CEST] <furq> it's there for debugging more than anything else
[01:07:36 CEST] <AyrA> yes
[01:07:43 CEST] <AyrA> almost instant
[01:09:55 CEST] <AyrA> I want to say here that ffprobe is still going as of now.
[01:10:41 CEST] <AyrA> It has been counting frames for almost 10 minutes now. The video is 11 minutes long
[01:10:43 CEST] <nicolas17> x_x
[01:16:07 CEST] <AyrA> It's done. Seems that counting frames takes about as much time as playing it normally would for whatever reason.
[01:16:16 CEST] <AyrA> I try the same with a local copy now
[01:29:24 CEST] <AyrA> I'm logging process activity of ffprobe now. It seems to read the file in two steps
[01:29:50 CEST] <AyrA> The first step is to always get 0x7FFF bytes
[01:30:09 CEST] <AyrA> The second step is to read around 70 kb, probably somehow determined from the first step
[01:31:02 CEST] <nicolas17> and yet it ends up fetching the whole file over the network? something is wrong at the smb level then :)
[01:32:11 CEST] <AyrA> It doesn't fetch the whole file over the network in one go. The two steps I described are repeated around 100 times per second, which is insanely inefficient even for local disks
[01:32:32 CEST] <nicolas17> oh I thought you meant those were the total reads
[01:34:10 CEST] <nicolas17> let's see
[01:34:13 CEST] <nicolas17> ffprobe test.mp4
[01:34:54 CEST] <AyrA> Here are all the read operations performed within 1 second (local file): https://pastebin.com/yJx8p8CF
[01:36:40 CEST] <AyrA> The process has finished now
[01:37:07 CEST] <AyrA> I can give you the process monitor log file but it's just the section I sent you repeated over and over again
[01:39:08 CEST] <nicolas17> https://paste.kde.org/pplkjhgur this is what I see on Linux
[01:39:46 CEST] <nicolas17> note the input file is 9.5GB, and it clearly didn't read it all
[01:40:42 CEST] <nicolas17> -count_frames unsurprisingly reads the whole file
[01:40:48 CEST] <nicolas17> using 32KB chunks is reasonable
[01:43:40 CEST] <nicolas17> maybe you're somehow getting no readahead?
[01:43:57 CEST] <AyrA> 32kb seems rather small for today's world. If your HD is capable of transferring 80MB/s it would result in over 2000 context switches which is kind of expensive
[01:45:04 CEST] <AyrA> The problem is also merely the first pass. The second pass which actually processes the file and writes to output is faster
[01:45:27 CEST] <AyrA> I can quickly check how fast I can pull data from the share with 32kb chunks
[01:45:56 CEST] <nicolas17> what latency do you have to the file server?
[01:46:39 CEST] <AyrA> 1 or 2 ms probably. It sits next to me and is connected via a gigabit switch which is not busy at all.
[01:52:57 CEST] <nicolas17> it's like local caching / readahead / oplocks / something is not working properly in your network mount
[02:01:43 CEST] <AyrA> I just checked. If I read 32kb in a loop it gets me about 60 MB/s from the share, so it's definitely not a caching/readahead/latency problem.
[02:02:17 CEST] <nicolas17> but isn't that the same reading pattern you're seeing from ffprobe?
[02:02:27 CEST] <AyrA> Increasing the buffer to 500 kb reads at 80 MB/s
[02:02:56 CEST] <AyrA> ffprobe has a weird reading pattern that is a two step process
[02:03:55 CEST] <nicolas17> you can make your test loop read 32768 bytes and then read randint(70000,80000) bytes
[02:10:28 CEST] <AyrA> I did. Reads about 70 MB/s
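The chunk-size experiment being discussed can be reproduced with dd; the scratch file here is a local stand-in (in the real setup the path would point at a file on the SMB share):

```shell
# Create a 16 MiB scratch file as a stand-in for a file on the share.
dd if=/dev/zero of=testfile.bin bs=1M count=16 2>/dev/null

# Read it back in 32 KiB chunks (roughly libavformat's default read size)
# and again in 512 KiB chunks; dd reports the throughput of each run.
dd if=testfile.bin of=/dev/null bs=32k
dd if=testfile.bin of=/dev/null bs=512k
```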
[02:10:45 CEST] <nicolas17> then what could ffprobe possibly be doing wrong? :/
[02:11:45 CEST] <AyrA> I don't know. It's either somehow not opening the file with caching enabled or it seeks around the file which tends to destroy lookahead
[02:12:02 CEST] <nicolas17> maybe compare the OpenFile or CreateFile calls between ffprobe and your test app, in Process Monitor
[02:16:21 CEST] <AyrA> The calls seem to be identical
[02:17:34 CEST] <nicolas17> and the ReadFile calls look similar?
[02:19:03 CEST] <AyrA> Sort of. For some reason the path gets translated into UNC in my application (C code using "fopen"). This could also just be an artefact from process monitor running elevated
[02:19:17 CEST] <nicolas17> I mean size-wise
[02:19:38 CEST] <nicolas17> fread could be doing its own buffering
[02:20:00 CEST] <nicolas17> (though I bet ffmpeg uses fread too)
[02:28:44 CEST] <AyrA> The pattern of fread differs from ffprobe
[02:29:20 CEST] <AyrA> For some reason, a single fread call reads 4k first and then the rest of the 32k buffer. Same for the second read which has a random size
[02:29:31 CEST] <nicolas17> ah ffmpeg uses read()
[02:30:32 CEST] <AyrA> If you can tell me where this read call is from I can try to simulate it in my program
[02:30:40 CEST] <AyrA> I can't find it in my C documentation
[02:31:05 CEST] <nicolas17> I think it's POSIX rather than C standard lib, it's in unistd.h
[02:31:35 CEST] <AyrA> It's weird though, because the process log of fread() suggests that its performance should be worse than that of read()
[02:31:47 CEST] <AyrA> Since read is POSIX, how is it implemented in windows?
[02:31:48 CEST] <nicolas17> yeah, hm
[02:34:34 CEST] <AyrA> I can see that the header file is present for my compiler too but including it throws errors about off_... not being defined so I don't think I am supposed to use it
[03:30:07 CEST] <darkdrgn2k> hi all
[03:30:19 CEST] <darkdrgn2k> I'm trying to stream a video to an RTMP server
[03:30:23 CEST] <darkdrgn2k> " ffmpeg -f mpegts -i /dev/video1 -f mpegts -vcodec copy -strict -2 -f flv -c:a aac "rtmp://10.100.80.1/tomesh/tv" "
[03:30:35 CEST] <darkdrgn2k> however it runs for about 15 seconds then just stops, any idea?
[03:30:44 CEST] <darkdrgn2k> (this does not happen if i do for example hls)
[10:20:20 CEST] <bashprogfortysix> really cool that ffplay has a mute option. how does it do that using alsa? with pulse i can go into pavucontrol and mix things, so i can play music and mute at the same time, but with mplayer it's just using the alsamixer volume when pressing 'mn'
[10:22:09 CEST] <bashprogfortysix> good times
[10:49:45 CEST] <michal_f> can ffmpeg output video from an EDL file? or any other method to make a movie out of a list of JPEG sequences?
[10:50:21 CEST] <michal_f> this was my question from yesterday, but I had to leave the office and I can't see in the logs if it was answered by anybody
[10:55:12 CEST] <furq> michal_f: doesn't look like it
[10:55:18 CEST] <furq> there's an open bug report requesting it
[10:55:30 CEST] <furq> if it's a plaintext format you could probably write a script to convert it to something else
[10:56:05 CEST] <michal_f> actually I generate it myself with python, so potentially I can make it whatever ffmpeg likes
[10:56:21 CEST] <michal_f> sending over STDIN perhaps ?
[11:00:00 CEST] <furq> if you actually need it to be vfr based on the timecodes then idk of a quick way to do it with ffmpeg
[11:00:34 CEST] <furq> one thing that would work is just giving ffmpeg the list of input files, creating a cfr file, then writing an mkvmerge timecode file and remuxing it
[11:00:39 CEST] <furq> either with mkvmerge or l-smash
[11:00:51 CEST] <furq> https://manpages.debian.org/stretch/mkvtoolnix/mkvmerge.1.en.html#EXTERNAL_TIMECODE_FILES
[11:01:15 CEST] <michal_f> it's a bit simpler actually. no specific timecodes, just sequenceA range 100-200, sequenceB range 50-120, ...
[11:01:22 CEST] <michal_f> thanks for link, reading now
[11:01:35 CEST] <furq> if the output is cfr then that's easier
[11:01:45 CEST] <furq> is it just a bunch of jpegs
[11:02:39 CEST] <michal_f> yes, just jpegs
[11:03:54 CEST] <furq> ffmpeg -pattern_type glob -i "foo[100,200].jpg" -pattern_type glob -i "foo[50-120].jpg" [...] -lavfi concat out.mp4
[11:04:03 CEST] <furq> er, 50,120
[11:04:33 CEST] <furq> hopefully that syntax works with ffmpeg's globbing
[11:09:42 CEST] <furq> nvm apparently that's a zsh extension
[11:10:39 CEST] <furq> !demuxer concat @michal_f
[11:10:40 CEST] <nfobot> michal_f: http://ffmpeg.org/ffmpeg-formats.html#concat-1
[11:13:44 CEST] <michal_f> furq: no worries, zsh here too :)
[11:43:16 CEST] <michal_f> excerpt from the concat docs:
[11:43:36 CEST] <michal_f> "The timestamps in the files are adjusted so that the first file starts at 0 and each next file starts where the previous one finishes. Note that it is done globally and may cause gaps if all streams do not have exactly the same length."
[11:44:00 CEST] <michal_f> does that mean I'm out of luck if my sequences have different lengths?
[11:45:16 CEST] <klaxa> i think "streams" here is referring to media-streams such as video, audio, subtitle, data
[11:45:25 CEST] <klaxa> although i don't think data is timestamped?
[11:45:58 CEST] <klaxa> i.e. if you have a file where audio is longer than video it may lead to a video gap between the sequences you want to concat
[11:46:08 CEST] <klaxa> or at least that's how i understand it
[11:46:20 CEST] <michal_f> thanks
[11:55:00 CEST] <faLUCE> Hello. I have video1.mp4 with video=h264 1080P, audio=mpeg AAC (mp4a), and video2.mp4 with video=h264 720P, audio= mpeg AAC ( mp4a). They have the same content, but video1's duration is 1 hour, video2's duration is 1 hour and 10 minutes. How can I concat the last 10 minutes of video2 to video1 ? thanks
[11:56:17 CEST] <vlambda> Hello, I see "This channel is publicly logged" in the header, where can these logs be accessed?
[12:08:10 CEST] <michal_f> ok, trying with concat. I generated my file listing sequences, like this:
[12:08:34 CEST] <michal_f> file //framestore-1/frames/BazyleaRP_rgba_v2.%04d.jpeg
[12:09:12 CEST] <michal_f> ffmpeg fails with error: Could find no file with path xxxxx and index in the range 0-4
[12:09:58 CEST] <michal_f> Impossible to open xxxxxx
[12:29:31 CEST] <michal_f> https://superuser.com/questions/1075839/concatenating-multiple-jpeg-sequences-to-one-mp4-file/1075930#1075930?newreg=5b930ed0e0bd495a8177362ab9e9e5d8
[12:29:38 CEST] <michal_f> this works for me
[12:48:39 CEST] <urbicid> Hi all, is it possible to show videos on something like a
[12:49:11 CEST] <urbicid> "virtual desktop" and capture this desktop with ffmpeg and stream to an rtmp server?
[12:50:37 CEST] <urbicid> or do you have a better variant for how to do this? ;o
[12:57:48 CEST] <furq> vlambda: http://lists.ffmpeg.org/pipermail/ffmpeg-devel-irc/
[13:50:58 CEST] <vlambda> @furq Thanks!
[14:25:20 CEST] <gvakarian> Hi, I know this is not an ffmpeg bug but I'd like to know what other ffmpeg users are doing. I often run ffmpeg on windows to transcode stuff to h265. While it's transcoding, even with the ffmpeg process set to lowest priority, it makes the nvidia driver lag and occasionally restart if it lags for several seconds. I didn't have this problem before Windows 10 and it's quite annoying, does anyone else have it?
[14:55:46 CEST] <BtbN> are you using nvenc?
[15:00:22 CEST] <analogical> is FFmpeg able to create Matroska (MKV) files?
[15:00:35 CEST] <Mavrik> yes.
[15:08:44 CEST] <analogical> when I create an MKV file with FFmpeg how do I add both the video and audio streams?
[15:11:30 CEST] <karasu> Question about changing framerate without re-encoding (while changing the duration too). Is there a simpler solution than this two-step command: 'ffmpeg -i 30fps.mp4 -c copy -f h264 30fps.h264' then 'ffmpeg -r 120 -i 30fps.h264 -c copy 120fps.mp4'?
[15:17:17 CEST] <karasu> @analogical : https://superuser.com/questions/277642/how-to-merge-audio-and-video-file-in-ffmpeg
[15:18:20 CEST] <analogical> karasu, thanks!
[15:59:26 CEST] <michal_f> Filtergraph 'scale=1280:720' was specified through the -vf/-af/-filter option for output stream 0:0, which is fed from a complex filtergraph.
[15:59:27 CEST] <michal_f> -vf/-af/-filter and -filter_complex cannot be used together for the same stream.
[15:59:38 CEST] <michal_f> can anybody give tips ?
[15:59:46 CEST] <michal_f> how to resize output ?
[15:59:56 CEST] <furq> the error message is pretty clear
[16:00:05 CEST] <furq> you need to combine -vf and -filter_complex into one filterchain
[16:03:45 CEST] <michal_f> how do I refer to the output in filter_complex?
[16:03:56 CEST] <michal_f> what I do is resize all inputs to 1280x720
[16:04:04 CEST] <furq> pastebin the command
[16:04:31 CEST] <michal_f> ok. just a second. it's giant :)
[16:12:21 CEST] <michal_f> I reduced it from original, as I try to concatenate dozens of sequences
[16:12:22 CEST] <michal_f> https://pastebin.com/yb97K5L6
[16:12:46 CEST] <michal_f> input streams are of varying sizes, either HD1080 or HD720
[16:13:18 CEST] <michal_f> (ignore 1st line)
[16:14:02 CEST] <furq> from that error i'm guessing you forgot to scale one of the inputs
[16:14:05 CEST] <kepstin> michal_f: you need to feed the output of the scale filter to the input of the concat filter. You do that by adding an output pad on the scale filter, like scale=1280:720[scaled0]; and then you use that pad as input to the concat filter, like [scaled0][scaled1][scaled2]concat=n=3
[16:14:18 CEST] <furq> oh right yeah
[16:15:33 CEST] <michal_f> great ! let me try
[16:16:34 CEST] <michal_f> that seems to be it! thank you guys, really appreciated
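kepstin's description (scale each input into a labelled pad, feed the pads to concat) can be sketched as a small script that builds the filtergraph string for N inputs; the file names here are placeholders:

```shell
# Build a filter_complex string that scales each input to 1280x720 and
# feeds the labelled pads into the concat filter. Inputs are placeholders.
inputs="a.mp4 b.mp4 c.mp4"
n=0; fc=""; pads=""
for f in $inputs; do
    fc="${fc}[${n}:v]scale=1280:720[v${n}];"   # one scale chain per input
    pads="${pads}[v${n}]"                      # collect its output pad
    n=$((n+1))
done
fc="${fc}${pads}concat=n=${n}:v=1:a=0[out]"
echo "$fc"
# → [0:v]scale=1280:720[v0];[1:v]scale=1280:720[v1];[2:v]scale=1280:720[v2];[v0][v1][v2]concat=n=3:v=1:a=0[out]
# then: ffmpeg -i a.mp4 -i b.mp4 -i c.mp4 -filter_complex "$fc" -map "[out]" out.mp4
```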
[16:21:22 CEST] <binarym> hi all. I want to generate a slideshow from regular png files to an HLS stream
[16:21:27 CEST] <binarym> i'm using the following command: /usr/bin/ffmpeg -framerate 1/10 -loop 1 -i feed_pics/feed_%d.png -r 25 -f hls -hls_time 6 -hls_init_time 6 -hls_list_size 10 -segment_list_flags +live -use_localtime 1 -use_localtime_mkdir 1 -hls_segment_filename chunks-%Y%m%d/%H%M%S.ts high.m3u8
[16:21:46 CEST] <binarym> it works but my problem is that my chunks are only 1 second long
[16:21:51 CEST] <binarym> i want 6 seconds chunks ...
[16:22:12 CEST] <binarym> i tried to play with the -framerate and -r but it doesn't affect the segmenter behaviour
[16:23:16 CEST] <furq> binarym: set the keyframe interval to match hls_time
[16:23:39 CEST] <furq> -g 150 -x264-params min-keyint=150
[16:24:02 CEST] <binarym> hmm, ok thanks furq , i gonna test this
[16:24:12 CEST] <furq> with that said six second chunks seem weird if your input framerate is one frame every 10 seconds
[16:25:52 CEST] <binarym> furq: i don't mind if i have a chunk lasting 10 seconds
[16:25:57 CEST] <binarym> but now, it lasts 1 second :)
[16:26:10 CEST] <binarym> the HLS stream is used to feed a proprietary equipment
[16:26:28 CEST] <binarym> and this proprietary shit doesn't work well with 1 second chunks :(
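The 150 suggested above comes from the output frame rate times the target segment length; as a sketch:

```shell
# GOP length = output fps x target segment length, so that every
# HLS segment can start on a keyframe.
fps=25
hls_time=6
gop=$((fps * hls_time))
echo "$gop"   # → 150
# then pass: -g "$gop" -x264-params min-keyint=$gop  together with -hls_time 6
```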
[16:27:25 CEST] <gallax> greets
[16:27:52 CEST] <gallax> what counts more for 4k editing? a 24-thread cpu or a big ass vega gpu?
[16:28:24 CEST] <furq> i'd rather have the cpu
[16:28:25 CEST] <Mavrik> CPU is always preferred.
[16:28:32 CEST] <gallax> thing is, I can get an 8GB gpu for half the price of a vega.
[16:28:35 CEST] <Mavrik> Especially since GPUs are useless for quality rendering at the end.
[16:28:39 CEST] <furq> if you're just working on one video then you don't need a fancy gpu
[16:28:42 CEST] <furq> (ignoring the quality concerns)
[16:28:44 CEST] <gallax> such a rx 580
[16:28:44 CEST] <kepstin> gallax: not enough info. If you're using ffmpeg - which has very limited gpu support for any filtering - definitely the cpu
[16:29:00 CEST] <gallax> kepstin: in general
[16:29:03 CEST] <Mavrik> gallax: you probably won't be using most of the GPU anyway
[16:29:05 CEST] <Mavrik> just the encoding block
[16:29:08 CEST] <gallax> converting etc.
[16:29:14 CEST] <Mavrik> which doesn't really change all that much between models :)
[16:29:15 CEST] <kepstin> if you're using commercial video editing software, well, this isn't the place to ask, but I think some does use gpu compute for effects rendering
[16:29:22 CEST] <furq> generally speaking, every card of the same generation has the same decode/encode block
[16:30:00 CEST] <gallax> kepstin: I was hoping that ffmpeg's recommendations would also be valid for commercial editing software.
[16:30:03 CEST] <kepstin> if you just want the hardware decoder, get the cheapest card with the appropriate encoder block.
[16:30:07 CEST] <furq> ^
[16:30:14 CEST] <furq> and encode on the cpu because it'll be much higher quality
[16:30:48 CEST] <gallax> I could forgo the vega GPU and get the 32-thread threadripper.
[16:30:58 CEST] <binarym> furq: it still generates 1 second chunks :(
[16:31:24 CEST] <furq> weird
[16:31:42 CEST] <gallax> kepstin: LUT and film grain take the biggest hit. Same as image stabilization.
[16:31:54 CEST] <kepstin> gallax: for general video editing, more cpu cores is probably better, but particular commercial software may also benefit from gpu for effects rendering, or have poor cpu scaling.
[16:32:13 CEST] <kepstin> ffmpeg's filters are mostly single-threaded cpu, which is kinda :/
[16:32:39 CEST] <gallax> OUCH!!!
[16:32:56 CEST] <gallax> time for code refactoring
[16:33:09 CEST] <kepstin> (ffmpeg's cpu-based video encoders/decoders for modern codecs are all well threaded, of course)
[16:34:03 CEST] <TheAMM> (although libvpx can't autodetect your cores and you'll have to give it a hint)
[16:34:08 CEST] <TheAMM> (afaik)
[16:34:18 CEST] <TheAMM> (because every time I say something here, five people correct me)
[16:34:32 CEST] <kepstin> that, and don't use vp8 :)
[16:34:37 CEST] <furq> TheAMM: it might be different now since row-mt
[16:34:47 CEST] <furq> vpx is generally poor at multithreading
[16:34:51 CEST] <furq> but for 4k it'll be less of an issue
[16:35:01 CEST] <furq> ideally for vpx you want to split into chunks though
[16:35:01 CEST] <kepstin> furq: iirc still no auto-detect, but at least it does better threading if you tell it to.
[16:35:28 CEST] <gallax> I am trying to narrow it to the sweet spot
[16:35:28 CEST] <TheAMM> I'd use VP9 if it wasn't so damned slow on my old laptop, or my not so old desktop
[16:35:35 CEST] <furq> gallax: vapoursynth has a lot more multithreaded and opencl-aware filters
[16:35:36 CEST] <binarym> about my png to hls issue: i made my script generate a regular .mp4 file. ffprobe told me it's 25fps as expected
[16:35:45 CEST] <furq> and it's easy to use that alongside ffmpeg
[16:35:53 CEST] <gallax> is an rx580 powerful enough to pair with a 32-thread threadripper?
[16:36:24 CEST] <kepstin> gallax: rx580 or vega is probably not necessary/suitable for video stuff, really
[16:38:11 CEST] <kepstin> also, amd's hardware video encoder isn't as good as nvidia's
[16:38:51 CEST] <FurretUber> I have recorded a video using vp8_vaapi and libopus and I have noticed one strange effect in the video: one video frame is repeated approximately one second later. In this case, the frame at 02:05:21 (two minutes, five seconds and 21 frames) was repeated at 02:06:18 (two minutes, six seconds and 18 frames). The sound has not suffered any changes, only the video
[16:39:09 CEST] <kepstin> FurretUber: what hardware?
[16:39:31 CEST] <FurretUber> Intel Core i3-6100U, Intel HD Graphics 520
[16:39:35 CEST] <FurretUber> https://pastebin.com/bqeD4HKT I built FFmpeg yesterday, it uses libopus from git and libva from Ubuntu 18.04 repository. I'm on Xubuntu 18.04
[16:39:55 CEST] <gallax> kepstin: that's why I am unsure. there are 8GB nvidia's at 300$ range.
[16:40:03 CEST] <gallax> kepstin: which card do you reccomend?
[16:40:04 CEST] <kepstin> gallax: I'd probably suggest a geforce 1050 if you want to stick with consumer hardware, that's probably the cheapest way to get the pascal encoder block
[16:40:05 CEST] <Mavrik> VRAM really doesn't make a difference :P
[16:40:37 CEST] <gallax> kepstin: doesn't have to be the cheapest. Can be a little bit more.
[16:40:45 CEST] <kepstin> gallax: spending more won't help
[16:41:15 CEST] <kepstin> unless you switch to a quadro card, which removes the concurrent stream limitations
[16:42:02 CEST] <gallax> kepstin: what about huge 4k textures??
[16:42:10 CEST] <gallax> where do those go into?
[16:42:42 CEST] <FurretUber> I'm not sure whether the problem is with the FFmpeg built from git or whether libva in Ubuntu has a problem
[16:42:59 CEST] <Mavrik> gallax: "huge" = 33MB :P
[16:43:05 CEST] <Mavrik> It's fine ;)
[16:43:51 CEST] <kepstin> gallax: most of the data is being streamed into/out of the gpu, not being held in gpu ram.
[16:43:53 CEST] <gallax> Mavrik: such as this one?? --> NVIDIA Quadro P2000
[16:46:02 CEST] <kepstin> the P2000 is advertised by nvidia as good for 2 hevc 4k streams (presumably at realtime), the P4000 bumps that up to 4 streams, fwiw.
[16:46:57 CEST] <kepstin> that said, I still recommend cpu video encoding in general, at least for a final encode.
[16:47:03 CEST] <gallax> kepstin: does that help during editing?
[16:47:20 CEST] <kepstin> gallax: with ffmpeg? no.
[16:47:25 CEST] <gallax> tough choices
[16:47:38 CEST] <kepstin> that's more interesting for people doing multistream encoding for live broadcast applications
[16:47:45 CEST] <gallax> kepstin: well, I am a cli/ffmpeg/mplayer user.
[16:47:53 CEST] <gallax> but it's for somebody else.
[16:48:30 CEST] <gallax> are there any desktop vid editing channels?
[16:49:06 CEST] <Cracki> handbrake, avisynth, cinerella, ... adobe premiere, final cut, avid, ...?
[16:49:53 CEST] <gallax> kepstin: so I should go for 32 threadripper
[16:50:12 CEST] <gallax> Cracki: or something more general such as linux audio #lau
[16:50:43 CEST] <Cracki> good q, I'm not aware of such a thing
[16:51:02 CEST] <kepstin> gallax: depending on the software you're using and what you're doing, you might not be able to fully utilize a big threadripper editing a single video
[16:51:07 CEST] <Cracki> all the platform specific stuff happens in #ffmpeg or whatever library takes care of the hardware abstraction
[16:51:32 CEST] <kepstin> probably be great for working on one while doing an encode on another tho
[16:53:50 CEST] <binarym> back with my png2hls problem ... in fact, the problem isn't the chunk length but the fact that ffmpeg encodes too fast. I need a kind of "real-time encoding" (since my input picture can change during the encoding and I want the modification to be seen directly on the HLS stream, not 4 hours later)
[16:54:20 CEST] <kepstin> binarym: add the '-re' option then, that adds a sleep in the ffmpeg file input to try to make it run realtimeish
[16:54:35 CEST] <binarym> -re doesn't look to work with png
[16:54:37 CEST] <binarym> it hangs
[16:55:05 CEST] <binarym> and when i hit 'q', it complains about output file being empty
[16:56:43 CEST] <binarym> kepstin: oh ... maybe i was wrong. after a long pause, it starts working ...
[16:57:24 CEST] <kepstin> binarym: are you encoding with x264? with default options it has a very long delay (it buffers a lot of frames internally)
[16:58:13 CEST] <kepstin> binarym: consider using -tune zerolatency if you want realtime stuff (with an associated loss in efficiency), or alternately there's a few lookahead settings you can reduce.
[16:59:59 CEST] <binarym> kepstin: yep, x264. And you're right. After waiting for buffer fill, it starts encoding. thanks !
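Putting kepstin's two suggestions together, the slideshow command from earlier might be adjusted roughly like this (a sketch only; file names, rates and segment options are carried over from the original command and may need tuning):

```shell
# -re paces input reads at the native frame rate, and -tune zerolatency
# stops x264 from buffering frames internally, so segments appear
# promptly (at some compression-efficiency cost).
cmd='ffmpeg -re -framerate 1/10 -loop 1 -i feed_pics/feed_%d.png
     -r 25 -c:v libx264 -tune zerolatency -g 150
     -f hls -hls_time 6 -hls_list_size 10 high.m3u8'
echo "$cmd"
```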
[17:41:23 CEST] <vlambda> Hello, there is a job posting at my company that I'd like to share with the ffmpeg community, is this an appropriate place for this type of post? If there is another location or postings aren't welcomed, please let me know. Thanks
[17:50:38 CEST] <Hello71> probably not. can you imagine how busy #gcc would be
[17:50:52 CEST] <Hello71> or I guess #php or whatever
[17:51:52 CEST] <vlambda> I suspected as much, that's why I asked first ;)
[19:02:39 CEST] <wfbarksdale> I'm noticing a comment for this pixel format: AV_PIX_FMT_YUVJ444P, ///< planar YUV 4:4:4, 24bpp, full scale (JPEG), deprecated in favor of AV_PIX_FMT_YUV444P and setting color_range
[19:02:58 CEST] <Mavrik> Yees?
[19:03:00 CEST] <wfbarksdale> I still see this pixel format coming out of my frame decoder though...
[19:03:09 CEST] <wfbarksdale> in 3.4
[19:03:35 CEST] <wfbarksdale> is the expectation that future versions will just eliminate this?
[19:10:53 CEST] <wfbarksdale> just wondering if i can code for this now and avoid breaking later...
[19:11:28 CEST] <JEEB> wfbarksdale: it's been a really long effort to get rid of the J things
[19:11:43 CEST] <JEEB> mostly because there's an old cthulhu like being called swscale in the mess
[19:12:00 CEST] <wfbarksdale> lol, ok
[19:12:04 CEST] <JEEB> which currently utilizes pix_fmt for the in/out conversion graph
[19:12:26 CEST] <JEEB> nobody has wanted to touch that to properly make use of the color_range field
[19:12:52 CEST] <wfbarksdale> i see
[19:13:19 CEST] <wfbarksdale> no body wants to wake cthulhu
[19:14:27 CEST] <wfbarksdale> is there a resource you could point me to, for understanding the difference between the color ranges?
[19:16:00 CEST] <JEEB> most YCbCr content is limited range, aka "TV" range (16-235/240)
[19:16:11 CEST] <JEEB> most RGB content is full range, aka "PC" range (0-255)
[19:16:25 CEST] <JEEB> but both in theory can be both
[19:16:40 CEST] <furq> limited=tv=mpeg and full=pc=jpeg
[19:18:19 CEST] <wfbarksdale> you guys are the best, was having trouble deciphering what was meant by "219*2^(n-8)"
[19:19:11 CEST] <kepstin> wfbarksdale: with higher bit depths (e.g. 10bit, 12bit), the same thing applies but with different numbers. that expression calculates the numbers for arbitrary bit depth.
[19:22:28 CEST] <wfbarksdale> 235-16 = 219, so that makes sense, is 16 always a constant offset?
[19:23:05 CEST] <wfbarksdale> or can that change as well?
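It scales too: for n-bit limited-range YCbCr the black level is 16*2^(n-8) and the white level is that offset plus 219*2^(n-8). A quick sketch of the arithmetic:

```shell
# Limited ("TV") range endpoints for n-bit luma:
# black = 16 * 2^(n-8), white = black + 219 * 2^(n-8)
range_for() {
    n=$1
    scale=$((1 << (n - 8)))
    black=$((16 * scale))
    white=$((black + 219 * scale))
    echo "${n}-bit: Y ${black}-${white}"
}
range_for 8    # → 8-bit: Y 16-235
range_for 10   # → 10-bit: Y 64-940
```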
[19:26:42 CEST] <ntd> So, I'm using this CF to display a montage/grid/mosaic/whatever and it's working swell with rtsp/h264 sources: https://pastebin.mozilla.org/9087579
[19:27:20 CEST] <ntd> mjpeg sources, not so much
[19:28:10 CEST] <furq> i'd be very surprised if that worked well long-term
[19:28:29 CEST] <furq> ffmpeg will make no attempt to keep timestamps in sync and there's not much you can do to make it
[19:29:05 CEST] <furq> i guess i phrased that poorly, the issue is it will try too hard and eventually you'll get one discontinuity or dropout and it'll break the whole thing
[19:29:38 CEST] <ntd> I've tried to make some adjustments: https://pastebin.mozilla.org/9087580
[19:29:50 CEST] <ntd> output is laaaaggy as frak, huuuge cpu usage
[19:30:00 CEST] <ntd> furq, cron restarts it every hour, it's working fine
[19:30:21 CEST] <furq> if nothing else you'll definitely want setpts=PTS-STARTPTS, before every scale
[19:32:06 CEST] <ntd> yeah, sorry, setpts is there
[19:32:51 CEST] <ntd> my bad when pasting, looks like this: https://pastebin.mozilla.org/9087581
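A minimal sketch of the pattern furq recommends (input labels, sizes, and the two-input layout are illustrative, not taken from ntd's actual command): reset each input's timestamps with setpts=PTS-STARTPTS before scaling, then stack.

```
[0:v]setpts=PTS-STARTPTS,scale=640:360[a];
[1:v]setpts=PTS-STARTPTS,scale=640:360[b];
[a][b]hstack=inputs=2[out]
```

Resetting timestamps per input keeps an initial offset on one source from skewing the combined output, though, as furq notes, it won't save you from a mid-stream discontinuity.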
[19:33:38 CEST] <ntd> i seem to recall a recommendation to add a lavf demuxer for mjpeg but i can't find the details
[19:35:20 CEST] <ntd> any recommendations as to adjusting the CF for mjpeg or a non-300MB-bloatware-with-tons-of-branding program that will do the same?
[19:36:36 CEST] <wfbarksdale> sorry to bother, but the normal case for me when decoding h264 video seems to be that I get a frame with AV_PIX_FMT_YUV420P and color_range of AVCOL_RANGE_UNSPECIFIED, is the assumption here that the color range is the TV color range? and if it was full color range, it would come out at YUVJ420p?
[19:37:40 CEST] <furq> if range isn't set, yes
[19:53:55 CEST] <ntd> the resulting mplayer output reports lines like this: frame= 12 fps=0.4 q=-0.0 size= 36451kB time=00:00:02.75 bitrate=108582.5kbit
[19:54:18 CEST] <ntd> so obviously i'm getting low fps and the bitrate seems quite insane
[19:54:30 CEST] <furq> that bitrate is about what you'd expect for rawvideo
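furq's sanity check is easy to reproduce. A sketch of the arithmetic; the 1920x1080 yuv420p assumption is my guess, since the log line doesn't state the resolution:

```python
def rawvideo_bitrate_kbps(width, height, fps, bits_per_pixel=12):
    # yuv420p carries 12 bits per pixel (8-bit luma plus 4:2:0 chroma)
    return width * height * bits_per_pixel * fps / 1000

# 12 frames in 2.75 s is about 4.36 fps
print(rawvideo_bitrate_kbps(1920, 1080, 12 / 2.75))  # ~108581, close to the log's 108582.5 kbit/s
```

So the "insane" bitrate is simply uncompressed video at a very low frame rate.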
[19:55:01 CEST] <ntd> is there any way to specify to the cfgraph what the framerate and input bitrate should be like?
[19:55:02 CEST] <ntd> oh
[19:56:11 CEST] <ntd> still, the mjpeg sources when displayed in firefox are outputting 8 fps
[19:56:46 CEST] <ntd> could it be that ffmpeg doesn't know at which rate to pull/process them?
[19:57:27 CEST] <ntd> mplayer starts at about 1.8 fps then grinds down to 0.2 in ten seconds
[20:01:57 CEST] <ntd> full script output in case it helps: https://pastebin.mozilla.org/9087584
[20:03:47 CEST] <ntd> i was using vlc (vlm/mosaic) to do this hitherto but after upgrading from ub trusty to xenial vlc seems to think there's something wrong with the mjpeg sources
[20:38:19 CEST] <ntd> furq, ping
[20:39:13 CEST] <ntd> i have it running now with cpu usage under control. there is, however, a fifteen to twenty second delay
[20:40:19 CEST] <ntd> idk if this is related to the input (ffmpeg) or mplayer (output) though, anywhere i can make both skip any buffer?
[20:40:49 CEST] <ntd> well, i'm sure ffmpeg is getting the sources with only a 1.7 sec delay
[21:20:34 CEST] <Zexaron> Hello
[21:21:18 CEST] <Zexaron> when dynamic linking ffmpeg dlls in windows applications, are .lib files required as dependencies in the source ?
[21:21:23 CEST] <Zexaron> when building
[22:09:51 CEST] <Toffe> Hello guys
[22:11:36 CEST] <blue_misfit> hey folks, anyone familiar with the state of prores xq 12 bit in ffmpeg?
[22:12:09 CEST] <blue_misfit> every sample I see that is allegedly 12 bit shows as yuv444p10le
[22:12:31 CEST] <JEEB> no idea, is it in the open source release apple did for prores?
[22:12:34 CEST] <blue_misfit> any chance the ffmpeg prores decoder is doing something bad internally and outputting 10 bit when it should be outputting 12 bit or higher?
[22:12:45 CEST] <blue_misfit> not sure which release you're referring to
[22:12:55 CEST] <JEEB> they did a prores code dump at some point IIRC
[22:13:21 CEST] <Toffe> So basically I'm using FFmpeg now to output raw H264 data from the usb camera. (ffmpeg -i /dev/video2 -f h264 -vcodec libx264 /dev/null -dump -hex). Everything looks perfect, but I never get any NAL unit for frame PPS or SPS, only 0x67 and 0x68; with raspberry pi I get 0x27 and 0x28 and then it works.
[22:13:42 CEST] <Toffe> Is this a setting on ffmpeg or am i missing something bigger?
[22:14:08 CEST] <JEEB> umm
[22:14:13 CEST] <JEEB> why are you using -dump -hex?
[22:14:20 CEST] <JEEB> instead just outputting -f h264 into stdout
[22:14:21 CEST] <Toffe> Just to check the data in the terminal
[22:14:56 CEST] <Toffe> https://user-images.githubusercontent.com/8550684/41372273-115fc834-6f4d-11e8-8c3f-db820c20df48.png
[22:14:58 CEST] <JEEB> I think there's a flag to try and force the initialization packets to be in-band but I would have thought -f h264 would have handled that if needed
[22:15:51 CEST] <Toffe> I must say, I'm a real noob with linux and ffmpeg and h264, just started this project today so sorry for my bad explanations
[22:17:32 CEST] <JEEB> right the flag for the encoder avcontext
[22:17:33 CEST] <JEEB> AV_CODEC_FLAG_GLOBAL_HEADER
[22:17:42 CEST] <JEEB> probably ffmpeg.c has a flag for that
[22:18:06 CEST] <JEEB> but most definitely FFmpeg is receiving the initialization data
[22:18:08 CEST] <JEEB> from x264
[22:18:17 CEST] <JEEB> why would -f h264 be not dumping that is a different question
[22:18:44 CEST] <Toffe> I know that 0x67 and 68 are initialization data
[22:18:45 CEST] <Toffe> also
[22:19:05 CEST] <JEEB> oh you noted it the other way
[22:19:29 CEST] <Toffe> Don't really know if 0x27 and 0x28 are needed? The raspivid h264 output has them, and the decoder won't show images before it gets those first two frames
[22:19:40 CEST] <Toffe> So i guessed that they are important
[22:19:51 CEST] <Toffe> So thats why i ask why they are not in ffmpeg =)
[22:21:08 CEST] <JEEB> now to find wherever the hell the nal unit type list was in h.264
[22:21:12 CEST] <JEEB> to figure out WTF you're talking about
[22:21:22 CEST] <Toffe> Hehehe
[22:21:26 CEST] <Toffe> https://yumichan.net/video-processing/video-compression/introduction-to-h264-nal-unit/
[22:21:30 CEST] <JEEB> ah here we are
[22:21:48 CEST] <JEEB> also 0x27...
[22:21:49 CEST] <JEEB> wat
[22:21:54 CEST] <Toffe> 0x27 is FRAME SPS - 0x67 is PICTURE SPS
[22:21:55 CEST] <JEEB> the list only goes 0-31
[22:22:18 CEST] <JEEB> or that is probably different from ID
[22:22:34 CEST] <Toffe> So, for 0x67, we have: forbidden_zero_bit = 0, nal_ref_idc = 3, nal_unit_type = 7
[22:23:29 CEST] <Toffe> 0x67 = 0110 0111 = 0 (forbidden bit, must be 0) 11 = nal_ref_idc, meaning 1 = field, 2=frame, 3=picture. 00111 = the id (0-30something)
[22:23:42 CEST] <JEEB> let's find the sequence parameter set first so I can find the right table from H.264...
[22:23:44 CEST] <Toffe> 00111 = 7 = Sequence parameter set
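The byte-splitting Toffe walks through can be sketched as a minimal parser for the one-byte H.264 NAL unit header:

```python
def parse_nal_header(byte):
    # H.264 NAL header: 1 forbidden_zero_bit, then a 2-bit nal_ref_idc,
    # then a 5-bit nal_unit_type (7 = SPS, 8 = PPS per Table 7-1)
    forbidden = (byte >> 7) & 0x1
    nal_ref_idc = (byte >> 5) & 0x3
    nal_unit_type = byte & 0x1F
    return forbidden, nal_ref_idc, nal_unit_type

print(parse_nal_header(0x67))  # (0, 3, 7): SPS with nal_ref_idc 3
print(parse_nal_header(0x27))  # (0, 1, 7): SPS with nal_ref_idc 1
print(parse_nal_header(0x68))  # (0, 3, 8): PPS with nal_ref_idc 3
```

Notably, 0x27 and 0x67 decode to the same nal_unit_type (SPS): the two streams carry the same kind of initialization data and differ only in the nal_ref_idc bits.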
[22:24:23 CEST] <JEEB> oh so you had things stuck together?
[22:24:37 CEST] <JEEB> no wonder the field value seemed too big for the table 7.1 :P
[22:25:11 CEST] <JEEB> so yes, it was table 7.1 that I just found some time ago
[22:25:16 CEST] <JEEB> you just confuzzled me hard with a larger value
[22:25:19 CEST] <Toffe> NALUPacket is a byte with the first 5 bits are the table yeah :)
[22:25:26 CEST] <Toffe> haha, sorry :)
[22:25:31 CEST] <JEEB> page 65 of H.264 specification, btw
[22:25:33 CEST] <JEEB> freely available
[22:25:45 CEST] <Toffe> 2sec lemme get that on my screen
[22:25:54 CEST] <JEEB> http://www.itu.int/rec/T-REC-H.264-201704-I/en
[22:27:05 CEST] <JEEB> but yes, SPS|PPS|picture
[22:27:23 CEST] <JEEB> that sounds like "OK" for initializing decoding
[22:27:45 CEST] <JEEB> the whole frame/field/picture business sounds like metadata for "field/frame or just a picture"
[22:28:01 CEST] <JEEB> (as in the latter one doesn't take comment on if it's progressive or not)
[22:28:29 CEST] <Toffe> Weird, the initializing on the raspberry pi's camera starts with 0x27 and 0x28, then it works. 2sec, let me fire it up and grab a screen of that output
[22:29:03 CEST] <JEEB> if you are feeding NAL units to a camera it probably wants something specific
[22:29:13 CEST] <JEEB> it doesn't mean what libx264 outputs is incorrect
[22:29:25 CEST] <JEEB> esp. if it's some field like field VS frame VS "just an image"
[22:30:25 CEST] <Toffe> JEEB i am not feeding anything to a camera, I am feeding NAL units to an H264 decoder
[22:30:43 CEST] <Toffe> JEEB: https://image.ibb.co/nwi8xy/Capture.jpg
[22:30:43 CEST] <JEEB> ok, you just put it as if you were initializing a hardware encoder
[22:30:54 CEST] <JEEB> "the initializing on rpi's camera starts with"
[22:31:08 CEST] <Toffe> yah in that image you see what the camera sends me :)
[22:31:13 CEST] <Toffe> my bad :P
[22:31:26 CEST] <Toffe> that image = success on frame decoding
[22:31:47 CEST] <Toffe> this one fails: https://user-images.githubusercontent.com/8550684/41372273-115fc834-6f4d-11e8-8c3f-db820c20df48.png
[22:31:54 CEST] <JEEB> ok, FFmpeg does have MMAL support for the decoder, and it seems to work for people with x264-generated streams
[22:31:58 CEST] <JEEB> you might want to try that
[22:32:09 CEST] <Toffe> Does it add latency?
[22:32:16 CEST] <JEEB> the idea is to test it
[22:32:24 CEST] <JEEB> if the same hw decoder through FFmpeg's API works
[22:32:29 CEST] <JEEB> it's clearly something in between
[22:32:31 CEST] <Toffe> aah
[22:32:53 CEST] <Toffe> I'm just googling mmal :P
[22:33:51 CEST] <JEEB> I think that was the interface to generally utilize on rpis through FFmpeg
[22:34:15 CEST] <JEEB> mmal:requires --vo=gpu (Raspberry Pi only - default if available)
[22:34:16 CEST] <JEEB> yea
[22:34:23 CEST] <JEEB> that is in mpv, which utilizes FFmpeg in the background
[22:35:11 CEST] <Toffe> hmm, so i can do -f h264_mmal
[22:35:37 CEST] <gallax> what's the reason of the channel traffic increasing many fold recently?
[22:35:39 CEST] <JEEB> that's a demuxer/muxer thing
[22:36:00 CEST] <JEEB> -c sets decoder or encoder
[22:36:04 CEST] <JEEB> -f sets demuxer or muxer
[22:36:09 CEST] <gallax> I mean it's better, more conversation. Used to be very quiet.
[22:36:23 CEST] <klaxa> really?
[22:36:26 CEST] <JEEB> gallax: I think there's always been spikes every now and then, pretty much daily :P
[22:36:56 CEST] <gallax> or perhaps more people needing it for various platforms
[22:40:39 CEST] <Toffe> is there a way to limit the framerate the camera sends data?
[22:40:50 CEST] <Toffe> like 1 fps instead to debug
[22:41:02 CEST] <Toffe> -framerate 1 doesn't seem to change anything
[22:41:06 CEST] <saml> what's good throughput metric for ffmpeg?
[22:41:14 CEST] <saml> bits processed per second?
[22:43:48 CEST] <JEEB> generally for video it's pictures processed per second
[22:44:01 CEST] <JEEB> of course you need to properly know what you're comparing and how
[22:44:11 CEST] <JEEB> that's why asking that without context is generally pretty derp
[22:50:38 CEST] <saml> derp derp derp
[22:51:08 CEST] <saml> i'm dynamically adding more workers based on queue size (i run ffmpeg inside workers. tasks are fetched from queue)
[22:51:25 CEST] <saml> i wanted some kind of metrics to capture throughput of those workers
[22:51:42 CEST] <Toffe> Can you ffmpeg the raspberry pi camera somehow? to check if it's the camera that is the problem?
[22:53:24 CEST] <saml> pictures per second is good but there are different dimensions of pictures
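One way around the objection that pictures come in different dimensions (my own suggestion, not something proposed in the chat) is to weight throughput by frame size, i.e. measure pixels processed per second:

```python
def throughput_megapixels_per_sec(frames, width, height, seconds):
    # Pixels processed per second, so a 4K job and a 480p job
    # can be compared on a single scale
    return frames * width * height / seconds / 1e6

print(throughput_megapixels_per_sec(300, 1920, 1080, 10))  # 62.208
```

As JEEB says, the number is only meaningful when you also hold the codec, preset, and content comparable.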
[23:02:48 CEST] <Toffe> added '-c:v', 'h264_mmal', before input now still the same
[23:03:49 CEST] <Toffe> oh, I should not need that, it's an rpi built-in hw decoder. I am not trying to decode the data from the camera
[23:11:18 CEST] <JEEB> btw, -c:v copy is "just copy bit stream from input"
[23:13:49 CEST] <pt__> sometimes when i encode a small 1-5 minute section of a file with -i and -t there is "empty timeline" in for example vlc: if i skip to the end, the file stops instead of vlc playing to the exact end. does anyone know why this is and how to fix it?
[23:25:40 CEST] <saml> why can't ffmpeg read my mind and do the best
[23:26:11 CEST] <furq> i've seen that question asked a lot but never as honestly as that
[23:26:13 CEST] <furq> so thanks
[23:27:59 CEST] <saml> how do I remove silent portion of audio and matching video as well?
[23:28:19 CEST] <saml> something like silenceremove filter but that applies to video as well so a-v sync is good
[23:28:48 CEST] <furq> use silencedetect and then trim/atrim
[23:29:39 CEST] <saml> that sounds like a script
[23:31:17 CEST] <saml> https://stackoverflow.com/a/25698675 someone did it. it's even java so it's enterprise
[23:33:10 CEST] <Toffe> JEEB: I got image! haha :D finally. Just found someone else's parameters they used to stream an h264 movie and it worked on this camera. Problem is latency! soo high latency, I need 200ms max as I got with the raspberry pi camera :P
[23:35:21 CEST] <Toffe> https://image.ibb.co/h8pqcy/Capture.jpg
[00:00:00 CEST] --- Thu Jun 14 2018