[Ffmpeg-devel-irc] ffmpeg.log.20170623

burek burek021 at gmail.com
Sat Jun 24 03:05:01 EEST 2017


[00:00:08 CEST] <bitmod> anyone here?
[00:00:11 CEST] <c_14> no
[00:00:54 CEST] <bitmod> sweet, i'm just wondering if it's possible to create gif frames (also in GIF format) using ffmpeg?
[00:01:04 CEST] <bitmod> with text
[00:01:04 CEST] <c_14> just use gif as output?
[00:01:22 CEST] <bitmod> c_14: er what do you mean?
[00:01:30 CEST] <c_14> ffmpeg <blah> out.gif
[00:02:20 CEST] <bitmod> c_14: i need to create the gif frames first, e.g. https://paste.ubuntu.com/24927705/
[00:02:41 CEST] <bitmod> so i need to run that command 100 times (varying the text input)
[00:04:36 CEST] <furq> why not just make a subtitle file and use -vf subtitle
[00:04:41 CEST] <furq> or subtitles
[00:04:48 CEST] <c_14> ^
[00:05:05 CEST] <c_14> or output all the frames individually to something lossless (png) and then use image2 input to make the gif
[00:05:13 CEST] <furq> yeah
[00:06:02 CEST] <bitmod> c_14: oh that sounds like it could work, what's the image2 command to take a bunch of PNGs and convert to a gif?
[00:06:04 CEST] <c_14> I can also think of a really terrible way if you like race conditions
[00:06:20 CEST] <c_14> ffmpeg -f image2 -i img%04d.png
[00:07:26 CEST] <c_14> the name is obviously at your discretion; it accepts scanf sequences or, with -pattern_type glob, '*' globs (and maybe '?'?)
[00:07:57 CEST] <bitmod> c_14: hmm should "ffmpeg image2 -i ./frames/*.png" work?
[00:08:06 CEST] <c_14> with -pattern_type glob yes
[00:08:24 CEST] <c_14> as long as you make sure your shell doesn't expand the *
[00:10:18 CEST] <bitmod> c_14: that converts all pngs to out.gif?
[00:10:51 CEST] <c_14> yeah
[00:11:01 CEST] <c_14> you might need to fiddle with the -framerate input option
[00:11:21 CEST] <bitmod> c_14: i get this error when i run that command "ffmpeg -f image2 -i frames/*.png -pattern_type glob" -> http://sprunge.us/PPeL
[00:11:31 CEST] <zerodefect> Question about progressive scan to interlaced conversion.  I've been looking at the source for the 'interlace' filter. Admittedly, I don't really understand the code, but one thing I found interesting is that the pixel samples look to be run through a low-pass filter during the conversion process.  Is somebody able to explain why the filter is needed?
[00:11:32 CEST] <c_14> pattern_type goes before -i
[00:11:51 CEST] <c_14> and it looks like your shell is expanding the *
[00:11:54 CEST] <c_14> surround the filename with ''
[00:14:00 CEST] <bitmod> c_14: damn that's fast xP
[00:14:11 CEST] <bitmod> awesome man, that's helped me so much, you've got no idea
[00:14:41 CEST] <bitmod> i've spent the whole day trying to optimize this process, and ffmpeg makes it easy
[00:15:36 CEST] <bitmod> c_14: is there an automatic overwrite too?
[00:15:42 CEST] <c_14> -y
[00:15:48 CEST] <bitmod> thanks!
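Putting the pieces from this exchange together, a minimal sketch (frame rate and paths are placeholders; -pattern_type must come before -i, and quoting the glob keeps the shell from expanding it):

    ffmpeg -y -f image2 -pattern_type glob -framerate 15 -i 'frames/*.png' out.gif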
[00:22:49 CEST] <bitmod> c_14: hmm, is there any way to retain the quality a bit more?
[00:22:57 CEST] <c_14> from png to gif?
[00:23:02 CEST] <bitmod> c_14: yeah
[00:23:16 CEST] <c_14> you can use the palettegen/paletteuse filters
[00:23:53 CEST] <bitmod> i should've been a little clearer about my use case, my images only have 2 colors, the background and the text color, but they need to be rgb
[00:23:59 CEST] <bitmod> is that doable?
[00:24:46 CEST] <c_14> palettegen/paletteuse might help. Though the default palette should have both black and white so I don't really see how
[00:25:41 CEST] <bitmod> c_14: sorry, i meant to say they only need 2 colors (i.e. they can be customized with at most two colors)
[00:27:45 CEST] <c_14> yeah, I mean normally you'd use gray8 or monochrome for that but since you need rgb there's not much you can do
[00:28:42 CEST] <c_14> the quality for that shouldn't be terrible though?
[00:28:55 CEST] <c_14> since the main quality problems with gif are the limited palette
[00:29:06 CEST] <c_14> what do you mean retain the quality?
[00:30:34 CEST] <bitmod> c_14: https://imgur.com/a/MGMZd
[00:31:19 CEST] <c_14> you mean the blur around the characters?
[00:31:29 CEST] <bitmod> c_14: yup
[00:31:34 CEST] <furq> you'll want to use paletteuse/palettegen and probably play around with the dithering options
[00:31:36 CEST] <c_14> those aren't present in the gif?
[00:31:49 CEST] <c_14> yeah
[00:31:53 CEST] <iive> it's better to remove dither
[00:31:56 CEST] <c_14> since there are transitions between colors there
[00:32:15 CEST] <iive> because it makes compression worse
[00:32:25 CEST] <furq> paletteuse will use dither by default
[00:32:40 CEST] <iive> it shouldn't
[00:32:47 CEST] <furq> !filter paletteuse
[00:32:48 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-filters.html#paletteuse
[00:32:55 CEST] <furq> looks like it does to me
[00:33:10 CEST] <bitmod> that's a single frame: https://imgur.com/a/LlxyX
[00:33:18 CEST] <bitmod> if i could retain that quality that would be ideal
[00:34:03 CEST] <furq> as we both said, use palettegen
[00:34:14 CEST] <furq> it'll use the default 256-colour palette otherwise
[00:34:25 CEST] <furq> which is obviously no good if you only have a bunch of similar shades of blue
[00:35:18 CEST] <bitmod> what is the .mkv file?
[00:35:41 CEST] <furq> that's whatever your input is
[00:35:59 CEST] <furq> you create a palette from your input with palettegen, then use your input and the generated palette with paletteuse
[00:37:51 CEST] <bitmod> ah ok, nice
[00:38:14 CEST] <bitmod> how would i incorporate the paletteuse into my current ffmpeg command -> "ffmpeg -f image2 -pattern_type glob -y -i 'frames/*.png' movie.gif"?
[00:38:47 CEST] <bitmod> would i just add "-i palette.png -lavfi paletteuse output.gif" to the end?
[00:38:56 CEST] <furq> you need to run two commands
[00:39:58 CEST] <bitmod> my one first, then the paletteuse command?
[00:40:12 CEST] <furq> http://vpaste.net/rYpGx
[00:40:13 CEST] <furq> like that
[00:40:27 CEST] <furq> with filter options to taste
[00:43:33 CEST] <bitmod> furq: damn, too slow i think
[00:43:45 CEST] <bitmod> 0.87s
[00:47:47 CEST] <furq> oh i'm dumb
[00:48:08 CEST] <furq> http://vpaste.net/zn7gs
[00:50:02 CEST] <bitmod> furq: ah much quicker, awesome
[00:50:09 CEST] <bitmod>  really appreciate the help btw
[00:50:17 CEST] <bitmod> you too c_14
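The vpaste links above have since expired; a sketch of the standard two-pass palettegen/paletteuse recipe they were presumably variants of (input name is a placeholder):

    ffmpeg -i input.mkv -vf palettegen palette.png
    ffmpeg -i input.mkv -i palette.png -lavfi paletteuse out.gif

A single-command form avoids reading the input twice by splitting it inside one filtergraph:

    ffmpeg -i input.mkv -lavfi '[0:v]split[a][b];[a]palettegen[p];[b][p]paletteuse' out.gif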
[00:53:04 CEST] <bitmod> guys also another question, would this process be faster if i had a better cpu?
[00:53:25 CEST] <c_14> I'd guess you're probably io-bound
[00:53:28 CEST] <c_14> check top/htop or something
[00:53:52 CEST] <c_14> I don't think it'd benefit from multi-threading though
[00:54:05 CEST] <c_14> the gif encoder is single-threaded
[00:54:23 CEST] <bitmod> c_14: ah right
[00:54:46 CEST] <bitmod> are there any hardware-side upgrades i could do to make the process faster?
[00:55:28 CEST] <c_14> if it really is io-bound, just shoving the images into a ramfs (tmpfs) will help
[00:55:52 CEST] <c_14> other than that
[00:55:58 CEST] <c_14> write raw bmp or something instead of png
[00:57:47 CEST] <bitmod> c_14: wow i just realized my file is only 3kB
[00:57:52 CEST] <c_14> or raw tiff I guess
[00:58:43 CEST] <bitmod> not sure if i'm a good enough programmer for that (writing raw tiffs)
[00:58:56 CEST] <c_14> ffmpeg can output that instead of png
[00:59:26 CEST] <c_14> ffmpeg <args> -compression_algo raw out.tiff
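A sketch of both suggestions (mount point, size and filenames are placeholders; ffmpeg picks the BMP encoder from the extension):

    mount -t tmpfs -o size=64m tmpfs frames/
    ffmpeg -i input.mkv frames/frame%04d.bmp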
[00:59:38 CEST] <bitmod> would it be faster to generate than my current c++ code? (0.27s)
[01:00:08 CEST] <c_14> hmm?
[01:00:42 CEST] <c_14> didn't you generate the png's for the input to the gif generation with ffmpeg?
[01:00:53 CEST] <bitmod> c_14: at the moment i'm using cairo to generate the 120 PNGs, and it takes 0.27s
[01:01:08 CEST] <bitmod> c_14: yeah exactly, i thought you meant use ffmpeg to generate tiffs instead of pngs
[01:02:24 CEST] <c_14> I have no idea if it'd be faster or not, I've never benchmarked it
[01:02:28 CEST] <c_14> but zlib is pretty slow
[01:03:22 CEST] <bitmod> c_14: what uses zlib?
[01:03:26 CEST] <furq> png
[01:03:34 CEST] <bitmod> ah ok
[01:03:43 CEST] <c_14> though I believe you can write uncompressed png
[01:03:58 CEST] <c_14> not sure what cairo does
[01:04:15 CEST] <furq> pretty sure cairo will output bmp
[01:04:44 CEST] <furq> i assume you could just have your code write frames to stdout and pipe them into ffmpeg
[01:04:47 CEST] <furq> that should save some time
[01:04:58 CEST] <bitmod> the thing is, i wanted to use this library (https://www.lcdf.org/gifsicle/) to create my gifs, as it is extremely fast, but in order to use it i need my frames to be .gifs, which cairo can't do. if i could output .gif frames and then use gifsicle, i think i could reduce the total time to < 0.5s
[01:05:28 CEST] <bitmod> furq: oh is that a possibility?
[01:05:28 CEST] <bitmod> didn't think about that
[01:05:46 CEST] <c_14> yeah, just use -f image2pipe -i pipe:0
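As a sketch, with frame_generator standing in for the cairo program writing BMP frames to stdout:

    ./frame_generator | ffmpeg -f image2pipe -c:v bmp -i pipe:0 out.gif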
[01:06:24 CEST] <thebombzen> bitmod: you can also just write GIFs in the gif format (rather than as frames) with FFmpeg, and then batch optimize them. Say, with gifsicle you could do gifsicle --batch --unoptimize --optimize=3 file.gif
[01:06:36 CEST] <thebombzen> or gifsicle -U -O3 -b file.gif, or as a pipe, gifsicle -U -O3
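thebombzen's suggestion as a single pipeline, a hedged sketch (ffmpeg's gif muxer can write to a pipe, and gifsicle reads stdin when given no input file):

    ffmpeg -f image2 -pattern_type glob -i 'frames/*.png' -f gif - | gifsicle -U -O3 > out.gif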
[01:06:39 CEST] <furq> that seems like it'd be much slower
[01:06:57 CEST] <furq> and also probably lower quality
[01:07:18 CEST] <thebombzen> if you output individual GIF frames and put them together the file size will be enormous and it will also probably be slower
[01:07:20 CEST] <furq> you'd need to palettegen each gif frame you write with ffmpeg anyway
[01:07:30 CEST] <furq> or paletteuse rather
[01:07:37 CEST] <furq> since they'll all use the same palette
[01:07:53 CEST] <thebombzen> it seems better to just write the GIF images inside the GIF format as an animation, and optimize with gifsicle
[01:08:09 CEST] <durandal_1707> there is option to change palletes
[01:08:34 CEST] <furq> thebombzen: better than what
[01:08:41 CEST] <furq> i feel like you're comparing this with something that isn't happening
[01:09:02 CEST] <thebombzen> individual frames
[01:09:12 CEST] <thebombzen> better than individual frames
[01:09:21 CEST] <furq> could you be less specific
[01:09:28 CEST] <thebombzen> ?
[01:09:43 CEST] <bitmod> honestly i don't know how they do it, but converting 120 .gif frames (non-animated) into an animated gif takes a total of 0.06s using gifsicle
[01:10:04 CEST] <bitmod> and good quality too
[01:10:23 CEST] <c_14> I'm pretty sure you can just concat gifs pretty easily
[01:10:37 CEST] <c_14> as long as they're not recoding the stream it'll be as fast as io
[01:10:45 CEST] <furq> gifsicle is good but you have no good method of outputting the source frames for it to use
[01:11:15 CEST] <durandal_1707> bitmod: post a comparison
[01:13:38 CEST] <bitmod> c_14, furq: yeah the issue is generating the .gifs in the first place
[01:13:43 CEST] <thebombzen> bitmod: but the gif file you produce is probably enormous
[01:13:57 CEST] <bitmod> durandal_1707: ah ok, i'll have to look that up
[01:14:03 CEST] <thebombzen> if you optimize it, it's probably significantly less fast
[01:14:48 CEST] <bitmod> thebombzen: yup fair point, it's 400kb (200 or 300kb too big)
[01:14:56 CEST] <furq> bitmod: like i said, pipe bmp frames to stdout and into ffmpeg
[01:15:00 CEST] <furq> if that's still too slow then shrug
[01:15:01 CEST] <bitmod> this is what i'm going for btw https://imgur.com/a/QQkzH
[01:15:28 CEST] <bitmod> furq: ok, i'll give that a shot
[01:15:42 CEST] <furq> you might also want to consider generating the frames with ffmpeg
[01:15:47 CEST] <furq> not sure if that'll be quick or easy though
[01:16:11 CEST] <c_14> I doubt it'd be faster since you'd have to go through process initialization for every frame
[01:16:13 CEST] <bitmod> that gif above is 120kB, and loads (including the whole generation process) in about 600 milliseconds
[01:16:39 CEST] <furq> well you could presumably just generate subtitles
[01:16:43 CEST] <bitmod> is there no way to create 100 gifs in parallel?
[01:16:58 CEST] <c_14> fork 100 processes/threads
[01:17:59 CEST] <c_14> probably won't be faster since you'll just blow up your load though
[01:18:06 CEST] <c_14> a thread-pool would be more effective
[01:18:17 CEST] <bitmod> hmm what about with a language like elixir? i know it's not the fastest, but maybe that would work
[01:18:23 CEST] <furq> huh
[01:20:34 CEST] <jasonfloyd> Hi there!  I'm trying a very simple mp4 to mxf conversion, and receiving errors.  Command line: "ffmpeg -i ${input_file} -c:v copy -an -f mxf output.mxf", error: "[mxf @ 0x284ba40] AVC Intra 50/100 supported only [mxf @ 0x284ba40] could not get h264 profile av_interleaved_write_frame(): Operation not permitted" - any idea?
[01:22:16 CEST] <c_14> you probably need to reencode the video
[01:22:41 CEST] <jasonfloyd> I've tried using -c:v libx264 instead of copy, same issue.
[01:23:11 CEST] <jasonfloyd> Multiple mp4 inputs, all give the same error.
[01:23:40 CEST] <c_14> you probably need 10-bit x264
[01:23:54 CEST] <c_14> though I'm not sure why it wants avc-intra
[01:24:29 CEST] <jasonfloyd> Any pointers to 10-bit x264 info?
[01:24:58 CEST] <c_14> either download https://www.johnvansickle.com/ffmpeg/ or build it yourself
[01:25:13 CEST] <c_14> though according to the code it should take AVC Baseline as well and AVC High 422 Intra
[01:25:18 CEST] <c_14> maybe try setting -profile:v high ?
[01:25:31 CEST] <jasonfloyd> I'm building x264 and ffmpeg myself, is it just a config option for x264?
[01:25:35 CEST] <c_14> yeah
[01:25:38 CEST] <c_14> --enable-10bit
[01:25:40 CEST] <c_14> afaik
[01:25:52 CEST] <c_14> eh
[01:25:55 CEST] <c_14> --bit-depth=10
[01:25:56 CEST] <bitmod> c_14: what about using graphicsmagick convert? could that be fast?
[01:26:01 CEST] <c_14> couldn't have been more wrong
[01:26:13 CEST] <c_14> bitmod: maybe, never really used it
[01:26:35 CEST] <furq> it's pretty rare that imagemagick is the right answer
[01:26:35 CEST] <bitmod> c_14: i put this code into a loop of n=120 (gm convert -size 380x100 xc:white  -font Noto-Serif -pointsize 64 -fill blue  -draw "text 25,65
[01:26:38 CEST] <jasonfloyd> -profile: v high - same issue.
[01:26:46 CEST] <bitmod>     '03:12:47:5$i'" ./frames/frame$i.gif)
[01:26:54 CEST] <bitmod> woops that's a bit long
[01:26:56 CEST] <bitmod> one sec
[01:27:05 CEST] <c_14> jasonfloyd: what about baseline?
[01:27:40 CEST] <bitmod> here we go: https://paste.ubuntu.com/24928681/
[01:27:49 CEST] <jasonfloyd> also same with baseline.
[01:27:58 CEST] <bitmod> but it takes 1.6s to generate those, which is slower than my python code (0.7s)
[01:28:08 CEST] <jasonfloyd> trying to rebuild with 10bit x264
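A sketch of that rebuild, assuming a 2017-era x264 checkout where 8- and 10-bit were separate builds:

    cd x264 && ./configure --bit-depth=10 && make && make install
    cd ../ffmpeg && ./configure --enable-gpl --enable-libx264 && make && make install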
[01:28:27 CEST] <furq> bitmod: try generating an srt and using the subtitles filter
[01:28:42 CEST] <furq> that'll do it without spawning 120 processes
[01:29:06 CEST] <c_14> furq: don't think srt will cut it with that styling
[01:29:08 CEST] <c_14> he'll need ass
[01:29:27 CEST] <bitmod> furq: generate the srt with ffmpeg?
[01:29:38 CEST] <furq> it's the same format for every frame so i don't see why not
[01:29:50 CEST] <furq> you can use ass styling on srt with the subtitles filter
[01:29:57 CEST] <furq> bitmod: with a shell script or python or whatever
[01:30:28 CEST] <c_14> hmm, I guess you could use font size tags and 2 lines of text
[01:30:33 CEST] <c_14> might even work
[01:30:54 CEST] <furq> there's only one line of text in that command he just pasted
[01:31:02 CEST] <furq> and one size
[01:31:13 CEST] <c_14> he showed this https://imgur.com/a/QQkzH earlier
[01:31:15 CEST] <furq> if he wants a copy of that gif from earlier then yeah
[01:31:17 CEST] <furq> that's more complicated
[01:33:01 CEST] <bitmod> can ffmpeg convert srt to gif?
[01:33:16 CEST] <c_14> it can overlay srt onto an input and output as gif
[01:33:25 CEST] <c_14> in your case the input would be the color pseudo demuxer
[01:33:30 CEST] <furq> yeah you can do this in one command after you generate the srt
[01:33:39 CEST] <furq> and it should be very fast
[01:33:43 CEST] <c_14> (the color filter in the lavfi pseudo-muxer)
[01:34:24 CEST] <furq> -f lavfi -i color=white -vf subtitles=foo.srt,force_style=xxx out.gif
[01:34:27 CEST] <furq> something like that
[01:34:55 CEST] <furq> !source color @bitmod
[01:34:55 CEST] <nfobot> bitmod: http://ffmpeg.org/ffmpeg-filters.html#allrgb_002c-allyuv_002c-color_002c-haldclutsrc_002c-nullsrc_002c-rgbtestsrc_002c-smptebars_002c-smptehdbars_002c-testsrc_002c-testsrc2_002c-yuvtestsrc
[01:35:00 CEST] <furq> !filter subtitles @bitmod
[01:35:00 CEST] <nfobot> bitmod: http://ffmpeg.org/ffmpeg-filters.html#subtitles-1
[01:36:19 CEST] <bitmod> furq: well, i can generate 120 .srt files in 0.01s, so that's a good start xP
[01:36:29 CEST] <c_14> well, you only need 1
[01:36:34 CEST] <furq> you mean one 120-entry srt file, right
[01:37:09 CEST] <bitmod> furq: oh, should each unit of text be on a new line?
[01:37:42 CEST] <bitmod> thinking about it now that makes more sense
[01:38:06 CEST] <c_14> nonce\ntimestamp\ntext
[01:38:22 CEST] <c_14> timestamp range even
[01:39:03 CEST] <bitmod> c_14: like this right? https://matroska.org/technical/specs/subtitles/srt.html
[01:39:14 CEST] <c_14> yeah
[01:39:15 CEST] <furq> yeah
[01:40:09 CEST] <bitmod> what does 440 represent in "00:02:17,440 --> 00:02:20,375"? (and 375)
[01:40:16 CEST] <c_14> milliseconds
[01:40:40 CEST] <bitmod> ah ok
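So an srt for the frame text from the gm command pasted earlier would look something like this (cue durations are placeholders):

    1
    00:00:00,000 --> 00:00:00,100
    03:12:47:50

    2
    00:00:00,100 --> 00:00:00,200
    03:12:47:51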
[01:46:57 CEST] <bitmod> almost there
[01:48:26 CEST] <jasonfloyd> 10-bit x264 didn't make any difference when trying to convert to mxf.  "[mxf @ 0x284da40] AVC Intra 50/100 supported only [mxf @ 0x284da40] could not get h264 profile av_interleaved_write_frame(): Operation not permitted Error writing trailer of foo.mxf: Unknown error occurred"
[01:48:56 CEST] <jasonfloyd> it seems like a pretty straightforward command: ./ffmpeg -i ${input_file} -c:v copy -an -f mxf ${input_file%.*}.mxf
[01:49:05 CEST] <jasonfloyd> ffmpeg v3.1.9
[01:56:12 CEST] <bitmod> c_14, furq: ok, i've got my .srt file generator
[02:00:23 CEST] <c_14> jasonfloyd: ffmpeg -f lavfi -i testsrc=s=1920x1080 -c:v libx264 -profile:v high10 -pix_fmt yuv420p10le -f mxf out.mxf <- works for me
[02:01:29 CEST] <kierenj> hi - I have a complex filter which is 1) taking 5 different-duration input videos, 2) taking segments of each, 3) concatenating some of those segments and 4) overlaying them into a grid... I get a 0 byte output file, lots of CPU, and lots of "Buffer queue overflow, dropping" messages.  could anyone take a peek please? - https://pastebin.com/UyMEJhf4
[02:01:34 CEST] <c_14> -profile:v high222 -pix_fmt yuv422p10le works as well
[02:02:10 CEST] <c_14> this is one of _those_ filter_complexes isn't it
[02:02:28 CEST] <kierenj> might be .. sorry, I'm new to ffmpeg and trying to build a cool timelapse grid array for YT :)
[02:05:41 CEST] <bitmod> c_14, how can i convert my srt to a gif?
[02:05:42 CEST] <c_14> you might want to use hstack/vstack
[02:05:59 CEST] <furq> 00:34:24 ( furq) -f lavfi -i color=white -vf subtitles=foo.srt,force_style=xxx out.gif
[02:06:02 CEST] <furq> 00:34:27 ( furq) something like that
[02:06:42 CEST] <furq> except that , should be a :
[02:06:51 CEST] <c_14> kierenj: and maybe add an explicit output pad and map that?
[02:08:11 CEST] <kierenj> humm, same result but thanks c_14. pushed my laptop battery over the edge so I'll be timing out shortly.. eek
[02:08:23 CEST] <kierenj> (still spins CPU, lots of messages, no output)
[02:08:33 CEST] <c_14> Also, I'd try with a simpler command
[02:08:35 CEST] <c_14> to test
[02:08:40 CEST] <c_14> it might just be too much at once
[02:08:54 CEST] <c_14> like maybe only do the upper left corner of the output you want
[02:08:58 CEST] <c_14> and then expand on that
[02:09:04 CEST] <kierenj> ok cool, it's configurable, one sec.. if my battery lasts
[02:09:47 CEST] <bitmod> furq: whoops, i think i'm doing something wrong, that command is taking a very long time
[02:10:20 CEST] <kierenj> no luck... does "Buffer queue overflow, dropping." indicate an actual problem or just long runtime?
[02:11:10 CEST] <bitmod> furq: is this command right? "ffmpeg -f lavfi -i color=white -vf subtitles=frame.srt:force_style=PrimaryColour=255 movie.gif"
[02:11:19 CEST] <c_14> I think it means it's dropping frames <- kierenj
[02:11:28 CEST] <furq> i don't think that's a valid primarycolour
[02:11:31 CEST] <c_14> check the bottom line of the ffmpeg output, there should be dropped=
[02:11:32 CEST] <furq> and you probably need to set a font
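A corrected sketch of the whole command (size, rate, duration and font come from the earlier gm command and are otherwise placeholders; ASS colours are &HBBGGRR&, so the blue fill becomes &HFF0000&):

    ffmpeg -f lavfi -i color=white:s=380x100:r=10:d=12 \
        -vf "subtitles=frame.srt:force_style='FontName=Noto Serif,PrimaryColour=&HFF0000&'" \
        -y movie.gif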
[02:11:57 CEST] <kierenj> it never finishes or outputs a byte, I just have to terminate it
[02:12:13 CEST] <c_14> with how many overlays/splits was that?
[02:12:19 CEST] <furq> pastebin the srt
[02:13:04 CEST] <bitmod> furq: http://sprunge.us/OaNA
[02:13:10 CEST] <kierenj> 8 overlays (2x4), down from 49
[02:13:19 CEST] <kierenj> I'll do 2..
[02:13:28 CEST] <c_14> 8 should be fine
[02:13:33 CEST] <c_14> do you have a script that generates this?
[02:14:10 CEST] <kierenj> little program, yes. no luck with 2
[02:14:18 CEST] <bitmod> c_14: still workin on it, i've never really used shell scripts before so i'm still tweaking it
[02:14:38 CEST] <bitmod> c_14: ah wrong person
[02:14:55 CEST] <kierenj> revised: https://pastebin.com/JCZaixq3
[02:15:21 CEST] <kierenj> not sure I'm trim/concat/setpts in the right order
[02:18:03 CEST] <c_14> it should be fine
[02:22:42 CEST] <c_14> and he's gone
[02:22:55 CEST] <c_14> and I was just about to tell him that it appears to maybe work here
[02:28:06 CEST] <c_14> the other script also works
[02:28:17 CEST] <c_14> if by "works" you mean spews errors for ~30s and then triggers an OOM-kill
[02:28:35 CEST] <c_14> 47.5 actually
[02:30:06 CEST] <c_14> and they're only warnings
[02:30:08 CEST] <c_14> harmless, really
[05:31:17 CEST] <antiPoP> hi, I want to make a video preview or a video by taking 10 1-second parts and putting them together. Is there any tutorial for that?
[06:11:36 CEST] <greatdex> hi
[07:02:55 CEST] <LokerPoker> Does ffmpeg offer space for beginners who want to contribute ?
[08:55:28 CEST] <Orochimarufan> Is there some way of extracting attachments (fonts) other than -dump_attachment?
[09:19:28 CEST] <c_14> with ffmpeg, probably not
[10:25:16 CEST] <Diego_> I've used ffmpeg -i 1.webm -i 2.webm -filter_complex "nullsrc=size=640x480 [base]; [0:v] setpts=PTS-STARTPTS, scale=320x240 [upperleft]; [1:v] setpts=PTS-STARTPTS, scale=320x240 [upperright]; [2:v] setpts=PTS-STARTPTS, scale=320x240 [lowerleft]; [base][upperleft] overlay=shortest=1 [tmp1]; [tmp1][upperright] overlay=shortest=1:x=320 [tmp2]; [tmp2][lowerleft] overlay=shortest=1:y=240" -c:a libfdk_aac -c:v libx264 -crf 20 -threads 2 -f m
[10:25:57 CEST] <Diego_> to compile 2 webm videos into a mosaic with mp4 re-coding. Both videos have audio and they are working, but after compiling into mosaic I only hear the audio from the first one. Why?
[10:27:36 CEST] <Diego_> Sorry, I've used this line, not the one I sent before: ffmpeg -i 1.webm -i 2.webm -filter_complex "nullsrc=size=640x480 [base]; [0:v] setpts=PTS-STARTPTS, scale=320x240 [upperleft]; [1:v] setpts=PTS-STARTPTS, scale=320x240 [upperright]; [base][upperleft] overlay=shortest=1 [tmp1]; [tmp1][upperright] overlay=shortest=1:x=320" -c:a libfdk_aac -c:v libx264 -crf 20 -threads 2 -f mp4 -preset ultrafast output.mp4
[10:43:10 CEST] <durandal_1707> Diego_: because you said so in graph
[10:43:46 CEST] <durandal_1707> you need amix to mix audio
[11:04:53 CEST] <thebombzen> Diego_: consider using hstack and vstack rather than overlay/pad
[11:05:21 CEST] <thebombzen> these stack videos horizontally and vertically, but are much faster with less overhead than overlay/pad
[11:17:08 CEST] <Diego_> Are there any tutorial about how using hstack or vstack for a similar purpose?
[11:18:17 CEST] <Diego_> thebombzen: And audio + video will be mixed with that or should I need amix as durandal_1707 suggested?
[11:18:42 CEST] <thebombzen> hstack and vstack are easy to use, just read the documentation
[11:19:00 CEST] <thebombzen> and durandal's advice was to use the amix audio filter
[11:19:21 CEST] <thebombzen> you need to use the amix audio filter if you want to mix the audio together
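A sketch combining both suggestions for the two-input mosaic from Diego_'s earlier command (codecs kept as he had them; libfdk_aac needs an ffmpeg built with --enable-libfdk-aac):

    ffmpeg -i 1.webm -i 2.webm -filter_complex \
        "[0:v]setpts=PTS-STARTPTS,scale=320x240[l]; \
         [1:v]setpts=PTS-STARTPTS,scale=320x240[r]; \
         [l][r]hstack=inputs=2[v]; [0:a][1:a]amix=inputs=2[a]" \
        -map "[v]" -map "[a]" -c:v libx264 -crf 20 -c:a libfdk_aac output.mp4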
[11:20:14 CEST] <Diego_> Okay. I will give it a try. Thanks to both of you :D
[13:10:13 CEST] <thebombzen> should we even be getting this error? [null @ 0x55b785680720] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 5 >= 5
[13:10:44 CEST] <thebombzen> I feel like vf_null should not spit out error messages about invalid dts
[13:10:59 CEST] <thebombzen> er, not vf_null, I mean the null muxer
[13:14:39 CEST] <jkqxz> It's coming from the generic lavf muxer code.  If you really wanted it not to appear for the null muxer then you'd need specific hacks there.
[13:15:13 CEST] <thebombzen> I guess that makes sense, it just feels awkward that the null muxer is complaining about muxing info
[13:15:22 CEST] <thebombzen> for reference: http://sprunge.us/PAEd
[13:17:07 CEST] <BtbN> it's not
[13:21:31 CEST] <thebombzen> it's not what?
[13:28:28 CEST] <Diego_> Hey! I've been testing hstack and vstack for a while. It's working like a charm for my mosaic outputs and you were right thebombzen, it's kinda fast. Btw, I must keep "-preset ultrafast" or I lose too much encoding speed. Now I'm facing 2 new "errors" and I don't know how to handle them properly
[13:30:56 CEST] <Diego_> First of all, I'm compiling 2 videos and 2 audios in the same mosaic using "amix", so I don't have to compile both audio and video into a single video before creating the mosaic. Secondly, one of my videos has like 3 secs without a image (probably due to a bug with the recording software), but audio its perfect. If I mix that video and its corresponding audio, I have audio from sec 0 to sec 3, but no video. Thats right, because the vid
[13:32:23 CEST] <Diego_> the problem is that when I mount the mosaic, that video does not start by showing an empty screen for 3 secs; instead it starts at 0:00 with the first available frame, and that creates a delay between image and audio. I've been looking at "setpts" options, but I can't find a solution with that filter to make it work properly
[13:32:26 CEST] <Diego_> any suggestion?
[13:35:02 CEST] <Diego_> I don't know if it was clear enough :/
[13:39:21 CEST] <thebombzen> Diego_: try adding setpts=PTS+3/TB
[13:39:41 CEST] <thebombzen> it's a video filter that increases the timestamps by 3 seconds
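In the mosaic graph above that would go on the late input's chain, e.g. (keeping the start-time normalisation):

    [1:v]setpts=PTS-STARTPTS+3/TB,scale=320x240[upperright]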
[13:53:18 CEST] <Diego_> thebombzen: When I compile both videos, I get this error spammed like hell: https://pastebin.com/SmnZSWsj I suspect it's because of those empty frames. Is there no option to insert the video with its empty frames instead of jumping to the first available frame?
[13:53:34 CEST] <Diego_> thebombzen: aw! sorry. Didn't see your filter option, I will test it now
[13:59:41 CEST] <Diego_> thebombzen: Nope, it's not working :( with that option I get both videos frozen, but audio still out of sync. Without setting the PTS filter, I get both videos frozen for those seconds, then they start to play and audio is in sync, but I don't want the videos frozen. I want them in sync, and if they have empty frames, to keep the empty frames
[14:00:10 CEST] <thebombzen> You could also try using a source and the concat filter
[14:02:07 CEST] <thebombzen> for example, in your complex filter, you can use color=c=black:s=1280x720:d=3:r=24 to generate 3 seconds of all black 1280x720 video at 24 fps
[14:02:23 CEST] <thebombzen> then you could concat that with the video stream that's 3 seconds off with the concat filter
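A sketch of that for the late input (labels are placeholders; a format filter is added so the black padding and the decoded stream match before concat):

    color=c=black:s=320x240:d=3:r=24,format=yuv420p[pad]; \
    [1:v]scale=320x240,format=yuv420p[late]; \
    [pad][late]concat=n=2:v=1:a=0[right]; ...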
[14:05:06 CEST] <Diego_> Okay. I will give it a try. I was thinking about that, but didn't know how to implement it :P I'm still pretty new with this. Furthermore, I have to make this an automatic process in PHP no matter how many videos need to be compiled. Funny times are coming...
[14:05:26 CEST] <thebombzen> PHP? :(
[14:05:32 CEST] <thebombzen> I recommend something else
[14:05:45 CEST] <Diego_> I'm working with laravel (Symfony)
[14:05:57 CEST] <thebombzen> you should avoid PHP
[14:06:16 CEST] <Diego_> Why should I?
[14:06:16 CEST] <thebombzen> if something wants to you use it, then don't use that either
[14:06:32 CEST] <thebombzen> PHP is a terrible stain upon this earth whose blight rots good software everywhere
[14:07:44 CEST] <Diego_> I used to be a Java dev... but Java lacks hardware effectiveness when used for the web, even with new versions
[14:07:59 CEST] <Diego_> It consumes like hell
[14:08:15 CEST] <JEEB> I'd rather focus on the issue than the implementation
[14:08:21 CEST] <JEEB> or well, the language
[14:08:53 CEST] <Diego_> And I also hated PHP, but well... I need to focus on something and when I  started to work in this company, they were working in Symfony, so... I had to adapt
[14:08:56 CEST] <thebombzen> Java is not designed to be used for the web or as a server language. It's the wrong tool for that (even though you can)
[14:09:08 CEST] <JEEB> what
[14:09:17 CEST] <JEEB> at this point JVM is one of the best VMs around
[14:09:28 CEST] <JEEB> but OK, I'm not going to go there
[14:09:45 CEST] <Diego_> but its hardware usage is still huge for a production server if you just need a simple APP
[14:09:46 CEST] <thebombzen> yea, Java is the wrong tool for many things and that's its biggest problem, people don't know when not to use it
[14:10:15 CEST] <thebombzen> I am quite familiar with Java's language and it's really not amazing. But it's not PHP-bad
[14:10:44 CEST] <Diego_> well, I will give a try to black screen
[14:10:48 CEST] <Diego_> and lets see what happens
[14:13:28 CEST] <Diego_> Before doing something... I'm thinking what if I get empty frames from 00:00:30 to 00:00:35?
[14:13:57 CEST] <Diego_> I can't concat that (in an easy way)
[14:14:12 CEST] <Diego_> there should be another solution to compile the video with empty frames :/
[14:14:34 CEST] <JEEB> if you need to stitch together various things dynamically you've left the realm of ffmpeg.c
[14:14:41 CEST] <JEEB> and need to make your own code using the APIs
[14:18:26 CEST] <thebombzen> Well to be honest this is getting so incredibly complex it's probably easier to do the temporal chunks of video separately with different commands
[14:18:59 CEST] <thebombzen> and then concatenate them together
[14:19:40 CEST] <Diego_> Okay, thank you anyway for all the information :) have a nice day ^^
[16:09:20 CEST] <sgermain> Hi, I'm trying to find some information about an error that's spamming my console: Invalid UI golomb code. Can anyone explain to me why this starts happening after ffmpeg has been up and running for about 5 hours? Thanks.
[16:11:30 CEST] <harold> hey guys
[16:11:59 CEST] <harold> i have a camcorder which is providing a live stream... and i have a stream of text coming in with x,y coordinate
[16:12:23 CEST] <harold> i want to play a blurred small circle on the live feed, as such: https://gyazo.com/c220fb28c6f4f33e3a261dad5e4ec022
[16:12:28 CEST] <harold> what's the simplest way to do that?
[16:12:32 CEST] <harold> s/play/place
[16:26:14 CEST] <RandomCouch> I'm trying to build ffmpeg for android on Ubuntu
[16:26:35 CEST] <RandomCouch> I know I've asked this before but it was for Windows and I wasn't successful
[16:26:52 CEST] <RandomCouch> but now I'm on ubuntu, and I'm looking for a good guide to use most recent versions of NDK and ffmpeg
[16:26:59 CEST] <RandomCouch> so I can get a .so binary file for android
[16:27:24 CEST] <JEEB> 1) get NDK 2) create a standalone toolchain 3) add standalone toolchain to PATH 4) if using GCC, that's pretty much it. if using clang, you also have to put https://git.libav.org/?p=gas-preprocessor.git;a=summary into PATH
[16:27:39 CEST] <JEEB> after that it's standard cross-compilation
[16:29:16 CEST] <JEEB> in FFmpeg's source dir `mkdir -p android_build && cd android_build && ../configure --prefix=/your/prefix --arch=armv7 --cpu=armv7-a --enable-cross-compile --target-os=android --cross-prefix=arm-linux-androideabi- --enable-shared`
[16:29:21 CEST] <RandomCouch> so I got NDK, I've extracted it
[16:29:39 CEST] <JEEB> that should give you a very basic ARMv7 config
[16:29:50 CEST] <RandomCouch> ok I'm trying that out
[16:30:04 CEST] <JEEB> RandomCouch: you need to create the standalone toolchain and add it to PATH
[16:30:09 CEST] <JEEB> have you done that?>
[16:30:15 CEST] <RandomCouch> No
[16:30:18 CEST] <RandomCouch> I'm not sure how to do that heh
[16:30:23 CEST] <JEEB> the NDK dir contains
[16:30:39 CEST] <JEEB> build/tools/make_standalone_toolchain.py --install-dir /where/you/want/your/standalone/toolchain --arch arm --api 21
[16:30:40 CEST] <RandomCouch> the toolchains folder
[16:30:50 CEST] <RandomCouch> ah ok
[16:30:58 CEST] <JEEB> that creates a stand-alone ARM toolchain with the API level 21
[16:31:03 CEST] <JEEB> (which is what I'm using)
[16:31:12 CEST] <JEEB> you can set it lower if you need to
[16:31:18 CEST] <JEEB> https://source.android.com/source/build-numbers
[16:31:22 CEST] <JEEB> what the API levels map to :P
[16:31:24 CEST] <RandomCouch> nice, 21 is exactly what I need
[16:31:41 CEST] <JEEB> usually I have something like /home/my_username/ownapps
[16:31:46 CEST] <JEEB> and I put my toolchains there
[16:31:56 CEST] <JEEB> in subdirectories
[16:33:02 CEST] <JEEB> also since you don't set --cc in the FFmpeg configure that uses GCC 4.9 which google no longer really supports
[16:33:32 CEST] <RandomCouch> the make standalone toolchain tool gave me an error
[16:33:39 CEST] <RandomCouch> couldn't find AndroidVersion.txt in the target folder
[16:33:50 CEST] <JEEB> huh
[16:34:08 CEST] <JEEB> it should actually error if it /has/ the directory already :P
[16:34:42 CEST] <RandomCouch> lol yeah I just messed up, it deleted a whole directory now for some reason..
[16:35:17 CEST] <RandomCouch> Trying again
[16:37:55 CEST] <JEEB> but yea, the only android specific thing is getting the toolchain, really
[16:37:59 CEST] <JEEB> after that it's business as usual
[16:38:14 CEST] <JEEB> cross compilation and in this case for android
[16:38:16 CEST] <RandomCouch> I never really built ffmpeg for any platform
[16:38:24 CEST] <RandomCouch> always just downloaded the bin for windows
[16:40:11 CEST] <RandomCouch> Alright I just ran the make standalone toolchain command
[16:40:15 CEST] <RandomCouch> and everything went well I believe
[16:40:35 CEST] <JEEB> ok
[16:40:50 CEST] <JEEB> now I recommend you make a script that adds that to your PATH
[16:40:50 CEST] <RandomCouch> so let
[16:40:56 CEST] <RandomCouch> ok
[16:41:01 CEST] <RandomCouch> an .sh script?
[16:41:13 CEST] <JEEB> you'd be sourcing it so while yes but not exactly
[16:41:20 CEST] <JEEB> let me post mine
[16:42:10 CEST] <guther> Is there a way to tell ffmpeg to take the best available stream from a playlist?
[16:42:38 CEST] <JEEB> guther: I'd think it does that by default
[16:42:54 CEST] <JEEB> since it tries to select the "best" video and audio tracks (one of each) by default
[16:43:06 CEST] <JEEB> in quotation marks because what is "best" is 100% up to the code that decides it
[16:43:07 CEST] <guther> it always takes the lowest res in my tests
[16:45:01 CEST] <guther> With vlc foo.m3u i get hi-res, with ffmpeg -i foo.m3u i get lowres
[16:45:01 CEST] <JEEB> RandomCouch: ok it was literally just a single export line :P
[16:45:23 CEST] <JEEB> export PATH=${PATH}:${HOME}/ownapps/ndk-toolchain/armv7/bin:${HOME}/ownapps/android-ndk
[16:45:35 CEST] <__deivid__> Hi! Is there a way to tell ffmpeg to read in big buffers/chunks from the files? I'm transcoding LOTS of files with a cluster over NFS
[16:45:47 CEST] <JEEB> the first one is the standalone toolchain you made, and the second is the actual NDK dir
[16:48:13 CEST] <JEEB> RandomCouch: after you modify the paths you can try running `source thatfile.sh`
[16:48:27 CEST] <JEEB> and then check it with `echo "${PATH}"`
[16:48:52 CEST] <JEEB> and then try writing arm-linux-<tab><tab><tab> if it looks OK
[16:48:59 CEST] <JEEB> it should bring out the NDK toolchain
[16:49:55 CEST] <ac_slater> hey guys, If I have a data stream and a video stream in an MPEGTS container, do the PTS and/or DTS of both streams have to relate?
[16:50:20 CEST] <ac_slater> ie - use a global PTS/DTS or can the streams truly be independent ?
[16:55:47 CEST] <guther> __deivid__ https://www.ffmpeg.org/ffmpeg-protocols.html#cache
[16:56:56 CEST] <__deivid__> guther: that's not what I meant. I have like 7TB of video on a NFS server and I'm converting them with multiple workers
[16:57:19 CEST] <__deivid__> I **think** ffmpeg is doing lots of small read() instead of fewer, bigger reads
[16:57:39 CEST] <__deivid__> gstreamer has this configurable with 'blocksize'
[16:57:54 CEST] <JEEB> yea, I think someone recently added some stuff for that although I wouldn't be surprised if there already was some general parameter for it
[16:58:24 CEST] <JEEB> yea, "avformat/aviobuf: add support for specifying minimum packet size and marking flush points"
[16:58:28 CEST] <JEEB> got merged
[16:58:47 CEST] <JEEB> although I think that might be for writing
[16:59:11 CEST] <JEEB> there's a lot of parameters for IO
[16:59:31 CEST] <JEEB> and if you're doing your own API usage you can make your own IO thing even (I've done that for MS's IStream API)
[16:59:47 CEST] <RandomCouch> JEEB:  I accidentally entered wrong paths into my PATH
[16:59:51 CEST] <RandomCouch> how can I remove them lol
[16:59:55 CEST] <JEEB> just close the terminal
[17:00:01 CEST] <JEEB> and re-open
[17:00:03 CEST] <RandomCouch> I'm connected via ssh
[17:00:06 CEST] <RandomCouch> alright
[17:00:27 CEST] <JEEB> then just `echo "${PATH}"` to check that you're back to business
[17:00:36 CEST] <JEEB> also if you're doing things through ssh I recommend byobu/tmux
[17:00:44 CEST] <JEEB> for multi-tabbing goodness
[17:00:50 CEST] <RandomCouch> ooh
[17:00:53 CEST] <RandomCouch> using putty right now haha
[17:01:15 CEST] <JEEB> yea, that's the ssh client, but then on the server you have a thing handling multiple buffers etc
[17:01:24 CEST] <JEEB> but that's a discussion for another channel :P
[17:02:14 CEST] <RandomCouch> JEEB: in the extracted toolchains dir, I can't find any armv7 folder but I do find arm-linux-androideabi
[17:02:18 CEST] <RandomCouch> should I use that instead for my path?
[17:03:31 CEST] <JEEB> RandomCouch: I just make two subdirs myself. one for arm(v7) and another for arm64
[17:03:39 CEST] <JEEB> so it's just the /bin directory under the toolchain
[17:03:40 CEST] <JEEB> :)
[17:04:13 CEST] <RandomCouch> hmm but what do you put inside those dirs?
[17:04:52 CEST] <RandomCouch> export PATH=${PATH}:/home/dev/toolchains/NDK/arm-linux/androideabi/bin:/home/dev/NDK/android-ndk-r15
[17:04:56 CEST] <RandomCouch> this is my command right now
[17:05:01 CEST] <RandomCouch> is this ok?
[17:05:22 CEST] <RandomCouch> woops I meant arm-linux-androideabi *
[17:05:25 CEST] <RandomCouch> no slash there
[17:05:55 CEST] <JEEB> uhh, I think under <the path you used for the standalone toolchain>/bin exists?
[17:05:59 CEST] <JEEB> that's the first one
[17:06:18 CEST] <RandomCouch> oh ok
[17:09:17 CEST] <JEEB> the NDK's base directory isn't required for FFmpeg itself, but it's a common thing to add to PATH
[17:09:26 CEST] <JEEB> since many apps that then use NDK use the tools there
[17:11:44 CEST] <ac_slater> RandomCouch:  Just use this https://github.com/Bilibili/ijkplayer
[17:12:16 CEST] <JEEB> he's not looking for a player, I mean I'm developing a player for android as well
[17:12:34 CEST] <JEEB> or well, mostly lurk nowadays
[17:12:40 CEST] <JEEB> others do features now :D
[17:12:58 CEST] <ac_slater> ah ok my bad. I found this the other day and they do the android integration well enough
[17:13:04 CEST] <ac_slater> ie - mediacodec, etc
[17:13:29 CEST] <JEEB> well mediacodec is simple enough
[17:13:33 CEST] <ac_slater> and ijkplayer can be used as a library
[17:13:35 CEST] <JEEB> enable jni and mediacodec
[17:14:01 CEST] <ac_slater> hmm JEEB you mean there is an official ffmpeg mediacodec interop?
[17:14:05 CEST] <JEEB> yes
[17:14:12 CEST] <JEEB> there's two modes
[17:14:22 CEST] <JEEB> output to surface or output to just a buffer
[17:14:29 CEST] <RandomCouch> I'm using android's MediaRecorder class to record videos of the gameplay of the Unity game I'm developing
[17:14:33 CEST] <JEEB> libmpv uses the latter, and then we build on top of that
[17:14:36 CEST] <RandomCouch> it records the screen and microphone input
[17:14:47 CEST] <RandomCouch> I want to then append another video to the recorded video
[17:14:50 CEST] <RandomCouch> something like an outro
[17:15:04 CEST] <ac_slater> JEEB: awesome!
[17:15:06 CEST] <ac_slater> thanks
[17:15:13 CEST] <RandomCouch> but that outro is not recorded within the app, and the videos recorded in the app do not have a stable frame rate and bitrate
[17:15:22 CEST] <RandomCouch> so appending is challenging
[17:15:30 CEST] <RandomCouch> because the outro vid has a fixed frame rate and bitrate
[17:15:43 CEST] <JEEB> most modern containers are just OK with VFR
[17:15:54 CEST] <RandomCouch> I was using mp4parser library for android to append but since it's not working I'm going with ffmpeg for android
[17:15:59 CEST] <JEEB> since the whole assumption of "frame rate" goes to the dirt bin with anything modern
[17:16:13 CEST] <JEEB> (you have timestamps and durations usually)
[17:16:14 CEST] <RandomCouch> VFR = variable frame rate?
[17:16:18 CEST] <JEEB> yes
[17:16:20 CEST] <RandomCouch> gotcha
[17:16:38 CEST] <RandomCouch> when I combine my videos using mp4parser, the audio is combined but there is no image
[17:16:46 CEST] <JEEB> and heck, there are containers like FLV or MPEG-TS or Matroska where most common frame rates are actually not save'able with 100% precision
[17:16:46 CEST] <ChocolateArmpits> broadcast doesn't suffer vfr ills
[17:17:31 CEST] <RandomCouch> JEEB:  so I added the paths to my PATH
[17:17:37 CEST] <RandomCouch> not sure what to do next to build android binary
[17:17:45 CEST] <RandomCouch> and then use it in my android studio project
[17:18:20 CEST] <JEEB> have you checked that you have arm-linux-<tab><tab> giving  you stuff in the terminal?
[17:18:45 CEST] <JEEB> stuff like gcc and clang etc
[17:19:33 CEST] <RandomCouch> not getting anything
[17:20:06 CEST] <JEEB> did you source that script and check that the directories are really in PATH?
[17:20:07 CEST] <RandomCouch> wait nevermind
[17:20:13 CEST] <RandomCouch> I'm getting arm-linux-androideabi-
[17:20:17 CEST] <JEEB> yup
[17:20:19 CEST] <RandomCouch> when I hit tab after arm-linux-
[17:20:22 CEST] <JEEB> yup
[17:20:36 CEST] <JEEB> alright
[17:20:47 CEST] <JEEB> let's add one more thing to just start using clang from the get-go
[17:20:48 CEST] <RandomCouch> there are many options, clang and gcc are there
[17:21:18 CEST] <JEEB> move to the bin directory of the standalone toolchain that you added to the PATH
[17:21:43 CEST] <JEEB> and do what these three lines do https://github.com/mpv-android/buildscripts/blob/master/download.sh#L61..L63
[17:21:54 CEST] <JEEB> that adds gas-preprocessor
[17:22:00 CEST] <JEEB> and makes it executable
[17:22:23 CEST] <JEEB> clang can't do the ARM assembly that's in FFmpeg without that tool in the PATH
[17:24:23 CEST] <JEEB> after that git clone FFmpeg (`git clone https://git.videolan.org/git/ffmpeg.git`), move to its directory and scroll quite a bit up in this chat :)
[17:24:36 CEST] <JEEB> because I posted an example buildroot mkdir + configure line
[17:24:54 CEST] <RandomCouch> JEEB: hm when I run the wget command I get
[17:24:56 CEST] <RandomCouch>  Saving to: index.html?p=gas-preprocessor.git;a=blob_plain;f=gas-preprocessor.pl;hb=HEAD.3
[17:25:07 CEST] <RandomCouch> and then when I try the chmod it can't find the gas-preprocessor.pl
[17:25:30 CEST] <JEEB> RandomCouch: then you forgot the \ at the end
[17:25:36 CEST] <JEEB> it's two-line separated single command
[17:25:53 CEST] <RandomCouch> wget "https://git.libav.org/?p=gas-preprocessor.git;a=blob_plain;f=gas-preprocessor.pl;hb=HEAD" \ -O gas-preprocessor.pl
[17:25:56 CEST] <RandomCouch> like this correct?
[17:26:03 CEST] <JEEB> if it's a single line you don't need the \
[17:26:09 CEST] <RandomCouch> oh
[17:26:31 CEST] <RandomCouch> ok that worked
[17:26:35 CEST] <JEEB> nice
[17:27:12 CEST] <RandomCouch> do I clone that git repo inside the bin folder too
[17:27:16 CEST] <RandomCouch> or it doesn't matter?
[17:27:29 CEST] <JEEB> no
[17:27:37 CEST] <JEEB> don't do anything else in the bin dir
[17:27:42 CEST] <RandomCouch> alright
[17:28:00 CEST] <JEEB> the main thing is now that you can call gas-preprocessor from the command line from PATH
[17:29:34 CEST] <JEEB> now clone FFmpeg and create a build root (you generally don't build things in their source tree as-is unless you absolutely need to)
[17:29:48 CEST] <JEEB> also it enables quick clean-up :)
[17:30:00 CEST] <RandomCouch> so after cloning ffmpeg, I create another directory somewhere for the build?
[17:30:11 CEST] <JEEB> or within the ffmpeg dir, which is what the lazy me usually does
[17:30:19 CEST] <JEEB> main thing is that you don't build right there in the source directory itself
[17:30:26 CEST] <RandomCouch> haha gotcha
[17:30:42 CEST] <RandomCouch> i got ffmpeg cloned in /home/dev/toolchains/ffmpeg
[17:30:50 CEST] <RandomCouch> and created a dir for build in /home/dev/toolchains/ffmpeg_build
[17:31:00 CEST] <JEEB> ok
[17:31:28 CEST] <JEEB> then your path towards the configure from the ffmpeg_build dir would be ../ffmpeg/configure . to which you then add parameters
[17:32:27 CEST] <thomedy> okay... i am trying to decompress audio.. i can do it with sox and that's awesome... but i'm trying to figure out how many samples per second i have, because i tried 44100, 16-bit signed integer encoding with 2 channels and the bits don't add up
[17:32:41 CEST] <JEEB> --prefix=/your/ffmpeg_prefix --arch=armv7 --cpu=armv7-a --enable-cross-compile --target-os=android --cross-prefix=arm-linux-androideabi- --enable-shared --disable-static --cc=arm-linux-androideabi-clang
[17:32:52 CEST] <thomedy> so i'm trying to figure out what's going on. can someone show me how to query raw audio with ffmpeg -i to get my info
[17:32:54 CEST] <JEEB> ^ RandomCouch this should build with clang for ARMv7
[17:33:03 CEST] <thomedy> im missing something
[17:33:06 CEST] <RandomCouch> sweet, what should my prefix be ?
[17:33:16 CEST] <RandomCouch> Is it an important detail or I can put in whatever?
[17:33:30 CEST] <thomedy> the song is 253.6 seconds... and based on the math i should have a little under 400000000 bits but i have a little over, so i'm missing something when i convert to raw
[17:33:37 CEST] <JEEB> it's the directory where the binaries will be installed to together with the pkg-config files that help you to build against it
[17:33:48 CEST] <JEEB> so you will get a bin/ and lib/ and include/
[17:33:49 CEST] <JEEB> etc
[17:33:50 CEST] <JEEB> there
[17:33:52 CEST] <JEEB> under that prefix
[17:33:56 CEST] <RandomCouch> ah ok
[17:34:40 CEST] <JEEB> that should then output the configuration (or an error)
[17:34:52 CEST] <JEEB> and if there's an error there's a config.log that you can read
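The whole sequence from this conversation in one place, as a sketch (prefix path is a placeholder):

    mkdir -p android_build && cd android_build
    ../configure --prefix=/your/prefix --arch=armv7 --cpu=armv7-a \
        --enable-cross-compile --target-os=android \
        --cross-prefix=arm-linux-androideabi- \
        --cc=arm-linux-androideabi-clang --enable-shared --disable-static
    make -j5 && make install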
[17:35:55 CEST] <RandomCouch> awesome, thanks JEEB ! the command just executed and only gave me a warning
[17:35:57 CEST] <RandomCouch>  WARNING: arm-linux-androideabi-pkg-config not found, library detection may fail.
[17:36:16 CEST] <Diego_> Yo. Can I resize an input video without maintaining aspect ratio?
[17:36:18 CEST] <JEEB> yea, that only matters if you are building the FFmpeg against things that have pkg-config files
[17:36:33 CEST] <RandomCouch> ah ok, so it doesn't matter in my case if building for android?
[17:36:46 CEST] <JEEB> well it matters if you first build X,Y,Z for android and those have pkg-config files
[17:36:54 CEST] <JEEB> and then want to build FFmpeg with those
[17:37:01 CEST] <JEEB> with just base FFmpeg, no it doesn't matter
[17:37:20 CEST] <JEEB> RandomCouch: after that you can just make -jX where X is the amount of cores you have + 1
[17:37:23 CEST] <RandomCouch> ah, ok, have you used ffmpeg with android java ?
[17:37:29 CEST] <JEEB> yes
[17:37:52 CEST] <RandomCouch> do you know of a good guide on how to package the binary with your java app and use it ?
[17:38:00 CEST] <JEEB> and if the build process finishes with make, then `make install` to install the completed thing into the prefix
[17:38:47 CEST] <thomedy> i just noticed when i ffmpeg -i input.raw i'm getting rawvideo even though it's an audio file, that might have something to do with it
[17:39:00 CEST] <RandomCouch> make command is running
[17:40:25 CEST] <JEEB> RandomCouch: basically you have to copy the required libraries into the correct directory and then use standard JNI within your app to load those libraries
[17:41:02 CEST] <JEEB> also usually you might need your own wrapper on top of FFmpeg which you then call from the JNI side
[17:41:22 CEST] <JEEB> we have that in mpv-android but unfortunately the NDK integration we have is crude on that side (doesn't use pkg-config)
[17:41:44 CEST] <JEEB> it just grabs certain libraries from your build prefix and builds against them :)
[17:41:50 CEST] <thomedy> maybe not, because when i run ffmpeg -i input -f s16le -acodec pcm_s16le output.raw, which forces the raw codec and bit depth... (i've done enough research to see what's going on here) it still reads as rawvideo when i run ffmpeg -i
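Raw PCM has no header, so ffmpeg probes a bare .raw file as something else (here rawvideo); the demuxer has to be told the layout with input options placed before -i, e.g. for the parameters thomedy described:

    ffmpeg -f s16le -ar 44100 -ac 2 -i output.raw check.wav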
[17:43:17 CEST] <RandomCouch> JEEB:  I gotta say I am not completely understanding all of this haha, a bit overwhelming but I really appreciate your help
[17:43:56 CEST] <RandomCouch> make command is still running
[17:44:00 CEST] <RandomCouch> I didn't expect it would take this long
[17:44:18 CEST] <JEEB> usually after you get things running you start limiting your configuration
[17:44:24 CEST] <JEEB> since FFmpeg contains a metric truckload of stuff
[17:44:30 CEST] <JEEB> but only after you get it working ;)
[17:44:41 CEST] <RandomCouch> ah
[17:44:44 CEST] <JEEB> we've had enough cases where people start with --disable-everything --enable-some-things
[17:44:48 CEST] <RandomCouch> all I really need is the concat functionality
[17:45:12 CEST] <RandomCouch> for mp4 videos h264 encoded
[17:45:19 CEST] <RandomCouch> and aac audio
[17:45:23 CEST] <JEEB> well you'll see after you get the API code done
[17:45:33 CEST] <RandomCouch> alright :)
[17:45:39 CEST] <JEEB> what APIs and components you'd be using etc
[17:46:56 CEST] <RandomCouch> I've seen some other links on ffmpeg page for building for android, like this for example http://bambuser.com/opensource
[17:47:18 CEST] <RandomCouch> are these easier ways of building ffmpeg for android
[17:47:38 CEST] <RandomCouch> assuming all I need is the concat functionality, I probably don't need the latest ffmpeg version
[17:47:54 CEST] <JEEB> no
[17:48:01 CEST] <JEEB> it's the same exact way of building FFmpeg
[17:48:05 CEST] <JEEB> which you've already done :D
[17:48:14 CEST] <RandomCouch> ahh I see
[17:50:20 CEST] <RandomCouch> awesome, thanks again JEEB, make command is still running, I will run `make install` after that and then I should find the .so binary files in the prefix folder correct?
[17:50:32 CEST] <JEEB> yes, and headers under <prefix>/include
[17:50:55 CEST] <RandomCouch> what will I use the headers for?
[17:51:11 CEST] <JEEB> true, for nothing really unless you write your wrapper in C/C++ in your app
[17:51:14 CEST] <JEEB> :)
[17:51:23 CEST] <JEEB> and pkg-config .pc files are created under <prefix>/lib/pkgconfig
[17:51:48 CEST] <JEEB> which enable you to get the various linking etc flags, although those also are unrelated if you do not make a wrapper in C/C++ and just do everything in JNI
[17:52:34 CEST] <JEEB> but yea, 'grats on your first FFmpeg build :)
[17:56:38 CEST] <RandomCouch> haha thank you :)
[17:56:44 CEST] <RandomCouch> I will just do it all in JNI
[17:57:32 CEST] <RandomCouch> will try to follow this repo https://github.com/WritingMinds/ffmpeg-android-java in how they integrated ffmpeg in java
[17:58:07 CEST] <JEEB> lol
[17:58:30 CEST] <JEEB> that looks like just running the command line or so
[17:58:31 CEST] <RandomCouch> is that bad?
[17:58:40 CEST] <JEEB> not necessarily
[17:58:42 CEST] <RandomCouch> yeah kinda
[17:58:51 CEST] <JEEB> if it works for you, sure
[17:58:51 CEST] <RandomCouch> it's all I really need, just need to run a concat cmd
[17:59:04 CEST] <RandomCouch> also getting a few errors at the end of make command
[17:59:06 CEST] <RandomCouch> src/cmdutils.c:100: error: undefined reference to 'stdout' src/ffmpeg_opt.c:897: error: undefined reference to 'stderr' src/ffmpeg.c:2774: error: undefined reference to 'stdout' src/ffmpeg.c:1718: error: undefined reference to 'stderr' src/ffmpeg.c:3864: error: undefined reference to 'stderr' src/ffmpeg.c:4793: error: undefined reference to 'stderr'
[17:59:18 CEST] <JEEB> huh
[17:59:30 CEST] <JEEB> oh
[17:59:31 CEST] <JEEB> right
[17:59:38 CEST] <JEEB> you were building the tools as well
[17:59:46 CEST] <JEEB> I have those disabled so I wouldn't have noticed that
[17:59:50 CEST] <JEEB> even if it was broken
[17:59:50 CEST] <RandomCouch> ahh
[17:59:59 CEST] <RandomCouch> so I can still run make install?
[18:00:02 CEST] <JEEB> no
[18:00:04 CEST] <RandomCouch> or do I have to disable the tools first
[18:00:05 CEST] <RandomCouch> ok
[18:00:12 CEST] <JEEB> well you just said you want to use the tools
[18:00:21 CEST] <JEEB> but you might want to remove and recreate the build dir
[18:00:26 CEST] <RandomCouch> alright
[18:00:34 CEST] <JEEB> and re-configure with without the disable-static/enable-shared
[18:00:36 CEST] <RandomCouch> I gotta go on lunch break, thanks a lot JEEB  I will continue this in a bit
[18:00:39 CEST] <JEEB> as ffmpeg by default builds static
[18:00:53 CEST] <JEEB> try with that, and if it still fails then on that NDK you can't build ffmpeg.c
[18:01:00 CEST] <JEEB> which you may put in as a bug report if ye want
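i.e. the earlier configure line minus the shared-library flags, as a sketch, so the command-line tools link statically against the FFmpeg libraries:

    ../configure --prefix=/your/prefix --arch=armv7 --cpu=armv7-a \
        --enable-cross-compile --target-os=android \
        --cross-prefix=arm-linux-androideabi- --cc=arm-linux-androideabi-clang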
[18:01:04 CEST] <alphabitcity> I'm using ffmpeg to transcode an RTP stream to RTMP. Is it possible to, at the same time, encode ID3 tags at arbitrary stream positions (e.g. timed metadata)? Thank you!
[18:05:56 CEST] <Diego_> Hi there. I have a 240x240 video (the codec information says the display size is 240x256), but when I compile using the "scale=1280x480" filter, the video gets resized keeping the aspect ratio. How can I scale without keeping the aspect ratio? I've tried --video_size before the -i option, but it throws an error "Option video_size not found"
[18:06:31 CEST] <JEEB> Diego_: just use setdar or setsar filters after the scale
[18:06:48 CEST] <harold> Hey guys, so I have some arbitrary compressed video stream and another text stream at the same time with a single x,y value. I want to output a video stream pretty much on the fly which is basically the video stream I had coming but with some blurring in a 10px radius circle at the x,y point that was given. Is ffmpeg the tool to do this?
[18:07:03 CEST] <JEEB> there might have even been an option in the scale filter
[18:07:58 CEST] <JEEB> Diego_: I recommend you pass through the manual and start with the scale filter's manual after which you can move to the rest https://www.ffmpeg.org/ffmpeg-all.html#scale-1
[18:10:49 CEST] <Diego_> Thank you JEEB
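For the 240x240 input above, a sketch (input name is a placeholder); setsar=1 forces square pixels so the 1280x480 frame is displayed as-is instead of being re-stretched by the stored sample aspect ratio:

    ffmpeg -i input.webm -vf "scale=1280:480,setsar=1" output.mp4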
[18:26:25 CEST] <ac_slater> JEEB: I think I asked you this before, but in an mpegts container, do the streams have to have related timestamps
[18:29:30 CEST] <alphabitcity> Anyone know if ffmpeg has a notion of timed metadata? I've Googled around -- many have asked, but can't find an answer.
[18:29:47 CEST] <ac_slater> ie - inserting metadata at a specific time?
[18:30:06 CEST] <alphabitcity> yea, ideally while transcoding .. e.g. every 5 seconds insert a timestamp
[18:30:36 CEST] <ac_slater> hmm I don't think that exists but someone else here might know
[18:30:41 CEST] <BtbN> I don't think containers even support that
[18:30:52 CEST] <ac_slater> I currently carry arbitrary data along with video streams in mpegts
[18:31:04 CEST] <BtbN> metadata is global
[18:31:06 CEST] <ac_slater> and since it's a program stream, it has timestamps
[18:31:32 CEST] <BtbN> every frame has a timestamp
[18:31:37 CEST] <BtbN> don't need a seperate stream for that
[18:31:57 CEST] <alphabitcity> interesting
[18:32:27 CEST] <alphabitcity> how would i access a frame's timestamp?
[18:32:46 CEST] <BtbN> "access"?
[18:32:58 CEST] <alphabitcity> read
[18:33:08 CEST] <BtbN> are you using the C api?
[18:33:27 CEST] <alphabitcity> no, I am not
[18:33:48 CEST] <alphabitcity> sorry..i'll google around
[18:33:53 CEST] <BtbN> I'm not sure what you are asking for then. ffprobe can dump you a list with all frames, including their timestamps
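For example (pkt_pts_time was ffprobe's per-frame timestamp field in this era; input name is a placeholder):

    ffprobe -v error -select_streams v:0 -show_entries frame=pkt_pts_time -of csv input.mp4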
[18:34:07 CEST] <BtbN> But what exactly are you even trying to achieve?
[18:34:45 CEST] <alphabitcity> trying to insert the current timestamp every ~5 seconds into a video and then transcode and forward via RTMP
[18:35:02 CEST] <BtbN> you mean insert visually?
[18:35:25 CEST] <BtbN> And by timestamp you mean wall clock time?
[18:35:26 CEST] <alphabitcity> no, in some sort of data channel .. to then be read via HLS playback https://developer.apple.com/library/content/documentation/AudioVideo/Conceptual/HTTP_Live_Streaming_Metadata_Spec/Introduction/Introduction.html
[18:35:29 CEST] <alphabitcity> yes
[18:35:56 CEST] <BtbN> The video timestamps are arbitrary, only a means to ensure steady playback
[18:36:23 CEST] <alphabitcity> i see, then it wouldn't work for us
[18:37:11 CEST] <BtbN> doesn't each hls segment carry its creation time anyway?
[18:37:21 CEST] <BtbN> like, its file creation time
[18:37:28 CEST] <BtbN> usually http servers send that along
[18:38:50 CEST] <alphabitcity> we want the time to be before it's transcoded to hls
[18:39:05 CEST] <alphabitcity> https://lists.ffmpeg.org/pipermail/ffmpeg-cvslog/2013-December/072450.html
[18:39:12 CEST] <alphabitcity> is a discussion that i believe is related
[18:39:26 CEST] <alphabitcity> although hard for me to tell if support for timed id3 metadata made it in
[18:41:06 CEST] <BtbN> what do you mean, before it's transcoded to hls?
[18:41:20 CEST] <BtbN> So the creation time of the original video?
[18:41:37 CEST] <alphabitcity> sorry, we're using ffmpeg to transcode an rtp stream to rtmp. right now we have a system that injects AMF data. hoping to replace that with ffmpeg, but I don't think ffmpeg supports AMF
[18:41:53 CEST] <BtbN> if it's just a static video, it's not too impossible
[18:42:03 CEST] <BtbN> but for a live stream, I don't think there is any supported format
[18:42:45 CEST] <alphabitcity> it's a live stream .. i guess rtmp only supports timed metadata via AMF (action message format), but that seems to be an old spec and I don't believe ffmpeg can encode AMF
[19:13:10 CEST] <RandomCouch> Hey JEEB I'm back, so when I execute the make -j2 command, I get a lot of warnings for deprecated code
[19:13:26 CEST] <RandomCouch> for example src/ffserver.c:3686:17: warning: 'codec' is deprecated (declared at src/libavformat/avformat.h:89
[19:16:25 CEST] <BtbN> feel free to fix them
[19:16:51 CEST] <RandomCouch> I feel very free to do so, but I don't feel experienced enough to do it lol
[19:17:04 CEST] <aasif> Hello, everyone. I'd like to learn about FFmpeg and its documentation. I tried diving into the huge codebase first, but it's too technical. I would highly appreciate any advice. Thanks.
[19:17:41 CEST] <RandomCouch> aasif:  I guess your first step would be figuring out what you want to use ffmpeg for
[19:40:39 CEST] <JEEB> RandomCouch: deprecation warnings in internal things (as in, not coming from your code) are OK.
[20:15:30 CEST] <kerio> furq this album weighs 2GB
[20:15:35 CEST] <kerio> feels good man
[20:33:56 CEST] <RandomCouch> JEEB is there a way or a place where I can get precompiled ffmpeg binaries for android
[20:34:21 CEST] <RandomCouch> everything I try doesn't seem to be working, and every git repo of an ffmpeg android project is outdated
[20:34:43 CEST] <JEEB> if you can't compile the command line app(s) then nobody else can most likely either
[20:34:48 CEST] <ac_slater> does anyone know the proper way to mux data with b-frames? As in, I'm trying to do av_interleaved_write_packet to mux h264 data into a mpegts container. Do I have to explain to the muxer that there is going to be B-frames ?
[20:34:49 CEST] <RandomCouch> my builds don't seem to work
[20:35:00 CEST] <RandomCouch> hmm
[20:35:44 CEST] <JEEB> well, I build the libraries (shared in my case) just fine with the same way
[20:35:45 CEST] <ac_slater> RandomCouch: I think you need to practice compiling things for android before you get FFMPEG working. It's not going to be much different, but testing out the native (NDK) stuff with a smaller library might be easier
[20:36:00 CEST] <RandomCouch> when I clone and run this project on my device
[20:36:02 CEST] <RandomCouch> https://github.com/WritingMinds/ffmpeg-android-java
[20:36:07 CEST] <RandomCouch> it works with ffmpeg
[20:36:38 CEST] <RandomCouch> I guess I'll just add my own code to it
[20:37:24 CEST] <JEEB> so your binary works with that thing or what?
[20:37:32 CEST] <RandomCouch> well it's not my binary
[20:37:41 CEST] <RandomCouch> it's been packaged in that repo
[20:37:49 CEST] <JEEB> right
[20:37:52 CEST] <RandomCouch> so when I run the app, the ffmpeg binary gets injected
[20:38:02 CEST] <JEEB> also do check its license
[20:38:03 CEST] <DHE> ac_slater: you need to submit the AVPackets in ascending DTS order, which should be the order they come out of the encoder
[20:38:05 CEST] <JEEB> it's GPLv3
[20:38:18 CEST] <DHE> also use a container that supports b-frames. AVI most famously does not and any attempt to use them is a hack
[20:38:19 CEST] <RandomCouch> hm, does that mean I cannot use this for a product?
[20:38:53 CEST] <ac_slater> DHE: Interesting. I'm trying to demux then remux. So I'm not getting packets from an encoder
[20:38:59 CEST] <JEEB> GPLv3 is basically "if you use this thing you're making your own thing GPLv3 as well. plus you cannot add licensing to it that stops modified versions of it from running on devices"
[20:39:03 CEST] <ac_slater> DHE: which is probably my issue
[20:39:31 CEST] <DHE> ac_slater: still, the reverse basically applies. you should be getting the AVPackets in ascending DTS order and submit them to the decoder in that same order, what comes out will be AVFrames in regular chronological order
[20:39:45 CEST] <JEEB> RandomCouch: anyways I didn't see where that thing builds FFmpeg or with which NDK, but for normal API usage things work for me just fine with the binaries you built
[20:40:08 CEST] <DHE> the kicker is that, like encoding, the decoder will buffer some frames and you can't rely on getting 1 in, 1 out all the time.
[20:40:29 CEST] <ac_slater> DHE: yea I noticed that. Though, I'm not doing any decoding. Just demux + remux
[20:40:35 CEST] <RandomCouch> JEEB: the .so files are in these folders https://github.com/WritingMinds/ffmpeg-android-java/tree/master/FFmpegAndroid/obj/local
[20:40:51 CEST] <JEEB> the actual build stuff is @ https://github.com/WritingMinds/ffmpeg-android
[20:40:56 CEST] <RandomCouch> yeah
[20:41:02 CEST] <JEEB> seems like some older version of FFmpeg with random dependencies added
[20:41:09 CEST] <JEEB> most definitely not what you want
[20:41:17 CEST] <RandomCouch> hmm
[20:41:26 CEST] <DHE> ac_slater: did you copy the codecpar between the two AVStreams ?
[20:41:33 CEST] <ac_slater> hmm yea.
[20:41:39 CEST] <RandomCouch> I only have today to get this done, and if I can't we have to go with another solution which will compromise some functionality in the application I'm working on
[20:42:14 CEST] <JEEB> then you started way too late and the binaries that project gives you are not OK for any closed source usage on any level :P
[20:42:18 CEST] <RandomCouch> You'd think concatenating two videos files together on android would be simple :(
[20:42:36 CEST] <JEEB> btw, did building ffmpeg fail with your build
[20:42:46 CEST] <JEEB> or only ffprobe/ffserver?
[20:42:47 CEST] <RandomCouch> yeah I couldn't fix those errors
[20:42:55 CEST] <JEEB> the stderr one?
[20:42:56 CEST] <ac_slater> DHE: sorry to talk to you directly, but yea I did copy codecpar
[20:42:59 CEST] <RandomCouch> even though I reconfigured ffmpeg and removed enable-shared
[20:43:01 CEST] <RandomCouch> yeah
[20:43:07 CEST] <RandomCouch> I was using NDK r15
[20:43:12 CEST] <JEEB> ok, that might have been something that changed in some NDK version then
[20:43:16 CEST] <RandomCouch> maybe I could try earlier version
[20:43:21 CEST] <JEEB> lessee what this thing uses :P
[20:43:49 CEST] <JEEB> unless they just forked FFmpeg and patch it
[20:44:09 CEST] <JEEB> oh, at least it seems to point towards the main repo
[20:45:16 CEST] <RandomCouch> uses a lot of things I'm not familiar with lol, like the ButterKnife injecting thing
[20:45:22 CEST] <RandomCouch> makes me realize how little I know
[20:48:52 CEST] <ac_slater> DHE: for some reason, after I copy codecpar between the input stream and the muxer, I set codec_tag to 0 as one of the examples did
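For reference, the usual shape of such a demux/remux loop, condensed from the pattern in FFmpeg's remuxing example (a sketch only: error handling mostly dropped, helper name hypothetical). codecpar is copied and codec_tag zeroed as ac_slater describes, and packets are written in the ascending DTS order they are read:

    #include <libavformat/avformat.h>

    /* hypothetical helper: copy every stream of `in` into an MPEG-TS file `out` */
    static int remux_to_mpegts(const char *in, const char *out)
    {
        AVFormatContext *ic = NULL, *oc = NULL;
        AVPacket pkt;

        av_register_all();  /* needed on 2017-era FFmpeg; removed in 5.0+ */

        if (avformat_open_input(&ic, in, NULL, NULL) < 0) return -1;
        if (avformat_find_stream_info(ic, NULL) < 0) return -1;

        avformat_alloc_output_context2(&oc, NULL, "mpegts", out);

        for (unsigned i = 0; i < ic->nb_streams; i++) {
            AVStream *ost = avformat_new_stream(oc, NULL);
            avcodec_parameters_copy(ost->codecpar, ic->streams[i]->codecpar);
            ost->codecpar->codec_tag = 0;   /* let the muxer choose its own tag */
        }

        if (!(oc->oformat->flags & AVFMT_NOFILE))
            avio_open(&oc->pb, out, AVIO_FLAG_WRITE);
        avformat_write_header(oc, NULL);

        while (av_read_frame(ic, &pkt) >= 0) {
            AVStream *ist = ic->streams[pkt.stream_index];
            AVStream *ost = oc->streams[pkt.stream_index];
            /* rescale pts/dts/duration into the output stream's timebase;
               av_read_frame already delivers each stream in ascending DTS order */
            av_packet_rescale_ts(&pkt, ist->time_base, ost->time_base);
            pkt.pos = -1;
            av_interleaved_write_frame(oc, &pkt);  /* takes ownership of pkt */
        }

        av_write_trailer(oc);
        if (!(oc->oformat->flags & AVFMT_NOFILE)) avio_closep(&oc->pb);
        avformat_free_context(oc);
        avformat_close_input(&ic);
        return 0;
    }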
[20:52:13 CEST] <JEEB> RandomCouch: I'm on r14b and I just built it just fine
[20:52:36 CEST] <JEEB> exact FFmpeg revision is feb13aed794a7f1a1f8395159e9b077351348a34
[20:52:45 CEST] <JEEB> but I would be surprised if something happened since
[20:53:13 CEST] <JEEB> other than the prefix I just set `--arch=armv7 --cpu=armv7-a --enable-cross-compile --target-os=android --cross-prefix=arm-linux-androideabi- --disable-debug --disable-doc --disable-ffprobe --disable-ffserver --cc=arm-linux-androideabi-clang`
[20:53:23 CEST] <JEEB> the debug ones really don't matter
[20:54:34 CEST] <JEEB> RandomCouch: I had other issues with hardware decoding on R15 which is why I didn't use it, but I'll laugh out loud if this change was also between R14b and R15 :P
[20:56:28 CEST] <shincodex> so uh tcp over rtsp
[20:56:33 CEST] <shincodex> + a shit router
[20:56:38 CEST] <c_14> eeeh
[20:56:43 CEST] <shincodex> why would h264 give me half assed frames
[20:56:43 CEST] <c_14> I'm not sure tcp over rtsp is supported
[20:56:49 CEST] <c_14> unless you mean rtsp over tcp
[20:56:51 CEST] <shincodex> pixelated blotched broken
[20:57:00 CEST] <shincodex> Yes what you said
[20:57:13 CEST] <c_14> encoder issues?
[20:57:23 CEST] <c_14> not enough bitrate?
[20:57:23 CEST] <shincodex> Negative
[20:57:28 CEST] <shincodex> VLC fixes all these problems
[20:57:41 CEST] <c_14> vlc as source or sink?
[20:57:43 CEST] <shincodex> Unless they're just duplicating non-corrupt frames, fooling the user
[20:58:04 CEST] <shincodex> VLC player is used to test same stream and its not experiencing corruption
[20:58:18 CEST] <JEEB> RandomCouch: latest git (04aa09c4bcf2d5a634a35da3a3ae3fc1abe30ef8) works too
[20:58:27 CEST] <JEEB> I can switch to R15 as well and see if that fails
[20:58:30 CEST] <shincodex> i mean this: av_dict_set(&options, "rtsp_transport", "tcp", 0);
[20:58:38 CEST] <shincodex> and i am going to test latest source but....
[20:58:42 CEST] <shincodex> but uh
[20:58:59 CEST] <shincodex> #define LIBAVFORMAT_VERSION_MAJOR  57 #define LIBAVFORMAT_VERSION_MINOR  66 #define LIBAVFORMAT_VERSION_MICRO 104
[20:59:07 CEST] <shincodex> im pretty certain thats only a few months old
[20:59:23 CEST] <JEEB> you can get a nicer version for humans out of one of the avutil functions
[20:59:34 CEST] <shincodex> oh yeah
[20:59:35 CEST] <JEEB> av_version_info()
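i.e., something along these lines:

    #include <stdio.h>
    #include <libavutil/avutil.h>

    int main(void)
    {
        /* human-readable build string, e.g. a release tag or git hash */
        printf("%s\n", av_version_info());
        return 0;
    }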
[20:59:36 CEST] <c_14> I meant using vlc as a player vs. using vlc as a streaming server
[21:00:02 CEST] <shincodex> We use vlc player to test if similar network streams have issues
[21:00:12 CEST] <shincodex> than the libav* code we use
[21:00:45 CEST] <c_14> so on one hand a vlc playing the stream, on the other some custom libav* demuxer/decoder/renderer?
[21:01:26 CEST] <shincodex> vlc merely for testing for issue
[21:01:37 CEST] <shincodex> i wouldn't say that we have much custom going on
[21:01:45 CEST] <shincodex> its literally a for loop with
[21:01:50 CEST] <RandomCouch> Thanks JEEB, I'm going to give it another try with NDK r14b
[21:02:09 CEST] <shincodex> avformat_open_input , formatContext->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO, avcodec_find_decoder, avcodec_open2, etc
[21:02:39 CEST] <JEEB> &35
[21:02:47 CEST] <shincodex> av_read_frame, avcodec_decode_video2
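Spelled out, that loop is roughly the following sketch, using the same era-appropriate calls shincodex names (URL hypothetical, error handling omitted; streams[i]->codec was already deprecated then but still widespread):

    #include <libavformat/avformat.h>

    AVDictionary *options = NULL;
    AVFormatContext *fmt = NULL;
    av_dict_set(&options, "rtsp_transport", "tcp", 0);   /* RTSP over TCP, as above */
    avformat_open_input(&fmt, "rtsp://camera/stream", NULL, &options);
    avformat_find_stream_info(fmt, NULL);

    int vstream = -1;
    for (unsigned i = 0; i < fmt->nb_streams; i++)
        if (fmt->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
            vstream = i;

    AVCodecContext *ctx = fmt->streams[vstream]->codec;
    AVCodec *dec = avcodec_find_decoder(ctx->codec_id);
    avcodec_open2(ctx, dec, NULL);

    AVFrame *frame = av_frame_alloc();
    AVPacket pkt;
    int got = 0;
    while (av_read_frame(fmt, &pkt) >= 0) {
        if (pkt.stream_index == vstream)
            avcodec_decode_video2(ctx, frame, &got, &pkt);  /* got != 0: frame complete */
        av_packet_unref(&pkt);
    }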
[21:04:00 CEST] <JEEB> RandomCouch: R15 did work for me, though...
[21:04:04 CEST] <JEEB> for ARMv7
[21:04:07 CEST] <RandomCouch> hmm
[21:04:13 CEST] <RandomCouch> I must be doing something wrong then
[21:04:15 CEST] <JEEB> I just switched the toolchain symlinks
[21:04:24 CEST] <RandomCouch> I found this though https://github.com/IljaKosynkin/FFmpeg-Development-Kit
[21:04:37 CEST] <RandomCouch> claims to be an easy building process for android
[21:04:40 CEST] <JEEB> uhh
[21:04:45 CEST] <JEEB> the building really isn't hard
[21:04:53 CEST] <JEEB> it's not magic
[21:04:58 CEST] <JEEB> I'm definitely not a magician
[21:05:15 CEST] <RandomCouch> I'm a bit of a noob with UNIX, so I'm downloading the sources then transferring them to my ubuntu droplet with ssh then working with the CLI
[21:07:00 CEST] <c_14> shincodex: tried a different player besides vlc? can't really think of much that should go wrong there
[21:07:12 CEST] <JEEB> anyways, you may try r14b in case google silently switched the binaries for r15 to something else
[21:07:40 CEST] <shincodex> Vlc is removing the corruption
[21:07:49 CEST] <JEEB> but in general it's just 1) get NDK tarball and extract it 2) make the stand-alone toolchain(s) 3) use the stand-alone toolchain after adding its bin/ into PATH
[21:11:45 CEST] <JEEB> RandomCouch: like - this stuff works for multiple people in the mpv-android team https://github.com/mpv-android/buildscripts/blob/master/download.sh#L44..L64
[21:21:02 CEST] <c_14> shincodex: still not sure what could be causing the corruption, tried anything besides rtsp?
[21:28:07 CEST] <shincodex> Im purposely setting up a crap throttled network
[21:28:22 CEST] <shincodex> and its gotta be rtsp
[21:34:01 CEST] <Guest27311> how do i use filter complex properly? .\ffmpeg.exe -i .\in.gif -i .\palette.png -map 0:0 -filter_complex "[1]palet teuse;[0]scale=-1:150" -r 10 -y out.gif
[21:35:40 CEST] <c_14> shincodex: I just want to check if the problem is rtsp or maybe unrelated
[21:36:35 CEST] <shincodex> [rtsp @ 000001F5BA95E620] CSeq 15 expected, 14 received.
[21:36:51 CEST] <shincodex> me wonders if i get too many of those
[21:36:56 CEST] <shincodex> i will get corruption
[21:37:08 CEST] <shincodex> and me guesses that is packet reordering or frame reordering for timing
[21:37:23 CEST] <c_14> Guest27311: [0][1]paletteuse,scale
[21:38:02 CEST] <Guest27311> "Cannot find a matching stream for unlabeled input pad 0 on filter Parsed_scale_1"
[21:38:07 CEST] <Guest27311> @c_14
[21:38:51 CEST] <c_14> does it work without the scale?
[21:38:59 CEST] <Guest27311> c_14: yes
[21:40:49 CEST] <c_14> shincodex: you're getting duplicate RTSP commands
[21:41:27 CEST] <Guest27311> found an example and got it working with this but I don't understand the syntax. .\ffmpeg.exe -i .\in.gif -i .\palette.png -filter_complex "scale=-1:150 [x]; [x][1:v] paletteuse" -r 1 0 -y out.gif
[21:42:05 CEST] <c_14> you're scaling the input video first, storing the output in [x] and then using that along with the palette ([1:v]) as inputs to paletteuse
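The usual two-pass shape of that, for reference (filenames as in the example above; the lanczos scaling flag is an optional extra, not from the log):

    ffmpeg -i in.gif -vf "scale=-1:150:flags=lanczos,palettegen" -y palette.png
    ffmpeg -i in.gif -i palette.png -filter_complex "scale=-1:150:flags=lanczos [x]; [x][1:v] paletteuse" -r 10 -y out.gif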
[21:46:01 CEST] <RandomCouch> I just ran the make command like 15 minutes ago
[21:46:05 CEST] <RandomCouch> and it's still running xD
[21:47:01 CEST] <RandomCouch> JEEB, can you um, show me you use the .so files with JNI? :D
[21:47:06 CEST] <Guest27311> you can specify how many threads you want for compiling with -j8
[21:47:16 CEST] <RandomCouch> ahh interesting
[21:47:16 CEST] <Guest27311> -j4
[21:47:30 CEST] <RandomCouch> I don't want to cancel it now though because I feel like it should be done soon
[21:47:31 CEST] <Guest27311> not sure if make has that implemented or what
[21:47:37 CEST] <RandomCouch> yeah
[21:47:39 CEST] <RandomCouch> it does
[21:48:28 CEST] <guther> How can i cut out a sequence from a video? I thought of something like: 'ffmpeg -ss 00:10:00.00 -i INPUT.ts -t 00:10:00.00 -ss 00:30:00.00 -i INPUT.ts -t 00:10:00.00 -c copy OUT-10-20+30-40.ts'
[21:51:12 CEST] <ChocolateArmpits> guther your command line implies you want the video to only be 10 minutes long while the inputs are 30 minutes long
[21:51:52 CEST] <ChocolateArmpits> also this won't work without use of filters and transcoding
[21:52:22 CEST] <ChocolateArmpits> unless you use a concat file where the times are specified there, ffmpeg will then interpret all streams as continuous
[22:04:28 CEST] <guther> Could it be done with -vf trim?
[22:06:57 CEST] <RandomCouch> my make command is STILL running o.O
[22:07:02 CEST] <RandomCouch> this is unbelievable
[22:07:10 CEST] <RandomCouch> I'm just gonna leave my work PC on and check it again on monday
[22:07:26 CEST] <furq> you can just abort it and run make -j8
[22:07:29 CEST] <furq> it'll pick up where it left off
[22:07:55 CEST] <RandomCouch> I'm not sure how many cores my droplet has though :/
[22:08:00 CEST] <DHE> or if you have a good CPU, -j 16. specify number of available threads
[22:08:02 CEST] <furq> cat /proc/cpuinfo
[22:08:04 CEST] <DHE> got `lscpu` available?
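If coreutils is installed, nproc spares you the counting:

    make -j"$(nproc)"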
[22:08:07 CEST] <RandomCouch> oh nice
[22:08:09 CEST] <RandomCouch> thanks furq
[22:08:11 CEST] <RandomCouch> trying now
[22:08:29 CEST] <kerio> furq: https://i.imgur.com/gdlecpZ.png are you upset
[22:08:41 CEST] <furq> lol it's alac
[22:08:49 CEST] <RandomCouch>  cpu family      : 6 model           : 63 model name      : Intel(R) Xeon(R) CPU E5-2650L v3 @ 1.80GHz stepping        : 2
[22:08:54 CEST] <RandomCouch> am I looking for stepping?
[22:09:01 CEST] <furq> no you're looking for how many of those show up
[22:09:09 CEST] <RandomCouch> oh ok only one did
[22:09:16 CEST] <furq> bad luck then
[22:09:20 CEST] <RandomCouch> myea
[22:09:23 CEST] <RandomCouch> digitalocean droplet
[22:09:30 CEST] <RandomCouch> lowest price one lol
[22:09:48 CEST] <DHE> if you have the entire CPU to yourself, that's good for -j 24
[22:09:51 CEST] <JEEB> RandomCouch: literally System.loadLibrary() :P that's how you do JNI anywhere
[22:09:51 CEST] <furq> kerio: i am now really laughing at the idea of an italian radiohead fan
[22:09:56 CEST] <kerio> why :<
[22:10:04 CEST] <RandomCouch> awesome thanks JEEB
[22:10:15 CEST] <RandomCouch> I'm gonna prob do some work on this over the weekend and see if I can make any progress
[22:10:22 CEST] <RandomCouch> You've been a great help JEEB
[22:10:29 CEST] <iive> RandomCouch: 16 threads, use -j16
[22:10:37 CEST] <RandomCouch> oh sweet thanks I'll try that
[22:10:42 CEST] <kerio> o fug
[22:10:50 CEST] <kerio> alac is 30kbps bigger than flac
[22:10:58 CEST] <DHE> I have 12 cores + hyperthreaded for -j 24 (assuming you have a dedicated server and not sharing the CPU)
[22:10:59 CEST] <kerio> 34, rather
[22:11:25 CEST] <iive> RandomCouch: the total number of CPUs is a few lines below.
[22:11:27 CEST] <kerio> does the flac encoder have a compression speed parameter
[22:11:28 CEST] <furq> pablo 'anney
[22:11:35 CEST] <furq> bellissimo
[22:11:59 CEST] <furq> if you mean the cli encoder then -8
[22:12:06 CEST] <kerio> can i get that from within ffmpeg
[22:12:10 CEST] <furq> -q:A
[22:12:12 CEST] <furq> a
[22:12:34 CEST] <kerio> it doesnt change a fucking thing
[22:13:05 CEST] <furq> the ffmpeg flac encoder is builtin
[22:13:23 CEST] <furq> you should probably use libflac if it's something you care about
[22:13:28 CEST] <furq> although i can't imagine you do
[22:14:08 CEST] <JEEB> RandomCouch: if you've never looked at the APIs I don't give you a high chance of success. but sure, it's /possible/
[22:14:29 CEST] <durandal_1707> furq is again promoting nonffmpeg software
[22:14:47 CEST] <kerio> furq: -8 shaved 3 kbps :o
[22:14:54 CEST] <furq> https://www.youtube.com/watch?v=_F-Cy2YqAW4
[22:15:05 CEST] <kerio> so thats 3267kbps versus 3304
[22:15:15 CEST] <furq> also yeah -8 isn't really worth it most of the time but some people get extremely angry about it
[22:15:21 CEST] <RandomCouch> JEEB: I'll give it a shot with what I know and of course if I have to I will take a look at the APIs and see how to use them, I have developed native android stuff before so I have no issues working with Java, gradle and android studio
[22:15:40 CEST] <kerio> furq: i guess it only makes sense to do -0 and -8
[22:15:46 CEST] <furq> well -5 is the default
[22:15:50 CEST] <kerio> D:
[22:15:52 CEST] <RandomCouch> I never loaded binaries into my android project though so that's gonna be something new to me
[22:15:53 CEST] <furq> above that you get massively diminishing returns
[22:16:31 CEST] <kerio> furq: anyway, i decided that this version of ok computer is ok computer
[22:16:33 CEST] <durandal_1707> the ffmpeg encoder has some alien options for fast hardware encoding with high ratios of compression
[22:16:33 CEST] <kerio> released in 1997
[22:16:46 CEST] <furq> durandal_1707: you mean like flaccl
[22:17:08 CEST] <durandal_1707> furq: like it but for cpu only
[22:20:15 CEST] <furq> i take it the builtin flac encoder is much better relative to libflac than the vorbis and opus encoders
[22:20:40 CEST] <furq> and also reversible if it turns out not to be
[22:57:08 CEST] <guther> Do i really have to ffmpeg ... OUT.1st & ffmpeg ... OUT.2nd & ffmpeg -i OUT.1st -i OUT.2nd FINAL.suf ??
[22:58:25 CEST] <DHE> wut?
[22:58:55 CEST] <DHE> ffmpeg is a bit complicated to use for advanced processing, but chances are what you want can be done in a single shot
[22:59:29 CEST] <durandal_1707> guther: what are you doing?
[23:00:01 CEST] <guther> i wanna cut out and drop some part in the middle
[23:00:37 CEST] <guther> Like from min5 till min30, and then from min 40-60
[23:01:14 CEST] <guther> ^^ that would be the parts to keep
[23:02:04 CEST] <DHE> https://ffmpeg.org/ffmpeg-filters.html#concat  read examples
[23:03:24 CEST] <ChocolateArmpits> guther, either use -ss -to specifiers and concat filter to connect ^ or use a concat file by specifying inpoint and outpoint for each part as per https://www.ffmpeg.org/ffmpeg-formats.html#concat-1
[23:04:07 CEST] <ChocolateArmpits> concat file of course requires an additional file to process, a single command line will be simpler
[23:04:17 CEST] <ChocolateArmpits> with concat filter
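A sketch of the concat-file variant for guther's timings (list file name hypothetical; inpoint/outpoint per the linked docs; note that with -c copy the cut points snap to keyframes):

    # cut.txt
    file 'INPUT.ts'
    inpoint 00:05:00
    outpoint 00:30:00
    file 'INPUT.ts'
    inpoint 00:40:00
    outpoint 01:00:00

    ffmpeg -f concat -i cut.txt -c copy OUT.ts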
[23:05:00 CEST] <DHE> this method can be done in a single shot, but has the disadvantage that the start points for seeking may be inaccurate due to keyframe locations
[23:05:50 CEST] <DHE> more precision could be done with the select filter but now we're getting out of the traditional area of expertise
[23:07:11 CEST] <guther> Note that it is done globally and may cause gaps if all streams do not have exactly the same length.
[23:07:45 CEST] <guther> Does that mean that all chunks need to be the same size???
[23:08:01 CEST] <furq> you can use trim,setpts;concat
[23:08:06 CEST] <furq> idk how fast that is though
[23:08:10 CEST] <DHE> no, it means if you have varying framerates, audio sample durations, and so on.
[23:08:27 CEST] <ChocolateArmpits> DHE, input -ss can also be inaccurate. I think a bigger disadvantage of inpoint/outpoint is that the initial time offset has to be accounted for too
[23:08:55 CEST] <DHE> ChocolateArmpits: I said that. the start points may be inaccurate
[23:09:56 CEST] <furq> http://vpaste.net/SfdkV
[23:09:57 CEST] <guther> How about -crf ? Does it influence concat?
[23:09:58 CEST] <furq> like that
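Presumably something along these lines (a sketch for guther's keep ranges of minutes 5-30 and 40-60, with times in seconds; this route decodes and re-encodes):

    ffmpeg -i INPUT.ts -filter_complex \
      "[0:v]trim=300:1800,setpts=PTS-STARTPTS[v0];[0:a]atrim=300:1800,asetpts=PTS-STARTPTS[a0]; \
       [0:v]trim=2400:3600,setpts=PTS-STARTPTS[v1];[0:a]atrim=2400:3600,asetpts=PTS-STARTPTS[a1]; \
       [v0][a0][v1][a1]concat=n=2:v=1:a=1[v][a]" \
      -map "[v]" -map "[a]" OUT.ts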
[23:10:34 CEST] <DHE> guther: the pipeline is input file, filter, encoding, outputfile. crf is a parameter for the encoder.
[23:10:49 CEST] <DHE> well, insert decoder between input file and filter
[23:12:48 CEST] <guther> DHE, so - input file, encoding, filter, outputfile ?
[23:13:09 CEST] <DHE> no. the encoder produces compressed content and you can't filter that.
[23:14:29 CEST] <guther> furq, thx. That really looks complex. :(
[23:15:50 CEST] <DHE> guther: we warned you it could be done, but that it was complex
[23:16:11 CEST] <furq> i don't know if that will be faster
[23:16:13 CEST] <furq> i suspect it won't be
[23:16:57 CEST] <guther> mplayer has -edl , not quite sure though if it works with mencoder, but i hoped for something like that
[23:17:29 CEST] <DHE> mencoder does have a feature like that, but mencoder is old and clunky now.
[23:18:20 CEST] <guther> It probably is, but -edl would be a nice feature to ffmpeg
[23:18:42 CEST] <durandal_1707> whats edl?
[23:18:47 CEST] <DHE> edit list
[23:18:58 CEST] <DHE> a text file of the form (starttime, endtime, action) where action might be Mute or Skip
[23:19:13 CEST] <guther> right
[23:19:13 CEST] <DHE> at least in mplayer/mencoder lingo
[23:32:35 CEST] <alexpigment> does anyone know what the simplest/quickest deinterlace method is in FFMPEG?
[23:32:49 CEST] <alexpigment> I just need a simple Bob for previewing
[23:33:19 CEST] <durandal_1707> what does a simple bob do?
[23:34:08 CEST] <durandal_1707> fastest deinterlacer in lavfi is w3fdif and it's not bob
[23:36:17 CEST] <alexpigment> it's not bob = ?
[23:36:50 CEST] <alexpigment> does it have a bob mode?
[23:37:18 CEST] <durandal_1707> nope
[23:38:01 CEST] <alexpigment> ok, so yadif it is, then?
[23:38:26 CEST] <durandal_1707> yadif is not bob
[23:38:46 CEST] <alexpigment> i feel like there's a difference of terminology here
[23:38:48 CEST] <BtbN> why do you care for the method, if you just want it to be fast?
[23:39:06 CEST] <alexpigment> when I say "bob", i mean field doubling with the result of 2xFPS
[23:39:27 CEST] <BtbN> you said you want a simple and fast deinterlacer
[23:39:32 CEST] <alexpigment> BtbN: because it doesn't need to be good quality. but it needs to have the correct temporal resolution
[23:39:33 CEST] <furq> separatefields,scale
[23:39:41 CEST] <furq> is as close as you'll get afaik
[23:39:45 CEST] <alexpigment> furq: awesome, thank you
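Concretely, that bob might look like this (names illustrative): separatefields yields half-height fields at double rate, and scale stretches each field back to full height:

    ffmpeg -i interlaced.ts -vf "separatefields,scale=iw:2*ih" -c:v libx264 out.mp4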
[23:40:14 CEST] <durandal_1707> doubleweave
[23:41:01 CEST] <BtbN> isn't weave deinterlacing just "do nothing"?
[23:41:06 CEST] <BtbN> and remove the interlaced flag
[23:41:13 CEST] <alexpigment> durandal_1707: i'll try that too. I presume it should be quicker than yadif?
[23:42:04 CEST] <alexpigment> hmm, I guess doubleweave requires that you manually specify tff or bff...
[23:42:32 CEST] <durandal_1707> separatefields,doubleweave
[23:42:50 CEST] <alexpigment> hmmm, that seems redundant, right?
[23:43:05 CEST] <furq> doesn't that give interlaced output
[23:43:26 CEST] <alexpigment> i'm going to test separatefields+scale and see how that works
[23:43:41 CEST] <furq> not very well
[23:43:48 CEST] <furq> but it meets all the criteria you said you have
[23:44:11 CEST] <durandal_1707> separatefields,setsar...
[23:44:12 CEST] <alexpigment> not very well isn't too big of a deal (I don't think)
[23:44:28 CEST] <alexpigment> is there any advantage to setsar vs scale?
[23:44:33 CEST] <furq> setsar will be much quicker
[23:44:40 CEST] <alexpigment> k
[23:44:56 CEST] <alexpigment> and if i remember my sar vs dar correctly, sar would be like 16x9 right?
[23:45:02 CEST] <furq> but the correct sar would be harder to work out
[23:45:02 CEST] <alexpigment> nm, that's dar
[23:45:04 CEST] <alexpigment> i'll figure it out ;)
[23:46:51 CEST] <furq> actually nvm you can just do setsar=sar/2, since the field height was halved
[23:47:07 CEST] <alexpigment> true
[23:47:14 CEST] <alexpigment> that makes this easier
[23:47:18 CEST] <alexpigment> thanks
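And the setsar variant, which skips scaling entirely (the half-height fields are stored as-is and only the display aspect is corrected, hence the speed):

    ffmpeg -i interlaced.ts -vf "separatefields,setsar=sar/2" -c:v libx264 out.mp4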
[23:47:33 CEST] <furq> that'll look even uglier than scale but i'm guessing that's not a problem
[23:49:32 CEST] <alexpigment> i'll do some visual analysis in a bit, but quality isn't terribly important for this particular process. if the speed difference isn't huge though, i'll probably just go back to yadif
[23:50:07 CEST] <furq> i'd have thought separatefields,setsar would be basically instant
[23:50:22 CEST] <furq> so i guess the question is how slow yadif is
[23:50:46 CEST] <alexpigment> yeah, i'm setting up my tests right now. hopefully it should be instant
[00:00:00 CEST] --- Sat Jun 24 2017