[Ffmpeg-devel-irc] ffmpeg.log.20180905

burek burek021 at gmail.com
Thu Sep 6 03:05:01 EEST 2018


[02:03:15 CEST] <macstriker> hello
[02:03:22 CEST] <macstriker> from my ip camera i can get two streams: 1280x720 and 704x576; i don't know why the second one has a different aspect ratio. when i restream 1280x720 i get a fine image, but my internet connection can't handle that resolution
[02:03:28 CEST] <macstriker> so i use 704x576 and the image looks squashed
[02:03:36 CEST] <macstriker> is there any way to tell ffmpeg to change the aspect ratio without re-encoding the video?
[02:07:26 CEST] <furq> -aspect 16:9 will set it in the container if it's supported
[02:13:21 CEST] <unlord> super party people
[02:13:45 CEST] <unlord> how do I convert 12bit tiff to raw
[02:16:57 CEST] <atomnuker> ffmpeg -i <in> -c:v rawvideo -f rawvideo file.raw
[02:17:49 CEST] <unlord> nice try
[02:17:50 CEST] <unlord> [tiff @ 0x5572b6abcde0] This format is not supported (bpp=12, bppcount=1)
[02:26:46 CEST] <macstriker> furq: -aspect does nothing
[02:31:57 CEST] <macstriker> -vf scale=704:-1,setdar=16:9 -c:v libx264 works fine, but that means re-encoding
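
For the record, furq's container route would look something like this sketch (assuming an MP4 or MKV output, where most players honor a container-level display aspect ratio):

    ffmpeg -i input -c copy -aspect 16:9 output.mp4

If the player ignores the container value, an H.264 stream's in-band aspect ratio can presumably also be rewritten without re-encoding via the h264_metadata bitstream filter (a SAR of 16/11 turns 704x576 storage into 16:9 display), but that is an untested assumption here.
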
[03:06:37 CEST] <unlord> so I have imagemagick working, but I think it is dithering the file
[03:06:43 CEST] <unlord> how do I make imagemagick not change it at all
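
A sketch of the ImageMagick route (an assumption about the file layout, not a tested recipe for 12-bit data: this presumes a single-channel tiff; the gray: output format writes raw samples at the requested depth, and staying at -depth 16 should avoid the quantization/dithering that a depth reduction would trigger):

    convert input.tif -depth 16 gray:output.raw
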
[04:06:21 CEST] <teratorn> I'm trying to locate an old blog/article I read once that covered basically every corner case, gotcha, bug, and piece of wacky video weirdness you can imagine. it dispelled a lot of common assumptions that people make about video..
[04:06:33 CEST] <teratorn> anyone have a clue what I'm talking about? having trouble finding it again
[04:08:24 CEST] <teratorn> annd, I found it :/ https://haasn.xyz/posts/2016-12-25-falsehoods-programmers-believe-about-%5Bvideo-stuff%5D.html
[04:11:53 CEST] <nicolas17> "Inspired by numerous other such lists of falsehoods" indeed
[04:12:05 CEST] <nicolas17> just like all the other such lists, it doesn't elaborate on any but one of the items :P
[04:13:01 CEST] <DHE> I assume the reader is intended to be at least mildly versed in the art
[04:13:55 CEST] <haasn> yeah some of these are clearly wrong
[04:13:59 CEST] <haasn> "the slower a scaling algorithm is to compute, the better it will be"
[05:47:37 CEST] <kepstin> haasn: the point of a list of falsehoods is that everything in the list is wrong, at least some of the time
[05:49:55 CEST] <haasn> but the quoted one is always right!
[05:51:34 CEST] <kepstin> I mean, I can trivially prove that wrong by e.g. making an extremely slow to compute neural net... that implements nearest neighbor sampling.
[05:52:08 CEST] <kepstin> it's easy to make algorithms arbitrarily slow without making them better :)
[05:53:59 CEST] <fling> How do I perform dark frame subtraction?
[05:58:41 CEST] <haasn> kepstin: you forgot to account for the placebo effect
[11:40:39 CEST] <Mia> How can I loop the video as many times as necessary if it's shorter than 5 seconds
[11:40:58 CEST] <Mia> so let's assume I have a video that's 1.5 seconds long, when it's converted it'll be looped 4 times
[11:41:14 CEST] <Mia> or if I have a video that's 2.6 seconds long it'll be looped 2 times
[11:41:25 CEST] <Mia> Is there any way to perform this operation?
[11:42:07 CEST] <BtbN> Just loop it and manually specify the total runtime
[11:42:21 CEST] <Mia> converting from gif to video
[11:42:29 CEST] <Mia> so I don't know how to do all this in a single line
[11:42:59 CEST] <Mia> so - for an online service I'll be uploading video files - they'll be converted from gif files --- and I'm trying to automate the process
[11:43:09 CEST] <Mia> if videos are shorter than 5 seconds, the service rejects them
[11:43:30 CEST] <Mia> so I'm trying to use ffmpeg to create those looping videos of 5 seconds or more
[11:43:35 CEST] <Mia> the input file is always a gif
[11:48:33 CEST] <fling> Mia: if your output is gif too then you could use -loop
[11:48:57 CEST] <fling> Mia: if not then there is the loop filter, via -filter_complex
[11:49:54 CEST] <fling> Mia: ffmpeg -i input.gif -loop 5 output.gif
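
A sketch of the gif-to-video case Mia described (this assumes the service wants H.264/MP4; note -t 5 cuts at exactly five seconds rather than at a whole-loop boundary, so a wrapper script would have to compute the loop count from the clip duration if whole loops matter):

    ffmpeg -stream_loop -1 -i input.gif -t 5 -c:v libx264 -pix_fmt yuv420p -movflags +faststart output.mp4
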
[11:55:01 CEST] <fling> How do I perform dark frame subtraction?
[12:07:35 CEST] <barhom> When writing master playlists, can I specify which bitrate it should write in the master.m3u8?
[12:26:15 CEST] <durandal_1707> fling: explain in more detail what you need
[12:37:07 CEST] <fling> durandal_1707: I have a noisy video source and I want to denoise it using dark frame subtraction.
[12:37:26 CEST] <fling> this will remove dead pixels and other static artifacts present on the video.
[13:06:50 CEST] <Nacht> I got a live audio stream where I want to add a still image along with it. Is it possible to just encode a 10 sec clip and then just loop it while only transcoding the audio, to reduce CPU usage?
[13:11:11 CEST] <ahoo> yes it is possible
[13:11:17 CEST] <ahoo> you need a hold frame for that
[13:11:33 CEST] <ahoo> i don't know how to do it with ffmpeg but it definitely is possible.
[13:12:38 CEST] <ahoo> that's for a still image
[13:12:53 CEST] <ahoo> i think if you loop a clip you cannot do it.
[13:13:46 CEST] <ahoo> why animate a music track image anyway?
[13:13:59 CEST] <ahoo> i personally use the cover pic as a still frame
[13:14:15 CEST] <Nacht> I managed it with stream_loop now
[13:14:30 CEST] <Nacht> -stream_loop -1 seems to do the trick
[13:14:45 CEST] <ahoo> what about the output size?
[13:15:00 CEST] <ahoo> is it only the size of the clip and the audio track?
[13:15:04 CEST] <ahoo> (roughly)
[13:15:36 CEST] <Nacht> Yeah roughly
[13:15:46 CEST] <ahoo> cool
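
A sketch of what Nacht described (the clip name, audio URL, and RTMP target are placeholders): loop a short pre-encoded clip with -stream_loop, copy its video stream, and encode only the audio:

    ffmpeg -stream_loop -1 -i clip.mp4 -i http://example.com/live_audio -map 0:v -map 1:a -c:v copy -c:a aac -f flv rtmp://example.com/live/stream
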
[14:00:57 CEST] <barhom> Trying to compile in libnpp and cuda, getting this after successful compilation when running ffmpeg:
[14:00:59 CEST] <barhom> ./ffmpeg: error while loading shared libraries: libnppig.so.9.2: cannot open shared object file: No such file or directory
[14:01:18 CEST] <barhom> export LD_LIBRARY_PATH=/usr/local/cuda/lib64 < helps fix this, but I don't know why I need to export LD_LIBRARY
[14:01:24 CEST] <barhom> I already specified the path while compiling
[14:04:41 CEST] <ahoo> did you export LD_LIBRARY_PATH or LD_LIBRARY?
[14:04:55 CEST] <ahoo> i mean compile
[14:06:14 CEST] <barhom> ahoo: unsure, look here: https://0bin.net/paste/4Wychyykrfh+kzim#BPUfANl0viIIR3ZhoRoW6pdg3uwbeEgBNxM3o91JG4X
[14:06:37 CEST] <barhom> is it because I specify multiple extra-cflags and extra-ldflags?
[14:16:37 CEST] <ahoo> i don't think so. but i also don't know anything.
[14:18:20 CEST] <ahoo> however, to be as syntactically correct as possible, i would move the --enable-* block up before the extra-cflags/extra-ldflags options
[14:18:53 CEST] <ahoo> did you run ldd on the binary?
[14:20:15 CEST] <ahoo> also, did you try to link it statically?
[14:20:51 CEST] <ahoo> that way you can be sure it's not an unmet dependency.
[14:25:31 CEST] <furq> barhom: the nvidia stuff is dlopened
[14:25:59 CEST] <barhom> furq: what is dlopened?
[14:26:13 CEST] <furq> libnppig.so in this case
[14:27:22 CEST] <furq> if you're asking what dlopen is then https://manpages.debian.org/stretch/manpages-dev/dlopen.3.en.html
[14:27:55 CEST] <furq> in short, use ldconfig if the lib is in a nonstandard location and you don't want to have to set LD_LIBRARY_PATH
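
What furq is suggesting, as a sketch (assuming the CUDA libraries landed in /usr/local/cuda/lib64):

    echo "/usr/local/cuda/lib64" | sudo tee /etc/ld.so.conf.d/cuda.conf
    sudo ldconfig
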
[14:37:52 CEST] <barhom> furq: ok I got it to work by adding the directory in /etc/ld.so.conf.d/
[14:38:28 CEST] <barhom> furq: thanks, but I guess I'm still asking how come I've never had to do this before. Especially for x264, x265, etc
[14:38:50 CEST] <barhom> Especially because I install all my libraries in my home folder (according to the ubuntu compilation guide)
[14:39:32 CEST] <furq> the nvidia libs aren't actually linked to the binary, they're loaded at runtime
[14:40:20 CEST] <BtbN> libnpp* is linked normally
[14:40:45 CEST] <BtbN> as is CUDA for any of the cuda-based filters
[14:41:04 CEST] <barhom> BtbN: got it, thanks for clarification
[14:41:09 CEST] <BtbN> And if it won't find the libs it means your CUDA stuff isn't installed properly, so the libs aren't in the search path.
[14:50:14 CEST] <barhom> BtbN: You're saying that the CUDA installation usually adds config to /etc/ld.so.conf.d/ ?
[14:50:39 CEST] <BtbN> For me those libs just are in /usr/lib64
[14:50:43 CEST] <DHE> anything not installed into system directories like /usr/lib64 will need to add those
[14:50:49 CEST] <BtbN> Installed there by the systems cuda package
[14:51:22 CEST] <barhom> it's my bad then, I installed CUDA with a specific --prefix
[14:52:56 CEST] <DHE> usually I just let it build with the default --prefix=/usr/local and then it's just one entry for ldconfig for everything I custom build
[14:52:58 CEST] <BtbN> Don't use the nvidia .run files, especially not as root. They are well known to break your system, often beyond automatic repair.
[14:53:16 CEST] <BtbN> Use your distro packages
[14:53:22 CEST] <BtbN> pretty much all distros have cuda packaged
[14:55:24 CEST] <barhom> BtbN: good thing I'm building in a read-only booted debian9
[14:55:36 CEST] <barhom> I'm seriously hoping I don't need that 1.7gb shit just to enable libnpp
[15:03:32 CEST] <barhom> Next question, people; you've all been very helpful. I'm trying to use the NVIDIA card to do some transcoding jobs. It works well when I set the BIOS to use the offboard GPU. Is it completely impossible to use the onboard GPU but STILL talk to the nvidia GPU for transcoding?
[15:03:47 CEST] <barhom> I can see it in my "lspci" right now (booted back to onboard vga)
[15:04:27 CEST] <BtbN> I don't see why not, but depends on your system really
[15:04:42 CEST] <BtbN> On most laptops with multi-gpu setup the nvidia gpu is unable to do any video de/encoding to begin with
[15:07:29 CEST] <Mavrik> yeah, it's possible that the transcoding block might not even be there
[15:18:43 CEST] <barhom> So most probably I will be forced to initialize the system with the nvidia card.
[15:18:45 CEST] <barhom> This will kill my IPMI
[15:19:08 CEST] <barhom> it's running in an SM server
[15:21:57 CEST] <DHE> I've run my system with dual GPUs, a single X session with dual monitors, each on a different GPU. it's run pretty seamlessly...
[15:22:17 CEST] <DHE> so nvidia should support running with a discrete GPU as a resource even if it's not the boot GPU
[17:18:40 CEST] <analogical> anyone know how to fix a corrupt FLAC file?
[17:21:09 CEST] <kepstin> what kind of corruption, specifically?
[17:21:59 CEST] <kepstin> if the decoder reports e.g. a checksum error then probably you'd just have to re-encode it from the original source, since it wouldn't decode to the expected lossless result
[17:23:51 CEST] <analogical> This is the error message I get: "ERROR while decoding data state = FLAC__STREAM_DECODER_ABORED"
[17:24:17 CEST] <analogical> *ABORTED
[17:25:11 CEST] <kepstin> analogical: you could try using the 'flac' command line tool to test the file "flac -t file.flac" and see if it gives more useful output.
[17:25:49 CEST] <kepstin> note that the flac cli tool also has an option to skip sections of the file with errors when decoding, which may or may not be useful to you
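
For reference, the flac CLI options kepstin mentions (-t test-decodes without writing output; -F, i.e. --decode-through-errors, keeps decoding past bad frames):

    flac -t file.flac       # test only, report errors
    flac -d -F file.flac    # decode to wav, continuing past errors
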
[19:18:10 CEST] <lays147>  hello guys, I am using the following ffmpeg cli to concat some files, each file has 8 tracks of audio, but after the concat I only have one track, what am I missing? https://paste.kde.org/prbhjjmk0
[19:23:50 CEST] <raytiley> I'm adapting a command that used a filter_complex to stitch together two videos into a single mp4 file. Now I want to do multiple outputs for hls at different resolutions, but I get the error that -filter_complex and -vf are not supported in the same command. Having trouble wrapping my head around how to set up the -filter_complex so that it produces multiple scaled streams for the individual outputs
[19:24:25 CEST] <kepstin> lays147: unless you use -map options to say otherwise, ffmpeg will select only the first audio and video track from the input file
[19:24:35 CEST] <raytiley> Anyone able to point me in the right direction https://www.irccloud.com/pastebin/RzZcIdRl/ffmpeg-command.txt
[19:24:42 CEST] <kepstin> lays147: to copy all tracks from the first input, add "-map 0"
[19:26:36 CEST] <furq> raytiley: you'd need to put the -vf stuff into -filter_complex, which means running the filter_complex for each output
[19:26:47 CEST] <furq> or you could just pipe the filtered output into another ffmpeg if you want to avoid that
[19:28:43 CEST] <kepstin> raytiley: yeah, you can't mix -vf and -filter_complex. What you'll want to do is add the scale stuff into the filter_complex string (probably use a 'split' filter to turn it into multiple outputs, then add a different scale filter on each). Each output pad in the filter_complex should have a different name. Then you can do something like "-map [v0] -map [a] output1.mp4 -map [v1] -map [a] output2.mp4"
[19:29:07 CEST] <kepstin> raytiley: or just run multiple instances of ffmpeg with different options - this might end up being faster if you have lots of cpu cores available
[19:29:45 CEST] <furq> kepstin: oh yeah i forgot about split
[19:29:50 CEST] <furq> that's a less dumb solution
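
A sketch of kepstin's suggestion (the input name and scale targets are illustrative):

    ffmpeg -i in.mp4 -filter_complex "[0:v]split=2[s0][s1];[s0]scale=1280:720[v0];[s1]scale=640:360[v1]" -map "[v0]" -map 0:a out720.mp4 -map "[v1]" -map 0:a out360.mp4
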
[19:30:14 CEST] <kepstin> I didn't think you could run separate filter_complex per output, i thought that was a global option?
[19:30:26 CEST] <kepstin> given how it interacts with -map
[19:31:39 CEST] <raytiley> thanks for the pointers... I'll try moving the scaling into the filter_complex
[19:31:53 CEST] <raytiley> need to read the docs on filtering / map like 1000 more times before it sinks in
[19:35:32 CEST] <kepstin> might help to draw a graph of your filters, connecting the inputs/outputs with lines, so you can see what it's doing.
[19:53:49 CEST] <lays147> kepstin: each video has its own audio tracks. Can I use -c copy with mapping, without using filter_complex?
[19:53:52 CEST] <raytiley> Think i'm close, but not really sure where to put the split in. This command gives me an error "unable to find suitable output format for "aac"
[19:53:57 CEST] <raytiley> https://www.irccloud.com/pastebin/IR2VIAVf/
[19:55:57 CEST] <kepstin> lays147: where does filter_complex come into this? You can't use -c copy on a track that you're filtering.
[19:56:58 CEST] <kepstin> raytiley: you need to split [v] into two outputs, so that you can attach the two different scale filters. They can't share an input like you have right now.
[19:58:39 CEST] <lays147> kepstin: idk much about ffmpeg --' I am doing a concat of videos without re-encoding. The concat is working, but I am missing the audio tracks. What I need to know is how to keep my audio tracks.
[19:58:58 CEST] <lays147> what kind of mapping do I need for this?
[20:00:08 CEST] <trashPanda_> Hello, I'm trying to stream a video to a UDP address, not using the command line. If I set my output to be a video file, it can be read in by VLC correctly. However, if I set the output to the UDP address, VLC cannot read the stream. It can connect to the address but cannot make sense of the output. Is there extra information I have to set up to correctly stream to UDP?
[20:02:12 CEST] <raytiley> lays147: thanks...
[20:02:27 CEST] <raytiley> so do I need to pass the "[a]" through the split somehow?
[20:03:19 CEST] <raytiley> ffmpeg.exe -y -i "D:\vod\.temp\13130-1-beavis.mpg" -i "D:\vod\.temp\13130-2-farm.mpg"
[20:03:19 CEST] <raytiley>  -filter_complex "[0:v]scale=720:480,pad=854:480:67:0,setsar=1,trim=0:3600,setpts=PTS-STARTPTS [v0];
[20:03:19 CEST] <raytiley>  [0:a]atrim=0:3600,asetpts=PTS-STARTPTS[a0];
[20:03:19 CEST] <raytiley>  [1:v]scale=854:480,setsar=1,trim=10800:18000,setpts=PTS-STARTPTS [v1];
[20:03:19 CEST] <raytiley>  [1:a]atrim=10800:18000,asetpts=PTS-STARTPTS[a1];
[20:03:19 CEST] <raytiley>  [v0] [a0] [v1] [a1]concat=n=2:v=1:a=1 [v] [a];
[20:03:19 CEST] <raytiley>  [v]split=2[vx][vy];
[20:03:20 CEST] <raytiley>  [vx]scale=640:360:force_original_aspect_ratio=decrease [v360];
[20:03:20 CEST] <raytiley>  [vy]scale=842x480:force_original_aspect_ratio=decrease [v480]"
[20:03:21 CEST] <raytiley>  -map "[v360]" -map "[a]"
[20:03:21 CEST] <raytiley>  -c:a aac -ar 48000 -c:v h264 -profile:v main -crf 20 -sc_threshold 0 -g 48 -keyint_min 48 -hls_time 4 -hls_playlist_type vod  -b:v 800k -maxrate 856k -bufsize 1200k -b:a 96k -hls_segment_filename D:\vod\13130-farm-v1/360p_%03d.ts D:\vod\13130-farm-v1/360p.m3u8
[20:03:22 CEST] <raytiley>  -map "[v480]" -map "[a]"
[20:03:29 CEST] <nicolas17> NO
[20:03:31 CEST] <nicolas17> use a pastebin
[20:03:33 CEST] <raytiley> oops... sorry for that
[20:03:52 CEST] <raytiley> I fat fingered the wrong button
[20:05:12 CEST] <lays147> kepstin: i don't think that was me xD
[20:07:45 CEST] <kepstin> yeah, sometimes it's hard to follow in a linear chat when there's multiple conversations going on. there's a reason I prefixed everything I wrote with a username...
[20:08:10 CEST] <kepstin> well, most things anyways :)
[20:09:54 CEST] <lays147> kepstin: well, irc has those perks. Can you point me to the kind of map that I need?
[20:10:32 CEST] <kepstin> lays147: follow the instructions that fflogger has helpfully printed for you
[20:11:14 CEST] <kepstin> ah, wait
[20:11:17 CEST] Action: kepstin scrolls up
[20:11:26 CEST] <kepstin> you did give a paste originally, my mistake
[20:12:09 CEST] <kepstin> concat demuxer, eh. All you have to do is add the "-map 0" output option
[20:12:55 CEST] <kepstin> (-map 0 means "select all the streams from input #0")
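
Putting that together, a sketch (list.txt stands in for the usual concat list of file '...' lines, with all inputs already normalized to the same stream layout and codecs):

    ffmpeg -f concat -safe 0 -i list.txt -map 0 -c copy output.mkv
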
[20:16:26 CEST] <lays147> kepstin: but each video has its own audio tracks, I don't want to use the same audio track for all videos
[20:16:56 CEST] <nicolas17> I think if you're using the concat demuxer, "input #0" is the whole concatenated thing
[20:16:59 CEST] <kepstin> lays147: the concat muxer acts like a single input as far as the rest of ffmpeg is concerned
[20:17:06 CEST] <kepstin> demuxer*
[20:18:10 CEST] <kepstin> note that when using the concat demuxer, all your files should have matching numbers and codecs of video and audio streams.
[20:20:14 CEST] <lays147> kepstin: in this application, all videos are normalized in a previous ffmpeg step, and in a following step I use concat to glue them
[20:20:24 CEST] <lays147> So I think that I am good?
[20:20:26 CEST] Action: lays147 wonders
[20:22:28 CEST] <kepstin> should be fine... haven't you tried the suggestion yet? :)
[20:25:44 CEST] Action: lays147 messes with the python script to capture stderr but forgets that it's possible to just run the ffmpeg cli directly --'
[20:27:05 CEST] <lays147> yay it works :3
[20:55:54 CEST] <raytiley> https://www.irccloud.com/pastebin/iaKoVVel/
[20:56:26 CEST] <raytiley> Sorry for the multi line post before, but what do I need to do to this to make "[a]" available to the outputs?
[21:16:21 CEST] <nicolas17> is there any filter to speed up a video by interpolating frames rather than just dropping frames?
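
One possible (untested) sketch: retime with setpts, then let the minterpolate filter convert back to the target frame rate by blending or motion compensation instead of letting the encoder drop frames (the 0.5 factor and fps=30 assume a 2x speed-up of a 30 fps source; audio is dropped here for simplicity):

    ffmpeg -i input.mp4 -vf "setpts=0.5*PTS,minterpolate=fps=30:mi_mode=blend" -an output.mp4
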
[21:41:24 CEST] <trashPanda_> Hello, I'm trying to stream a video, via mpeg2ts, to a UDP address and watch it with VLC. The command I'm using is ffmpeg -re -i 'location' -map 0:0 -c copy -f mpegts udp://239.255.255.255:9000
[21:41:43 CEST] <trashPanda_> VLC won't play the stream however, can anyone point out to me what I'm doing wrong?
[21:46:27 CEST] <DHE> never seen people use 255.255.255 before. that might not work due to it being the last IP in the multicast block. try a different IP
[21:47:00 CEST] <DHE> there are other options you should add, like -pkt_size 1316 that might also help
[21:47:48 CEST] <trashPanda_> Sorry, I was just using a placeholder IP; I'm using a multicast address that works just fine. I am able to read the input with a DirectShow UDP source filter I have, just not VLC
[21:48:47 CEST] <trashPanda_> the pkt size did the trick, thank you!  What is that doing internally?
[21:49:47 CEST] <DHE> makes UDP packets at most 1316 bytes rather than letting it fragment the stream wherever it wants
[21:50:09 CEST] <DHE> mpegts sends in frames of 188 bytes, so 1316 is the largest size which is both a multiple of 188 and less than the 1500 standard MTU
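
For reference, the same thing expressed as a URL option (pkt_size is an option of the udp protocol itself, so this form should behave like -pkt_size; the multicast address is a placeholder):

    ffmpeg -re -i input.mp4 -map 0:0 -c copy -f mpegts "udp://239.0.0.1:9000?pkt_size=1316"
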
[22:00:04 CEST] <trashPanda_> Is there a place you can set that if you're not using the CLI?
[22:00:22 CEST] <trashPanda_> would it be set in the output context?
[22:01:06 CEST] <trashPanda_> output AVFormatContext
[22:46:52 CEST] <raytiley> in my above command I can only use the "[a]" from the concat filter in one of my outputs (the 2nd output gets no audio)
[22:47:23 CEST] <c_14> try using asplit to make ao1 and ao2
[22:48:43 CEST] <raytiley> however if I put a 2nd split of "[a]" into two different audio streams I get "Cannot create the link concat:1 -> split:0
[22:50:09 CEST] <c_14> you have to add ;[a]asplit[ao1][ao2]
[22:50:19 CEST] <c_14> because the concat filter has 2 outputs
[22:52:25 CEST] <raytiley> c_14:  thank you!
[22:52:38 CEST] <raytiley> i'm an idiot... didn't realize there was an asplit vs split
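
So the tail of the graph, per c_14's fix, would look something like this sketch (label names illustrative):

    ...;[v]split=2[vx][vy];[a]asplit=2[ao1][ao2];[vx]scale=640:360[v360];[vy]scale=842:480[v480]

with each output then mapped from its own pair, e.g. -map "[v360]" -map "[ao1]" for the first output and -map "[v480]" -map "[ao2]" for the second.
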
[22:55:38 CEST] <juny_> test
[22:56:11 CEST] <juny_> i am using concat as described at https://trac.ffmpeg.org/wiki/Concatenate. It seems that I can't concatenate 2 mp3 binaries together directly, right? I have to write them to files first?
[00:00:00 CEST] --- Thu Sep  6 2018

