[Ffmpeg-devel-irc] ffmpeg.log.20160611

burek burek021 at gmail.com
Sun Jun 12 02:05:01 CEST 2016


[00:16:52 CEST] <Nobgul> Can i run 2 instances of ffmpeg from the same source?
[00:16:56 CEST] <Nobgul> source file
[00:17:01 CEST] <c_14> sure
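
A minimal sketch of what is being confirmed here: two independent ffmpeg processes can read the same source file at the same time (filenames, codecs and bitrates below are placeholders, not anything from the discussion):

    # two concurrent encodes from one source file
    ffmpeg -i source.mp4 -c:v libx264 -b:v 2M -c:a copy out_high.mp4 &
    ffmpeg -i source.mp4 -c:v libx264 -b:v 500k -c:a copy out_low.mp4 &
    wait
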
[00:42:42 CEST] <haroot> hi anyone here?
[00:43:08 CEST] <c_14> Besides you?
[00:43:57 CEST] <Nobgul> I am here
[00:45:13 CEST] <haroot> nice
[00:45:16 CEST] <haroot> can someone help me
[00:45:20 CEST] <haroot> i want to convert rtmp
[00:45:34 CEST] <haroot> i want to demux it into frames
[00:45:40 CEST] <haroot> and serve it into html5 video / canvas
[00:47:49 CEST] <haroot> or transcode it into something iOS can play besides HLS
[00:47:58 CEST] <haroot> i know how to do HLS but the delay is like 8s, that's as low as i can get it
[00:48:04 CEST] <haroot> rtmp delay is great as is udp
[00:52:55 CEST] <c_14> Well, dumping the frames is relatively simple. But transcoding to something like mp4 is just going to add (much much) more delay
[00:55:29 CEST] <Nobgul> haroot, what delay is 8 seconds with hls?
[00:55:42 CEST] <Nobgul> between video load and play?
[00:55:49 CEST] <haroot> between client and server
[00:55:57 CEST] <Nobgul> you're using the wrong software lol
[00:56:01 CEST] <haroot> most u can get it down to theoretically is 5s w hls
[00:56:06 CEST] <haroot> for HLS?
[00:56:15 CEST] <Nobgul> my hls is loading in seconds, and thats with it using smil files for adaptive bitrate
[00:56:16 CEST] <haroot> my streaming software is OBS Studio, which is coded well
[00:56:25 CEST] <haroot> is it hls or hls+
[00:56:30 CEST] <Nobgul> hls
[00:56:40 CEST] <haroot> whats ur delay? between client and server?
[00:56:50 CEST] <haroot> i was using nginx-rtmp-module
[00:56:52 CEST] <Nobgul> with the adaptive bitrate less than 3 seconds
[00:56:59 CEST] <haroot> so it wouldn't let me go beyond 1s length and 5 files
[00:57:13 CEST] <haroot> hum, what r u using to serve ffmpeg?
[00:57:24 CEST] <Nobgul> nimble streamer
[00:57:26 CEST] <haroot> can u pm me the flags you use pls?
[00:57:40 CEST] <haroot> and ur client is?
[00:57:52 CEST] <Nobgul> meaning where are they?
[00:58:03 CEST] <haroot> yes pls
[00:58:18 CEST] <haroot> i tried nginx to serve, i used custom software we wrote using udp (1s or less delay)
[00:58:31 CEST] <haroot> i want a solution for iOS iphone 5 and newer iOS 8 and above
[00:58:38 CEST] <haroot> thats like 2s delay at most
[00:58:53 CEST] <haroot> what you are getting with adaptive bitrate sounds acceptable
[00:58:58 CEST] <Nobgul> check your messages haroot
[00:59:00 CEST] <haroot> dash can get better but its new and very buggy
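
For reference, the 1-second segments and 5-entry playlist mentioned above map directly onto options of ffmpeg's HLS muxer; a rough sketch with a placeholder input URL and output name:

    # short segments and a short rolling playlist keep end-to-end HLS latency low
    ffmpeg -i rtmp://example.com/live/stream -c copy -f hls -hls_time 1 -hls_list_size 5 stream.m3u8
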
[01:21:05 CEST] <Illya> can I remove all i-frames from a video?
[01:21:52 CEST] <c_14> theoretically or with ffmpeg?
[01:22:13 CEST] <Illya> sorry, yeah, with ffmpeg
[01:22:22 CEST] <Illya> select maybe?
[01:22:22 CEST] <c_14> Theoretically, yes (you won't be able to play the video, barring IDR-frames and other i-frame-like things); with ffmpeg (the commandline tool), no
[01:22:50 CEST] <c_14> Well, you _could_ select on not i-frame (afaik) but that reencodes the video and gives you more i-frames in the output
[01:22:54 CEST] <c_14> Depends on what you want
[01:23:11 CEST] <c_14> A new video without all the I-frames, or a blob of data equivalent to the exact data in the input without any i-frames
[01:29:16 CEST] <Illya> a blob of data equivalent to the exact data in the input without any i-frames
[01:30:04 CEST] <c_14> you might be able to do it using libav* (by demuxing the packets, inspecting them and dropping any that are i-frames) but it won't work with the cli tool
[01:31:25 CEST] <Illya> right ok. That's fine then. I'll add it to my list of things to do
[01:35:35 CEST] <jkqxz> A bsf to do it would be straightforward to write, for H.264 at least.  You get input as frames, and you can just remove any frame which contains any IDR NAL units.  (Or I slices?  Depends on exactly what you want the test to be.)
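
A sketch of the select-filter route c_14 mentions above; as noted there, this re-encodes and the encoder inserts new i-frames of its own, so it is not the exact-data variant Illya asked about (filenames are placeholders):

    # drop every frame whose picture type is I, regenerate timestamps, re-encode (audio dropped for brevity)
    ffmpeg -i in.mp4 -vf "select='not(eq(pict_type\,I))',setpts=N/FRAME_RATE/TB" -an out.mp4
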
[02:31:59 CEST] <haasn> Can I forcibly reinterpret the framerate of a video track while preserving the actual encoded data (if possible)?
[02:32:13 CEST] <haasn> e.g. say I have a 23.976 Hz clip and I want to force it to 24.000 Hz instead without re-encoding
[02:32:23 CEST] <haasn> This should just be a matter of rewriting the container timestamps, no?
[02:32:45 CEST] <c_14> Theoretically yes, should be possible with libav* but the ffmpeg cli can't do it
[02:32:57 CEST] <c_14> You can try looking into whether l-smash/mkvtoolnix will do the job for you
[02:36:40 CEST] <haasn> ah, seems like I could do it with --fix-bitstream-timing-duration --default-duration
[02:36:42 CEST] <haasn> maybe
[02:38:09 CEST] <haasn> re-encoding isn't a big deal I guess
[02:38:15 CEST] <haasn> it's a very simple clip :p
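
For the record, the mkvtoolnix route haasn is looking at would be roughly the following (a sketch, assuming the video is track 0 and the target is 24 fps; the relevant mkvmerge options are --default-duration and --fix-bitstream-timing-information):

    # rewrite the track's frame durations to 24 fps without touching the encoded data
    mkvmerge -o out_24fps.mkv --default-duration 0:24fps --fix-bitstream-timing-information 0 in.mkv
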
[05:55:17 CEST] <wismas> what's the easiest way to run a 2MB ffmpeg command on linux without recompiling the kernel?
[05:58:37 CEST] <Kirito> What's a good reference on pixel formats? Comparison between the major ones, which to use in what situations, and so on.
[05:59:01 CEST] <Kirito> -pix_fmts outputs a ton of options and I have no idea what the difference between any of them is :D
[14:05:46 CEST] <nifwji2> I didn't think I would ever have to come here.
[14:05:52 CEST] <nifwji2> since I didn't really use ffmpeg much.
[14:06:04 CEST] <nifwji2> but then I tried to write a batch script to automate something.
[14:06:12 CEST] <nifwji2> and holy crap.
[14:06:18 CEST] <nifwji2> anyway
[14:06:45 CEST] <nifwji2> my problem is that I made a batch script that runs a python script that decodes a file.
[14:06:52 CEST] <nifwji2> and exports its audio and frames into a folder.
[14:07:09 CEST] <nifwji2> then I am telling ffmpeg to encode the audio file and all the png files into a video.
[14:07:14 CEST] <nifwji2> I tried using
[14:07:35 CEST] <Nobgul> Dont paste here
[14:07:41 CEST] <Nobgul> Use pastebin
[14:07:44 CEST] <Nobgul> or
[14:07:58 CEST] <Nobgul> an alternative
[14:07:58 CEST] <nifwji2> dude
[14:08:06 CEST] <nifwji2> I am only posting the first part of the command
[14:08:09 CEST] <nifwji2> not the whole command
[14:08:21 CEST] <nifwji2> I am just pointing out which argument is going wrong
[14:08:36 CEST] <Nobgul> Ok some people paste a wall of text =)
[14:09:08 CEST] <nifwji2> -i "frame %03d.png"
[14:09:27 CEST] <nifwji2> because all the files are "frame 001.png"
[14:09:33 CEST] <nifwji2> but
[14:09:44 CEST] <nifwji2> when I run the command in the batch file
[14:09:44 CEST] <furq> if it's a batch script you need to escape the %
[14:09:49 CEST] <furq> %%03d
[14:10:07 CEST] <furq> although i don't know why you'd voluntarily write batch if you have python installed
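
i.e. inside a .bat file the percent sign has to be doubled so that ffmpeg still sees the pattern "frame %03d.png"; a sketch with placeholder output and audio filenames:

    rem inside a batch file, %% reaches ffmpeg as a single %
    ffmpeg -i "frame %%03d.png" -i audio.wav out.webm
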
[14:11:45 CEST] <nifwji2> dude
[14:11:57 CEST] <nifwji2> it took me like 6 hours of installing shit just to get these python scripts to run.
[14:12:46 CEST] <nifwji2> because one of the things I needed doesn't work with 64 bit python
[14:12:55 CEST] <furq> i take it the file you're splitting into images and video isn't a movie
[14:12:56 CEST] <nifwji2> and my computer that has 32 bit python
[14:13:00 CEST] <nifwji2> well
[14:13:10 CEST] <furq> or isn't a format supported by ffmpeg, rather
[14:13:13 CEST] <nifwji2> it isn't a movie
[14:13:27 CEST] <nifwji2> it is a .ppm file
[14:13:34 CEST] <nifwji2> the ones from flipnote studio
[14:13:39 CEST] <nifwji2> they are animations made on a ds
[14:13:59 CEST] <nifwji2> only one program exists that can decode the file's frames and audio.
[14:14:11 CEST] <nifwji2> and that guy sort of abandoned the project
[14:14:25 CEST] <nifwji2> anyway.
[14:14:32 CEST] <nifwji2> I wanted to decode these files.
[14:15:14 CEST] <nifwji2> but he didn't compile it into an exe file that can run on any computer without python.
[14:15:18 CEST] <nifwji2> well he did.
[14:15:25 CEST] <nifwji2> but only the flipnote player program.
[14:15:35 CEST] <nifwji2> which doesn't support exporting anything.
[14:15:39 CEST] <nifwji2> only playing things
[14:15:53 CEST] <nifwji2> meaning I had to download all the dependencies manually.
[14:16:11 CEST] <nifwji2> and compile it myself.
[14:16:23 CEST] <nifwji2> and since this is the second time I have ever compiled someone else's code.
[14:16:41 CEST] <nifwji2> I had a bit of trouble.
[14:16:44 CEST] <nifwji2> anyway.
[14:16:46 CEST] <nifwji2> thank you
[14:16:53 CEST] <nifwji2> it worked
[14:18:23 CEST] <nifwji2> now I can automatically turn a ppm file into a webm
[14:18:32 CEST] <nifwji2> I would be more excited.
[14:19:00 CEST] <nifwji2> but since this took so long I no longer find this cool or fun.
[14:19:08 CEST] <nifwji2> anyway
[14:19:27 CEST] <nifwji2> now I just need to get ffmpeg to cut the audio off after the video ends.
[14:19:40 CEST] <furq> -shortest
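
A sketch of -shortest in this context: it stops writing once the shortest stream, here the image sequence, is exhausted, which trims the trailing audio (the filenames and the 12 fps value are placeholders; the framerate question comes up just below):

    # -shortest ends the output when the image sequence runs out
    ffmpeg -framerate 12 -i "frame %03d.png" -i audio.wav -shortest out.webm
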
[14:19:53 CEST] <nifwji2> automatically fill in a number for the framerate based on the flipnote speed.
[14:20:01 CEST] <nifwji2> so if it is speed 6
[14:20:07 CEST] <nifwji2> make it 12fps or something
[14:20:18 CEST] <nifwji2> I assume batch will allow me to do that.
[14:20:28 CEST] <nifwji2> I just need to know how to convert flipnote speed into fps
[14:20:35 CEST] <nifwji2> I think it might be doubling the number
[14:20:37 CEST] <nifwji2> but I am not sure.
[14:20:41 CEST] <nifwji2> I have to experiment
[14:22:44 CEST] <nifwji2> I never thought I would be spending this much time in a command line
[14:22:55 CEST] <nifwji2> but I use youtube-dl daily
[14:23:02 CEST] <nifwji2> ffmpeg every so often.
[14:23:07 CEST] <nifwji2> wget every so often.
[14:23:44 CEST] <nifwji2> it's pretty neat.
[14:24:03 CEST] <nifwji2> I hope to use ffmpeg more in the future.
[14:24:10 CEST] <nifwji2> so I don't need to spend time looking things up.
[14:24:16 CEST] <nifwji2> I can just remember the commands.
[14:24:27 CEST] <nifwji2> and I also hope to be able to use wget.
[14:24:54 CEST] <nifwji2> I will stop talking now
[14:37:05 CEST] <nifwji2> okay
[14:37:16 CEST] <nifwji2> how do I encode hundreds of images into a video
[14:37:21 CEST] <nifwji2> and have the video look good?
[14:37:36 CEST] <nifwji2> I want it to be pretty much lossless quality.
[14:37:55 CEST] <nifwji2> because these files only have 3 colours
[14:38:06 CEST] <nifwji2> and are about 240p
[14:38:41 CEST] <nifwji2> at the moment my video looks terrible.
[14:38:51 CEST] <nifwji2> with big rainbow blocks
[14:39:03 CEST] <nifwji2> it is just terrible.
[14:42:58 CEST] <furq> https://trac.ffmpeg.org/wiki/Encode/VP9
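
Following that wiki page, a near-lossless encode of a low-colour 240p PNG sequence could look roughly like this (the framerate and filenames are placeholders):

    # libvpx-vp9 lossless mode avoids the blocky artifacts described above
    ffmpeg -framerate 12 -i "frame %03d.png" -c:v libvpx-vp9 -lossless 1 out.webm
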
[14:46:49 CEST] <nifwji2> I am still trying to find out what framerate speed 6 is
[14:47:03 CEST] <nifwji2> but until I figure that out.
[14:47:33 CEST] <nifwji2> I want to have the framerate dynamically change based on the flipnote's speed.
[14:47:54 CEST] <nifwji2> if I run "ppm -m input"
[14:48:06 CEST] <nifwji2> it will print the metadata into the console
[14:48:16 CEST] <nifwji2> one of the lines is the frame speed.
[14:48:25 CEST] <nifwji2> how would I get it to take that frame speed
[14:48:39 CEST] <nifwji2> and then put the value into the ffmpeg command as the framerate?
[14:48:48 CEST] <nifwji2> can batch even do that?
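
This wasn't answered in the channel, but batch can capture a command's output with for /f; a very rough sketch, assuming the metadata line printed by "ppm -m" contains the word "speed" with the value as the third token, and using the doubling guess from the discussion above:

    rem capture the frame-speed value from the tool's metadata output (token position is a guess)
    for /f "tokens=3" %%a in ('ppm -m input ^| findstr /i "speed"') do set SPEED=%%a
    rem the speed-to-fps doubling is only the guess mentioned above
    set /a FPS=%SPEED%*2
    ffmpeg -framerate %FPS% -i "frame %%03d.png" -i audio.wav -shortest out.webm
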
[15:00:53 CEST] <nifwji2> I finally got the quality right.
[15:01:06 CEST] <nifwji2> now I just need to get the dynamic framerates working.
[15:01:36 CEST] <nifwji2> and I might also add thumbnails.
[15:02:15 CEST] <nifwji2> then I will take a break
[15:02:24 CEST] <nifwji2> learn a bit more about making batch files.
[15:02:48 CEST] <nifwji2> and hopefully clean up the files.
[15:40:38 CEST] <linforpros> Hello, I can dvgrab a video from /dev/fw0, but would like to ffmpeg it as a feed for ffserver. Do you have hints how to do that?
[15:44:12 CEST] <DHE> I'd suggest taking it step by step. start by getting it to output to something simple like an .mp4 that meets your needs. that way you know the grabber works
[15:44:31 CEST] <DHE> but fyi, ffserver is considered unsupported. we do encourage you to try using anything else (nginx-rtmp is popular)
[15:48:29 CEST] <linforpros> ffmpeg -i /dev/fw0 test.avi  - does not work. But dvgrab -i in interactive mode captures and saves a file which ffplay can easily show. Is ffmpeg suitable to grab from my 1394 device?
[15:52:24 CEST] <c_14> linforpros: you can try using ffmpeg with libiec61883
[15:53:11 CEST] <c_14> https://ffmpeg.org/ffmpeg-devices.html#iec61883
[15:54:15 CEST] <linforpros> I just checked, libiec61883 is installed on my fedora 22. I will check the link you posted.
[15:55:20 CEST] <c_14> you need to make sure your version of ffmpeg is built with --enable-libiec61883. you can check the output of ffmpeg -version or ffmpeg -devices to see if it's listed
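
If the build does have it, the device documented at that link is used roughly like this (a sketch; "auto" selects the first FireWire DV/HDV device, and the output settings are placeholders):

    ffmpeg -devices 2>&1 | grep iec61883      # check the input device is compiled in
    ffmpeg -f iec61883 -i auto -c:v libx264 -c:a aac out.mp4
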
[15:58:03 CEST] <linforpros> ffmpeg -version |grep iec gives me no output. I guess it is a no go. I would have to build from source, is that right?
[15:58:56 CEST] <markizano> Hello
[15:59:36 CEST] <c_14> linforpros: yes
[15:59:42 CEST] <markizano> I am attempting to concat 2 files, and everything on the internet says use the concat:// protocol.... well, I did that, and it seems I'm the only one with this problem, as I can't find the package that contains the protocol.
[15:59:58 CEST] <markizano> Every time I run the program to concat the resulting files, I get this: concat:myvid1.ts|myvid2.ts: Protocol not found
[16:00:08 CEST] <markizano> I've converted them into the format that lets you concat them...
[16:00:21 CEST] <markizano> So... I know they are concat-able...
[16:00:38 CEST] <markizano> What am I doing wrong that won't let me use the concat protocol, or the demuxer ??
[16:01:25 CEST] <markizano> avconv -f concat -i "myvid1.ts|myvid2.ts" == Unknown input format: 'concat'
[16:02:13 CEST] <linforpros> c_14: is there a way I could pipe dvgrab into ffmpeg in order to generate a feed?
[16:06:57 CEST] <c_14> linforpros: probably, what does dvgrab output? raw video?
[16:07:24 CEST] <linforpros> this, for one, does not work: dvgrab -format dv1 - | ffmpeg -f dv -i - -b 2000k -ab 512k -y output.mov
[16:07:34 CEST] <c_14> markizano: avconv is not ffmpeg, either ask in #libav or use ffmpeg from FFmpeg
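
For reference, with FFmpeg's own ffmpeg the two concat mechanisms being mixed up above are invoked like this (a sketch using the filenames from the messages):

    # concat protocol: byte-level join, works for MPEG-TS
    ffmpeg -i "concat:myvid1.ts|myvid2.ts" -c copy joined.ts

    # concat demuxer: takes a text file listing the inputs, not a pipe-separated string
    printf "file 'myvid1.ts'\nfile 'myvid2.ts'\n" > list.txt
    ffmpeg -f concat -i list.txt -c copy joined.ts
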
[16:13:23 CEST] <linforpros> c_14: I only use IRC once a year. Seeking to comply...
[16:14:51 CEST] <c_14> If you use dvgrab to output to a file, and then use ffmpeg on the file, does that work?
[16:15:09 CEST] <c_14> Will also be afk for a bit, but I'll check your answers later (30 or so minutes)
[16:15:39 CEST] <linforpros> ok
[18:05:52 CEST] <c_14> linforpros: did you find anything out?
[19:02:59 CEST] <DHE> Using libav to save AAC to an mpegts file (HLS technically). When using libfdk_aac it succeeds. When using ffmpeg's AAC encoder I get "MPEG-4 AOT 21 is not allowed in ADTS"
[20:11:02 CEST] <DHE> works okay with a simple ffmpeg commandline. I'm doing something wrong, just don't know what.
[20:20:16 CEST] <DHE> figured it out, profile matters
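
"Profile matters" presumably means setting an ADTS-compatible profile such as AAC-LC explicitly instead of leaving it unset; the command-line equivalent of that would look roughly like this (input and output names are placeholders):

    # pin the native AAC encoder to the LC profile for an mpegts/ADTS output
    ffmpeg -i input.mp4 -c:v copy -c:a aac -profile:a aac_low -f mpegts out.ts
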
[21:19:19 CEST] <mosb3rg> evening guys, i was hoping someone could take a look at an m3u8 playlist with a token in the url, and test a dump of the feed to see if we can determine why it's getting to the point where we see the following error:
[21:19:25 CEST] <mosb3rg> [hls,applehttp @ 0x29d9400] Failed to reload playlist 0
[21:21:02 CEST] <mosb3rg> then i can connect again directly sometimes, other times the following error is thrown:
[21:21:29 CEST] <mosb3rg> [http @ 0x36f2e80] HTTP error 403 Forbidden
[21:21:49 CEST] <mosb3rg> when i attempt to reconnect, then it will not be accessible for a few moments.
[21:23:31 CEST] <mosb3rg> then if i wait like 30 seconds or a minute it reconnects. it's a very strange result. i tried multiple custom user agents, but was unable to sustain a connection without the playlist reload error.
[22:20:38 CEST] <DHE> mosb3rg: sounds like an issue with the HTTP server on the remote end. your token may be overly aggressive
[22:21:01 CEST] <DHE> keep in mind HLS reads the same .m3u8 playlist over and over again. your token must survive that
[23:14:14 CEST] <georgios> hi! i have these videos that have nice audio and a slide-like video. since one video may very well consist of 10 pictures i would like somehow to eliminate all the useless info. we are talking about 100M, 200M, even 1G depending on the quality
[23:14:51 CEST] <georgios> can i transcode it somehow to have like, say 10 keyframes and 0 deltas?
[23:25:54 CEST] <mosb3rg> thanks for the response dhe
[00:00:00 CEST] --- Sun Jun 12 2016


More information about the Ffmpeg-devel-irc mailing list