[Ffmpeg-devel-irc] ffmpeg.log.20181101

burek burek021 at gmail.com
Fri Nov 2 03:05:02 EET 2018


[02:51:30 CET] <p1nky> looking at this .. https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos
[02:51:43 CET] <p1nky> anyone know of a way to create a dynamic version using an mpeg-ts as input ?
[02:52:26 CET] <p1nky> dynamic meaning not needing to specify how many inputs there are.. just all programs on the TS in their own box and as many boxes as needed within a given size where they all fit
[02:53:28 CET] <p1nky> it would also be neat to do the same but be able to just extract one frame out of each and write a png
[02:55:38 CET] <nicolas17> hmm programs from a ts...
[02:56:15 CET] <nicolas17> I mean you can write a script to generate an appropriate filtergraph if you tell it how many inputs you have
[02:56:48 CET] <nicolas17> but I never had to deal with multi-program transport streams
[02:56:53 CET] <p1nky> yeah i was thinking if i had to do that i would .. probably having to run something to peek at the ts first and get that info
[02:56:56 CET] <nicolas17> so I don't even know how to tell how many programs are there
[02:57:33 CET] <p1nky> i guess i can start with them static .. the examples all show multiple input files
[02:57:43 CET] <nicolas17> also what do you do if the number of programs is a prime number? :)
[02:57:54 CET] <p1nky> i am hoping that my one input can be the ts and i can reference them
[02:57:57 CET] <p1nky> hah
[02:58:21 CET] <p1nky> probably just round to an even #
[03:01:35 CET] <poutine> p1nky, I know ts duck allows for some stuff like that
[03:02:50 CET] <p1nky> oh neat, i ran across that the other day
[03:02:55 CET] <p1nky> but didn't think it did stuff like mosaics
[03:10:14 CET] <p1nky> oh i guess you were saying to inspect the ts
[03:10:29 CET] <p1nky> i wonder how you even reference a ts program in a filtergraph
[03:11:56 CET] <nicolas17> looks like stream specifiers have program_id
[03:12:23 CET] <nicolas17> in the wiki example, [0:v] is a stream specifier, meaning "video stream of input file 0"
[03:12:46 CET] <p1nky> i am trying [0:v:0]
[03:12:59 CET] <p1nky> working on an output issue now :) trying to multicast the output
[03:13:57 CET] <nicolas17> weird, docs say it should be v:0 to get the video stream of input 0, maybe it supports 0:v for compatibility?
[03:14:15 CET] <nicolas17> anyway
[03:14:23 CET] <nicolas17> for programs, it seems to be p:program_id[:stream_type[:stream_index]]
[03:22:20 CET] <p1nky> ooh .. not sure if this is it or not https://ffmpeg.org/ffmpeg-filters.html#toc-xstack
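A sketch of the dynamic part discussed above: once you know how many programs the TS has, generate the layout string for the xstack filter p1nky found. This is a minimal sketch; the function name and cell sizes are made up, while the `x_y|x_y|...` layout syntax follows the xstack filter documentation.

```python
import math

def xstack_layout(n, cell_w, cell_h):
    """Build an xstack layout string placing n inputs on a near-square grid.

    Each entry is "x_y" in pixels; entries are joined with '|', which is
    the separator the xstack filter expects. A prime n simply leaves the
    last row partially filled.
    """
    cols = math.ceil(math.sqrt(n))
    return "|".join(
        f"{(i % cols) * cell_w}_{(i // cols) * cell_h}" for i in range(n)
    )

# e.g. 4 inputs in 640x360 cells -> a 2x2 grid
print(xstack_layout(4, 640, 360))  # 0_0|640_0|0_360|640_360
```

The result would be spliced into something like `xstack=inputs=4:layout=0_0|640_0|0_360|640_360`; wiring each program's `[p:<id>:v]` label to an xstack input is left to the surrounding script.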
[03:24:13 CET] <garyserj> I have an mkv file, video(AVC) / Audio(Opus).  How do I convert it to mp4 ?
[03:30:06 CET] <ariyasu> ffmpeg -i input.mkv -vcodec copy -acodec copy output.mp4
[03:31:10 CET] <nicolas17> AVC is h264?
[03:32:08 CET] <garyserj> Is -c copy the same as -acodec copy -vcodec copy ?
[03:33:13 CET] <furq> yes
[03:33:18 CET] <furq> and subtitles as well
[03:33:48 CET] <furq> but you probably want -c:v copy -c:a aac
[03:36:06 CET] <garyserj> thanks yeah that worked well, that audio codec.
[10:59:18 CET] <termos> I want to create HLS segments with epoch time and segment numbers in the filenames. Tried using second_level_segment_index hls flag but %d is then starting to count from epoch time for when I started transcoding. Any way to make this number start at 0?
[11:11:32 CET] <garyserj> if I do ffmpeg -i a.mp4 -vf "transpose=1" b.mp4    Should I do it with -c copy? And how would I rotate it 90 degrees the other way instead?
[11:11:59 CET] <ariyasu> no
[11:12:11 CET] <ariyasu> you can't copy the video stream if you are using transpose
[11:12:16 CET] <ariyasu> it needs to be re-encoded
[11:12:33 CET] <ariyasu> you can do -acodec copy though, no need to re-encode the audio
[11:13:37 CET] <ariyasu> ffmpeg -i a.mp4 -vf "transpose=1" -vcodec libx264 -acodec copy b.mp4
[03:13:58 CET] <ariyasu> something like that; you may want to use some x264 options though, but that's the basic line
[11:31:03 CET] <BtbN> If the mp4 rotate metadata is enough, you can add that without transcoding
[11:31:10 CET] <BtbN> If it works depends entirely on the player
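The other direction garyserj asked about is `transpose=2` (per the transpose filter docs, 1 is 90° clockwise and 2 is 90° counter-clockwise). A hedged sketch building ariyasu's command for either direction; the helper name is made up:

```python
# transpose filter values: 1 = 90 degrees clockwise, 2 = 90 degrees counter-clockwise
TRANSPOSE = {"cw": "1", "ccw": "2"}

def rotate_cmd(src, dst, direction="cw"):
    """Build an ffmpeg argv that re-encodes the video (transpose requires
    a re-encode, as noted above) while stream-copying the audio."""
    return [
        "ffmpeg", "-i", src,
        "-vf", f"transpose={TRANSPOSE[direction]}",
        "-vcodec", "libx264",
        "-acodec", "copy",
        dst,
    ]
```

Run the result with your process runner of choice, e.g. `subprocess.run(rotate_cmd("a.mp4", "b.mp4", "ccw"), check=True)`.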
[12:05:18 CET] <th3_v0ice> I have a problem that is making me question my knowledge and sanity. When the program starts it uses 40mb of RAM, it then decodes 100 frames and stores them in vector<AVFrame> using now 400mb of RAM. So far so good. Stored frames are processed through some filter (scale) and then sent off to the encoder. While encoding, memory doubles, which I think is fine, so 800mb. Closing the encoder and
[12:05:18 CET] <th3_v0ice> filter frees the 400mb of memory. When I start av_frame_free(frame) for each frame from the vector, memory stays at 400mb+. The problem is that this memory stays constant, meaning my program is not increasing this 400mb memory usage even when I decode the next 100 frames. What could I be doing wrong?
[12:34:30 CET] <rolandaswb> hello how to run an ffmpeg command line in python?
[12:36:00 CET] <rolandaswb> what is the command line for adding audio to a video file?
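For the question above, the usual approach is Python's subprocess module. A minimal sketch, assuming ffmpeg is on PATH: `-map 0:v` / `-map 1:a` take the video from the first input and the audio from the second, `-c copy` avoids re-encoding, and `-shortest` stops at the shorter input. The helper name and the injectable `run` parameter are made up for illustration.

```python
import subprocess

def add_audio(video, audio, out, run=subprocess.run):
    """Mux an audio file into a video with stream copy; returns the argv used."""
    cmd = [
        "ffmpeg", "-y",
        "-i", video,
        "-i", audio,
        "-map", "0:v",   # video stream(s) from input 0
        "-map", "1:a",   # audio stream(s) from input 1
        "-c", "copy",    # no re-encode
        "-shortest",
        out,
    ]
    run(cmd, check=True)
    return cmd
```

Usage: `add_audio("video.mp4", "music.mp3", "out.mp4")`.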
[13:15:05 CET] <M6HZ> Hello, it looks like there is an issue with the way ffmpeg handles the options "-referer" and "-user_agent". These options seem to stick only for the first http request, if there is a redirection they seem to be dismissed.
[13:18:06 CET] <JEEB> quite possible. check the trac issue tracker for similar issue(s) and either comment on one of them or create a new issue with specifics.
[13:20:10 CET] <M6HZ> JEEB, ok, thanks.
[15:49:09 CET] <jngk> why can I find files for an encoder under libavcodec, when that encoder isn't listed by `configure --list-encoders`
[15:49:44 CET] <jngk> eg. libavcodec/truemotion{1,2}.c
[17:40:47 CET] <Dudemanguy> What's the fastest way to demux a large number of audio files?
[17:40:57 CET] <Dudemanguy> Using this: https://pastebin.com/raw/QF8yueC7 I can do ~12000 files in about 10 seconds
[17:41:11 CET] <Dudemanguy> It's not bad, but is it possible to be more efficient?
[17:45:03 CET] <Dudemanguy> oh and I'm not really sure if I'm guessing the input format name right. They're opus files and I think the decoder name is ogg, but I'm not 100% sure on that
[17:52:16 CET] <durandal_1707> ogg is container, so it can be either demuxer or muxer
[17:53:27 CET] <Dudemanguy> does ffmpeg not have a demuxer for opus containers?
[17:53:38 CET] <Dudemanguy> I don't see it in allformats.c at least
[18:00:05 CET] <relaxed> Dudemanguy: it's covered by the ogg demuxer
[18:00:26 CET] <iive> is there such thing as opus container? I thought they use ogg, because it is done by the same people.
[18:01:27 CET] <Dudemanguy> ah you're right it's an og container
[18:01:31 CET] <Dudemanguy> ogg* rather
[21:37:35 CET] <retal> Hi guys, I am trying live transcoding and restreaming video to RTMP H265 but receiving error: Video codec hevc not compatible with flv. But the same command works if I try: -f rtsp rtsp://
[21:38:46 CET] <JEEB> rtsp is not rtmp
[21:38:49 CET] <JEEB> rtmp is adobe's thing
[21:39:02 CET] <JEEB> rtmp is basically FLV over a protocol
[21:39:13 CET] <JEEB> and you will have to ask adobe to standardize HEVC in it
[21:39:42 CET] <JEEB> there were requests of using some made-up ID (there's just a number for each codec in FLV), but since it's not official adobe could (in theory) just use that number for something else
[21:39:59 CET] <JEEB> but most likely at this point even adobe is generally just wanting to ignore it had RTMP :P
[21:40:15 CET] <JEEB> but yes, pester Adobe about HEVC if you want it in FLV or RTMP
[21:44:06 CET] <JEEB> so when HEVC will be in Annex E of https://wwwimages2.adobe.com/content/dam/acom/en/devnet/flv/video_file_format_spec_v10_1.pdf
[21:44:11 CET] <JEEB> then it will work :P
[21:44:42 CET] <Mavrik> What's the usecase for RTMP these days?
[21:45:33 CET] <JEEB> mostly feeding to servers, although I'm seeing things being able to take MPEG-TS (over UDP etc) or fragmented MP4 over HTTP as well
[21:45:40 CET] <retal> JEEB, i know the rtsp and rtmp difference :) I just did a little research, looks like Adobe doesn't support 265 officially. Thank you
[21:45:54 CET] <JEEB> retal: that's literally what I just told you :P
[21:46:00 CET] <JEEB> but great that you figured it out as well
[21:46:01 CET] <retal> :)
[21:46:56 CET] <durandal_1707> lol
[21:48:24 CET] <retal> JEEB, I'm surprised, H265 is a 5-6 year old codec
[21:49:05 CET] <retal> maybe we need to wait 10 more years
[21:49:06 CET] <JEEB> that's because adobe gave up on RTMP for end user streaming :P
[21:49:27 CET] <JEEB> as I said, > but most likely at this point even adobe is generally just wanting to ignore it had RTMP
[21:49:54 CET] <JEEB> (and FLV while at it)
[21:50:32 CET] <JEEB> even that specification I linked first and foremost (and most of the document) documents F4V, which is flash's variant of fragmented ISOBMFF (aka "mp4")
[21:50:45 CET] <JEEB> s/flash/adobe/
[21:50:55 CET] <JEEB> and then an annex is given to FLV
[21:51:29 CET] <retal> thank you
[21:55:24 CET] <friki> Hi. I'm trying to build and test a DASH example with TS segments. Testing with ffplay I got "stream.mpd: Invalid data found when processing input". My mpd example looks like: https://pastebin.com/aB9YxXuk
[21:59:01 CET] <friki> The problem seems to be in the mpd, because ffplay doesn't request the segments. I'd appreciate it if someone can take a look at my example or share a working one. I've only found examples using mp4 segments, btw
[22:09:14 CET] <p1nky> hmm when trying to extract frames like this from an MPEG-TS
[22:09:20 CET] <p1nky> $ ffmpeg -i rtp://@224.0.1.2:5004 -vf select=1 -frames:v 1 -y x.png
[22:09:29 CET] <p1nky> is there a way to be sure they are 'clean' ?
[22:10:04 CET] <furq> define clean
[22:10:43 CET] <furq> i'm pretty sure the decoder will just discard everything before the first keyframe
[22:10:52 CET] <furq> so select is doing nothing there
[22:11:10 CET] <p1nky> hm yeah i'm also having a ton of problems being able to select by program id
[22:11:32 CET] <JEEB> ffmpeg.c has a -map selector for that
[22:11:43 CET] <JEEB> or well, it's by PID I think
[22:11:48 CET] <furq> can you use pids in filter input labels
[22:11:50 CET] <JEEB> still helpful I would say
[22:12:20 CET] <furq> i assume it's just the same stream specifier as -map
[22:12:24 CET] <p1nky> yeah things i am reading say p:1:v for program 1
[22:12:39 CET] <p1nky> but program isn't pid right? ffmpeg says program # 1 and such
[22:12:42 CET] <p1nky> i am sure thats not the actual pid
[22:12:46 CET] <p1nky> although maybe it is..
[22:13:17 CET] <furq> yeah that's the actual pid
[22:13:25 CET] <furq> https://trac.ffmpeg.org/wiki/Map#Example8
[22:13:25 CET] <JEEB> programs and stream ids are separate
[22:14:20 CET] <JEEB> http://up-cat.net/p/7106af10
[22:14:51 CET] <JEEB> in this ffprobe output Program XXX is the program, the 0xYYY after the stream index is the stream ID
[22:14:58 CET] <JEEB> which in case of mpeg-ts is the PID :P
[22:15:49 CET] <furq> yeah in that example you could use p:1048:0 or i:0x111
[22:16:51 CET] <p1nky> ok nice thanks
[22:17:07 CET] <p1nky> so select=1 is not actually selecting anything
[22:17:11 CET] <furq> or p:1048:v or even p:1048:v:0
[22:17:19 CET] <furq> select=1 is selecting everything
[22:17:40 CET] <p1nky> so -vf select p:1048:v in your example?
[22:17:45 CET] <p1nky> i was trying that as an argument to select
[22:17:47 CET] <furq> oh
[22:17:50 CET] <furq> no that's not what select does
[22:17:51 CET] <p1nky> confused as to -map
[22:18:28 CET] <furq> if you want to filter one stream then you'd use something like -filter_complex [i:0x111]yadif
[22:18:49 CET] <furq> select is for conditionally selecting frames
[22:20:27 CET] <p1nky> hm i thought there was a paste thing in the topic..
[22:20:30 CET] <p1nky> but i don't see it
[22:20:41 CET] <furq> any pastebin is fine
[22:21:12 CET] <p1nky> i only know of pastebin and its down :)
[22:21:23 CET] <p1nky>   Program 1
[22:21:24 CET] <p1nky>     Stream #0:7: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(top first), 1920x1080 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
[22:22:17 CET] <p1nky> should this work? -filter_complex '[p:1]'
[22:23:15 CET] <furq> probably p:1:v or 0:p:1:v
[22:23:40 CET] <p1nky> with [] around it? -filter_complex '[p:1:v]'
[22:23:44 CET] <p1nky> [AVFilterGraph @ 0x5556b6dab380] No such filter: ''
[22:23:51 CET] <furq> well yeah you need to put an actual filter in there
[22:23:57 CET] <furq> otherwise just use -map
[22:27:09 CET] <p1nky> ah ok trying -map 0:0 and i still seem to be getting random programs every time i run it
[22:27:29 CET] <furq> well yeah use -map 0:p:1:v
[22:28:41 CET] <p1nky> ah! thanks!
[22:28:50 CET] <p1nky> that is reliably giving me just one program
[22:29:13 CET] <p1nky> but its a crap frame a lot of times, and sometimes it matches no stream
[22:31:53 CET] <p1nky> i need to figure out xstack at some point ..
[22:32:01 CET] <p1nky> with one box per program
[22:35:35 CET] <p1nky> anyway thanks a lot for the help!
[22:35:54 CET] <p1nky> i need to do some reading on MPEG-TS i suppose
[22:49:28 CET] <friki> p1nky: check for missing frames in the original video with ffprobe (check dts/pts/duration values)
[22:50:41 CET] <p1nky> its a satellite stream
[22:50:50 CET] <friki> a dropped keyframe, maybe?
[22:51:27 CET] <friki> try recording a sample to disk with "-codec copy"
[22:51:36 CET] <p1nky> receiving it with a DVB-s receiver and then using dvbstream to multicast the whole TS
[22:55:22 CET] <friki> Can ffmpeg read stream from your receiver? ffmpeg -i yourReceiver -codec copy -t 10 record.ts
[22:55:45 CET] <friki> I've used "tzap" to receive TDT
[22:56:23 CET] <p1nky>  yeah reads it fine, prints all 8 PIDs on this TS
[22:56:37 CET] <p1nky> vlc always looks fine
[22:56:48 CET] <friki> TDT => DTT
[22:56:50 CET] <p1nky> i am sure theres noise that mpeg deals with in some way
[22:57:22 CET] <p1nky> when actually watching video, but seems like this method is "get the very next frame no matter its quality and output a png of it"
[22:58:55 CET] <p1nky> various messages .. PES packet size mismatch, RTP missed 19 packets, jitter buffer full, error while decoding MB 0 0, bytestream 28483 .. corrupt input packet in stream 0
[22:59:48 CET] <friki> i'd bet on signal reception problems
[23:00:03 CET] <p1nky> this is a big stream, 30Msps QPSK .. 8 1920x1080 video streams with multiple audio each
[23:00:37 CET] <p1nky> well yeah its never going to be perfect even if i had a bigger dish .. but between FEC at the DVB level and whatever is in mpeg to handle it ..
[23:00:42 CET] <p1nky> actually watching streams off it looks perfect
[23:00:50 CET] <p1nky> but apparently not so easy to just capture an arbitrary frame
[23:01:25 CET] <p1nky> really i don't care a ton about capturing frames, what i want to do is use xstack to make a grid of every program in the stream at a reduced frame rate
[23:01:40 CET] <p1nky> so that i can easily monitor each
[23:02:28 CET] <friki> is it an interlaced signal? do you deinterlace it before exporting the png?
[23:02:39 CET] <p1nky> oh, i don't think so :)
[23:02:47 CET] <p1nky> although sometimes it looks totally perfect..
[23:03:01 CET] <p1nky> other times it's cut off at some point in the stream i suppose and some portion is gray
[23:03:09 CET] <friki> test with: -vf 'yadif'
[23:04:11 CET] <p1nky> ffmpeg -i rtp://@224.0.1.2:5004 -map 0:p:1:v  -vf select=1 -frames:v 1 -vf 'yadif' -y x.png
[23:04:20 CET] <p1nky> nope still nasty most of the time
[23:04:43 CET] <p1nky> sometimes fails entirely with: Stream map '0:p:1:v' matches no streams.
[23:05:15 CET] <p1nky> i assume because theres only a short window of time where its trying to capture a packet with that PID and if one doesn't come it stops
[23:05:36 CET] <p1nky> i think potentially if the image isn't moving theres no packets coming? or very few ..
[23:05:39 CET] <friki> if you can share a recorded sample i can check for missing frames using ffprobe
[23:06:17 CET] <p1nky> let me try to do that .. but need to get back to something, am at work :( hehe
[23:07:30 CET] <p1nky> what args for ffprobe?
[23:09:26 CET] <friki> i'll start with: ffprobe -i sample.ts -show_packets -print_format csv > sample.csv
[23:10:30 CET] <friki> it will write a line for each packet. I'll check for "jumps" in pts column
[23:11:46 CET] <p1nky> hm interesting .. anyway i recorded a 100M sample by just running dumprtp > x.mpg .. if you have a way for me to send it and you're really interested, i can :)
[23:11:49 CET] <friki> "jump" -> an unexpectedly big gap between two consecutive video frame pts values. The gap should match the packet duration
[23:11:50 CET] <p1nky> which column is that?
[23:12:03 CET] <p1nky> ah pastebin works now
[23:12:36 CET] <p1nky> exceeded 512k :)
[23:12:51 CET] <p1nky> for pastebin
[23:15:11 CET] <friki> column 4 is pts
[23:16:33 CET] <friki> column 10 is duration (usually same value)
[23:16:55 CET] <p1nky> https://pastebin.com/gwFXxRh2
[23:17:15 CET] <p1nky> col 10 is all N/A
[23:19:12 CET] <friki> Only 29 frames for "video,0" stream... None of them are key frame
[23:19:47 CET] <p1nky> some of these right now are liable to be colorbars, so not moving at all
[23:19:52 CET] <p1nky> i suspect that has something to do with it
[23:19:53 CET] <friki> in fact there are no K frames in any video stream
[23:20:07 CET] <p1nky> that was a 100mb file but i had to only take some of the lines
[23:20:10 CET] <p1nky> let me grep for video,0
[23:20:31 CET] <p1nky> how do you tell a key frame?
[23:20:42 CET] <p1nky> 716 lines for video,0
[23:20:45 CET] <friki> 'K' in the last column
[23:21:11 CET] <p1nky> oh yeah none for video,0 in the whole file
[23:21:40 CET] <p1nky> plenty for others .. K_ anyway but actually all audio
[23:23:38 CET] <friki> Maybe the encoder is using a "key lines wave" (i don't recall the correct naming). It's a technique to avoid transmission bandwidth peaks
[23:24:07 CET] <friki> i mean, you should check the gap between column 4 values. should be stable
[23:24:49 CET] <friki> If FPS is 50, the gap between pts should be 0.02
[23:24:59 CET] <friki> 1/$FPS ;-)
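The grep/cut inspection described here can be scripted. A hedged sketch that scans an `ffprobe -show_packets -print_format csv` dump for pts discontinuities, assuming column 4 is pts as discussed, that field order matches this ffprobe build, and a 90 kHz clock where one frame at 29.97 fps is 3003 ticks; the function name and the two-frame threshold are made up:

```python
import csv

def pts_jumps(lines, stream="video,0", max_gap=2 * 3003):
    """Yield (prev_pts, pts) pairs whose gap is negative or unusually large.

    3003 is one frame at 29.97 fps on a 90 kHz clock; a max_gap of two
    frames flags drops while tolerating some reordering jitter.
    """
    prev = None
    for row in csv.reader(lines):
        # rows look like: packet,<codec_type>,<stream_index>,<pts>,...
        # i.e. pts in column 4 (index 3), as in the discussion above
        if len(row) < 4 or ",".join(row[1:3]) != stream:
            continue
        try:
            pts = int(row[3])
        except ValueError:      # e.g. "N/A"
            continue
        if prev is not None and not (0 <= pts - prev <= max_gap):
            yield prev, pts
        prev = pts
```

Usage: `list(pts_jumps(open("sample.csv")))` would list the suspicious pts pairs, such as the backwards 6174545206 -> 6174537698 jump found below.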
[23:25:18 CET] <p1nky> its 29.97 fps according to ffmpeg
[23:25:28 CET] <p1nky> Stream #0:8: Video: h264 (High), yuv420p(top first), 1920x1080 [SAR 1:1 DAR 16:9], 29.97 fps, 59.94 tbr, 90k tbn, 59.94 tbc
[23:25:56 CET] <friki> so... 0.033-0.034 :-P
[23:25:58 CET] <p1nky> but yeah grep video,0 x.csv | cut -d, -f4
[23:26:02 CET] <p1nky> gap is usually about 1-2
[23:26:08 CET] <p1nky> heres one where its 10
[23:26:25 CET] <p1nky> gah, theres one thats 6174545206 to 6174537698
[23:26:30 CET] <friki> hehe, you got it
[23:27:06 CET] <friki> BTW: 5th column is decimal value. Same data as col 4
[23:27:07 CET] <p1nky> thanks for all the help :)
[23:27:17 CET] <p1nky> ah nice
[23:27:39 CET] <friki> a pleasure
[23:28:02 CET] <friki> interesting transmission mode without keyframes. love it
[23:29:25 CET] <p1nky> :)
[00:00:00 CET] --- Fri Nov  2 2018

