[Ffmpeg-devel-irc] ffmpeg.log.20181002

burek burek021 at gmail.com
Wed Oct 3 03:05:02 EEST 2018


[03:56:58 CEST] <hendry> just made a screen recording with iOS *with audio* but i can't hear the audio on my Arch Linux machine https://media.dev.unee-t.com/2018-10-02/report.mp4
[03:57:29 CEST] <hendry> is it a new AAC or something? Stream #0:2(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 109 kb/s (default)
[03:57:53 CEST] <hendry> I can hear the audio from MacOS. So I am a little perplexed.
[03:58:00 CEST] <hendry> Running ffmpeg over the file doesn't help
[04:13:09 CEST] <fella> Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 2 kb/s (default)
[04:13:24 CEST] <fella> 2 kb/s seems a bit low!?
[04:19:06 CEST] <fella> did you check that it's the same file on both systems (just to be sure, sry): 1d30f66ca4a3246f0cd1823d5f35368c  report.mp4 (md5)
[05:21:36 CEST] <hendry> fella: this is the original file from the device https://media.dev.unee-t.com/2018-10-02/reports.mp4
[05:22:12 CEST] <hendry>    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 2 kb/s (default)
[05:22:14 CEST] <hendry>     Stream #0:2(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 109 kb/s (default)
[05:22:38 CEST] <hendry> So there are two audio streams and I am guessing it tries to play back the 2 kb/s one?
[05:26:41 CEST] <hendry> ok, I fixed it with `ffmpeg -i reports.mp4 -map 0:0 -map 0:2 -acodec copy -vcodec copy new_file.mp4`
[05:41:45 CEST] <fella> hendry: uhm, that one that you uploaded had one video and one audio stream: https://transfer.sh/EXed1/report.txt
[05:42:43 CEST] <hendry> fella: sorry, that one was the ffmpeg treated file. https://media.dev.unee-t.com/2018-10-02/reports.mp4 is the original.
[05:45:25 CEST] <fella> kinda funny - mpv seems to pick the wrong audio stream while ffplay chooses the right one, yet ffmpeg prolly the wrong one ;)
[07:26:48 CEST] <laertus> my hardware mp3 player doesn't seem to like variable-rate mp3's (it won't fast-forward/rewind them properly, and shows the length of the mp3 wrong).. so i need to convert my variable-rate mp3's to fixed-rate mp3's.. so i'm trying "ffmpeg -i input.mp3 -ab 128k -ar 44100 output.mp3" but getting this error:  "[mp3 @ 0x55b549f84610] Frame rate very high for a muxer not efficiently supporting it.  Please consider
[07:26:50 CEST] <laertus> specifying a lower framerate, a different muxer or -vsync 2"  and at the end of the encoding it says "[libmp3lame @ 0x55b549f96350] Trying to remove 1152 samples, but the queue is empty"
[07:27:01 CEST] <laertus> are these anything to be concerned about?
[07:46:08 CEST] <laertus> also, how can i delete any attached images from mp3 files when i re-encode them?
[08:56:47 CEST] <laertus> looks like passing the -vn option to ffmpeg made it strip the attached image from the mp3 when it re-encoded it
[11:10:17 CEST] <Austin___> hi all, i've got an rtsp stream from an IP camera which i can only have one open connection to. How can I have ffmpeg "rebroadcast" this stream while also writing it to disk? Or is this not possible?
[11:12:29 CEST] <furq> -i rtsp://foo -c copy out.mkv -c copy rtsp://bar
[11:12:32 CEST] <furq> something like that
[11:13:06 CEST] <Austin___> oh, didn't realise I could use multiple -c args
[11:13:13 CEST] <Austin___> ta!
[11:34:44 CEST] <Yukkuri> hi, i have a general question regarding chroma subsampling. Is a 4:4:4 10bit profile in any way meaningful? Shouldn't 8bit 4:4:4 cover the whole RGB range?
[11:35:44 CEST] <BtbN> The point of 10 bit is to have more than 8 bit per color.
[11:36:21 CEST] <Yukkuri> yes, but i see that as useful only on lesser profiles, such as 4:2:2 or 4:2:0
[11:36:56 CEST] <Yukkuri> where 10bit, while still requiring less than 8bit 4:4:4, could represent more color tones in the parts of the spectrum important to the human eye
[11:37:14 CEST] <Yukkuri> than its 8bit 4:2:2 or 4:2:0 counterpart
[11:37:29 CEST] <BtbN> the subsampling doesn't have anything to do with that.
[11:38:02 CEST] <Yukkuri> how is that so?
[11:38:34 CEST] <BtbN> It doesn't change how many bits per color are used; it's about pixel subsampling.
[11:40:00 CEST] <Yukkuri> but i thought of 4:4:4 as lossless - 8+8+8 (RGB8) or 10+10+10 (RGB10). and of 4:2:2 as a bit lossy -- 8+4+4 or 10+5+5
[11:40:16 CEST] <Yukkuri> while storing less information, it shifts the colorspace to more important parts
[11:41:40 CEST] <Yukkuri> so with 10bit 4:2:2 you store 20 bits of color information per pixel, while with 8bit 4:4:2 it is 24
[11:41:51 CEST] <Yukkuri> err
[11:41:58 CEST] <Yukkuri> 8bit 4:4:4
[11:43:16 CEST] <Yukkuri> but i'm not sure if RGB10 is of any practical home use, since all drawable surfaces are using RGB(A) to represent colors
[11:44:06 CEST] <Yukkuri> however, with lossy 4:2:2 and less profiles, advantage is more clear. not full RGB, but more important color tones.
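[editor's note] Yukkuri's per-pixel arithmetic above can be sanity-checked with a small sketch. This averages the stored bits over the J×2 pixel block that the J:a:b subsampling notation describes (the function name is just illustrative):

```python
def avg_bits_per_pixel(bit_depth, subsampling):
    # J:a:b notation describes chroma sampling over a block of J x 2 pixels:
    # a chroma samples in the first row, b in the second (per chroma plane).
    j, a, b = (int(x) for x in subsampling.split(":"))
    luma_samples = 2 * j            # one luma sample per pixel
    chroma_samples = 2 * (a + b)    # two chroma planes (Cb and Cr)
    pixels = 2 * j
    return bit_depth * (luma_samples + chroma_samples) / pixels

avg_bits_per_pixel(8, "4:4:4")    # 24.0
avg_bits_per_pixel(10, "4:2:2")   # 20.0
avg_bits_per_pixel(8, "4:2:0")    # 12.0
```

This confirms the numbers in the chat: 10bit 4:2:2 averages 20 bits/pixel versus 24 for 8bit 4:4:4 — but note, as BtbN says below, that this is a storage-size comparison only; subsampling and bit depth remain independent axes.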
[12:01:08 CEST] <furq> it depends what you mean by meaningful i guess
[12:01:26 CEST] <furq> it is distinct in a meaningful way but i don't think anyone actually uses it for anything
[12:02:05 CEST] <Mavrik> Not sure why you think subsampling and color bits are connected
[12:02:14 CEST] <Mavrik> Subsampling just uses same color for multiple pixels
[12:02:26 CEST] <Mavrik> And 10-bit color means that you can describe a color with greater precision
[12:02:43 CEST] <Mavrik> Which is important when you expand your color space - think HDR.
[12:02:50 CEST] <Yukkuri> when 4:4:4 subsampling is used, there is no actual "sub" sampling in my understanding
[12:02:54 CEST] <Yukkuri> every pixel is sampled
[12:05:48 CEST] <Yukkuri> so 10bit 4:4:4 can be used for some rare displays that go beyond RGB range in their pixels?
[12:06:35 CEST] <Mavrik> Yes, every pixel is sampled, but there's still a difference whether the color for that pixel is described with 8 or 10 bits.
[12:06:41 CEST] <Mavrik> No idea what you mean by RGB range.
[12:06:54 CEST] <Mavrik> 10 bits just gives you greater precision to describe the color.
[12:07:07 CEST] <Yukkuri> i mean RGB8, as in #FF0000
[12:07:41 CEST] <Yukkuri> i find it hard to draw something on a display with more color variance than 256 per component
[12:08:19 CEST] <Yukkuri> be it a GL surface or some software renderer, they all operate on RGB8 buffers when dumping data to the display
[12:08:35 CEST] <Yukkuri> unless it is some special hardware
[12:08:45 CEST] <BtbN> 10 or even 12 bit color depth is pretty common
[12:08:58 CEST] <Yukkuri> when sampled for every pixel?
[12:09:16 CEST] <BtbN> that's still not connected at all to subsampling...
[12:10:07 CEST] <Yukkuri> i mean, why have RGB10 for every pixel, when most displays can represent only RGB8?
[12:10:22 CEST] <BtbN> Just because your displays don't doesn't mean they don't exist.
[12:10:54 CEST] <Mavrik> There's a lot of displays that benefit from more precision
[12:11:16 CEST] <Mavrik> And I think you need to read up on what color spaces do.
[12:11:25 CEST] <Mavrik> Because RGB8 can mean a lot of things depending on context.
[12:11:33 CEST] <Mavrik> #FF0000 isn't the same on all displays and in all color spaces.
[12:12:17 CEST] <Mavrik> And yeah, HDR displays are a thing (especially on TVs) and they need those 10 bits in HDR mode.
[12:13:05 CEST] <Mavrik> (standardized as BT.2100 for HDR10 if I remember correctly)
[12:13:19 CEST] <Yukkuri> i see
[12:15:36 CEST] <Yukkuri> but is RGB <-> YUV conversion lossless, when both spaces are represented by 8-bit components, or is there no direct mapping due to the different orientation and size of these two spaces? also, when color is stored as 3-component 8-bit YUV per pixel, is it more often actual YUV or just RGB using the same space?
[12:16:35 CEST] <Yukkuri> *direct mapping in said precision, since numerical losses are very likely
[12:17:29 CEST] <Yukkuri> *using the same data space (not color space)
[12:18:20 CEST] <Mavrik> IIRC there are lossless transforms between RGB <-> YUV
[12:18:39 CEST] <Mavrik> But that's a bit out of my area of expertise
[12:18:49 CEST] <furq> not with the same bit depth
[12:18:52 CEST] <JEEB> YCgCo with +1 bit depth
[12:18:55 CEST] <JEEB> IIRC was lossless?
[12:19:02 CEST] <JEEB> although I think that was for BT.709/Gamma?
[12:19:43 CEST] <JEEB> although it might be unrelated to the colorspaces/etc
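[editor's note] JEEB's recollection can be verified: the reversible YCgCo-R variant (a lifting-based transform; the function names here are mine) roundtrips 8-bit RGB exactly, at the cost of Cg and Co needing one extra bit. A minimal sketch:

```python
def rgb_to_ycgco_r(r, g, b):
    # YCgCo-R: lifting-based, exactly reversible variant of YCgCo.
    # For 8-bit RGB input, Cg and Co span 9 bits (the "+1 bit depth").
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, cg, co

def ycgco_r_to_rgb(y, cg, co):
    # Exact inverse: each lifting step is undone in reverse order,
    # so the shift rounding cancels and no information is lost.
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b
```

Whether this counts as "lossless RGB <-> YUV" for Yukkuri's purposes depends on the codec supporting YCgCo signaling; the classical BT.601/BT.709 matrices at equal bit depth are not exactly invertible in integer arithmetic.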
[12:22:34 CEST] <Yukkuri> also, is it a common trick for codecs to store RGB data in place of YUV data, when the amount of information is the same, to avoid loss?
[12:58:27 CEST] <Dexx1_> I just spent the last 4 hours googling trying to figure out how to loop a single MP4 video file X times .... and no luck. I assume you fine folks know how to make such magic happen?
[12:59:00 CEST] <Shibe> how can I get a list of encoder options for a codec, say h264_vaapi?
[12:59:30 CEST] <JEEB> f.ex. `-h encoder=libx264`
[12:59:37 CEST] <JEEB> that lists the AVOptions that module has
[12:59:49 CEST] <JEEB> (there are also generic options which are separate)
[13:25:41 CEST] <King_DuckZ> hi, I'm looking at this function in our code https://alarmpi.no-ip.org/kamokan/cm?cc which is trying to tell how many frames are in a given input file I think
[13:26:31 CEST] <King_DuckZ> I'm getting deprecation warnings on av_stream_get_r_frame_rate and I don't know how to rewrite this for latest ffmpeg
[13:27:15 CEST] <King_DuckZ> I can't even find that function in the documentation for 4.0; there's an entry for 3.4 but it says absolutely nothing
[13:39:44 CEST] <relaxed> Dexx1_: there's an example on how to do it here, https://trac.ffmpeg.org/wiki/Concatenate
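[editor's note] For Dexx1_'s loop-X-times case, the concat demuxer approach from that wiki page boils down to a list file naming the same input N times, which can then be stream-copied without re-encoding. A sketch (filenames are hypothetical; recent ffmpeg builds also offer `-stream_loop`):

```python
def write_concat_list(path, input_name, n):
    # One "file '...'" line per repetition; then run, e.g.:
    #   ffmpeg -f concat -safe 0 -i list.txt -c copy looped.mp4
    with open(path, "w") as f:
        for _ in range(n):
            f.write(f"file '{input_name}'\n")

write_concat_list("list.txt", "input.mp4", 5)
```

`-c copy` keeps this fast since nothing is decoded; re-encode only if the player needs uniform timestamps across the loop boundary.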
[14:11:58 CEST] <bodqhrohro> I missed probably the main question. Is the geq filter GPU-powered?
[15:56:01 CEST] <Hello71> in general unless it specifically says it is, it is not
[16:04:28 CEST] <King_DuckZ> have I missed any reply while I was offline?
[16:04:50 CEST] <King_DuckZ> my question was about a snippet of code I pasted earlier
[16:52:04 CEST] <microcolonel> I figured you folks would be likely to know: is there some way with MPEG TS to have a video stream start at some seek point into the underlying stream?
[16:52:30 CEST] <microcolonel> i.e. 64 P frames in
[16:53:47 CEST] <microcolonel> I have a DASH stream, and I'd like to be able to rapidly produce clip files from already-downloaded parts without reencoding the first part.
[16:56:59 CEST] <Mavrik> Why not just drop the packets?
[16:57:54 CEST] <microcolonel> (and I'd like accurate seeking at the beginning of the clips)
[17:28:21 CEST] <Daisae> I am having difficulty selecting a stream. I have been reading the user documentation. In an AV file with multiple audio streams, the unwanted audio stream is selected.
[17:28:33 CEST] <Daisae> ffmpeg/ffprobe shows the audio Stream I want as : Stream #0:2
[17:28:42 CEST] <Daisae> ffmpeg -y -noaccurate_seek -ss 1:00 -i Input.mkv -codec:V copy -codec:a:2 copy -t 1 Output.mkv
[17:28:58 CEST] <Daisae> In output: Stream mapping:  Stream #0:0 -> #0:0 (copy)  Stream #0:1 -> #0:1 (ac3 (native) -> vorbis (libvorbis))
[17:31:03 CEST] <JEEB> you haven't done any mapping, so automagically ffmpeg.c will pick the "best" track for audio and video
[17:31:54 CEST] <JEEB> after the input, -map 0:v (maps all video tracks, most likely just 1), -map 0:a:N where N is the zero-based index of the audio track :P
[17:32:06 CEST] <JEEB> since your input seems to be #0
[17:32:14 CEST] <JEEB> (that is why both maps begin with 0)
[17:41:21 CEST] <Daisae> Thanks. That solved that problem.
[20:29:37 CEST] <ilushka4> hey guys, is there anyone who can help me with ffmpeg-python?
[20:50:56 CEST] <poutine> ilushka4, You never know unless you ask, also is it specifically with this library's bindings, or something ffmpeg is doing itself?
[20:59:10 CEST] <ilushka4> poutine, hey, here is my code: "https://paste.pound-python.org/show/JjwqE1zgo8pTU9r590hn/" I'm trying to open mms stream but I get an error of "AttributeError: module 'ffmpeg' has no attribute 'input'"
[21:00:20 CEST] <poutine> paste is broken ilushka4
[21:01:08 CEST] <ilushka4> https://paste.pound-python.org/show/JjwqE1zgo8pTU9r590hn/
[21:03:23 CEST] <poutine> https://github.com/jiashaokun/ffmpeg
[21:03:28 CEST] <poutine> I suggest following the examples there
[21:03:36 CEST] <poutine> perhaps using a different library as this one doesn't seem that common
[21:04:15 CEST] <poutine> but in that library, the ffmpeg module does not have a "input" method, the ffmpeg module has a stream which when initialized has an input method
[21:04:27 CEST] <poutine> see the very last section (I cannot read Chinese)
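[editor's note] If the binding keeps misbehaving, a stdlib-only fallback is to drive the ffmpeg binary directly with subprocess. A minimal sketch (URL, duration, and options are illustrative; the actual invocation is left commented out so the command can be inspected first):

```python
import subprocess

def grab_stream(url, out_path, seconds=10):
    # Build an ffmpeg command roughly equivalent to opening the stream
    # and stream-copying a short clip of it to disk.
    cmd = ["ffmpeg", "-y", "-i", url, "-t", str(seconds), "-c", "copy", out_path]
    # subprocess.run(cmd, check=True)  # uncomment to actually execute
    return cmd

grab_stream("mms://example.com/stream", "out.mp4")
```

This sidesteps binding/version mismatches entirely, at the cost of building argument lists by hand.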
[00:00:00 CEST] --- Wed Oct  3 2018


More information about the Ffmpeg-devel-irc mailing list