[Ffmpeg-devel-irc] ffmpeg.log.20171121

burek burek021 at gmail.com
Wed Nov 22 03:05:01 EET 2017


[00:06:17 CET] <koyglreg> Downloaded a long highway driving video from YouTube (4K).  The blue sky shows a lot of banding.  What's the best debanding method?  High bit depth is acceptable.  Also, I plan to downscale the video to 2.5K.
[00:06:58 CET] <koyglreg> Although actually, my monitor is a typical 8-bit monitor, I believe.
[00:10:33 CET] <sfan5> gradfun
[00:10:42 CET] <sfan5> if your player has built-in debanding use that instead
[00:10:49 CET] <koyglreg> vlc?
[00:14:18 CET] <sfan5> no not that one :P
[00:23:21 CET] <dystoipa_> hello, i extract 888 subs like this
[00:23:31 CET] <dystoipa_> ffmpeg -txt_page 888 -txt_format text -fix_sub_duration -i test.ts -vn -an -scodec srt -y test.srt -hide_banner
[00:24:37 CET] <dystoipa_> but i have some files that are breaking the extraction. if a file has dvb and 888 subs, and the dvb are on map #0:2 and the 888 subs are on map #0:3, then it will try to extract the dvb subs with the above command and fail
[00:25:04 CET] <dystoipa_> Stream #0:2[0x90e](eng): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
[00:25:05 CET] <dystoipa_> Stream #0:3[0x90f](eng): Subtitle: dvb_teletext ([6][0][0][0] / 0x0006)
[00:25:36 CET] <dystoipa_> is there any way to extract only the 888 teletext subs and not the dvb ones, without specifying -map?
[00:28:55 CET] <Dalboz989> If I am using ffmpeg to record an RTSP stream to an mp4 file as one process could I as another process copy the last 10 seconds of the mp4 file while it is still being written to by the first process?
[00:52:02 CET] <relaxed> dystoipa_: write a script that uses ffprobe to identify which stream id you need to extract
[01:38:02 CET] <dystoipa_> relaxed are you still here
[01:38:18 CET] <dystoipa_> ffprobe test.ts > test.txt
[01:38:47 CET] <dystoipa_> this doesn't work :( how can i dump the ffprobe output to a text file so i can grep it and set stuff as variables?
[01:56:00 CET] <SortaCore> @Dalboz989 I wouldn't think that would be possible, you'd be better off outputting to something like a named pipe and just caching 10 seconds
[01:59:05 CET] <kerio> Dalboz989: have a secondary output with the segment muxer
[02:18:06 CET] <relaxed> dystoipa_: ffprobe -i input| awk '/teletext/ {print substr($2,2,3)}'
[02:18:35 CET] <relaxed> er, ffprobe -i input 2>&1| awk '/teletext/ {print substr($2,2,3)}'
[02:26:20 CET] <dystoipa_> thanks relaxed :)
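relaxed's one-liner above can be sanity-checked without a live .ts file; the sample stream line below is the one dystoipa_ pasted earlier, hard-coded here for illustration (ffprobe writes stream info to stderr, hence the 2>&1 in the corrected command):

```shell
# Pull the "0:3" stream id out of ffprobe's teletext line. Field $2 of the
# line is "#0:3[0x90f](eng):", so substr($2, 2, 3) yields "0:3".
line='Stream #0:3[0x90f](eng): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Stream #0:3[0x90f](eng): Subtitle: dvb_teletext ([6][0][0][0] / 0x0006)'
id=$(echo "$line" | awk '/teletext/ {print substr($2, 2, 3)}')
echo "$id"   # prints 0:3
```

The id can then be fed back as `-map "$id"`; note the fixed-width substr would need adjusting for a stream index of two or more digits (e.g. `#0:12`).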
[03:01:56 CET] <Dalboz989> SortaCore: Just to confirm what your suggestion is.. Process 1 stream the RTSP to a named pipe with a 10 second buffer in ffmpeg.. Process 2 captures for 10 seconds the output of the pipe.. and then Process 3 saves the output of the pipe to a file..
[03:03:04 CET] <Neo-Galaxy> Hi.
[03:04:38 CET] <Dalboz989> kerio: Can you elaborate on using the segment muxer? would I stream the data to say a bunch of 1 second long files and then grab the last 10 when an event happens.. and later on i recombine all of them into an mp4 for the whole stream?
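A sketch of kerio's segment-muxer idea, hedged: the RTSP URL and filenames are placeholders, and the recording command is shown as a comment since it needs a live source; the runnable part only demonstrates picking the newest ten segments.

```shell
# Recording side (needs a live source, so shown for context only):
#   ffmpeg -i rtsp://camera/stream -c copy -map 0 \
#          -f segment -segment_time 1 -reset_timestamps 1 seg%05d.mp4
# Event side: the zero-padded names sort chronologically, so the 10 most
# recent segments are simply the last 10 in a sorted listing
# (simulated here with empty files):
dir=$(mktemp -d)
for i in $(seq 0 11); do
    touch "$dir/$(printf 'seg%05d.mp4' "$i")"
done
ls "$dir"/seg*.mp4 | tail -n 10
```

The grabbed segments (and later the whole recording) can be recombined losslessly with the concat demuxer, e.g. `ffmpeg -f concat -safe 0 -i list.txt -c copy out.mp4`, where list.txt holds one `file 'segNNNNN.mp4'` line per segment.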
[03:06:15 CET] <Neo-Galaxy> I have a question, I'm trying to copy the cover of one sound file to another while I convert it to another format, how can I do that?
[03:07:27 CET] <Neo-Galaxy> I'm using this command for the conversion "ffmpeg -i test.mp3 -vn -acodec flac test.flac".
[03:12:02 CET] <relaxed> Neo-Galaxy: last I heard ffmpeg can't include cover art in flac
[03:13:17 CET] <atomnuker> rcombs: RCOOOOOOMBS
[03:13:23 CET] <rcombs> what'd I do this time
[03:13:54 CET] <atomnuker> link me those flac coverart patches + the cue demuxer, I shall finish them
[03:14:17 CET] <Neo-Galaxy> Really? That would be a pity.
[03:14:48 CET] <relaxed> sounds like support may be added soon :)
[03:15:11 CET] <Neo-Galaxy> I know how to add them with other programs, but I wanted to see if ffmpeg could do it while converting them.
[03:15:39 CET] <Neo-Galaxy> Hopefully, relaxed.
[03:15:42 CET] <rcombs> I dunno if there's actually anything left but they're all here https://patchwork.ffmpeg.org/project/ffmpeg/list/?submitter=79
[03:16:46 CET] <atomnuker> none of them got merged, right?
[03:18:44 CET] <Neo-Galaxy> Another question, how can I specify options for a codec? I want FLAC to use maximum compression, but I do not know how it is done. I know that "-compression_level 12" will work, but I don't know where I have to write it to make it work.
[03:19:09 CET] <relaxed> ffmpeg -h encoder=flac
[03:19:35 CET] <relaxed> encoding options go after the input
[03:21:35 CET] <atomnuker> rcombs: why was that last patch to generate timestamps needed?
[03:21:49 CET] <atomnuker> was it to fix segment timestamps so they start at 0?
[03:21:59 CET] <rcombs> 0, or whatever else they start at
[03:22:08 CET] <rcombs> otherwise it just passes through the timestamps from the encoder
[03:22:14 CET] <relaxed> Neo-Galaxy: add -lpc_type 3 -lpc_passes 5
[03:22:21 CET] <rcombs> (why does FLAC have timestamps in-frame? ¯\_(ツ)_/¯)
[03:22:27 CET] <Neo-Galaxy> In which part of this command should I put it? "ffmpeg -i test.mp3 -vn -acodec flac test.flac"
[03:22:55 CET] <atomnuker> rcombs: decoders/demuxers should still be fine with any arbitrary offset added to the timestamps, no?
[03:23:17 CET] <relaxed> ffmpeg -i test.mp3 -vn -acodec flac -lpc_type 3 -lpc_passes 5 -compression_level 12 test.flac
[03:23:34 CET] <Neo-Galaxy> Thanks.
[03:23:36 CET] <rcombs> atomnuker: well let's say you use the segment muxer to split a cue (this was my use-case)
[03:23:44 CET] <relaxed> but why are you going from mp3 to flac?
[03:24:07 CET] <rcombs> atomnuker: without that patch, you'd end up with a series of files, each with timestamps starting at [wherever that track started in the cue]
[03:24:19 CET] <Neo-Galaxy> MP3 was an example.
[03:24:28 CET] <relaxed> Neo-Galaxy: you're just increasing its size with no gain
[03:24:47 CET] <relaxed> ok
[03:24:47 CET] <atomnuker> rcombs: yeah, and that's okay, players look at the delta rather than the absolute value of timestamps
[03:25:34 CET] <rcombs> I was getting duration=(actual duration + start time) in lavf
[03:27:08 CET] <atomnuker> really?
[03:27:17 CET] <atomnuker> that sounds like a bug in the demuxer
[03:38:28 CET] <Neo-Galaxy> Well, I'm leaving, thanks for the help.
[03:58:34 CET] <TD-Linux> rcombs, not in frame timestamps but in frame durations
[03:59:17 CET] <TD-Linux> ah no it has sample number too nvm
[03:59:53 CET] <TD-Linux> flac is basically its own container too, and when people mapped it into other containers it was easier just to put the frames in verbatim rather than try to partially rewrite them to remove potentially redundant data
[04:18:18 CET] <rcombs> TD-Linux: and also people treated the framing as part of the codec-level packets, rather than part of the container
[04:52:10 CET] <dystoipa_> thanks again relaxed :) https://i.imgur.com/NZuTCzW.png
[04:52:39 CET] <dystoipa_> it was more awkward in windows without awk but i got there in the end
[11:27:54 CET] <fps> hi, my video material source gives me mono8 pixel format.. is there an efficient way of converting these to yuv420p, which is the format the ffmpeg api expects..
[11:28:33 CET] <fps> i guess i can write it myself, but if someone else has some tricks up their sleeves to make it quick... :)
[11:29:00 CET] <sfan5> isn't that exactly what swscale does?
[11:30:43 CET] <fps> is that an API function?
[11:32:00 CET] <fps> i'm trying to work from the example video encoder in the docs folder
[11:32:03 CET] <sfan5> a library https://ffmpeg.org/doxygen/trunk/group__libsws.html#details
[11:33:37 CET] <fps> oh, ok. thanks. will take a look :)
[13:56:19 CET] <vilva> hi, i have a simple mp3 -> pcm sample use case and thought I'd try ffmpeg for a start. using the code from doc/examples/decode_audio.c gives me only [mp3 @ 0x2992300] Header missing
[15:31:30 CET] <durandal_170> vilva: do you have demux code?
[16:06:38 CET] <sunslider> hi guys, I just compiled ffmpeg from git to version N-89180-gafd2bf5, however I am missing the -movflags option in the binary, which is crucial for me.. any chance of helping me understand what I am doing wrong?
[16:10:35 CET] <DHE> it's not missing. you may be misusing it
[16:20:56 CET] <sunslider> DHE: what do you mean? it's not showing in -h and the binary says it doesn't know such a flag
[16:21:21 CET] <furq> sunslider: does it show in -h full
[16:22:06 CET] <sunslider> yes *sigh*
[16:22:22 CET] <sunslider> furq: what am I missing?
[16:22:35 CET] <furq> pastebin the command and output
[16:25:19 CET] <sunslider> furq: https://pastebin.com/gwQgXrVn
[16:26:15 CET] <DHE> you specified them as an input parameter. that's incorrect
[16:27:21 CET] <sunslider> because they are before -i?
[16:27:46 CET] <DHE> correct
[16:28:09 CET] <DHE> ffmpeg parses its parameters as: ffmpeg [input options] -i input1 ... [output options] output1
[16:28:53 CET] <DHE> you can have multiple inputs and outputs, and they are grouped with the options preceding the input or output name
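DHE's grouping rule applied to sunslider's case; the command is only echoed here (the URL and filenames are placeholders), with -movflags moved into the output group where the mp4 muxer can see it. `+frag_keyframe+empty_moov` is the usual choice for non-seekable mp4 output, assumed here to be what was intended:

```shell
# ffmpeg [input options] -i input1 ... [output options] output1
# -movflags configures the mov/mp4 muxer, so it goes after -i:
echo 'ffmpeg -rtsp_transport tcp -i rtsp://host/stream' \
     '-c:v libx264 -movflags +frag_keyframe+empty_moov -f mp4 out.mp4'
```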
[16:37:28 CET] <sunslider> DHE: Okay! it recognizes the argument now! thank you so much, although the root problem persists.. I'm unable to convert a simple RTSP stream to mp4, not sure why it fails
[16:38:11 CET] <sunslider> ffserver.conf: https://pastebin.com/5fnp9VZz (not sure why but I couldn't get the h264 codec to run although it shows -codecs)
[16:38:27 CET] <sunslider> ffmpeg: https://pastebin.com/asunLK02
[16:39:39 CET] <sunslider> I'm getting a "muxer does not support non seekable output" on ffserver, something -movflags should have helped solve
[16:45:18 CET] <sunslider> about the codec, I just noticed I didn't ./configure --enable-libx264
[16:47:40 CET] <DHE> also, ffserver is not supported and its use is strongly discouraged
[16:47:54 CET] <DHE> please use any other software. we recommend looking at nginx-rtmp
[16:50:51 CET] <sunslider> yes, I wondered why it was still built with the latest ffmpeg although the changelog says it's deprecated
[16:50:57 CET] <sunslider> just wasn't aware of any alternative
[16:54:01 CET] <DHE> deprecated doesn't mean it's removed. it's a warning not to expect its availability in the future
[16:58:47 CET] <Alina-malina> i want to make a video effect out of my webcam like so: https://hsto.org/getpro/habr/post_images/fda/f98/f59/fdaf98f59edc981e02ca522d0a135432.jpg  is it possible to achieve with ffmpeg?
[17:29:32 CET] <_ppp> anyone familiar with kaltura?
[18:24:29 CET] <durandal_1707> Alina-malina: nope
[18:25:01 CET] <Alina-malina> :-/
[18:30:30 CET] <sunslider> Any reason libavutil fails to find rc_eq?
[18:30:50 CET] <sunslider> is this option libx264-related?
[18:33:15 CET] <sunslider> (latest ffmpeg of git)
[18:49:58 CET] <debianuser> Hello. I'm trying "nlmeans" filter instead of hqdn3d, and can't make it as smooth as hqdn3d is. It blurs the image, but keeps the noise. :( Even `nlmeans=s=30:r=15` is blurry and noisy compared to `hqdn3d=15`. 27sec test video: https://drive.google.com/open?id=1FXt1i5He77EMYtd7e8kXcmzdTcoBK84C Are my nlmeans params wrong? Or is hqdn3d actually better? Or is ffmpeg's nlmeans filter broken?
[19:03:53 CET] <durandal_1707> debianuser: that's not typical noise
[19:08:09 CET] <debianuser> durandal_1707: Well, that's all I have. :)
[19:09:33 CET] <durandal_1707> debianuser: have you tried vaguedenoiser?
[19:09:47 CET] <debianuser> Not yet, I'll try...
[19:33:29 CET] <pgorley> hi, is there a way to programmatically detect rtp packet loss? rtpdec prints the number of missed packets, but doesn't seem to set any flags to tell me there was a packet loss
[19:38:39 CET] <voovi> i want to overlay 2 video files but i got "broken pipe":  ffmpeg -i 1.264 -i 2.264 -filter_complex "[0][1] overlay=0:0" http://x.x.x.x   so i must use a pipe:  ffmpeg -i 1.264 -i 2.264 -filter_complex "[0][1] overlay=0:0" -f h264 pipe:1 | ffmpeg -i pipe:0 http://x.x.x.x   how can I do it with the first command alone, without a pipe please?
[19:58:34 CET] <debianuser> durandal_1707: vaguedenoiser works faster than nlmeans, but looks somewhat similar, not as smooth as hqdn3d: https://drive.google.com/open?id=1wHXTWN5OMgcGo-PpF6xyrwSmP7R9rfE3 (hqdn3d=15 vs vaguedenoiser=15, 16 seconds)
[21:23:36 CET] <faLUCE> a generic question: do you know any *really* useful application of low latency streaming?
[21:25:35 CET] <COOC> not ffserver?
[21:27:05 CET] <faLUCE> COOC: ?
[21:54:13 CET] <lomancer86> Hello, I am trying to enable hardware accelerated decoding with libavcodec and I am confused about exactly what I need to do to enable it. I use av_hwaccel_next to find an AVHWAccel* which matches the codec_id and pix_fmt (AV_CODEC_ID_H264 and AV_PIX_FMT_VAAPI_VLD) and I am setting the hwaccel field of my AVCodecContext* to this ptr.
[21:54:13 CET] <lomancer86> do I need to call av_register_hwaccel? This is likely not the only thing I need to do to enable hardware decoding. Do I need to set up a hwaccel_context as well? Do I pass a different codec to avcodec_decode_video2 or is the setup done for me inside of avcodec_open2? Any documentation or examples of hardware acceleration? TIA
[21:56:05 CET] <degenerate> I'm using: ffprobe -of json -show_streams filename
[21:56:11 CET] <degenerate> to get some info about videos, and usually there is a parameter nb_frames in the video stream section
[21:56:32 CET] <degenerate> which i use to do other calculations
[21:56:35 CET] <TheRock> ffmpeg can do a lot of stuff. why don't you guys sell it to the NSA ?
[21:57:15 CET] <degenerate> but some videos don't have the nb_frames parameter. how can i compute the number of frames in these cases?
[21:57:28 CET] <durandal_1707> TheRock: RMS does not allow that
[21:58:18 CET] <degenerate> for example, can i get number of frames from this information: http://termbin.com/5xeu
[21:58:36 CET] <degenerate> maybe it is something like: duration / r_frame_rate
[21:59:01 CET] <degenerate> or would it be: duration / avg_frame_rate
[21:59:08 CET] <degenerate> what is the difference?
[21:59:20 CET] <degenerate> and what is duration_ts ?
[22:01:16 CET] <degenerate> oops, above i should have said * not /
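For reference: r_frame_rate is ffprobe's guess at the base frame rate of the stream, avg_frame_rate is roughly total frames divided by duration, and duration_ts is the same duration expressed in the stream's time_base units. So when nb_frames is absent, duration * avg_frame_rate is the estimate to use; a small awk sketch with made-up sample values:

```shell
# frames ~= duration * avg_frame_rate; the rate is a fraction like 30000/1001.
duration=10.01
avg_frame_rate='30000/1001'
awk -v d="$duration" -v r="$avg_frame_rate" \
    'BEGIN { split(r, f, "/"); printf "%d\n", d * f[1] / f[2] + 0.5 }'
# prints 300
```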
[22:04:42 CET] <pgorley> lomancer86: check doc/examples/hw_decode.c
[22:04:51 CET] <degenerate> nm, i figured it out.
[22:26:33 CET] <lomancer86> pgorley awesome thanks!
[00:00:00 CET] --- Wed Nov 22 2017

