[Ffmpeg-devel-irc] ffmpeg.log.20170411

burek burek021 at gmail.com
Wed Apr 12 03:05:01 EEST 2017


[00:00:18 CEST] <alexpigment> is there just like a -t command i can use?
[00:00:29 CEST] <alexpigment> i don't know how long i'm supposed to wait for it to finish
[00:00:29 CEST] <kepstin> hmm.
[00:01:35 CEST] <kepstin> try adding "-read_intervals %+#20"
[00:01:47 CEST] <kepstin> it might need some quoting, there's funky characters in there
[00:02:23 CEST] <alexpigment> ok finally
[00:02:24 CEST] <alexpigment> thank you
[00:02:31 CEST] <alexpigment> nb_samples=502
[00:02:44 CEST] <alexpigment> i'll try adding that to my command line parameters with asetnsamples
[00:03:26 CEST] <llogan> ffprobe -loglevel error -show_entries frame=nb_samples -of default=nw=1 input.vob > out.txt
[00:03:42 CEST] <llogan> not sure what other parameters you want but you can add them frame=foo,bar
[00:03:46 CEST] <kepstin> I don't use ffprobe nearly enough :/
[00:03:58 CEST] <alexpigment> you use it more than i do :)
[00:03:59 CEST] <kepstin> but 502 samples, interesting - assuming stereo, 16bit, that's 2008 bytes per frame, which is a nice round 20 bytes less than 2048 :)
[00:04:05 CEST] <alexpigment> thanks llogan. i'll have to remember that syntax for next time
[00:04:37 CEST] <kepstin> er, my maths are wrong
[00:04:41 CEST] <kepstin> 40 bytes :)
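kepstin's corrected arithmetic checks out; assuming stereo 16-bit PCM as stated in the discussion, each 502-sample frame works out to:

```shell
# 502 samples/frame x 2 channels x 2 bytes/sample (16-bit PCM)
bpf=$(( 502 * 2 * 2 ))
echo "$bpf"              # 2008 bytes per frame
echo $(( 2048 - bpf ))   # 40 bytes short of 2048
```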
[00:04:55 CEST] <llogan> alexpigment: some other examples here: http://trac.ffmpeg.org/wiki/FFprobeTips
[00:05:41 CEST] <kepstin> alexpigment: but yeah, give -af asetnsamples=n=502 a try, and report back :)
[00:05:48 CEST] <alexpigment> kepstin: same buffer problems when using the asetnsamples=n=502
[00:05:59 CEST] <kepstin> hmm. well, that's annoying. there goes that theory.
[00:06:22 CEST] <alexpigment> maybe i should try doing some regression testing to see if this has always been broken
[00:06:45 CEST] <alexpigment> i'm almost positive i did some preliminary testing on this a few years ago, but my memory has been wrong many times before ;)
[00:07:49 CEST] <alexpigment> well, it was broken at least as far back as 2014...
[00:14:21 CEST] <alexpigment> fwiw, i just checked two other PCM DVDs, and the nb_samples=502 as well
[09:52:02 CEST] <alines> hi, why avcodec_receive_packet return -22 ?
[10:45:45 CEST] <jya> hi... We've submitted some patches a while back for adding support of Opus in MP4 (we're very keen on seeing opus being used everywhere)... There's been no response in a while.. What can we do to speed things up ?
[10:46:04 CEST] <jya> https://patchwork.ffmpeg.org/patch/2942 and https://patchwork.ffmpeg.org/patch/3327/
[11:28:18 CEST] <durandal_1707> jya: wrong channel, #ffmpeg-devel is correct place
[11:31:24 CEST] <jya> ah of course.. thank you :)
[13:20:33 CEST] <TikityTik> Can you encode libvorbis lower than 45 kbps?
[13:23:02 CEST] <atomnuker> yes, using -q:a
[13:23:56 CEST] <furq> -b:a goes down to 32k
[13:40:58 CEST] <TikityTik> atomnuker: so -q:a can go below 32k?
[13:41:08 CEST] <TikityTik> And -q:a has the smallest value of 0?
[13:41:53 CEST] <durandal_1707> qscale != bitrate
[13:42:46 CEST] <TikityTik> well the size ended up being larger
[13:42:54 CEST] <TikityTik> furq: why can it not go below 32k?
[13:43:35 CEST] <TikityTik> also 32k is not the lowest: "maybe incorrect parameters such as bit_rate, rate, width or height"
[13:43:46 CEST] <TikityTik> Error while opening encoder for output stream #0:0
[13:44:14 CEST] <furq> -q is vbr, so it'll go as low as needed
[13:44:25 CEST] <furq> and -b:a 32k works for me
[13:44:54 CEST] <furq> that's mono though
[13:46:15 CEST] <TikityTik> Why is it that making a sound mono doesn't halve the file size?
[13:46:45 CEST] <furq> because -b:a isn't per channel
[13:46:57 CEST] <furq> also because most lossy codecs use joint stereo
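Since -b:a sets the total bitrate rather than a per-channel one, the encoded size at a fixed bitrate comes out roughly the same for mono and stereo; a back-of-envelope check (the 60-second duration is an example value):

```shell
# size ~= bitrate / 8 * duration, independent of channel count
duration=60       # seconds
bitrate=32000     # -b:a 32k
echo $(( bitrate / 8 * duration ))   # 240000 bytes, mono or stereo
```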
[14:35:03 CEST] <acamargo> Gentlemen, any idea why this can happen? https://github.com/FFmpeg/FFmpeg/blob/master/ffmpeg.c#L617
[14:35:51 CEST] <acamargo> I'm streaming a rtmp to akamai and *sometimes* ffmpeg dies with the error message "Conversion failed"
[14:36:30 CEST] <acamargo> I was reading the code trying to figure out the reason, without success :-)
[14:38:41 CEST] <acamargo> I'm sorry, I'm running version 3.2.4 https://github.com/FFmpeg/FFmpeg/blob/release/3.2/ffmpeg.c#L590
[14:57:11 CEST] <DHE> there should have been more useful error messages above that
[16:18:17 CEST] <djk> Does ffmpeg use GPU memory when live streaming to rtmp ? I have a headless Raspberry Pi and if the GPU memory isn't used then it would seem best to set it to 16 and leave more to the CPU.
[16:19:53 CEST] <c_14> It shouldn't, unless you're using some hardware decoder that uses it.
[16:19:58 CEST] <c_14> s/decoder/encoder/
[16:19:58 CEST] <thebombzen> not unless you're using the gpu
[16:21:20 CEST] <djk> and I don't think I am using the gpu for decode/encode
[16:21:25 CEST] <roasted> Is it not possible to download a UDP multicast stream? I keep getting errors applying filters. If I pull in the RTMP stream (which is also available) I can use ffmpeg to download it without issue.
[16:22:33 CEST] <c_14> If it's plain udp you'll probably have problems with packet reordering and packet loss, but other than that it should work.
[16:22:59 CEST] <roasted> hm, yeah
[16:23:06 CEST] <roasted> maybe I should just stay with RTMP for the recording
[16:23:11 CEST] <roasted> and continue with multicast for the live stream
[16:23:19 CEST] <roasted> best of both worlds me thinks??
[16:27:14 CEST] <djk> I am having issues keeping a stream solid enough for youtube live to stay up, hence wanting to be as efficient as possible. The webcam I have does YUYV, and MJPG does 30fps at the high resolutions. Is it possible, or does it make sense, to use the mjpg to send the flv rtmp to youtube? Video is all relatively new space for me.
[16:28:53 CEST] <c_14> youtube should probably support mjpeg
[16:32:17 CEST] <DHE> looking to add some kind of motion to what is an otherwise static image so that a user can see that it's not actually frozen. I tried the `noise` filter but it's hell on the bitrate. anyone got an idea for what I could do instead?
[16:34:39 CEST] <alexpigment> DHE: do the colors need to stay static?
[16:35:22 CEST] <DHE> alexpigment: as much as possible. I'm hoping for a subtle effect that's still easily noticed without, like, inducing motion sickness or something
[16:36:42 CEST] <alexpigment> hmm, that is a tricky one then
[16:37:04 CEST] <alexpigment> how often does it have to move?
[16:37:25 CEST] <alexpigment> for example, could you have it kinda do an aesthetic "glitch" thing every 5 seconds or so?
[16:37:29 CEST] <DHE> hmm. don't have a solid answer to that. ever 3 seconds maybe?
[16:39:16 CEST] <DHE> I was thinking of a "film grain" overlay but I don't have a good input sample..
[16:39:39 CEST] <alexpigment> yeah, film grain with bitrate constraints :(
[16:43:06 CEST] <alexpigment> so i don't know much about using filters over time - i've only ever used whole-video filters - but do you already know how to do panning with a filter?
[16:43:58 CEST] <alexpigment> if so, it might be worth it to input the video twice, one that is static on top but with ~80% opacity, and then below it you can have the same video pan (e.g. left to right, repeat)
[16:45:01 CEST] <alexpigment> alternatively, you could do the same thing, but chroma key part of the image that isn't the majority color, then let the moving part show through that chroma keyed area
[16:48:45 CEST] <DHE> I thought of having some kind of moving object - just a dot or something. Could use drawtext to draw a period or overlay for a generic image...
[16:49:33 CEST] <DHE> this is one of those creativity vs complexity problems...
[16:52:48 CEST] <kepstin> DHE: find an animated gif of a loading spinner or something and just overlay it on the corner of the video
[16:56:17 CEST] <alexpigment> kepstin: that's a simple elegant solution
[16:56:33 CEST] <alexpigment> probably the best and least-distracting one
[16:57:05 CEST] <kepstin> the other common thing done is to just stick a clock on the video
[16:57:13 CEST] <kepstin> which i think you can do with drawtext?
[16:57:27 CEST] <DHE> hmm... yeah it's easy but I get the impression upstream management isn't going to like the idea...
[16:57:39 CEST] <DHE> okay, I'll take it up with them and see if any of these ideas gets their interest
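kepstin's spinner suggestion can be sketched with the overlay filter (the file names here are hypothetical; the gif demuxer loops animated gifs by default, and shortest=1 ends the output with the main video):

```shell
# overlay a looping spinner gif in the bottom-right corner, 10 px margin
cmd='ffmpeg -i input.mp4 -i spinner.gif -filter_complex "overlay=W-w-10:H-h-10:shortest=1" out.mp4'
echo "$cmd"
```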
[16:59:56 CEST] <djk> c_14: This is what I have that 'works' how would I set it to use the mjpg format from the camera?
[16:59:56 CEST] <djk> ffmpeg -thread_queue_size 512 -framerate 15 -i /dev/video1 -f lavfi -i anullsrc -c:v libx264 -pix_fmt yuv420p -g 15 -b:v 2500k -c:a libmp3lame -ar 44100 -b:a 32k -f flv rtmp://a.rtmp.youtube.com/live2/[KEY]
[17:09:22 CEST] <kepstin> djk: stick a '-input_format mjpeg' before the input
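Applied to the command djk pasted above, the option goes just before the -i it modifies (sketch only; device path and stream key placeholder are from the log):

```shell
# select the camera's mjpeg output instead of raw YUYV
cmd='ffmpeg -input_format mjpeg -thread_queue_size 512 -framerate 15 -i /dev/video1 -f lavfi -i anullsrc -c:v libx264 -pix_fmt yuv420p -g 15 -b:v 2500k -c:a libmp3lame -ar 44100 -b:a 32k -f flv rtmp://a.rtmp.youtube.com/live2/[KEY]'
echo "$cmd"
```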
[17:11:05 CEST] <mr_pinc> Hi i just got the latest build and am trying to add '-movflags faststart' to my currently working encode but i get an error - Unrecognized option 'moveflags'.
[17:11:05 CEST] <mr_pinc> Error splitting the argument list: Option not found'
[17:11:56 CEST] <mr_pinc> Does it need to be in a specific location?
[17:12:05 CEST] <durandal_1707> mr_pinc: pastebin full command and output
[17:14:02 CEST] <kepstin> mr_pinc: is the error exactly "Unrecognized option 'moveflags'."? Look closer - you just typoed the name of the option...
[17:16:54 CEST] <djk> kepstin: thank you that worked but still not getting a solid stream to make youtube live happy. Any suggestions greatly appreciated.
[17:17:42 CEST] <kepstin> djk: you're not using a raspberry pi or something ridiculously underpowered like that, are you?
[17:18:01 CEST] <djk> of course I am ;-)
[17:18:05 CEST] <mr_pinc> https://pastebin.com/prkhRrcy
[17:18:42 CEST] <kepstin> djk: well, you're gonna have a bad time then. But as a start, you probably don't have enough cpu power to run x264 reliably, you might want to try to get the hardware video encoder working
[17:18:45 CEST] <mr_pinc> dam faststart
[17:19:03 CEST] <kepstin> (which is not something i'm familiar with)
[17:19:39 CEST] <djk> and would mean I should increase the GPU memory if I do that
[17:20:00 CEST] <kepstin> mr_pinc: please read the error message! you wrote "moveflags", which is a typo! (it's supposed to be "movflags")
[17:20:16 CEST] <mr_pinc> yeah i know, but it fails even with movflags
[17:20:26 CEST] <durandal_1707> mr_pinc: its output option and not input
[17:20:32 CEST] <kepstin> mr_pinc: also it's an output option, so you need to put it after the inputs
[17:20:35 CEST] <mr_pinc> yeah i got it now.  thank you
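Putting the two fixes together, a working shape of the command looks like this (file names hypothetical):

```shell
# "movflags" (not "moveflags"), placed after the input as an output option
cmd='ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4'
echo "$cmd"
```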
[17:21:44 CEST] <mr_pinc> thanks everyone
[17:21:49 CEST] <mr_pinc> well both of you
[17:30:53 CEST] <furq> djk: the onboard rpi encoder works fine at 1080p30 with 128MB gpu memory
[17:31:08 CEST] <furq> it might work with less, i never looked into it too much
[17:33:28 CEST] <djk> furq: will the ffmpeg prebuilt package support the rpi hardware encoding?
[17:33:38 CEST] <furq> no
[17:33:50 CEST] <djk> so I have to compile it on the rpi
[17:34:08 CEST] <furq> or you could cross-compile it
[17:36:30 CEST] <djk> I will look into that though this may be out of scope for the current timeline project
[17:39:01 CEST] <djk> my alternative is to use snapshots that I am grabbing every 15 sec to make a time lapse video. The problem I have with that is it makes a very fast video (12hr to ~1m); I would like it to be longer
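djk's time-lapse length follows directly from the snapshot interval and the output frame rate, so lowering the frame rate given to the image-sequence input is one way to stretch it:

```shell
frames=$(( 12 * 3600 / 15 ))   # snapshots over 12 h, one per 15 s
echo "$frames"                 # 2880 frames
echo $(( frames / 25 ))        # 115 s of video at 25 fps
echo $(( frames / 10 ))        # 288 s at 10 fps
```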
[17:43:50 CEST] <djk> thank you all for the input
[17:46:23 CEST] <korylprince> Hello all. I'm trying to debug a problem with blender and ffmpeg. Blender is listing the avformat version as '57, 56, 100' . I'm trying to figure out how that maps to a ffmpeg release, e.g. 3.2.4. Is there a list somewhere of how these versions correspond?
[18:01:49 CEST] <c_14> probably not without checking out libavformat/version.h in each of the possible releases
[18:03:21 CEST] <kepstin> korylprince: https://ffmpeg.org/download.html#releases has a list
[18:04:01 CEST] <c_14> Oh, it does
[18:04:17 CEST] <c_14> Must have missed that
[18:04:29 CEST] <kepstin> korylprince: note that the library version numbers don't change in point releases, so you'll be able to get '3.2.x' but not specifically '3.2.4'
[18:04:38 CEST] <kepstin> I think?
[18:05:22 CEST] <kepstin> or at least, they aren't necessarily incremented on every point release.
[18:05:50 CEST] <c_14> they shouldn't change through point releases
[18:05:54 CEST] <c_14> since only fixes are backported
[18:08:38 CEST] <korylprince> @kepstin thanks. It was staring me right in the face.
[19:39:10 CEST] <rosek> Hi, is it possible to buffer input stream in order to capture video from the past? I would like to capture video from DeckLink Duo 2, encode it using h264_qsv and then when a trigger occurs, I have to store 10 seconds to a file. This 10 seconds video may be up to 5 minutes in the past and I'll get only UTC stamp which indicates when recording should start.
[19:42:08 CEST] <ChocolateArmpits> rosek, why do you refere to storing 10 second file as recording ?
[19:42:11 CEST] <ChocolateArmpits> refer*
[19:42:37 CEST] <DHE> not at that scale. I'd recommend using the segment output format and write videos constantly and delete files you don't need.
[19:43:26 CEST] <ChocolateArmpits> DHE, the segmenter can overwrite files maintaining constant length saved, no ?
[19:43:53 CEST] <DHE> I think so. need to double-check it supports overwriting existing files.
[19:44:29 CEST] <rosek> DHE, I was thinking about this, I can store even longer segments and then cut part which is needed
[19:44:34 CEST] <kepstin> yeah, it can be used as a circular buffer, basically have the index numbers wrap.
[19:44:39 CEST] <ChocolateArmpits> "segment_wrap" is the setting
[19:44:52 CEST] <rosek> but I wonder if I'd get missing frames between these segments?
[19:45:20 CEST] <kepstin> rosek: no? the segments play continuously when concatenated
[19:46:00 CEST] <ChocolateArmpits> rosek, you can save 10 second segments and then just pick the one you need if the UTC stamp you receive is rounded to tens of seconds and that's sufficient in your case
[19:46:25 CEST] <ChocolateArmpits> Each segment can start with a keyframe
[19:46:48 CEST] <rosek> kepstin: thanks for confirmation
[19:47:09 CEST] <kepstin> if you need more accuracy than the length of a segment, you'd have to re-encode anyways, because you probably wouldn't have a keyframe at the exact spot you want to start
[19:47:32 CEST] <rosek> no, the time stamp will be random, so I'd have to re-process files and cut part which is needed
[19:48:18 CEST] <ChocolateArmpits> rosek, even more accurate than 1 second ?
[19:49:08 CEST] <ChocolateArmpits> Cause imo it's simpler to just have plenty of files to copy forward rather than transcoding them
[19:49:48 CEST] <rosek> theoretically the trigger will be accurate to tens of milliseconds but video can be accurate to 1 second
[19:50:53 CEST] <furq> you could use 1-second segments then
[19:51:09 CEST] <furq> i'm not sure how badly 1-second gops hurts h264_qsv but it's probably not a huge problem
[19:51:39 CEST] <ChocolateArmpits> So you could segment to 1 second each, at 5 minutes that's 300 files, you'll probably need a slightly bigger length to account for processing time so maybe 360 files for 6 minutes
[19:53:19 CEST] <rosek> right, so then I only merge 10 files into one?
[19:53:33 CEST] <furq> right
[19:53:37 CEST] <ChocolateArmpits> If it's ts you can just use cat
[19:53:41 CEST] <furq> if you use mpegts segments then you can...yeah
[19:54:04 CEST] <ChocolateArmpits> mpeg-ts that is, works with simple binary concatenation, no need to even use ffmpeg
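The circular-buffer idea above can be sketched with the segment muxer (INPUT is a placeholder; 360 one-second mpegts segments cover the 6 minutes ChocolateArmpits suggests, and segment_wrap makes the index numbers wrap around):

```shell
# keep a rolling ~6 minute buffer of 1-second .ts segments
cmd='ffmpeg -i INPUT -c copy -f segment -segment_time 1 -segment_wrap 360 -segment_format mpegts buf%03d.ts'
echo "$cmd"
echo $(( 360 * 1 / 60 ))   # 6 minutes of buffer
```

Pulling 10 seconds back out is then just concatenating the ten relevant .ts files with cat, as noted above.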
[19:54:15 CEST] <rosek> okay, that makes sense :-)
[19:55:08 CEST] <rosek> many thanks for the suggestions, I'll start to read documentation and play with ffmpeg and segments
[19:55:21 CEST] <rosek> this seems like a good solution
[19:55:46 CEST] <furq> !muxer segment @rosek
[19:55:46 CEST] <nfobot> rosek: http://ffmpeg.org/ffmpeg-formats.html#segment_002c-stream_005fsegment_002c-ssegment
[19:55:50 CEST] <furq> that should be everything you need
[19:57:05 CEST] <rosek> furq: thanks, I'll give it a go :-)
[20:34:46 CEST] <azahi> how to record sound with alsa? `-f alsa -i hw:0,0` does not work, assuming I have `card 0: PCH [HDA Intel PCH], device 0: CX20590 Analog [CX20590 Analog]`, I've followed this article (https://trac.ffmpeg.org/wiki/Capture/ALSA) but I've got nothing
[20:40:37 CEST] <azahi> here's output, if it matters https://p.teknik.io/Raw/Ss138
[20:54:04 CEST] <mdavis> Hello all
[20:56:20 CEST] <debianuser> azahi: Do regular alsa app work? For example try: `arecord -Vstereo -fdat -d30 -Dhw:0,0 somefile.wav`
[21:00:31 CEST] <azahi> debianuser: no
[21:10:24 CEST] <debianuser> azahi: Does it print any errors? Can you copy its output to a pastebin?
[21:14:33 CEST] <azahi> debianuser: that command you've given outputs this: `Recording WAVE 'somefile.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo`
[21:14:41 CEST] <azahi> and nothing else
[21:17:37 CEST] <azahi> can confirm that it is not a playback issue either; my mpv supports WAVE
[22:13:09 CEST] <TheTrueHohoah> I keep getting errors when using mpv+youtube-dl
[22:13:11 CEST] <TheTrueHohoah> [ffmpeg] https: HTTP error 404 Not Found
[22:13:25 CEST] <TheTrueHohoah> What information do you need?
[22:13:39 CEST] <TheTrueHohoah> ffmpeg version 2.6.9
[22:17:24 CEST] <JEEB> that really looks like not an FFmpeg issue if the URL you're passing just gives out 404
[22:29:06 CEST] <debianuser> azahi: You're probably having some alsa capturing issues. Can you show more details about your sound system? You can use alsa-info script: https://wiki.ubuntu.com/Audio/AlsaInfo it should automatically suggest you to upload your data and give you a link to it (you can run it as a regular user, it doesn't need root).
[22:30:12 CEST] <TheTrueHohoah> JEEB: but the URL isn't 404
[22:30:17 CEST] <TheTrueHohoah> So something is up there
[22:34:48 CEST] <azahi> debianuser: http://www.alsa-project.org/db/?f=d9d61dd87459b50e824d7762bb863227afee6179
[22:35:47 CEST] <azahi> if you're wondering about kernel modules, I have them compiled in. here is my config: https://paste.pound-python.org/show/mbfMpuupgb2TT3BrUpEo/
[22:45:37 CEST] <debianuser> azahi: Yeah, I was going to suggest you try a different `position_fix` maybe ( https://www.kernel.org/doc/Documentation/sound/hd-audio/notes.rst : DMA-Position Problem ), and then noticed that you don't have any modules loaded. Well, you can still try that, you'll just have to reboot with   snd_hda_intel.position_fix=1  kernel arg set (try each of them: 1, 2, 3 and 4).
[22:48:03 CEST] <debianuser> azahi: you can test each of them with same ffmpeg or arecord command, e.g. `arecord -v -Vstereo -fdat -d30 -Dhw:0,0 somefile.wav`
[22:50:47 CEST] <azahi> debianuser: okay, thanks. will try this tomorrow, hope it will work
[23:08:51 CEST] <debianuser> You're welcome! If that won't work feel free to ping me here or in #alsa, I'll try to help if I can. It's just your soundcard/laptop model seems known for about 5 years (there's an explicit quirk for 0x17aa,0x21ce in /sound/pci/hda/patch_conexant.c), so it's supposed to work.
[00:00:00 CEST] --- Wed Apr 12 2017

