[Ffmpeg-devel-irc] ffmpeg.log.20171114

burek burek021 at gmail.com
Wed Nov 15 03:05:01 EET 2017


[00:01:27 CET] <durandal_1707> they have quantum computers which do all the work in the blink of an eye
[00:27:43 CET] <echelon> ok, i set -threads 16 -slices 16 -cpu-used -4
[00:28:07 CET] <echelon> getting 0.826x speed
[00:53:03 CET] <SortaCore> quantum computers that have simultaneously completed and are still doing the encoding at once
[01:03:40 CET] <TheFuzzball> I have an mjpeg stream from VLC over HTTP, and when I try to capture it with ffmpeg: ffmpeg -i http://192.168.1.64:8554/ out.mp4, after a minute it packs in with an error
[01:05:43 CET] <TheFuzzball> "http://192.168.1.64:8554/: Invalid data found when processing input"
[01:05:56 CET] <TheFuzzball> Is there a way to stably record this stream indefinitely?
[01:55:20 CET] <debianuser> Hello, I have a weird question. (A friend asked me how to denoise video a bit, I suggested ffmpeg hqdn3d, and got a clarification - denoise video with sony vegas... so...) Does anyone know if it's possible to use ffmpeg filters in Sony Vegas? Is there a plugin for vegas? Or what GUI tools do people use for denoising these days?
[01:57:56 CET] <atomnuker> no, but tell your friend to use nlmeans, it's better
[02:01:19 CET] <debianuser> atomnuker: Thank you! I didn't know about nlmeans before. Looking for nlmeans sony vegas plugin...
[02:02:47 CET] <atomnuker> I meant to use ffmpeg's nlmeans filter, it's better than hqdn3d
[02:05:10 CET] <debianuser> Ah, ok. Thanks anyway! Looks like nlmeans is supported in Handbrake GUI... let's look at it...
[04:02:37 CET] <raytiley_> trying to use a directshow source and I think it's guessing at the audio... the audio is 48000 but it's reading 44100 which makes it sound like a robot... is there a way to tell ffmpeg what the input properties are?
[05:33:34 CET] <Jonno_FTW> how can I add the current alsa audio output as input to ffmpeg?
[05:35:41 CET] <debianuser> Jonno_FTW: You mean capturing whatever other apps are playing?
[05:35:46 CET] <Jonno_FTW> Yes
[05:39:37 CET] <debianuser> Well, there're many ways to do that, depending on what you need and what you use. Some cards support hardware "Loopback mixing" features. For pulseaudio you have to capture from "monitor" device. For jackd you can capture any app on its own. For plain alsa you can duplicate the sound to a file or a virtual Loopback card, and capture it with ffmpeg.
[05:40:12 CET] <debianuser> Jonno_FTW: TL;DR, can you show what audio system you have? You can use the alsa-info script: https://wiki.ubuntu.com/Audio/AlsaInfo it should automatically suggest that you upload your data and give you a link to it (you can run it as a regular user, it doesn't need root).
[05:41:38 CET] <Jonno_FTW> debianuser: http://www.alsa-project.org/db/?f=25c793cc89a3dae9a4ffe2d91b28a1d7390a8eff
[05:43:01 CET] <Jonno_FTW> debianuser: here's my ffmpeg use https://gist.github.com/JonnoFTW/f24613308d845406a2741bdad9355f2d
[05:47:16 CET] <debianuser> Jonno_FTW: Your card has no hardware loopback mixing, and we can't add a virtual loopback card for it (well, we can, but you have pulseaudio installed, it redirects all the output to itself and would ignore our settings anyway)...
[05:47:18 CET] <debianuser> Jonno_FTW: I guess your best option is to start ffmpeg capturing from `-f alsa -i default` (or `-f pulse -i default`) and when it's already running open "Recording" tab in `pavucontrol` and switch ffmpeg to "Monitor" device there: https://askubuntu.com/a/682793
[05:50:19 CET] <Jonno_FTW> debianuser: I get this error: https://gist.github.com/JonnoFTW/f24613308d845406a2741bdad9355f2d
[05:52:57 CET] <debianuser> Jonno_FTW: You have a typo there: remove another "-f alsa" right before the "$YOUTUBE_URL/$KEY"
[05:55:35 CET] <Jonno_FTW> debianuser: [NULL @ 0x315c4e0] Unable to find a suitable output format for 'rtmp://a.rtmp.youtube.com/live2/k...
[05:58:35 CET] <debianuser> Jonno_FTW: I guess youtube wants you to use "flv" output format. Try adding `-f flv` instead of that another `-f alsa` you removed.
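A sketch of debianuser's suggestions assembled into one command; the x11grab input, the 1920x1080 size and the $FPS/$KEY placeholders are assumptions based on the script under discussion (the real one is in the gist linked above):

    ffmpeg -f x11grab -framerate $FPS -video_size 1920x1080 -i :0.0 \
           -f alsa -i default \
           -c:v libx264 -preset veryfast -pix_fmt yuv420p \
           -c:a aac -b:a 128k \
           -f flv "rtmp://a.rtmp.youtube.com/live2/$KEY"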
[05:59:48 CET] <Jonno_FTW> debianuser: now -i deafult  doesn't work
[05:59:56 CET] <Jonno_FTW> no such file or directory
[06:00:32 CET] <debianuser> It's `-i default`, not `-i deafult` :)
[06:01:03 CET] <Jonno_FTW> my typo in irc
[06:01:24 CET] <Jonno_FTW> do I have to rebuild ffmpeg with pulse?
[06:02:50 CET] <debianuser> Jonno_FTW: no, `-f alsa -i default` should work with pulse too
[06:03:11 CET] <Jonno_FTW> so it does
[06:04:05 CET] <Jonno_FTW> debianuser: https://www.youtube.com/user/Jonnononno/live
[06:04:06 CET] <Jonno_FTW> it works!
[06:04:23 CET] <Jonno_FTW> can I have ffmpeg spit out less garbage?
[06:07:32 CET] <Jonno_FTW> debianuser: thanks for the help
[06:10:52 CET] <debianuser> By "garbage" you mean a lot of those "Past duration 0.... too large" messages? I don't know... A comment in https://trac.ffmpeg.org/ticket/4643 suggests it could be related to -framerate vs -r option. Maybe try inserting `-r $FPS` before the "-i :0.0" option and check if that changes anything?
[06:11:04 CET] <debianuser> Jonno_FTW: And you're welcome! I'm glad I could help!
[06:15:34 CET] <debianuser> (or the `-framerate $FPS` option... or you can try one, then the other, and check which one of them works better :) theoretically there should be `-framerate $FPS`)
[06:31:38 CET] <debianuser> PS: Can anyone add the `pavucontrol` hint to https://trac.ffmpeg.org/wiki/Capture/Desktop pulseaudio section ? Reference links: https://xpressrazor.wordpress.com/2013/05/26/record-desktop-in-linux-using-ffmpeg/ and https://askubuntu.com/questions/682144/capturing-only-desktop-audio-with-ffmpeg or scripted https://ffmpeg.org/pipermail/ffmpeg-user/2015-October/028834.html
[06:49:52 CET] <buhman> lol
[06:50:17 CET] <buhman> 3 years ago: "you didn't do exactly as I say, so I'm going to close this issue, even though it's a real valid problem"
[06:50:27 CET] <buhman> 3 years later: still not fixed
[06:53:29 CET] <buhman> > -r and -framerate are not the same, -r does this
[06:53:45 CET] <buhman> > complete explanation for -framerate: "Set the grabbing frame rate."
[06:54:31 CET] <buhman> -grabbing-framerate: see -framerate
[08:57:34 CET] <JC_Yang> does libavformat provide some kinds of connection-lost/failed notification callback for network connections? do I have to proactively detect the connection-lost and end the blocking operation via the interrupt callback when the condition is detected?
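libavformat does not push a "connection lost" notification; the usual pattern is exactly the interrupt callback JC_Yang mentions. A minimal sketch, where the Watchdog struct and the 10-second stall threshold are hypothetical:

    #include <time.h>
    #include <libavformat/avformat.h>

    typedef struct Watchdog { time_t last_ok; int timeout_sec; } Watchdog;

    /* Called repeatedly by libavformat while it blocks on network I/O;
     * returning nonzero aborts the blocking call with an error. */
    static int interrupt_cb(void *opaque)
    {
        Watchdog *wd = opaque;
        return time(NULL) - wd->last_ok > wd->timeout_sec;
    }

    static int read_loop(const char *url)
    {
        Watchdog wd = { time(NULL), 10 };
        AVFormatContext *fmt = avformat_alloc_context();
        AVPacket pkt;

        fmt->interrupt_callback.callback = interrupt_cb;
        fmt->interrupt_callback.opaque   = &wd;
        if (avformat_open_input(&fmt, url, NULL, NULL) < 0)
            return -1;

        while (av_read_frame(fmt, &pkt) >= 0) {
            wd.last_ok = time(NULL);      /* progress: reset the watchdog */
            /* ... process pkt ... */
            av_packet_unref(&pkt);
        }
        avformat_close_input(&fmt);
        return 0;
    }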
[11:42:37 CET] <debianuser> atomnuker: Heh. Author of `-vf nlmeans` is extremely mean! 3 seconds per 720p frame, 4 days to process 1 hour of 30fps 720p video! He must be using a quantum computer to run that.
[11:50:15 CET] <muculus> my ffserver setting for HLS is: https://paste.ubuntu.com/25959873/
[11:50:48 CET] <muculus> but when I run the server, I encounter these errors:
[11:51:07 CET] <muculus> Tue Nov 14 14:13:39 2017 [hls @ 0x55ed31333460]failed to rename file .tmp to
[11:51:30 CET] <DHE> ffserver support borderline doesn't exist anymore. I'm surprised it hasn't been yanked yet...
[11:55:30 CET] <Bear10> Does anyone know if you can somehow listen or run certain scripts when you start / stop receiving packets on an ffmpeg -listen 1 -i rtmp://.... ?
[11:56:50 CET] <furq> debianuser: reduce r if you want it to run faster
[11:57:13 CET] <furq> there's also an opencl version in avisynth/vapoursynth which i've had decent results with
[11:58:00 CET] <furq> r defaults to 15, it should apparently be an odd number
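A sketch of that suggestion; the strength (s) and research-window (r) values are only illustrative, and r should stay odd:

    ffmpeg -i in.mkv -vf nlmeans=s=3:r=7 -c:v libx264 -preset medium out.mkv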
[11:58:50 CET] <muculus> DHE: Do you know what the problem is?
[11:59:21 CET] <furq> it's probably that ffserver is garbage
[11:59:26 CET] <furq> there's really no need to use it for hls
[11:59:50 CET] <furq> just put your m3u8 and fragments in a directory and serve it with any httld
[11:59:52 CET] <furq> httpd
[12:00:11 CET] <muculus> furq: It is a live streaming
[12:00:18 CET] <DHE> yeah, and? I do this all the time
[12:00:21 CET] <furq> yes it is
[12:00:38 CET] <DHE> I have a camera live-streaming a building being built over HLS
[12:00:49 CET] <muculus> I broadcast webcam on client to server
[12:01:16 CET] <muculus> then the server serves the HLS
[12:01:19 CET] <furq> if you need to generate the fragments on a remote box then use nginx-rtmp or something
[12:01:33 CET] <furq> ffserver is basically unsupported at this point
[12:02:08 CET] <furq> and it's perpetually on the brink of being removed entirely, so even if it did work you shouldn't base a workflow around it
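A sketch of the directory-plus-httpd approach being described, with placeholder paths and segment settings; ffmpeg listens for the incoming RTMP publish, writes the playlist and fragments into a directory, and any web server then serves that directory:

    ffmpeg -listen 1 -i rtmp://0.0.0.0/live -c copy \
           -f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments \
           /var/www/live/stream.m3u8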
[12:03:34 CET] <furq> Bear10: you probably also want to use nginx-rtmp if you're hoping to serve rtmp
[12:03:50 CET] <furq> you can run external commands on publish, play etc
[12:04:38 CET] <Bear10> furq: hmm maybe that's what i'm looking for
[12:05:50 CET] <Bear10> furq: i'll have to see if it can act on packets received
[12:05:58 CET] <Bear10> unless that's the "play"
[12:17:04 CET] <Bear10> hmm yeah i think this will do thanks! :)
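For reference, a sketch of the nginx-rtmp side furq describes; the application name and script paths are made up, and exec_publish / exec_publish_done fire when a publisher starts or stops sending:

    rtmp {
        server {
            listen 1935;
            application live {
                live on;
                exec_publish      /usr/local/bin/stream-started.sh $name;
                exec_publish_done /usr/local/bin/stream-stopped.sh $name;
            }
        }
    }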
[14:02:47 CET] <SortaCore> should I use the h264_qsv encoder with the h264_qsv hwaccel?
[14:10:37 CET] <jkqxz> The hwaccel is just a hack to make libavcodec output GPU-side surfaces, and doesn't actually get used for anything.  You don't ever need to touch it.
[14:20:27 CET] <BtbN> It's actually a hack just for ffmpeg.c
[15:07:35 CET] <SortaCore> what if I'm transcoding from h264 native to h264_qsv?
[15:07:44 CET] <SortaCore> rtsp h264 to file h264_qsv
[15:08:19 CET] <SortaCore> use h264 decoder, no hwaccel, and use h264_qsv encoder, no hwaccel?
[15:10:03 CET] <SortaCore> @jkqxz
[15:16:42 CET] <jkqxz> No qsv hwaccel option.  (Though you could still use a different one if you want.)
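So for the rtsp-to-file case above, a sketch could be as simple as the following (the bitrate and TCP transport are just example choices); putting -c:v h264_qsv before -i would additionally select the QSV decoder instead of the native h264 decoder:

    ffmpeg -rtsp_transport tcp -i rtsp://camera/stream \
           -c:v h264_qsv -b:v 4M -c:a copy out.mkv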
[15:39:45 CET] <dradakovic> guys I'm trying to input multiple http radios and output them as udp multicast. The problem is that all my outputs end up with the same radio station. I will paste the command.
[15:40:13 CET] <dradakovic> ffmpeg -thread_queue_size 1024 -i "http://listen.181fm.com:8052" -map 0 -f mpegts "udp://239.0.0.1:12345?localaddr=172.16.10.116" -i "http://listen.181fm.com:8126" -map 1 -f mpegts "udp://239.0.0.2:12345?localaddr=172.16.10.116"
[15:41:48 CET] <dradakovic> What I want is each radio input on its own separate output
[15:42:29 CET] <dradakovic> As you can see I tried with the map parameter but it changed nothing
[16:00:33 CET] <c_14> maps go all the way at the end
[16:00:43 CET] <c_14> (before the output file you want it to affect)
[16:00:47 CET] <c_14> not before the input files
[16:01:02 CET] <c_14> wait
[16:01:18 CET] <c_14> yeah
[16:01:24 CET] <c_14> put your inputs all at the front
[16:01:28 CET] <c_14> and then your outputs at the end
[16:01:55 CET] <c_14> I don't think the parser understands extra input files after output files
[16:14:40 CET] <DHE> running 1 copy of ffmpeg to do what is essentially 2 unrelated jobs sits wrong with me.. why not run 2 instances?
[16:15:12 CET] <DHE> also you might want to consider setting codec options or use -c copy, and in the new versions multicast mpegts UDP is best used in CBR mode with a UDP bitrate set
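Combining c_14's ordering advice with DHE's -c copy suggestion, a sketch of the single-process version (two separate ffmpeg processes, one per station, would each take just one input and one output):

    ffmpeg -thread_queue_size 1024 \
           -i "http://listen.181fm.com:8052" \
           -i "http://listen.181fm.com:8126" \
           -map 0 -c copy -f mpegts "udp://239.0.0.1:12345?localaddr=172.16.10.116" \
           -map 1 -c copy -f mpegts "udp://239.0.0.2:12345?localaddr=172.16.10.116"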
[16:32:10 CET] <cc___> hello
[16:32:18 CET] <cc___> I've tried to follow this guide https://trac.ffmpeg.org/wiki/Capture/ALSA#Recordaudiofromanapplicationwhilealsoroutingtheaudiotoanoutputdevice
[16:32:48 CET] <cc___> in order to record the audio from a screencast while still hearing the app sounds
[16:33:03 CET] <cc___> it worked but I lost the mixing
[16:33:41 CET] <cc___> I've tried many things in the asoundrc.conf like adding dmix slaves but nothing worked
[16:33:53 CET] <cc___> I guess I didn't do it right
[16:34:12 CET] <cc___> how can I get mixing back while keeping this config ?
[16:58:10 CET] <zerodefect> When using the C-API to decode audio wrapped in MOV/MP4 container, do I need to always use a CodecContext per stream index? Even if it's PCM?
[17:02:55 CET] <DHE> it's encouraged. you can just have a single codepath regardless of the audio compression (or lack thereof)
[17:05:56 CET] <zerodefect> Ah ok. Thanks.
[17:06:27 CET] <zerodefect> Have you personally tried one or the other?
[17:07:27 CET] <zerodefect> So when you say, it's encouraged...it's encouraged to use a CodecContext per stream? Just to clarify.
[17:12:02 CET] <DHE> it's encouraged to just "decode" the PCM with an AVCodecContext even if it's not really doing any work
[17:12:28 CET] <zerodefect> Ah ok
[17:13:46 CET] <zerodefect> then coming back to the mapping, I presume one always uses an AVCodecContext per stream (let's say I'm decoding 8 streams of AAC as an example).
[17:21:41 CET] <DHE> yes, one each for any stream you intend to actually decode or encode.
[17:22:07 CET] <zerodefect> Thanks @DHE. You've cleared a few things up for me :)
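A minimal sketch of the one-AVCodecContext-per-stream pattern DHE describes, assuming an already opened AVFormatContext *fmt and omitting error handling:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Allocate and open one decoder context per stream. */
    static AVCodecContext **open_decoders(AVFormatContext *fmt)
    {
        AVCodecContext **dec = av_calloc(fmt->nb_streams, sizeof(*dec));
        for (unsigned i = 0; i < fmt->nb_streams; i++) {
            AVCodecParameters *par = fmt->streams[i]->codecpar;
            /* avcodec_find_decoder() also covers PCM via its trivial decoders */
            const AVCodec *codec = avcodec_find_decoder(par->codec_id);
            dec[i] = avcodec_alloc_context3(codec);
            avcodec_parameters_to_context(dec[i], par);
            avcodec_open2(dec[i], codec, NULL);
        }
        return dec;   /* feed each AVPacket to dec[pkt.stream_index] */
    }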
[17:26:20 CET] <zerodefect> Actually @DHE, are there any tools in the API/code to buffer decoded audio/video to keep the AV synchronized? I'm guessing I have to implement the buffering/synchronization myself?  Just thinking that when I read a single audio frame, it may not correspond to a video frame's worth of audio?
[17:28:21 CET] <DHE> av_write_interleaved_frame() or such is intended to help you
[17:28:30 CET] <DHE> this is from memory, check the real docs
[17:29:28 CET] <zerodefect> Thanks. I'll take a look.
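The function DHE is recalling is most likely av_interleaved_write_frame(), which buffers packets and hands them to the muxer in dts order across streams; roughly (out_ctx and pkt are hypothetical, and the packet's stream_index and timestamps must already be set for the output):

    av_interleaved_write_frame(out_ctx, &pkt);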
[18:04:12 CET] <faLUCE> Hello. I tried this:  "ffmpeg -loop 1 -i vaccaro3.png -i all_MP3WRAP.mp3 -c:a copy -c:v mpeg4 -shortest -t 00:00:10 scarlatti-vergine-dei-dolori.mp4"  in order to create a video with an image and an mp3 file. It works, but after a few seconds the image of the video becomes blocky, with big pixels
[18:04:19 CET] <faLUCE> how can I avoid that?
[18:05:16 CET] <c_14> use libx264 instead of mpeg4
[18:08:25 CET] <faLUCE> c_14: thanks, it solved the problem
[18:09:06 CET] <faLUCE> c_14: do you know how to speed up the process? it requires ~1 second for each second of audio
[18:09:55 CET] <c_14> try a faster x264 preset
[18:10:02 CET] <c_14> -preset fast/veryfast/ultrafast
[18:11:25 CET] <faLUCE> c_14: in this way it is faster. Is it possible to make it even much faster?
[18:11:46 CET] <c_14> faster hard drive
[18:11:48 CET] <c_14> better cpu
[18:11:53 CET] <faLUCE> I see.
[18:11:55 CET] <c_14> ramdisk
[18:28:19 CET] <DHE> what's the framerate on that?
[18:29:35 CET] <DHE> it might be possible to reduce the framerate to buy time
[18:30:24 CET] <stockstandard> Hello everyone, extremely new to ffmpeg here... What exactly is the difference between ffmpeg and ffprobe?
[18:30:58 CET] <faLUCE> DHE: I did not set the framerate
[18:31:26 CET] <stockstandard> Trying to get the following working and unsure what the dir for ffprobe would be (if different from the dir for ffmpeg?): https://github.com/jwdempsey/PlexTools.bundle
[18:31:29 CET] <faLUCE> I can see that the bitrate is 393kbits/s
[18:31:36 CET] <faLUCE> (with medium preset)
[18:33:26 CET] <faLUCE> DHE: given that the image is always the same, I could set 1fps
[18:36:08 CET] <faLUCE> well, I sped it up with -r 1
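Putting the thread together, the final command might look roughly like this; -tune stillimage is an optional extra not mentioned in the chat:

    ffmpeg -loop 1 -i vaccaro3.png -i all_MP3WRAP.mp3 \
           -c:a copy -c:v libx264 -preset ultrafast -tune stillimage \
           -r 1 -shortest scarlatti-vergine-dei-dolori.mp4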
[18:38:42 CET] <DHE> stockstandard: ffmpeg is a full conversion utility. ffprobe mostly reports information about a media file like resolution, duration, framerate, etc.
[18:39:20 CET] <stockstandard> DHE, is ffprobe a function of ffmpeg, or an entirely different program?
[18:39:50 CET] <stockstandard> ...is there a separate dir I would point to for utilizing ffprobe v ffmpeg
[18:41:02 CET] <DHE> ffmpeg is both the name of the project, and a command-line app to do media conversions. ffprobe is another command-line app that just provides a quick summary of a file (unless options request more in-depth data)
[18:42:42 CET] <stockstandard> Okay, so ffprobe needs to be downloaded/installed separately?
[18:42:44 CET] <intrac> can anyone confirm that ffmpeg can't write h264 video into an .ogg container?
[18:43:06 CET] <DHE> depends on where you get ffmpeg. most linux distros include both. some 3rd party may not.
[18:45:52 CET] <stockstandard> I am running a QNAP - seems that it came with it
[18:46:03 CET] <stockstandard> seems that it came with ffmpeg*
[18:59:21 CET] <stockstandard> DHE, looks like QNAP linux comes with ffserver and ffmpeg
[18:59:50 CET] <stockstandard> ffserver something totally different?
[19:04:08 CET] <SortaCore> No accelerated colorspace conversion from YUV420P to BGR24, any way to make one?
[19:07:07 CET] <DHE> stockstandard: in theory ffserver allows you to do content distribution. in practice it's unmaintained and buggy - use anything else if possible
[19:07:27 CET] <stockstandard> DHE okay thanks
[19:11:52 CET] <Toma> Hey everyone. Sorry for the uber noob question that is probably asked about 20 times per day, but is there an updated guide somewhere for how to convert DCP to another movie format with the best possible results?
[19:12:13 CET] <Toma> From googling, I could only find older guides from 2014 and before and I am not sure they are the best option to use
[19:15:15 CET] <SortaCore> is there an av_read_frame that returns immediately if there is no new frame? (e.g. rtsp)
[19:34:41 CET] <blunaxela> I can record my local desktop fairly easily with ffmpeg -f x11grab, but when I try to record via X2Go (remote desktop with libnx) it's blank. I can take screenshots with xwd, but x11grab seems to record all black.
[19:59:49 CET] <pgorley> hi, i'm using av_image_get_buffer_size with align hardcoded to 1, is there a way to find out the right alignment? i'm on linux amd64
[20:00:12 CET] <BtbN> right alignment for what?
[20:03:25 CET] <pgorley> BtbN: i have an AVFrame and i want to get its size in bytes
[20:04:12 CET] <pgorley> so i'd like to know what alignment i need to use to get the exact size
[20:04:32 CET] <BtbN> I don't follow
[20:05:32 CET] <BtbN> numbers_of_planes*height*linesize_per_plane is your size
[20:05:44 CET] <BtbN> keep in mind that linesize can vary per plane
[20:06:44 CET] <pgorley> i'm calling av_image_get_buffer_size(frame->format, frame->width, frame->height, 1)
[20:08:04 CET] <BtbN> https://www.ffmpeg.org/doxygen/trunk/group__lavu__picture.html#ga24a67963c3ae0054a2a4bab35930e694
[20:10:55 CET] <pgorley> i found this https://ffmpeg.org/doxygen/1.2/group__lavc__picture.html#ga18a08bcb237767ef442fd5d3d1dd2084
[20:11:12 CET] <pgorley> which always assumes a linesize alignment of 1
[20:12:05 CET] <BtbN> that is _ancient_ documentation.
[20:12:20 CET] <pgorley> :/
[20:12:25 CET] <BtbN> ffmpeg 1.2
[20:12:41 CET] <BtbN> I don't understand your problem
[20:12:57 CET] <BtbN> If you already have a finished picture, you already know its size. Why calculate it?
[20:13:16 CET] <pgorley> i need to find the decoding speed per byte
[20:13:34 CET] <BtbN> per byte of raw picture?
[20:14:21 CET] <pgorley> i need to find out how long it takes to decode a picture, so i'm going with x bytes/second
[20:14:47 CET] <BtbN> Are you sure you don't want to use the amount of bytes sent into the decoder for that?
[20:15:08 CET] <BtbN> The pictures they output are massive compared to the encoded stream
[20:15:36 CET] <pgorley> maybe, how would i do that?
[20:16:10 CET] <BtbN> Packets just have their size on them as a field
[20:16:35 CET] <pgorley> oh wow, how did i miss that?
[20:16:45 CET] <BtbN> Frames also have their size right on them
[20:16:57 CET] <BtbN> you know their height, you know the amount of planes, and you know the linesize of each plane
[20:17:01 CET] <BtbN> so you have the size right there.
[20:17:31 CET] <pgorley> i'm an idiot, thanks
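A sketch of both sizes BtbN mentions, assuming a decoded AVFrame *frame in a planar 4:2:0 format such as yuv420p (chroma planes are half the luma height) and the AVPacket *pkt that was fed to the decoder:

    #include <libavutil/frame.h>
    #include <libavutil/imgutils.h>

    /* compressed size: it is a field right on the packet */
    int coded_bytes = pkt->size;

    /* raw size of the decoded frame: sum linesize * plane height */
    size_t raw_bytes = 0;
    for (int i = 0; i < AV_NUM_DATA_POINTERS && frame->data[i]; i++) {
        int h = (i == 0) ? frame->height : AV_CEIL_RSHIFT(frame->height, 1);
        raw_bytes += (size_t)frame->linesize[i] * h;
    }

    /* or let libavutil compute the tightly packed (align = 1) size */
    int packed_bytes = av_image_get_buffer_size((enum AVPixelFormat)frame->format,
                                                frame->width, frame->height, 1);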
[21:09:07 CET] <echelon> what would be the effect of using a ultrafast preset for the first pass and a slow preset for the second pass when encoding h.264?
[21:09:48 CET] <BtbN> most likely an error about useless first-pass data.
[21:10:14 CET] <echelon> ah.. constant rate-factor is incompatible with 2pass.
[21:10:31 CET] <BtbN> crf makes no sense for twopass
[21:10:51 CET] <BtbN> either you want a constant filesize, or a constant quality
[21:11:25 CET] <echelon> is there a -speed setting that you can use like with vp9?
[21:11:36 CET] <echelon> vp9 lets you use different speed parameters for first and second pass
[21:12:14 CET] <echelon> also, i didn't specify -crf
[21:21:14 CET] <thebombzen> echelon: you'll probably get an error about useless first-pass data
[21:21:48 CET] <thebombzen> you should probably be using crf encoding anyway with a vbv-bufsize and a vbv-maxrate
[21:22:54 CET] <thebombzen> you can set -crf:v, -maxrate:v and -bufsize:v and it'll do crf encoding but the average rate across a buffer of size "bufsize" won't exceed "maxrate"
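For example (the crf/maxrate/bufsize values are placeholders, just to show the shape of the command):

    ffmpeg -i input.mkv -c:v libx264 -preset slow \
           -crf:v 21 -maxrate:v 4M -bufsize:v 8M \
           -c:a copy output.mp4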
[21:23:52 CET] <echelon> well, i dunno what the optimal rate should be so i let ffmpeg decide :/
[21:26:58 CET] <echelon> constant rate-factor is incompatible with 2pass -_-
[21:27:05 CET] <echelon> i'm not even using crf
[21:33:22 CET] <echelon> thebombzen, BtbN https://paste.ee/r/q4PfO
[21:34:04 CET] <thebombzen> echelon: that's because CRF mode targets a specific quality, and 2-pass targets a specific filesize
[21:34:23 CET] <thebombzen> I recommend you don't use 2-pass encoding, and instead use CRF encoding, setting the maxrate and bufsize.
[21:34:48 CET] <thebombzen> However, it really depends on  your application.
[21:34:54 CET] <echelon> but i'm not using crf in the args
[21:34:58 CET] <echelon> hrm
[21:35:24 CET] <thebombzen> echelon: you need -pass 1
[21:35:29 CET] <thebombzen> in the first command
[21:35:36 CET] <echelon> d'oh
[21:35:37 CET] <echelon> thatnks
[21:35:40 CET] <echelon> thanks*
[21:35:41 CET] <thebombzen> and you need to set the bitrate in the second command
[21:35:55 CET] <thebombzen> if you don't set the bitrate, FFmpeg will assume CRF encoding with CRF 23
[21:35:58 CET] <thebombzen> I believe
[21:36:31 CET] <echelon> kk
[21:36:57 CET] <thebombzen> ideally also set the bitrate in the first command as well
[21:37:23 CET] <thebombzen> I'm not sure if it's actually necessary, but you should anyway even if it isn't
[21:37:54 CET] <echelon> ok
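So a two-pass run might look like the sketch below, with -pass 1 in the first command and the same target bitrate in both passes; the bitrate and preset are placeholders, and on Windows the null output is NUL rather than /dev/null:

    ffmpeg -y -i input.mkv -c:v libx264 -preset medium -b:v 2M -pass 1 -an -f null /dev/null
    ffmpeg -i input.mkv -c:v libx264 -preset medium -b:v 2M -pass 2 -c:a aac output.mp4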
[21:46:04 CET] <drathir> hi all... I'm wondering if it's normal/known, but it looks like ffmpeg fails to convert mp4 sources with vobsub streams, DVD-ripped with HandBrake...
[21:53:40 CET] <jfdh> is it possible to compile ffmpeg static with mmal support? I'm unable to compile on Raspbian with these configure options https://pastebin.com/bWB0DMRY
[21:55:06 CET] <BtbN> Does mmal have static libs?
[21:58:00 CET] <thebombzen> jfdh: what does 'Unable' mean
[21:58:13 CET] <thebombzen> does it produce an error? does it just produce a shared binary?
[21:59:09 CET] <jfdh> either I get the error ERROR: mmal not found or I can compile it without mmal into a static binary or I am able to compile it as dynamic binary but given I am building for Docker I prefer static
[21:59:25 CET] <relaxed> pretty sure omx is a shared lib
[22:00:46 CET] <jfdh> yeah my other problem is that the static binary complains about [h264_omx @ 0x214c9f0] /opt/vc/lib/libopenmaxil.so not found even though it's there
[22:01:11 CET] <jfdh> if I build with OMX/MMAL I am unable to compile it as static binary?
[22:01:34 CET] <BtbN> I doubt you can put something in Docker that can interface with a GPU encoder/decoder
[22:01:38 CET] <relaxed> correct
[22:02:10 CET] <BtbN> Also, if you're using Docker, why would you care about shared libs? It's Docker after all.
[22:03:57 CET] <jfdh> easier to update from my perspective
[22:05:32 CET] <BtbN> That makes no sense
[22:05:35 CET] <BtbN> it's Docker
[22:05:38 CET] <BtbN> the whole thing is static
[22:05:45 CET] <SortaCore> how do I get the socket of an RTSP input?
[22:07:30 CET] <relaxed> netstat?
[22:07:36 CET] <SortaCore> in C++
[22:09:55 CET] <SortaCore> so many classes and abstractions I can't figure out where it is, let alone how to get at it
[22:10:25 CET] <SortaCore> wish I was more familiar with how ffmpeg do
[22:10:33 CET] <JEEB> if you want to do IO yourself you just register AVIO callbacks
[22:11:02 CET] <JEEB> otherwise, you don't want to break the abstraction layers of libavformat
[22:12:55 CET] <SortaCore> I don't really want to do it myself
[22:13:15 CET] <SortaCore> I just want to call a peek function to see if reading a packet will make the thread wait for new data
[22:13:25 CET] <SortaCore> or if data was already received
[22:15:21 CET] <SortaCore> av_read_frame is currently blocking until it can read, which is okay if you manage to balance it with the main thread
[22:15:29 CET] <SortaCore> but you end up with massive CPU usage
[22:15:47 CET] <JEEB> yea, you want an event based thing, which currently lavf isn't
[22:16:13 CET] <JEEB> and lavf IIRC won't read you stuff automagically if you don't call av_read_frame()
[22:16:22 CET] <JEEB> so you can't just keep on peeking
[22:16:39 CET] <SortaCore> my easiest workaround was to use ioctlsocket() with FIONREAD to see pending byte count
[22:16:57 CET] <SortaCore> if it's zero, continue with main app, otherwise call read_frame, and loop back to ioctlsocket
[22:17:40 CET] <BtbN> ffmpeg is blocking and single threaded
[22:17:48 CET] <BtbN> so put it in a thread and send yourself notifications from that
[22:17:58 CET] <SortaCore> it is in a thread
[22:18:10 CET] <SortaCore> but if I get the thread to sleep, it falls behind more and more
[22:18:17 CET] <SortaCore> if I have a loop, I have to guess how many network packets it needs to read
[22:18:20 CET] <BtbN> I don't follow
[22:18:35 CET] <BtbN> You read, process, and when processing is done, read again
[22:18:46 CET] <BtbN> No idea how you'd fall behind there
[22:18:50 CET] <SortaCore> I have a function that processes, from av_read_frame input to outputting
[22:19:04 CET] <SortaCore> the CPU usage is too high, that's why it's a problem
[22:19:19 CET] <BtbN> If CPU usage is too high, you are bound to fall behind
[22:19:20 CET] <SortaCore> because reading the network will max out CPU usage until it has data
[22:19:26 CET] <BtbN> no it won't
[22:19:31 CET] <BtbN> It's blocking
[22:19:46 CET] <BtbN> reading a blocking socket does not cause any CPU usage
[22:20:40 CET] <SortaCore> okay, then how do I reduce CPU usage altogether
[22:21:28 CET] <BtbN> No idea, use a profiler and see what's using so much.
[22:21:44 CET] <BtbN> If this isn't some ultra low powered device, decoding should not use 100% CPU
[22:21:56 CET] <BtbN> unless it's HEVC or something
[22:22:03 CET] <SortaCore> it's not 100%, it's just maxed out a core
[22:22:13 CET] <SortaCore> well, all that the thread can get, anyway
[22:22:41 CET] <BtbN> sounds like you're doing some busy waiting somewhere
[22:28:59 CET] <SortaCore> from memory threads do blow up when you have them in an infinite loop of read, process, output
[22:29:38 CET] <BtbN> blocking system calls do not cause CPU load.
[22:33:06 CET] <SortaCore> all the profiler is telling me is that it's libmfxhw32.dll using the most
[22:36:09 CET] <BtbN> well, blame intel then
[22:37:13 CET] <JEEB> whatever the hell it's doing while you're waiting for input o_O
[22:38:31 CET] <SortaCore> anywhere from 7% to 60% o.O
[22:41:57 CET] <BtbN> I doubt your CPU usage happens while you're waiting for input
[22:42:00 CET] <BtbN> but during decoding
[22:44:38 CET] <SortaCore> hm
[22:47:05 CET] <raytiley_> anyone ever use ffmpeg w/ named pipes in windows (or I guess any OS really)? I have an application that has raw video / pcm audio and I was thinking of writing it to a named pipe to be consumed by ffmpeg
[22:47:43 CET] <JEEB> use NUT for raw video/audio in a single thing
[22:47:56 CET] <JEEB> also lets you have pretty proper timestamps on those packets
[22:48:06 CET] <raytiley_> what is NUT?
[22:48:35 CET] <JEEB> a "let's make a container" thing from FFmpeg, which in the end became mostly used for passing on raw video+audio over pipes etc
[22:48:50 CET] <JEEB> because it's streamable, has timestamps and can contain audio
[22:49:12 CET] <raytiley_> cool.. googled  and found this: https://ffmpeg.org/nut.html
[22:49:17 CET] <raytiley_> i'll read through that. Thanks!
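A sketch of the NUT-over-a-pipe idea JEEB describes, with made-up raw formats and sizes; on Windows the two `-` ends would be replaced by a named pipe path such as \\.\pipe\av:

    # producer: wrap raw video + PCM audio into NUT and write it to stdout;
    # consumer: read the NUT stream from stdin and encode it
    ffmpeg -f rawvideo -pix_fmt yuv420p -video_size 1280x720 -framerate 30 -i video.raw \
           -f s16le -ar 48000 -ac 2 -i audio.pcm \
           -c copy -f nut - \
      | ffmpeg -f nut -i - -c:v libx264 -c:a aac out.mp4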
[23:06:48 CET] <debianuser> cc___: If your question about mixing is still relevant, can you show what configs and soundcards you have? You can use the alsa-info script: https://wiki.ubuntu.com/Audio/AlsaInfo it should automatically suggest that you upload your data and give you a link to it (you can run it as a regular user, it doesn't need root).
[23:29:26 CET] <alexpigment> Hey guys, I have a unitasker app from a while back that passes a command line to FFMPEG. Unfortunately, it uses the scale filter which will occasionally fail when it calculates an odd number for the resolution
[23:29:41 CET] <alexpigment> In this case, it's way easier to replace FFMPEG than to rewrite the app
[23:30:13 CET] <alexpigment> so I'm wondering if there's a way in the FFMPEG source to make it read -vf scale=-1:360 as -vf scale=-2:360
[23:33:03 CET] <alexpigment> Or, I guess make x264 round up to the nearest even number
[23:36:52 CET] <c_14> hardcode a factor in libavfilter/scale.c around lines 180-200
[23:38:18 CET] <alexpigment> so like basically for (i = -1 {i==-2 ?
[23:38:34 CET] <alexpigment> (i'm summarizing, but still)
[23:39:21 CET] <c_14> hmm?
[23:39:49 CET] <alexpigment> ohhh
[23:39:50 CET] <alexpigment> nm
[23:39:54 CET] <c_14> probably just set the factor_w/factor_h both to 2 and drop lines 182-187
[23:39:57 CET] <alexpigment> i was still looking in libswscale
[23:40:16 CET] <kepstin> don't even have to drop lines 182-187, since the app is passing -1, but it won't hurt.
[23:40:18 CET] <alexpigment> sorry, now looking at the right scale.c ;)
[23:42:21 CET] <alexpigment> ok, i'm going to try this out. thanks c_14 & kepstin
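For reference, a sketch of the kind of edit c_14 and kepstin are pointing at in libavfilter/scale.c; exact line numbers and surrounding context vary between versions:

    /* Force every auto-computed dimension to be divisible by 2, so that the
     * app's -1 behaves like -2.  Originally both factors default to 1. */
    factor_w = 2;
    factor_h = 2;
    /* The existing "if (w < -1) factor_w = -w;" checks can stay or be
     * dropped; with the app always passing -1 they never trigger. */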
[00:00:00 CET] --- Wed Nov 15 2017


More information about the Ffmpeg-devel-irc mailing list