[Ffmpeg-devel-irc] ffmpeg.log.20170124

burek burek021 at gmail.com
Wed Jan 25 03:05:01 EET 2017


[00:00:06 CET] <kerio> (which is definitely in that same file)
[00:02:30 CET] <furq> https://s-media-cache-ak0.pinimg.com/originals/18/d8/51/18d85163fa7ff962ffff771796016f5f.jpg
[00:02:35 CET] <furq> are you sure you lined this up correctly
[00:02:53 CET] <kerio> MI3 didn't have that
[00:03:30 CET] <furq> well yeah but how am i going to make a joke about no copy protection
[00:06:54 CET] <kerio> furq: https://www.youtube.com/watch?v=386zfdJt8JM
[00:07:13 CET] <llogan> reminds me of pirated World of Xeen in '94. didn't get the manual with the passwords, so repeatedly typed "dwarf, gold, mine" until it worked. didn't take long.
[00:08:08 CET] <furq> yeah i had sam & max hit the road without the manual, and it had sam or max in a different outfit on every page and you had to put the right outfit on them
[00:08:14 CET] <furq> but you could just put every outfit on them at once and it worked
[00:08:47 CET] <furq> i think that's the only lucasarts game i ever bothered finishing
[00:09:02 CET] <llogan> kerio: what the hell? i know the dude that uploaded that vid. weird.
[00:09:21 CET] <kerio> anyway, this is what i get https://www.youtube.com/watch?v=wpHI9JBb9EM from OPENING.SAN
[00:09:54 CET] <kerio> the timing is also wrong
[00:10:01 CET] <kerio> compare that to https://www.youtube.com/watch?v=h4KfVDllyBA
[00:13:06 CET] <kerio> aww it appears to be merely a toy http://ffmpeg.org/pipermail/ffmpeg-devel/2009-January/068543.html :(
[02:06:30 CET] <fyroc_> I have a ffmpeg stream that outputs to different rtmp urls. I'm wanting to make a scheduler application that allows me to schedule and change the video.
[02:07:04 CET] <fyroc_> A playlist.txt file is out of the question because ffmpeg only looks at the file on start.
[02:08:35 CET] <fyroc_> and I tried doing a symbolic link and that didnt work either
[02:09:21 CET] <fyroc_> Is there a way to change video without interrupting the stream?
[02:10:55 CET] <thebombzen> fyroc: if you're streaming from a streamable file, you could try using a named pipe
[02:11:18 CET] <thebombzen> have ffmpeg read from a pipe, and then write a program or script to write to the pipe
[02:11:22 CET] <fyroc_> I looked into this. They are MP4 files
[02:11:37 CET] <thebombzen> but ffmpeg can remux them to .ts or something
[02:11:45 CET] <fyroc_> I came to a dead end on how to do named pipes
[02:12:15 CET] <thebombzen> the idea is to have your scheduler application call ffmpeg or libavformat to remux them to .ts, and feed that to a pipe, which you broadcast
[02:12:32 CET] <furq> the video will still drop if the pipe is empty though
[02:12:52 CET] <thebombzen> well yes
[02:13:14 CET] <thebombzen> ideally the feeder can write to the pipe faster than realtime
[02:14:14 CET] <fyroc_> Is there any code examples for this?
[02:16:04 CET] <furq> if sendcmd works with the movie source then you could maybe do something with zmq
[02:16:59 CET] <furq> i don't think you could create a dynamic playlist with that though
[02:18:26 CET] <furq> i wonder if you could abuse the hls demuxer into doing it
[02:20:37 CET] <fyroc_> Hm is ffmpeg not capable of doing this?
[02:24:54 CET] <fyroc_> I originally wanted to use ffmpeg because it was lightweight and didnt hog resources for just streaming mp4 files.
[02:25:19 CET] <furq> there's probably some ugly hack for doing it but nothing that comes to mind
[02:25:35 CET] <furq> streaming in ffmpeg is generally a bit of an afterthought
[02:52:33 CET] <DHE> there's ways around it. the pipe ends if the last process with the write side closes it. so the catch is to not let that happen
[02:52:47 CET] <DHE> something like:  run-my-schedule.sh > namedpipe
[02:53:03 CET] <DHE> and this script will repeatedly run "ffmpeg -i next-thing.mp4 -c copy -f mpegts -"
[02:53:15 CET] <DHE> this is just something I pulled out of my ass. someone here can probably refine it
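DHE's admittedly off-the-cuff idea could be fleshed out roughly like this (an untested sketch — the pipe path, playlist file, and rtmp URL are all hypothetical):

```shell
#!/bin/sh
# Keep one writer attached to a named pipe so the reader-side ffmpeg
# never sees EOF between scheduled files.
mkfifo /tmp/stream.pipe

# Reader side (run once, in another shell): broadcast whatever arrives.
#   ffmpeg -f mpegts -i /tmp/stream.pipe -c copy -f flv rtmp://example/live

# Writer side: remux each scheduled file to MPEG-TS and append it to the
# pipe. The single redirection on the brace group holds the write end
# open for the whole loop, so the pipe never closes between files.
{
    while read -r next; do
        ffmpeg -re -i "$next" -c copy -f mpegts -
    done < playlist.txt
} > /tmp/stream.pipe
```

As furq noted above, the stream will still drop if the writer falls behind and the pipe runs dry.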
[03:28:19 CET] <fyroc_> Hm.
[06:24:18 CET] <damdai> i found a bug in VLC but they are not willing to fix it
[09:20:03 CET] <mixfix41> does ffmpeg need to get recompiled if you switch to oss
[09:20:23 CET] <mixfix41> i can't remember what i do to get ffmpeg going with oss, should've noted it
[09:38:59 CET] <kerio> oss is da best
[10:41:18 CET] <flomko> Hi all! i have a strange problem playing an mp3 - it freezes for a few milliseconds at random times. the diff between the files: the freezing track has encoder LAME3.99r, the normal track has no encoder metadata.
[10:41:22 CET] <flomko> http://pastebin.com/q4LBNrVp
[10:42:29 CET] <flomko> if i re-encode the working song with the libmp3lame codec, the freeze appears
[10:43:54 CET] <flomko> how can i find out which encoder a song used?
[10:44:34 CET] <ikevin> if it's not in the metadata you can't
[10:52:45 CET] <flomko> maybe i need to upgrade ffmpeg. i saw a codec mp3pro; maybe without support for that codec i can't read its metadata
[10:52:48 CET] <flomko> &
[10:52:50 CET] <flomko> ?
[11:14:57 CET] <SouLShocK> shouldn't topic say 3.2.2 is released?
[11:50:38 CET] <kam187> hi guys
[11:51:16 CET] <kam187> i'm putting h264 nalu's into a mp4 container using ffmpeg, and it works most of the time.  But on some machines the first packet seems corrupted
[11:51:35 CET] <kam187> is there any way to analyse the output file with ffmpeg etc to look for the specific problem?
[11:56:57 CET] <kam187> nvm got it
[11:57:10 CET] <kam187> https://www.irccloud.com/pastebin/89PP9XR7/
[12:05:47 CET] <donEduardo> hi everyone
[12:07:03 CET] <donEduardo> short question: is there a "median filter" in ffmpeg? It should work as follows: 1st frame of the output video is the median of frames 1-10 in the input video. 2nd frame in the output is the median of frames 2-11 in the input video and so on
[12:07:16 CET] <donEduardo> now, i'm doing this with convert, but thats slow as hell
[12:15:29 CET] <kerio> when you say median
[12:41:05 CET] <daslicht> when creating a image thumbnail from a x264 video , its colors are a lil bit different
[12:41:06 CET] <daslicht> https://jsbin.com/qidahe/1/edit?html,output
[12:41:28 CET] <daslicht> is there a way to compensate this difference please?
[12:41:30 CET] <c_14> probably yuv->rgb conversion
[12:41:39 CET] <daslicht> yeah
[12:41:50 CET] <daslicht> is there a workaround ?
[12:45:39 CET] <c_14> You can try using the zscale filter which is supposedly better at format conversions
[13:03:38 CET] <daslicht> ok thanks
[13:08:40 CET] <donEduardo> @keriko by median, i mean the average value of each pixel. i'm looking for a similar result as in "convert *.jpg -evaluate-sequence median out.jpg"
[13:08:47 CET] <llamapixel> The image contents can help with understanding what compression to use.
[13:08:57 CET] <donEduardo> sorry, i meant #kerio :)
[13:09:07 CET] <kerio> the mean then
[13:09:32 CET] <donEduardo> kerio, yes. sorry, english is not my native language
[13:09:54 CET] <durandal_1707> donEduardo: no, but see atadenoise, it does almost exactly that
[13:15:05 CET] <daslicht>  c_14: zscale just scales? i don't scale the video at all at the moment, i just need a thumbnail which looks the same as the input
[13:15:54 CET] <durandal_1707> daslicht: it also converts between colorspaces
[13:20:34 CET] <daslicht> ahh ok thanks
[13:25:02 CET] <daslicht> anyone knows a video conversion service which takes yuv -> rgb differences into account ?
[13:29:16 CET] <furq> daslicht: you could also create jpg thumbnails, those are yuv
[13:29:23 CET] <furq> youtube thumbnails are yuv444p jpg
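furq's suggestion might look something like this (an untested sketch; the input filename is hypothetical): grabbing a single frame and keeping it in full-range 4:4:4 YUV, so no YUV→RGB conversion happens on the way to the JPEG.

```shell
# -frames:v 1 takes one frame; -q:v 2 is a high JPEG quality setting;
# yuvj444p is full-range 4:4:4 YUV, which the mjpeg encoder accepts.
ffmpeg -i input.mp4 -frames:v 1 -q:v 2 -pix_fmt yuvj444p thumb.jpg
```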
[13:31:13 CET] <daslicht> aha!
[13:31:32 CET] <daslicht> let me try this
[13:31:51 CET] <daslicht> currently we are using coconut as service
[13:39:39 CET] <donEduardo> thanks, durandal_1707, but I can't figure out the parameters. I'm trying to remove people from a timelapse of a building being built; one picture was taken every minute. I thought these options should remove all "fast" changes in the picture: -vf atadenoise=0a=0.3:1a=0.3:2a=0.3:0b=5:1b=5:2b=5
[13:48:03 CET] <durandal_1707> donEduardo: it cant, you need to modify filter code
[13:50:07 CET] <donEduardo> @durandal_1707 ah thanks... so I think I'll stick with convert ATM, it yields nice results... but takes looooooooooooots of time ;)
[13:52:03 CET] <kerio> and, if you're actually using jpegs as you wrote above, will look like super shit probably
[13:52:45 CET] <kerio> what does it even mean to average in yuv space
[13:52:47 CET] <kerio> is yuv linear
[13:53:12 CET] <kerio> srgb is definitely not linear
[13:54:15 CET] <durandal_1707> donEduardo: what convert command you use?
[13:56:18 CET] <daslicht> https://jsbin.com/karava/2/edit?html,output
[13:56:31 CET] <daslicht> that's a jpeg created from an x264 source
[13:56:39 CET] <daslicht> looks 'ok' in chrome
[13:56:48 CET] <daslicht> but in safari it is not acceptable
[13:57:07 CET] <kerio> daslicht: i see http://i.imgur.com/r84gpCb.jpg and https://storage.googleapis.com/product-videos-3c793.appspot.com/anita-hass-2/products/9096540999/videoData/poster.jpg with identical colors
[13:57:23 CET] <daslicht> try that in safari
[13:57:26 CET] <kerio> (if i zoom them so that the squares have equal size i can't tell the difference when flicking between the tabs)
[13:57:28 CET] <kerio> in safari
[13:57:30 CET] <kerio> on macos 10.12
[13:57:39 CET] <furq> looks fine to me in firefox
[13:57:45 CET] <daslicht> when I press play i see a dramatic color jump
[13:57:56 CET] <furq> although firefox and chrome are probably playing the webm
[13:57:59 CET] <kerio> i don't see a play button
[13:58:00 CET] <furq> firefox is here
[13:58:06 CET] <donEduardo> @durandal_1707 convert from imagemagick. I wrote a little python wrapper that combines 10 pictures like a moving average over the pictures taken in one minute intervals. this removes any flicker from different sun illumination and also removes most of the workers. then, i use ffmpeg to combine the pictures into a video
[13:58:25 CET] <furq> fwiw i don't think there's any point having an ogm fallback in this day and age
[13:58:33 CET] <furq> s/ogm/ogv
[13:58:39 CET] <kerio> donEduardo: can't you just read the raw video in numpy
[13:58:54 CET] <daslicht> in safari it gets brighter
[13:58:57 CET] <furq> i'm not aware of any recent version of any browser which only supports ogv out of those three formats
[13:59:16 CET] <kerio> daslicht: it outright doesn't work for me in safari :\
[13:59:21 CET] <furq> it gets brighter here but that's because there's a gray overlay on the video before it loads
[13:59:25 CET] <furq> s/loads/plays/
[13:59:25 CET] <daslicht> https://jsbin.com/karava/edit?html,output
[13:59:42 CET] <daslicht> i have moved up the mp4
[13:59:44 CET] <kerio> i see a video thing very briefly
[13:59:48 CET] <kerio> and then it disappears
[13:59:51 CET] <daslicht> hm
[13:59:55 CET] <donEduardo> kerio good idea. well it's jpegs as an input, but that might work faster than imagemagick
[14:00:05 CET] <kerio> donEduardo: hold on is the source format just jpeg
[14:00:08 CET] <kerio> i thought it was a video file
[14:00:55 CET] <donEduardo> kerio no it's jpeg, and the imagemagick way works good, but takes very long. so i thought maybe i'm faster with ffmpeg, because it's very fast in blending video streams
[14:01:21 CET] <kerio> are these RGB jpegs
[14:01:30 CET] <furq> rgb jpeg isn't supported by anything afaik
[14:01:36 CET] <kerio> ...really
[14:01:39 CET] <kerio> i thought it was always rgb :o
[14:01:42 CET] <furq> no
[14:01:45 CET] <kerio> shows what i know
[14:01:49 CET] <furq> it's always yuv full range
[14:01:52 CET] <furq> s/always/normally/
[14:02:00 CET] <furq> getting a lot of use out of irc sed today
[14:02:03 CET] <kerio> still, yuv space is not linear either right
[14:02:25 CET] <furq> yuv is linear
[14:02:29 CET] <kerio> oh nice :o
[14:03:23 CET] <test222> getting error when trying to copy a webm
[14:03:25 CET] <test222> http://pastebin.com/xnPyWNVt
[14:03:50 CET] <furq> but yeah
[14:03:54 CET] <furq> http://vpaste.net/sbeVM
[14:04:02 CET] <kerio> donEduardo: read with ffmpeg, output rawvideo to stdout, python script that reads 10 * width * height * Bpp in a numpy array and averages over one dimension and outputs average to stdout, ffmpeg reading rawvideo from stdin?
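kerio's pipeline could be sketched like this (an illustration, not a tested pipeline — the ffmpeg commands in the docstring and all sizes are hypothetical); the averaging step itself is plain numpy:

```python
import numpy as np

def average_frames(raw, width, height, bpp=3, window=10):
    """Average `window` consecutive rawvideo frames into one output frame.

    `raw` is a bytes object holding exactly `window` frames, as would be
    produced by something like
        ffmpeg -i in.mp4 -f rawvideo -pix_fmt rgb24 -
    on the input side; the result can be fed back with
        ffmpeg -f rawvideo -pix_fmt rgb24 -s WxH -i - out.mp4
    Returns one averaged frame as bytes.
    """
    frames = np.frombuffer(raw, dtype=np.uint8)
    frames = frames.reshape(window, height, width, bpp)
    # Accumulate in a wider dtype to avoid uint8 overflow, then round back.
    mean = frames.astype(np.uint32).sum(axis=0) / window
    return mean.astype(np.uint8).tobytes()
```

A driver script would read `window * width * height * bpp` bytes per step from ffmpeg's stdout, write the averaged frame to the second ffmpeg's stdin, and slide the window by one frame.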
[14:04:05 CET] <furq> i don't think ffmpeg even supports rgb jpeg
[14:04:16 CET] <donEduardo> yes, they are 8bit sRGB
[14:04:33 CET] <kerio> donEduardo: apparently that's just a color profile thing
[14:04:51 CET] <donEduardo> kerio JPEG 2048x1536 2048x1536+0+0 8-bit sRGB 200KB 0.000u 0:00.000
[14:05:30 CET] <furq> yeah sRGB is just the colour profile
[14:05:33 CET] <furq> idk how or if ffmpeg handles that
[14:05:48 CET] <donEduardo> ah i see
[14:05:51 CET] <furq> kerio: http://vpaste.net/C4K1l
[14:05:59 CET] <furq> so the builtin encoder definitely doesn't handle it
[14:06:08 CET] <furq> maybe one of the external jpeg encoders does, but i don't think anything supports it anyway
[14:07:13 CET] <donEduardo> but kerio, that brings me to an idea... i just read the first 10 pictures into an array of 10 arrays, numpy them with average, and write out the first picture. then i replace the first array with the 11th picture and start averaging again, and so on
[14:07:18 CET] <kepstin> hmm? aren't most YUV spaces a direct linear conversion from some rgb space? As such, I'd expect then to still have a gamma curve, not be linear.
[14:07:34 CET] <kerio> donEduardo: are you sure you want 10
[14:07:37 CET] <furq> idk i just read that on the internet
[14:07:39 CET] <kerio> and not a power of 2
[14:07:49 CET] <kerio> because ffmpeg can do powers of 2
[14:07:55 CET] <kerio> http://video.stackexchange.com/a/17257
[14:08:44 CET] <donEduardo> kerio don't know yet, i'm still testing that. if it's too low, the picture is not calm enough. if it's too high, it's somehow ghosty. but that's a good idea, i'll stick with powers of 2
[14:09:00 CET] <kerio> you could try tblend 3 times
[14:09:04 CET] <kerio> for 8 frames in 1
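kerio's tblend idea, written out as a filtergraph (an untested sketch; filenames hypothetical): each tblend averages frame n with frame n-1, and framestep=2 keeps every second frame, so three rounds collapse roughly 8 input frames into 1 output frame.

```shell
ffmpeg -i input.mp4 -vf \
  "tblend=all_mode=average,framestep=2,tblend=all_mode=average,framestep=2,tblend=all_mode=average,framestep=2" \
  averaged.mp4
```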
[14:09:20 CET] <kerio> but idk
[14:09:25 CET] <kerio> the numpy idea is probably the simplest here
[14:10:26 CET] <donEduardo> tblend sounds interesting... i have to check that. and i will try the numpy way also
[14:13:57 CET] <durandal_1707> tblend can average only 2 frames
[14:22:50 CET] <kepstin> hmm. a median filter might get better results than average, but you need more than 2 frames for that..
[14:27:58 CET] <durandal_1707> median is also slower to calculate
[14:29:19 CET] <kerio> what median?
[14:29:25 CET] <kerio> multivariate medians have multiple definitions
[14:30:38 CET] <kepstin> individual medians on each colour plane would probably be "good enough" for this.
[14:37:34 CET] <donEduardo> well the median is the value in the middle of the list if you sort all the values. this is better than mean if you want to remove people. if we think grayscale, and a "bright" person (value: 220) is walking in front of a dark pixel (value: 12) you get, if you take 5 images, for example 11,13,12,219,12. the median is 12 (sorted list: 11, 12, 12, 13, 219, value in the middle: 12). if you take the mean, you'll get 53.4.
[14:38:11 CET] <donEduardo> for compensating for different lighting situations, the mean is better
[14:43:11 CET] <donEduardo> for example, i got 10 pictures, one minute apart, where a car is driving around. with a median operation, it disappears: http://iten.pro/median.jpg. with a mean, the slower it moves, the more clearly you still see it: http://iten.pro/mean.jpg
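donEduardo's grayscale example can be checked directly in numpy (the mean of those five samples works out to 53.4):

```python
import numpy as np

# A bright person (value ~220) passes in front of a dark pixel (~12)
# in one of five exposures of the same pixel.
samples = np.array([11, 13, 12, 219, 12])

median = np.median(samples)  # the outlier vanishes entirely
mean = np.mean(samples)      # the outlier bleeds through

print("median:", median)
print("mean:  ", mean)
```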
[14:52:48 CET] <donEduardo> strange... mean is faster in numpy than with imagemagick, median is slower in numpy than imagemagick...
[14:59:17 CET] <test222> no, I don't think that helps
[15:28:28 CET] <fyroc> I found a solution
[15:28:32 CET] <fyroc> https://www.sk89q.com/
[15:28:52 CET] <fyroc> It uses liquidsoap and libav
[15:34:39 CET] <NapoleonWils0n> hi all
[15:34:58 CET] <NapoleonWils0n> im trying to help someone record 50fps video from an m3u8 playlist
[15:35:26 CET] <NapoleonWils0n> if they manually add "-map 0:0 -map 0:5 -c copy" they can record the video
[15:36:04 CET] <NapoleonWils0n> if they dont use the mapping option ffmpeg doesnt record the 50fps video
[15:36:23 CET] <NapoleonWils0n> i would have thought ffmpeg should auto detect the highest bitrate video and use that
[15:38:33 CET] <DHE> ffmpeg's HLS player doesn't handle bitrate shifting automatically
[15:39:06 CET] <DHE> you're best off handling the selection of exactly which variant you want and then having ffmpeg take only that one
[15:39:24 CET] <NapoleonWils0n> so will need to manually map the stream
[15:39:31 CET] <NapoleonWils0n> cheers DHE
[15:40:32 CET] <NapoleonWils0n> will have to knock up a bash script and case statement with ffprobe to let the user select the mapping then pass off to ffmpeg
[15:46:39 CET] <DHE> wasn't what I meant.... but oh well
[16:10:32 CET] <SouLShocK> ffmpeg -i input.wav -af loudnorm=I=-16:TP=-1.5:LRA=11:print_format=json -f null - is very slow, running at 5.4x realtime. is loudnorm supposed to be that slow? it's running on a core i7 2.5ghz cpu
[16:14:45 CET] <SouLShocK> for comparison: ffmpeg -i input.wav test.mp4 produces at 25x realtime
[17:10:31 CET] <kepstin> SouLShocK: loudnorm is quite slow, yes, the calculations it does are pretty intensive.
[17:12:21 CET] <kepstin> (probably mostly due to the resampling required for peak handling, which i don't think you can disable)
[17:24:18 CET] <SouLShocK> Bummer. And loudnorm is the only R128 normalizer?
[17:24:55 CET] <SouLShocK> I work for the Danish broadcast corporation, which is part of EBU,  so that standard makes perfect sense for us
[17:28:22 CET] <SouLShocK> Interestingly I noted that it is not cpu bound. In my laptop it only used about 15% cpu. My cpu is hyperthreaded quadcore so my guess is that loudnorm is single threaded?
[17:28:57 CET] <SouLShocK> Since 15% is about 1/8 of the total resources = 1 core
[17:30:51 CET] <kerio> uefi-edk2-bhyve still has broken dependencies :|
[17:48:26 CET] <kepstin> SouLShocK: loudnorm may or may not be suitable depending what you're trying to do. Keep in mind it's designed as a realtime thing where it dynamically adjusts loudness within a stream
[17:49:05 CET] <kepstin> if you just want to adjust the integrated loudness of a single program, scanning with the 'ebur128' filter, then applying the gain with a volume filter in a second pass is probably faster
[17:50:18 CET] <kepstin> but loudnorm is great if you have an arbitrary input audio stream, and you just want to keep it within certain limits - it's just that doing that is computationally more expensive than a simple scan+volume adjust
[17:51:07 CET] <kepstin> and yes, basically all ffmpeg filters are single-threaded
[17:51:19 CET] <kepstin> (the core ffmpeg filter chain stuff is single-threaded)
[17:58:18 CET] <SouLShocK> Ah ok. It would indeed be used on each program separately. I'll try the ebur128 filter and then use a volume filter
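kepstin's two-pass approach could look roughly like this (an untested sketch; the filenames, target loudness, and the example measurement are all hypothetical):

```shell
# Pass 1: measure integrated loudness. The ebur128 summary is printed
# to stderr; look for the line "I: -xx.x LUFS".
ffmpeg -i program.wav -af ebur128 -f null -

# Pass 2: apply a plain gain of (target - measured). E.g. if pass 1
# reported I: -18.7 LUFS and the target is -23 LUFS, correct by -4.3 dB.
ffmpeg -i program.wav -af volume=-4.3dB corrected.wav
```

Since the volume filter is a trivial per-sample multiply, the second pass should run far faster than loudnorm's dynamic processing.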
[18:07:00 CET] <demonswaltz> I've cross compiled for ARM and am getting this error message: http://pastebin.com/c1dWUdcU  Googling is coming up empty.  What did I do wrong?
[18:09:56 CET] <linuxloon> I have opened a bug report on an issue with 24-bit PCM audio not being recognized. I am attempting to upload a sample file to the ftp server upload.ffmpeg.org but I am not getting an answer from the server. Is there another way to upload sample files?
[18:13:01 CET] <furq> demonswaltz: what's the target system
[18:13:26 CET] <linuxloon> fedora24
[18:14:04 CET] <linuxloon> trying from command line ftp and have tried gui tools like filezilla
[18:14:25 CET] <demonswaltz> chip I compiled using RPi instructions.
[18:39:17 CET] <faLUCE> hello. After muxing a packet, with  av_interleaved_write_frame(muxerContext, &encodedAudioPkt), how can I get the muxed data? The only way I found is to access the  AVIOContext buffer when I close it with avio_close_dyn_buf(muxerContext->pb, &muxerAVIOContextBuffer); but there should be a better way
[19:07:59 CET] <faLUCE> the problem is that   avio_open_dyn_buf() opens a WRITE_ONLY buffer... Then, how can I access it, in order to get the muxed frame? I don't find any other method for accessing it....
[19:10:22 CET] <Jpack> Hello, I've been trying to convert a raw .pcm file to .m4a and my FFMPEG build just runs without any output except what is listed below: http://stackoverflow.com/questions/41816406/converting-from-raw-pcm-to-m4a
[19:10:54 CET] <Jpack> This is the command I'm using:
[19:11:06 CET] <BtbN> you should update your ffmpeg
[19:11:15 CET] <BtbN> no idea if it helps with the issue, but it's old.
[19:12:32 CET] <Jpack> = new String[]{
[19:12:32 CET] <Jpack>                 "-f", "s16le", "-sample_rate", "44100", "-channels", "2",
[19:12:32 CET] <Jpack>                 "-i",
[19:12:32 CET] <Jpack>                 mDecodedBeat.getAbsolutePath(),
[19:12:32 CET] <Jpack>                 "-c:a", "aac", "-strict", "experimental",
[19:12:32 CET] <Jpack> I'm using the Android wrapper and its using 3.0.1, is there a newer update availa
[19:12:40 CET] <Jpack> *sorry
[19:13:55 CET] <BtbN> 3.2.2 just got released, and if you can, just use latest master
[19:15:22 CET] <Jpack> It doesn't seem like an option with the library I'm using sadly
[19:16:06 CET] <BtbN> seems like a horrible library then. And I also can't think of a reason why reading raw pcm would stall. It would happily read any kind of file with that.
[19:16:36 CET] <BtbN> Unless the OS blocks while opening/reading the file, or the output file.
[19:16:45 CET] <BtbN> In which case there's not much ffmpeg can do.
[19:20:04 CET] <Jpack> The pcm file that I am trying to encode is the result of mixing two pcm tracks together. The short array plays the audio data fine. I save that data into the file and that is what I call with FFMPEG.
[21:15:49 CET] <rjp421> does anyone happen to have the long rpi-cam using raspivid cmd i pasted the other week? cant find a channel archive on google
[21:16:38 CET] <rjp421> its not in my bash history and the cmd i tried to remake doesn't work
[21:17:27 CET] <llogan> rjp421: http://lists.ffmpeg.org/pipermail/ffmpeg-devel-irc/
[21:18:10 CET] <rjp421> ty ill check
[21:18:55 CET] <furq> rjp421: http://vpaste.net/Wq2PT
[21:19:11 CET] <furq> also the logs are at http://ffmpeg.gusari.org/irclogs/
[21:19:20 CET] <rjp421> furq, awesome ty!
[21:19:36 CET] <faLUCE> Does anyone know that?   after av_interleaved_write_frame(muxerContext, &encodedAudioPkt); how can I access the MUXED frame, without having to inspect the produced file?
[21:20:17 CET] <faLUCE> this is a very basic feature, and it's really hard to find a doc or page which explains that
[21:20:55 CET] <faLUCE> do I have to check muxerContext->pb->buffer ?
[21:21:14 CET] <BtbN> I'd guess you don't. Keep it in memory and write it to a file yourself if you also need to access it.
[21:23:20 CET] <faLUCE> BtbN: how can I keep it in memory?
[21:23:45 CET] <faLUCE> the API is really obscure for that.
[21:26:09 CET] <BtbN> With a dyn_buf, for example.
[21:26:27 CET] <faLUCE> I had a connection problem
[21:26:40 CET] <faLUCE> how can I keep it in memory? the API is obscure
[21:27:07 CET] <faLUCE> I tried with avio_open_dyn_buf(),  but the buffer is not accessible for reading
[21:27:15 CET] <BtbN> avio_get_dyn_buf...
[21:27:38 CET] <BtbN> and you also get it when closing it
[21:28:26 CET] <BtbN> If you need something more complex, you need to implement your own avio by implementing the callbacks avio_alloc_context takes.
[21:28:28 CET] <faLUCE> BtbN: I don't have this function in my API
[21:28:35 CET] <BtbN> You have a strange API then.
[21:28:53 CET] <faLUCE> BtbN: I use 2.8.7
[21:29:08 CET] <BtbN> Yeah, that's quite ancient. No idea what works and doesn't work there.
[21:29:33 CET] <BtbN> avio_get_dyn_buf was actually added to master 17 days ago. So I guess with the old API you only get the buffer on avio_close_dyn_buf.
[21:29:52 CET] <faLUCE> BtbN: yes, and this is the problem
[21:30:03 CET] <BtbN> And if you intend to mux a lot of data, that API seems suboptimal anyway.
[21:30:10 CET] <BtbN> Yeah, your old ffmpeg is a problem, indeed.
[21:30:12 CET] <faLUCE> I already tried with the way open-close-read
[21:30:19 CET] <faLUCE> this is something weird
[21:30:41 CET] <BtbN> Well, sounds like you will have to implement your own callbacks via avio_alloc_context then.
[21:30:52 CET] <BtbN> And you should update
[21:31:53 CET] <faLUCE> BtbN: tried that too. It's completely unsafe, weird and broken
[21:32:06 CET] <BtbN> You did something wrong then
[21:32:11 CET] <faLUCE> BtbN: really not
[21:32:53 CET] <faLUCE> BtbN: if I use a callback, I can't use the dyn_buf
[21:33:19 CET] <faLUCE> and if I don't use the dyn_buf, then the av_input_open function is a mess, for a non-file input
[21:33:25 CET] <faLUCE> (output)
[21:34:09 CET] <BtbN> No idea what you're getting at. If you alloc and implement the AVIO Context yourself, you obviously don't use the pre-defined dynbuf or file-output ones anymore.
[21:36:47 CET] <BtbN> There even is an example. For reading though, but you should get the idea: https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/avio_reading.c
[21:38:04 CET] <faLUCE> muxerAVIOContext = avio_alloc_context(muxerAVIOContextBuffer, muxerAVIOContextBufferSize, 0, &bd, &read_packet, NULL, NULL);  <--- I allocated in this way the AVIOContext. This is exactly what you said.
[21:38:20 CET] <BtbN> didn't you want to _write_?
[21:38:30 CET] <faLUCE> BtbN: no, you are wrong. The example refers to a FILE
[21:38:41 CET] <BtbN> ...
[21:38:49 CET] <faLUCE> I don't have a file, I have a living stream
[21:39:03 CET] <BtbN> The example shows how to implement a custom avio context
[21:39:07 CET] <BtbN> do with it whatever pleases you
[21:39:38 CET] <faLUCE> BtbN: did you already try with a non-file stream? I can assure you that it's completely weird with the api
[21:39:47 CET] <faLUCE> I already tried to modify that example
[21:40:10 CET] <BtbN> You just seem to be confused about how the API works. If that is your call, and you want to write, you are implementing the wrong callback.
[21:40:29 CET] <BtbN> The example shows how to read _something_, and uses a file as example.
[21:40:50 CET] <BtbN> You want to write I guess, so adapt accordingly and implement the write callback
[21:41:21 CET] <faLUCE> BtbN: then I have to use a write callback instead of the read callback
[21:41:56 CET] <faLUCE> BtbN: then I have to call avformat_open_input(&fmt_ctx, NULL, NULL, NULL);   ?
[21:42:11 CET] <faLUCE> with fmt_ctx with a write callback ?
[21:47:11 CET] <BtbN> Not sure why you want to open_input when you want to output something.
[21:47:16 CET] <BtbN> avformat_alloc_output_context2
[21:57:57 CET] <faLUCE> BtbN: so, you say: 1) avformat_alloc_output_context2 -->  muxerContext   2) avio_ctx = avio_alloc_context(...) with a write callback  3) muxerContext->pb = avio_ctx;     ?
[22:08:09 CET] <faLUCE> BtbN: it worked, thanks. I forgot to set the write_flag for the callback. Now it's ok
[23:29:51 CET] <Thisguy_> Whenever I transcode a particular video clip to mpeg4, its quality suffers immensely. I'm doing "ffmpeg -i video.mkv -vcodec mpeg4 outputfile.mp4". There is no sound in the source video, which is why I'm not specifying an audio codec of any kind. What am I doing wrong?
[23:30:11 CET] <Thisguy_> (Source video codec is H.264)
[23:32:32 CET] <kerio> Thisguy_: which bitrate or crf did you specify?
[23:33:13 CET] <Thisguy_> I specified neither, then tried -b:v 64k. Same result with the latter. Where in the command should I specify a constant rate factor?
[23:33:38 CET] <kerio> ...what are resolution and framerate of your video?
[23:34:14 CET] <Thisguy_> The source's resolution is 800x600... I think, I'm not really clear on that anymore... and the framerate is 30fps.
[23:34:28 CET] <kerio> ...64k? really?
[23:34:35 CET] <Thisguy_> I have no idea what I'm doing
[23:34:44 CET] <kerio> that's much, much less than what you'd expect to use for decent quality *audio*
[23:34:52 CET] <kerio> do you have any particular reason to use mpeg4?
[23:35:13 CET] <Thisguy_> Not really, I just want to compress it a bit... but not THIS much.
[23:35:21 CET] <Thisguy_> As far as I know, it's uncompressed.
[23:35:27 CET] <kerio> but you said it's x264
[23:35:42 CET] <Thisguy_> I am ALSO not clear on THAT. Sorry!
[23:35:46 CET] <kerio> can you pastebin the output of `ffprobe path/to/video`?
[23:37:29 CET] <Thisguy_> http://pastebin.com/6YNfzt9i I also still have the command I used to actually create the source.
[23:38:07 CET] <Thisguy_> Um
[23:38:11 CET] <Thisguy_> It occurs to me I missed
[23:38:16 CET] <Thisguy_> Lemme try that again
[23:38:57 CET] <Thisguy_> http://pastebin.com/Thvxf6At
[23:39:10 CET] <llogan> Using the encoder "mpeg4" will result in MPEG-4 Part 2 video. Is that what you want?
[23:39:24 CET] <thebombzen> It looks like this is the result of a screen capture
[23:39:44 CET] <shincodex> any good high level videos on encoding
[23:39:53 CET] <Thisguy_> Not sure. I have a short video in high quality that I want to compress a bit, and I don't mind losing a little quality on the way there. I chose mpeg4 because I *thought* I had heard of it.
[23:40:00 CET] <shincodex> other than me googling encoding basics or encoding high level concepts
[23:40:10 CET] <thebombzen> if you're interested in converting it to a more reasonable bitrate, I'd recommend using: "ffmpeg -i input -c:v libx264 -crf:v 23 output.mkv"
[23:40:33 CET] <thebombzen> x264 has a nice feature where you can set the quality ("crf") and it autoadjusts the bitrate to give you the required quality
[23:40:44 CET] <llogan> add "-pix_fmt yuv420p" if you want it to be playable on QuickTime, WMP, and other crap players
[23:41:07 CET] <Thisguy_> So most Windows users will be able to open it in its *current* state, and all I'm doing therein is making it a reasonably sized file for its length.
[23:41:13 CET] <thebombzen> also crf is "Constant Rate Factor" and a *lower* number is higher quality
[23:41:20 CET] <Thisguy_> I used crf 0 to get it.
[23:41:31 CET] <thebombzen> Yea crf 0 is perfectly lossless and completely unnecessary
[23:41:57 CET] <thebombzen> crf 16 is considered "visually lossless" in most cases but you can usually go higher than that
[23:42:19 CET] <shincodex> why not enforce a constant bit rate instead of having variable?
[23:42:20 CET] <thebombzen> 23 is the default but if you find that too low quality you can try something like 20
[23:42:35 CET] <thebombzen> shincodex: because constant bitrate gives worse quality/bitrate ratio
[23:42:48 CET] <thebombzen> variable bitrate encoding gives the best quality/bitrate ratio
[23:42:57 CET] <Thisguy_> I didn't know that. I was worried anything other than 0 would affect the output quality in some noticeable way once someone got a hold of it with video editing software.
[23:43:18 CET] <thebombzen> 0 is perfectly lossless as in you can reconstruct the input bit-for-bit but it's not necessary to do that
[23:43:22 CET] <Thisguy_> Originally I wrote the script for a totally different reason, just repurposed it because it was handy.
[23:43:50 CET] <thebombzen> I'd recommend something in the 18-23 range. the general rule is to pick the highest number you're willing to accept
[23:44:27 CET] <thebombzen> crf 16 is nearly always not noticeably different from the original, but it's often still unnecessarily high bitrate
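Pulling thebombzen's and llogan's suggestions together, Thisguy_'s command might end up like this (a sketch; the filenames and the chosen crf/preset values are examples, not prescriptions):

```shell
# CRF-controlled x264 re-encode: quality-targeted variable bitrate,
# with 4:2:0 chroma for compatibility with picky players.
ffmpeg -i video.mkv -c:v libx264 -crf 20 -preset medium -pix_fmt yuv420p output.mp4
```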
[23:44:43 CET] <Thisguy_> Okay, cool. Will crf 18 work out for me during screen capture in such a way as to allow a higher framerate at the same overhead resource cost?
[23:45:02 CET] <Thisguy_> Or will it only affect size on disk?
[23:45:17 CET] <thebombzen> well if you capture at 60 fps instead of 30 you will use twice as much overhead
[23:45:23 CET] <thebombzen> because you're encoding twice as many frames per second
[23:45:23 CET] <Thisguy_> I get that.
[23:45:46 CET] <thebombzen> the crf affects encoding time but not nearly as much as the preset
[23:45:57 CET] <Thisguy_> Been using ultrafast.
[23:46:32 CET] <thebombzen> if you can't encode 60 fps with crf 0 ultrafast I find it unlikely you can encode crf18 at 60 fps ultrafast
[23:46:38 CET] <thebombzen> but why don't you just try it and find out
[23:47:03 CET] <thebombzen> It will affect the disk I/O though, if that's the bottleneck
[23:47:13 CET] <thebombzen> but if the bottleneck is cpu I don't see it fixing anything
[23:55:50 CET] <shincodex> How can i understand that visually
[23:56:07 CET] <shincodex> I always assumed that constant bit rate enforces a particular fps
[23:57:10 CET] <Diag> shincodex: only if its not on constant fps
[23:57:44 CET] <Diag> i think in some cases (as someone pointed out)
[23:57:55 CET] <Diag> having a higher bitrate than needed just gets filled with junk lol
[00:00:00 CET] --- Wed Jan 25 2017


More information about the Ffmpeg-devel-irc mailing list