[Ffmpeg-devel-irc] ffmpeg.log.20160324

burek burek021 at gmail.com
Fri Mar 25 02:05:01 CET 2016


[00:05:24 CET] <durandal_170> AlexQ: yes; directly
[00:06:31 CET] <AlexQ> I'm afraid that will preserve chapter info and whatnot, but whatever...
[00:06:47 CET] <AlexQ> Let's see
[00:06:56 CET] <AlexQ> Checked the original file and it plays well
[00:07:21 CET] <AlexQ> If this doesn't work, maybe I'll try to mux the original file with the FLAC I extracted or sth
[00:09:42 CET] <AlexQ> remuxing fixed the issue.
[00:10:23 CET] <AlexQ> both audio and vid came from the same file that was wrong, just muxed again. Ehh, so strange
[00:10:40 CET] <AlexQ> maybe it was the filters that created the issue, durandal_170
[00:11:25 CET] <durandal_170> now try with demuxer one to do filters
[00:11:42 CET] <durandal_170> *remuxed
[00:12:01 CET] <AlexQ> um, I am done, actually: the FLAC file was the result of the volume amp filter
[00:12:28 CET] <AlexQ> FLAC stream*
[00:13:13 CET] <AlexQ> interestingly, Foobar's Dolby Headphone DSP produced 10 dB louder output than VLC's headphone DSP
[01:02:09 CET] <isoboy> Hi there
[01:02:49 CET] <isoboy> I'm trying to build a ffmpeg sample code from https://ffmpeg.org/doxygen/trunk/encoding-example_8c-source.html
[01:03:13 CET] <isoboy> I've managed to get it working, but I'm getting a timestamp error
[01:03:21 CET] <isoboy> http://pastebin.com/hwGQAGgp
[01:04:34 CET] <isoboy> I've tried setting the AVFrame::pts and AVPacket::pts but they did not work. What unit are the timestamps in? milliseconds? seconds?
[01:04:55 CET] <isoboy> I'd really appreciate it if someone could point me in the right direction.
[01:05:29 CET] <llogan> not many library usage answers here. may want to try the FFmpeg libav-user mailing list
[01:09:09 CET] <isoboy> I see. Thanks llogan.
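(For reference: in that encoding example the timestamps are neither milliseconds nor seconds; AVFrame.pts counts in units of the codec context's time_base. A minimal sketch of the idea, assuming a 25 fps stream and the variable names from the linked example:)

    /* pts is measured in AVCodecContext.time_base units, not seconds:
     * with time_base = 1/25, pts advances by 1 per frame. */
    c->time_base = (AVRational){1, 25};
    for (i = 0; i < 25; i++) {
        frame->pts = i;  /* frame i is presented at i * (1/25) seconds */
        /* ... fill frame data and call the encode function ... */
    }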
[04:02:20 CET] <kuroro> i want to concatenate 3 short videos (2 of which are loud, while 1 has low volume). I want to normalize their volumes to be similar to each other. my current attempt of using the dynaudnorm filter didn't really work (last clip still has low volume)
[04:02:45 CET] <kuroro> here's my ffmpeg command that's not working (http://pastebin.com/raw/euKxspxV)
[04:35:21 CET] <kuroro> trying to use dynaudnorm on the audio file of the concatenated mp4 didn't work either (the results from "sox file.wav -n stat" were the same before and after the ffmpeg command below was executed)
[04:35:36 CET] <kuroro> ffmpeg -v debug -i norm_tAy6wnTbQUFY_vQ.wav -af dynaudnorm=m=100.0:r=0.3 dynanorm_tAy6wnTbQUFY_vQ.wav
[04:35:48 CET] <kuroro> http://pastebin.com/raw/b2mGZJ4X
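(One workaround, as a sketch with placeholder filenames: normalize each clip's audio separately before concatenating, since dynaudnorm adapts over a sliding window and a quiet clip at the end of an already-loud stream may barely move.)

    # normalize each clip's audio on its own (video stream copied as-is)
    ffmpeg -i clip1.mp4 -c:v copy -af dynaudnorm clip1_norm.mp4
    ffmpeg -i clip2.mp4 -c:v copy -af dynaudnorm clip2_norm.mp4
    ffmpeg -i clip3.mp4 -c:v copy -af dynaudnorm clip3_norm.mp4
    # list.txt contains one line per clip, e.g.  file 'clip1_norm.mp4'
    ffmpeg -f concat -safe 0 -i list.txt -c copy out.mp4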
[06:59:43 CET] <parrot1> hi..while trying to link against ffmpeg library I got error /usr/bin/ld: /usr/local/lib/libavdevice.a(xcbgrab.o): undefined reference to symbol 'xcb_xfixes_query_version_reply' .
[06:59:45 CET] <parrot1> Any idea?
[07:15:06 CET] <thebombzen> parrot1: did you install libxcb correctly?
[07:15:34 CET] <thebombzen> make sure you install the development library as well (libxcb-dev or xcb-devel usually)
[07:19:05 CET] <parrot1> thebombzen: checking....
[07:25:35 CET] <parrot1> thebombzen: looks like libxcb-devel is installed. I'm using Fedora 23
[07:26:10 CET] <thebombzen> try dynamically linking ffmpeg. that is what you should be doing anyway
[07:33:35 CET] <parrot1> thebombzen: ok..thanks for the help :-)
[08:52:14 CET] <liquid-silence> hi all
[08:52:23 CET] <liquid-silence> ffmpeg -i 1frame.mp4 -ss 4.522133333333334 -t 1 -s 800x600 -f image2 imagefile.jpg
[08:52:31 CET] <liquid-silence> extracts the frame after those seconds
[08:53:02 CET] <liquid-silence> so if burnin on the video is 00:00:04:15 the image is 00:00:04:16
[08:53:05 CET] <linuxuser9000> Hi. I'm trying to build ffmpeg from source and I was wondering if anyone has a generic command they use for the configure script? I want to be able to convert gif to webm, so I want to enable vp8 or vp9 for example, but I'd like to know if any of you have a better configure command than that
[08:53:07 CET] <liquid-silence> its consistent
[08:53:26 CET] <liquid-silence> but how do I extract the frame on those seconds? or get the 15th frame in the 4th second of the video?
[08:59:01 CET] <furq> liquid-silence: -filter:v "select=gte(n\,115)" -frames:v 1
[08:59:04 CET] <furq> assuming your video is 25fps
[08:59:11 CET] <liquid-silence> yeah
[08:59:27 CET] <liquid-silence> this only works when we add a keyframe to every frame in the video
[08:59:45 CET] <furq> it shouldn't make a difference if you're extracting to jpeg
[08:59:47 CET] <liquid-silence> we are using an html5 video player
[08:59:51 CET] <furq> s/extracting/converting/
[08:59:55 CET] <liquid-silence> that is reporting the incorrect timecode
[09:00:08 CET] <liquid-silence> those seconds come from the html5 video tag's current time
[09:00:17 CET] <liquid-silence> which is inherently broken
[09:00:28 CET] <liquid-silence> so we will never be accurate?
[09:00:45 CET] <furq> no idea, i've never had to deal with browser video timecodes
[09:00:59 CET] <furq> just typing "browser video timecodes" gave me a headache
[09:01:16 CET] <liquid-silence> its a pain in the ass
[09:01:18 CET] <liquid-silence> honestly
[09:02:20 CET] <furq> doubtless once you do figure it out you'll discover that firefox and chrome do it completely differently
[09:03:55 CET] <furq> and that safari doesn't support it, and IE claims to support it but is actually just returning CryptGenRandom()
[09:04:04 CET] <furq> web development!
[09:09:38 CET] <liquid-silence> pain in my ass man
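(Putting furq's filter together with liquid-silence's original command, the full invocation would look roughly like this; a sketch, where n counts frames from 0 and 115 = 4 * 25 + 15 assumes a constant 25 fps:)

    ffmpeg -i 1frame.mp4 -filter:v "select=gte(n\,115)" -frames:v 1 -s 800x600 imagefile.jpg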
[09:16:55 CET] <linuxuser9000> well i tried configuring everything i could from the encoder section. this should be good
[09:23:56 CET] <furq> linuxuser9000: you generally don't need to do much
[09:25:07 CET] <furq> you only really need to enable stuff that you intend on using for encoding
[09:26:08 CET] <furq> there are only a few obscure-ish codecs which don't have built-in decoders
[09:29:48 CET] <linuxuser9000> furq: Thanks
[09:29:56 CET] <linuxuser9000> I'm going to retry tomorrow
[09:30:46 CET] <linuxuser9000> So the encoding, if I only want to make .aac, .opus, .mp4 and .webm, do i only need encoders for those? and vice versa for decoders for the inputs i use?
[09:31:20 CET] <furq> all you need is libopus and libvpx
[09:32:37 CET] <linuxuser9000> make clean returns the directory to the state as if I'd just done a git pull, right?
[09:32:59 CET] <furq> it might be make distclean, i forget now
[09:33:06 CET] <furq> one or the other
[09:33:28 CET] <linuxuser9000> Thank you
[09:33:35 CET] <furq> https://ffmpeg.org/general.html#Video-Codecs
[09:33:44 CET] <furq> only codecs with an E under decoding need an external library
[09:33:58 CET] <furq> which is maybe three or four audio codecs and no video codecs
[09:34:58 CET] <linuxuser9000> argh I need to enable vp8 and vp9
[09:35:04 CET] <linuxuser9000> thanks for the link
[09:35:47 CET] <furq> if you want to make mp4 then you probably also want libx264
[09:36:01 CET] <furq> and fdk-aac is a better aac encoder than the builtin one
[09:36:17 CET] <linuxuser9000> Thanks again for those tips I'll enable those
[09:36:51 CET] <linuxuser9000> I'm going to just ctrl-c my current make and re-issue a new configure command
[09:37:11 CET] <furq> don't forget to run make -j
[09:37:35 CET] <linuxuser9000> that didn't do anything
[09:37:49 CET] <furq> it should be running faster
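(For the archive, a configure line covering everything suggested in this thread might look like the following; a sketch, not an official recipe: --enable-gpl is required for libx264, and --enable-nonfree for fdk-aac, which also makes the resulting binary non-redistributable.)

    ./configure --enable-gpl --enable-nonfree \
                --enable-libx264 --enable-libvpx --enable-libopus --enable-libfdk-aac
    make -j$(nproc)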
[09:41:33 CET] <NicolasRaoul> Using latest ffmpeg (N-53602-g65cff81-static from johnvansickle) on Ubuntu I get: ALSA lib ../../src/conf.c:3357:(snd_config_hooks_call) Cannot open shared library libasound_module_conf_pulse.so
[09:42:01 CET] <NicolasRaoul> Any idea how to solve this? It works if I record only video and not audio.
[09:44:22 CET] <liquid-silence> a math question here guys
[09:44:40 CET] <liquid-silence> 6.6715555 = 00:00:06:14 @ 25fps
[09:44:44 CET] <liquid-silence> how do I remove one frame
[09:45:16 CET] <liquid-silence> so when I do
[09:45:34 CET] <liquid-silence> ffmpeg -i test.mp4 -ss 6.6715555 -t 1 -s 800x600 -f image2 imagefile.jpg I actually get the following burnin on the video
[09:45:40 CET] <liquid-silence> 00:00:06:156
[09:45:45 CET] <liquid-silence> 00:00:06:15
[09:45:47 CET] <liquid-silence> not 14
[09:46:01 CET] <liquid-silence> so I need to subtract one frame from 6.6715555
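(The arithmetic liquid-silence is after: one frame at 25 fps lasts 1/25 of a second, so subtracting a single frame is a fixed offset:)

    1 frame @ 25 fps = 1/25 s = 0.04 s
    6.6715555 - 0.04 = 6.6315555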
[10:06:26 CET] <debianuser> NicolasRaoul: Maybe you're running 32-bit ffmpeg on a 64-bit system and you don't have 32-bit alsa-pulse plugin installed?
[10:07:00 CET] <NicolasRaoul> debianuser, I downloaded and use http://johnvansickle.com/ffmpeg/builds/ffmpeg-git-64bit-static.tar.xz
[10:08:20 CET] <NicolasRaoul> the error does not happen with the ffmpeg from the official Ubuntu 2015.10 repository (which is too old)
[10:09:38 CET] <t4nk455> hey :), is there a command to show an image's size in ffmpeg?
[10:09:47 CET] <JEEB> ffprobe file
[10:10:05 CET] <JEEB> you can get json output etc from it as well if you want to probe things within an application
[10:11:13 CET] <debianuser> NicolasRaoul: Maybe it looks for the pulse plugin in a different directory then? I mean it could be /usr/lib/x86_64-linux-gnu/alsa-lib/libasound_module_pcm_pulse.so or /usr/lib/alsa-lib/libasound_module_pcm_pulse.so. The search path depends on your alsa-lib, but since that's a static build it depends on that static alsa-lib.
[10:11:41 CET] <debianuser> NicolasRaoul: You can workaround that by putting an explicit path to that module in your alsa config.
[10:12:23 CET] <relaxed> NicolasRaoul: my builds lack pulse support, if that's what you're trying
[10:12:39 CET] <t4nk455>  thank you :), can you tell me, how to get the json output ?
[10:13:40 CET] <NicolasRaoul> relaxed, are you johnvansickle? :-)
[10:13:51 CET] <relaxed> yes
[10:14:15 CET] <debianuser> NicolasRaoul: put in your ~/.asoundrc something like: pcm_type.pulse { lib "/usr/lib/x86_64-linux-gnu/alsa-lib/libasound_module_pcm_pulse.so" }   that would hopefully make it work.
[10:14:50 CET] <NicolasRaoul> cool! Thanks for the builds. I am on Ubuntu where pulse is the norm if I understand correctly. I will try that now, thanks!
[10:20:09 CET] <debianuser> NicolasRaoul: That may break sound in 32-bit apps like wine/skype/adobe-flash. So if that solves ffmpeg problem but breaks 32-bit apps, you'll need to use ffmpeg-specific config instead: rename it to ~/.asoundrc.64bit for example, and run ffmpeg like: env ALSA_CONFIG_PATH=/usr/share/alsa/alsa.conf:$HOME/.asoundrc.64bit ffmpeg ...
[10:20:11 CET] <NicolasRaoul> debianuser, I just created that .asoundrc but still the same "Cannot open shared library libasound_module_conf_pulse.so"... I would create a symbolic link but I don't know where it is trying to open it in the first place.
[10:21:19 CET] <debianuser> Yeah, symbolic link may work too. Most probably it looks for /usr/lib/alsa-lib. To check that run: strace ffmpeg ... 2>&1 | grep libasound_module
[10:23:43 CET] <debianuser> (hm... it looks for _conf_pulse.so, not _pcm_pulse.so, so the config won't help anyway)
[10:29:16 CET] <NicolasRaoul> open("/usr/lib/alsa-lib/libasound_module_conf_pulse.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
[10:31:41 CET] <relaxed> t4nk455: man ffprobe-all|less +/json
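(The JSON probe JEEB mentioned looks like this; the filename is a placeholder, and the image's width and height show up in the "streams" array:)

    ffprobe -v quiet -print_format json -show_format -show_streams input.jpg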
[10:32:29 CET] <NicolasRaoul> Symbolic link created, now I just get Segmentation fault
[10:33:19 CET] <gnome1> I got to say that, so far, ffmpeg is working quite well for muxing/merging different streams. bundling subtitles in mkv, or maintaining subtitles when transcoding just works.
[10:38:37 CET] <debianuser> NicolasRaoul: `sudo ln -s x86_64-linux-gnu/alsa-lib /usr/lib/alsa-lib` ?
[10:43:10 CET] <NicolasRaoul> debianuser, cd /usr/lib/alsa-lib/ ; sudo ln -s /usr/lib/x86_64-linux-gnu/alsa-lib/ libasound_module_conf_pulse.so
[10:43:32 CET] <NicolasRaoul> sudo ln -s /usr/lib/x86_64-linux-gnu/alsa-lib/libasound_module_conf_pulse.so libasound_module_conf_pulse.so
[10:43:55 CET] <NicolasRaoul> (correct line is the second one)
[10:44:11 CET] <debianuser> you'll also need a symlink for libasound_module_ctl_pulse.so and libasound_module_pcm_pulse.so then
[10:47:01 CET] <NicolasRaoul> symlinking the whole directory is a better idea indeed! I just did this, same segfault though. Still with ffmpeg -f alsa -ac 2 -i hw:0 -f video4linux2 -framerate 10 -video_size 1280x720 -i /dev/video0 test.mkv
[10:49:18 CET] <NicolasRaoul> I wonder if there is any easier way to use ffmpeg 3.0 on Ubuntu...
[10:50:19 CET] <NicolasRaoul> I tried compiling yesterday, video quality was not as good, for some unknown reason, it was pixelated.
[10:53:11 CET] Action: debianuser guesses the easiest way would be to remove/move /usr/share/alsa/alsa.conf.d/*pulse*.conf files somewhere.
[10:56:54 CET] <debianuser> NicolasRaoul: To workaround pulse bug you can try a custom standalone config. Put   pcm.myhw { type hw; card 0 } into ~/.asoundrc.ffmpeg and then test `env ALSA_CONFIG_PATH=$HOME/.asoundrc.ffmpeg  ffmpeg -f alsa -ac 2 -i myhw ...`. Not sure if that would work, but it should bypass all pulse configs and use just your "myhw" config alone.
[10:59:37 CET] <NicolasRaoul> [alsa @ 0x50b8f20] cannot open audio device myhw (No such file or directory)
[11:02:01 CET] <debianuser> that's with ALSA_CONFIG_PATH=$HOME/.asoundrc.ffmpeg and pcm.myhw definition in it?
[11:07:20 CET] <liquid-silence> can someone here explain drop frames please
[11:08:24 CET] <relaxed> liquid-silence: frames that aren't encoded to the output
[11:25:25 CET] <lukesan> morning all
[11:25:36 CET] <lukesan> first time here
[11:30:07 CET] <lukesan> can I post a little script to explain a problem?
[11:47:08 CET] <relaxed> lukesan: pastebin.com the script
[12:34:46 CET] <lukesan> http://pastebin.com/CZ0gCYpu
[12:35:53 CET] <lukesan> I'd like to create a top bottom dual video, with a drag&drop input
[12:36:34 CET] <lukesan> for simple conversion it works perfectly, but with this video filter something goes wrong
[12:41:42 CET] <J_Darnley> don't pad
[12:41:55 CET] <J_Darnley> don't use movie source filter
[12:42:03 CET] <J_Darnley> do use vstack
[12:42:43 CET] <J_Darnley> and "something goes wrong" is not an error message
[12:44:42 CET] <lukesan> I need to be more precise
[12:45:59 CET] <Razva> guys, any RHEL/CentOS repo with ffmpeg 3.0 available?
[12:46:03 CET] <lukesan> [Parsed_movie_1 @ 0000000002d8d6e0] Failed to avformat_open_input 'V'
[12:46:03 CET] <lukesan> [AVFilterGraph @ 0000000002d51680] Error initializing filter 'movie' with args '
[12:46:03 CET] <lukesan> V:IReXCOMP04_COMPOSITINGPROGETTITeatro_trampolo_elasticoRENDERTeatro_trampolo_el
[12:46:03 CET] <lukesan> astico_R.mp4'
[12:46:03 CET] <lukesan> Error opening filters!
[12:46:04 CET] <lukesan> Premere un tasto per continuare . . .
[12:46:25 CET] <furq> lukesan: https://ffmpeg.org/ffmpeg-filters.html#vstack
[12:46:25 CET] <lukesan> sorry
[12:46:45 CET] <furq> Razva: http://johnvansickle.com/ffmpeg/
[12:46:52 CET] <furq> not a repo but it'll probably be much less hassle
[12:46:53 CET] <J_Darnley> Another reason to not use the movie filter: you need to escape lots on Windows
[12:47:02 CET] <furq> based on my suppressed memories of centos
[12:47:15 CET] <lukesan> http://pastebin.com/vCG33JkK
[12:48:40 CET] <J_Darnley> And I point back to my previous line
[13:00:53 CET] <lukesan> thanks J_Darnley I'm going to try the vstack that seem to be interesting
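(A minimal top/bottom stack along the lines J_Darnley suggests; a sketch with placeholder filenames, and both inputs must have the same width, otherwise scale one of them first:)

    ffmpeg -i top.mp4 -i bottom.mp4 -filter_complex "[0:v][1:v]vstack[v]" -map "[v]" -map 0:a? output.mp4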
[13:07:24 CET] <satinder___> Hi , I am getting following errors when using drawtext ffmpeg filter with Textfile and reload = 1
[13:07:49 CET] <satinder___> [Parsed_drawtext_0 @ 0x3a88ee0] [FILE @ 0x7fff05b4e4a0] Error occurred in mmap(): Invalid argument
[13:07:49 CET] <satinder___> [Parsed_drawtext_0 @ 0x3a88ee0] The text file '/home/satinder/OverlayInfo' could not be read or is empty
[13:07:49 CET] <satinder___> Failed to inject frame into filter network: Invalid argument
[13:07:49 CET] <satinder___> [video4linux2,v4l2 @ 0x3a84460] Some buffers are still owned by the caller on close.
[13:08:12 CET] <satinder___> please can anybody help me, what am I doing wrong?
[13:08:24 CET] <satinder___> my command is following
[13:08:51 CET] Action: relaxed whispers pastebin.com
[13:09:00 CET] <satinder___> ffmpeg -re -i /dev/video0 -vf drawtext='fontsize = 20 : fontfile = /usr/share/fonts/truetype/freefont/FreeSansBold.ttf : textfile = %s : reload = 1'  -f v4l2 /dev/video1
[13:09:19 CET] <satinder___> %s = Textfile
[13:11:02 CET] <relaxed> try removing all the spaces from the filtergraph
[13:11:23 CET] <relaxed> if that doesn't work pastebin.com the command and all console output
[13:12:57 CET] <satinder___> relaxed : Sir, it works for 16 mins, after that it gives the above errors
[13:14:08 CET] <satinder___> that means there isn't any issue in the command; the error is generated for some other reason
[13:14:16 CET] <satinder___> but I am not sure
[13:14:18 CET] <satinder___> :(
[13:15:14 CET] <J_Darnley> You could always read the error message.
[13:15:32 CET] <J_Darnley> I assume this is your non-atomic updating coming back to bite you.
[13:16:24 CET] <satinder___> J_Darnley : Sir , I think you're right
[13:16:50 CET] <satinder___> because the error console show [Parsed_drawtext_0 @ 0x3a88ee0] The text file '/home/satinder/OverlayInfo' could not be read or is empty
[13:17:15 CET] <satinder___> J_Darnley : what can I do to resolve that issue , Sir
[13:17:32 CET] <J_Darnley> Someone told you yesterday (or whenever)
[13:17:45 CET] <J_Darnley> "rename" was the answer
[13:18:10 CET] <t4nk455> Does anyone have an Idea, how to begin with the search how to stream from the virtual cameras inside a 3D Engine ?
[13:18:26 CET] <satinder___> J_Darnley  : yes sir
[13:18:42 CET] <satinder___> but I am using a C program for updating that file
[13:18:57 CET] <satinder___> that is following
[13:19:19 CET] <satinder___> while (1) {
[13:19:19 CET] <satinder___>         writeStructure(OverlayParam);
[13:19:20 CET] <satinder___>         EXIT_CODE = CheckOverlayValues(OverlayParam);
[13:19:20 CET] <satinder___>         if (EXIT_SUCCESS == EXIT_CODE) {
[13:19:20 CET] <satinder___>             file = fopen(TextFile, "w+");
[13:19:20 CET] <satinder___>             if (NULL == file) {
[13:19:22 CET] <satinder___>                 CAMERA_ERROR("Overlay output file not opened ");
[13:19:24 CET] <J_Darnley> OMFG!
[13:19:24 CET] <satinder___>                 exit(EXIT_FAILURE);
[13:19:26 CET] <satinder___>             } else {
[13:19:28 CET] <satinder___>                 CAMERA_DEBUGL("OverLay outfile opened successfully");
[13:19:30 CET] <satinder___>                 EXIT_CODE = Write_TextFile(file, OverlayParam);
[13:19:30 CET] <J_Darnley> FUCK OFF
[13:19:32 CET] <satinder___>                 if (EXIT_SUCCESS == EXIT_CODE) {
[13:19:34 CET] <satinder___>                     fclose(file);
[13:19:36 CET] <satinder___>                 }
[13:19:38 CET] <satinder___>             }
[13:19:42 CET] <satinder___>         }
[13:19:44 CET] <satinder___>         sleep(1);
[13:19:46 CET] <satinder___>     }
[13:19:53 CET] <J_Darnley> ignored
[13:20:03 CET] <satinder___> ??
[13:20:09 CET] <JEEB> satinder___: for future reference, never ever paste long text onto an IRC channel
[13:20:18 CET] <JEEB> you will get hated and condemned
[13:20:29 CET] <JEEB> use a pastebin-like sane service that you can link to
[13:20:47 CET] <satinder___> JEEB : sorry !! I didn't know
[13:20:50 CET] <satinder___> Sir
[13:21:00 CET] <satinder___> Next time that will not happen
[13:21:20 CET] <satinder___> But please help, what is wrong with that method?
[13:22:54 CET] <IntelRNG> It floods the IRC clients of everybody in the room
[13:24:27 CET] <t4nk455> Does anyone have an Idea, how to begin with the search how to stream from the virtual cameras inside a 3D Engine ?
[13:24:29 CET] <furq> weren't you advised to do an atomic update of the file
[13:24:31 CET] <jkqxz> Write to a temporary file ('file = fopen("blah", ...);') and then rename it to the thing you want atomically after you've finished writing it ('fclose(file); rename("blah", TextFile);').
[13:24:55 CET] <furq> e.g. `echo foo > /tmp/overlay; mv /tmp/overlay /home/satinder/OverlayInfo`
[13:24:55 CET] <J_Darnley> t4nk455: like any other thing: use the ffmpeg libraries
[13:25:01 CET] <furq> or yeah what jkqxz said
[13:25:38 CET] <satinder___> J_Darnley : is it possible to rename a file in C?
[13:26:10 CET] <furq> http://linux.die.net/man/3/rename
[13:26:20 CET] <satinder___> ok
[13:26:52 CET] <satinder___> furq : you mean update the value in the file and then rename it?
[13:27:13 CET] <satinder___> thanks, hopefully that will work for me , Sir
[13:29:55 CET] <satinder___> But I don't understand what happens when I truncate the Textfile and write new values into it
[13:30:18 CET] <satinder___> furq : ??
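(The write-then-rename pattern jkqxz and furq describe, as a minimal C sketch; the temporary path is a placeholder name, and on POSIX, rename() over an existing file on the same filesystem is atomic, so drawtext's reload=1 never observes a truncated file:)

    #include <stdio.h>

    /* Write the new overlay text to a temporary file, then atomically
     * swap it into place: readers see either the old contents or the
     * new contents, never an empty file. */
    int update_overlay(const char *text, const char *final_path)
    {
        const char *tmp = "/home/satinder/OverlayInfo.tmp"; /* placeholder name */
        FILE *f = fopen(tmp, "w");
        if (!f)
            return -1;
        fputs(text, f);
        fclose(f);
        return rename(tmp, final_path);
    }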
[13:45:24 CET] <lukesan> vstack still doesn't work with windows variables
[13:46:13 CET] <J_Darnley> If that is the only thing you changed then you completely ignored the error
[16:37:14 CET] <fatpelt> good afternoon all!  i'm having an issue with ffmpeg version git-2016-03-08-b60dfae (should be a recent build).  i'm using overlay to put 4 video streams in a mosaic.  that works fine, but as soon as i put an audio stream in my complex filter ffmpeg dies when one of the content switches ac3 audio layout mid-stream.  http://pastebin.com/VHx9si6m   if i remove line 10 from the command and rerun it'll run all day long.  i've tried a ton of different filters but i
[16:37:16 CET] <fatpelt> "Input stream #1:1 frame changed from rate:48000 fmt:fltp ch:2 chl:stereo to rate:48000 fmt:fltp ch:6 chl:5.1(side)
[16:37:16 CET] <fatpelt> [Parsed_overlay_28 @ 0x42fe540] [framesync @ 0x1f86fe8] Buffer queue overflow, dropping.drop=58 speed=0.983x"
[16:39:44 CET] <durandal_1707> fatpelt: better use vstack/hstack for mosaic
[16:40:01 CET] Action: fatpelt googles that filter
[16:40:20 CET] <fatpelt> ooh
[16:40:21 CET] <fatpelt> tasty
[16:40:34 CET] <fatpelt> let me re-write this command and see what happens
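(For reference, a 2x2 mosaic with the stack filters looks roughly like this; a sketch with placeholder inputs, and all four video streams need matching dimensions:)

    ffmpeg -i a.mp4 -i b.mp4 -i c.mp4 -i d.mp4 -filter_complex \
        "[0:v][1:v]hstack[top];[2:v][3:v]hstack[bot];[top][bot]vstack[v]" \
        -map "[v]" mosaic.mp4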
[16:47:21 CET] <momomo> anyone here used clappr ?
[16:48:10 CET] <andrey_utkin> is it correct that i can concatenate H.264-in-MP4 videos from different origins if input clips were processed with "dump_extra" bitstream filter? I know that joining works if you convert all clips to AnnexB-in-MPEGTS and then back, now it seems dump_extra does that more efficiently, is this correct?
[16:49:00 CET] Last message repeated 1 time(s).
[16:50:38 CET] <andrey_utkin> oh sorry for repetition
[16:51:30 CET] <lroe> Does anyone know if there is a way to enable a 'record' button on the controls of an HTML5 video stream?
[16:52:35 CET] <lroe> I'm serving an rtsp stream using the html5 native player.  I have enabled controls so people can go back in time a bit, but I'd love to allow people to download a clip they're watching
[16:54:43 CET] <andrey_utkin> is anybody seeing my message? seems my xmpp-irc gateway misbehaves
[16:54:50 CET] <lroe> andrey_utkin, I see it
[16:54:55 CET] <andrey_utkin> thanks lroe
[16:55:18 CET] <andrey_utkin> about dump_extra too?
[16:56:49 CET] <fatpelt> ok.  i've rewritten my complex filter to use *stack instead.  i've got the one audio stream with only an anullsink to see if i can reproduce
[16:57:40 CET] <andrey_utkin> (just in case. Sorry if you've already received this.) is it correct that i can concatenate H.264-in-MP4 videos from different origins if input clips were processed with "dump_extra" bitstream filter? I know that joining works if you convert all clips to AnnexB-in-MPEGTS and then back, now it seems dump_extra does that more efficiently, is this correct?
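(The question went unanswered; for completeness, the AnnexB-in-MPEGTS round trip it refers to is usually done like this. A sketch: the aac_adtstoasc filter applies only if the audio is AAC, and whether dump_extra alone can replace this depends on the clips having compatible SPS/PPS.)

    ffmpeg -i clip1.mp4 -c copy -bsf:v h264_mp4toannexb clip1.ts
    ffmpeg -i clip2.mp4 -c copy -bsf:v h264_mp4toannexb clip2.ts
    ffmpeg -i "concat:clip1.ts|clip2.ts" -c copy -bsf:a aac_adtstoasc joined.mp4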
[17:01:22 CET] <fatpelt> durandal_1707: still crashes on the ac3 layout change
[17:02:07 CET] <pyro25> hi there! I've got this funny problem: a platform with no video4linux2 on it and I need to capture the webcam. I see so many applications, like streamer, with which I can capture but I just can't figure what they use =P
[17:02:09 CET] <durandal_1707> you mean it changes channels midstream?
[17:03:58 CET] <explodes_> My team is working on a video player for Android. The great thing about it is that it supports variable speed playback. We just released our app using the library to 20% of our users. Unfortunately, the crash rate is ridiculous (7%) and they're all C-level errors
[17:04:52 CET] <explodes_> My team and I are pretty much inexperienced in the ways of C, so tracking down what I believe to be memory errors has been an arduous process for us.
[17:06:54 CET] <durandal_1707> so you ask for free support?
[17:14:07 CET] <fatpelt> durandal_1707: yeah.  the ac3 audio layout changes mid stream.
[17:16:16 CET] <durandal_1707> fatpelt: then you need to transcode it first
[17:17:17 CET] <fatpelt> durandal_1707: i've tried a couple of different ways, and i'm sure i'm doing it wrong.  i've got -ac 1 *before* all the -i inputs.   shouldn't that transcode it to single channel?
[17:18:14 CET] <sfan5> most likely not
[17:18:17 CET] <durandal_1707> no, you need it after inputs
[17:18:29 CET] <sfan5> it would tell ffmpeg to "interpret" the inputs as with 1 audio channel
[17:26:06 CET] <fatpelt> ok.  so, i moved the -ac 2 to after the inputs and now showvolume is showing more than 2 channels of audio
[17:27:12 CET] <fatpelt> oh... i moved it back up to the top and it still shows audio on showvolume anyway
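(The placement rule at work here: options written before an -i apply to that input, reinterpreting it as sfan5 says, while options between the last -i and the output filename apply to the output and cause a transcode. So a stereo downmix would be, as a sketch:)

    ffmpeg -i input.ts -c:v copy -ac 2 output.ts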
[17:29:16 CET] <explodes_> durandal_1707: I don't know of a better place to go looking for someone to hire than at or near the source, so I'm here
[17:36:35 CET] <fatpelt> Input stream #3:1 frame changed from rate:48000 fmt:fltp ch:6 chl:5.1(side) to rate:48000 fmt:fltp ch:2 chl:stereo
[17:36:54 CET] <fatpelt> right after that i get a ton of Buffer queue overflow, dropping messages and the video dies
[18:48:01 CET] <durandal_1707> fatpelt: midstream changes are simply not supported
[18:56:20 CET] <petecouture> Is there anyone on that has best practices for encoding a Live HLS stream. Mine works in some players like Flash and HLS.js but desktop players like VLC say it can't detect the format. Also timed metadata/ID3 tags don't get detected. Here's my config: http://pastebin.com/V5qV9pBc
[18:56:33 CET] <petecouture> ffprobe returns no errors
[18:59:59 CET] <blue_misfit> does media autobuild suite actually work for anyone right now?
[19:01:01 CET] <llogan> what's a media autobuild suite?
[19:01:23 CET] <blue_misfit> it's a package for Windows that sets up a build environment and builds ffmpeg among other related things
[19:01:30 CET] <blue_misfit> allegedly. I've never gotten it to work tho :)
[19:02:37 CET] <llogan> it's not from FFmpeg. you should contact the author.
[19:04:21 CET] <blue_misfit> indeed, I will. I just thought I'd ask in case others are familiar with this.
[19:05:54 CET] <llogan> there is also this which is mentioned on the zeranoe ffmpeg forum now and then: https://github.com/rdp/ffmpeg-windows-build-helpers
[19:08:09 CET] <Angus_> Hey guys
[19:09:12 CET] <JEEB> don't spam
[19:09:23 CET] <JEEB> if you've got a user question, stay here
[19:09:35 CET] <JEEB> if you've got a thing regarding development of FFmpeg itself, then maybe -devel
[19:18:13 CET] <petecouture> spam spam spamity spam
[20:03:16 CET] <anotherRandomGuy> the docs for libmp3lame wrapper of ffmpeg say this: "Set bitrate expressed in bits/s for CBR or ABR. LAME bitrate is expressed in kilobits/s. "
[20:03:34 CET] <anotherRandomGuy> does this assume that 1 kilobit equals 1024 bits, or rather 1000 bits?
[20:05:12 CET] <anotherRandomGuy> or maybe a bit different question: when an MP3 file is, let's say, 192 kbps CBR, is it in fact 192000 bits per second, or actually 196608 bits per second?
[20:06:04 CET] <JEEB> usually when talking of bits in multimedia a kilo means 1000
[20:11:33 CET] <anotherRandomGuy> I tried encoding the same source file using both 192000 and 196608 and it turns out, I got exactly the same file from both runs
[20:11:56 CET] <anotherRandomGuy> and by that I mean the checksums do match. So I guess it doesn't matter which of them you use anyway
[20:12:57 CET] <JEEB> that's a thing with the specific encoder you're using
[20:13:05 CET] <JEEB> it most probably only supports specific bit rates when it's in bit rate mode
[20:13:32 CET] <JEEB> you can't really pull parallels between that and all the other encoders available through avcodec
[20:14:38 CET] <anotherRandomGuy> so what you mean is: what might be true for one codec in ffmpeg doesn't necessarily have to be true for some other? I'll keep that in mind, thanks
[20:49:18 CET] <explodes_>  /buffer 2
[20:49:20 CET] <explodes_> nice.
[21:12:21 CET] <anotherRandomGuy> is it possible to make ffmpeg put different encoder metadata to the output files? right now, the encoder field is set to "Lavf57.29.100", but I'd like it to say "Lavf57.29.100 libmp3lame" instead
[21:13:30 CET] <anotherRandomGuy> I tried -metadata encoder="foo", -metadata ENCODER="foo", -metadata TSSE="foo", -metadata tsse="foo", none of which worked
[21:13:38 CET] <fatpelt> hey all.  i'm back with a question on ac3.  i've got a stream that changes the layout mid stream, and when it does things all go to pot and ffmpeg starts to drop packets
[21:14:00 CET] <fatpelt> within mpeg-ts, i'm not seeing anything that says it isn't supported,
[21:14:56 CET] <c_14> fatpelt: can you copy the stream to a file and reproduce with that? If yes, open a bug report on trac with the file as a sample.
[21:15:40 CET] <fatpelt> c_14 i'll have to see if i can do that
[21:16:30 CET] <fatpelt> oddly though, it only crashes when i add something like "[0:a] SOMEFILTERINGSTUFF"  to my complex filter.  it even crashes when i use anullsink by itself
[21:16:34 CET] <c_14> anotherRandomGuy: it seems to work for me with the stream metadata field but not the global one
[21:16:46 CET] <fatpelt> if i don't have that in the filter, it works just peachy and will run all day long
[21:17:02 CET] <c_14> fatpelt: reencoding the audio?
[21:17:11 CET] <anotherRandomGuy> c_14: you mean the g and s switches, right? I'll try with these as well
[21:17:23 CET] <c_14> anotherRandomGuy: yes
[21:17:59 CET] <c_14> fatpelt: hmm, actually. it might just be a problem that libavfilter doesn't support mid-stream layout changes well (or at all). Still bug-worthy though
[21:21:12 CET] <anotherRandomGuy> c_14: indeed, '-metadata:s encoder="foo"' does change the stream metadata, thank you. any idea for the global one though?
[21:21:22 CET] <kwivix> Hi, i'm playing around with ffmpeg and ffserver, is it possible to 'capture' a stream and 'forward' it to a ffserver?
[21:22:15 CET] <fatpelt> c_14, in the short term, is there *any* way to downsample them *before* it hits libavfilter?
[21:22:37 CET] <llogan> kwivix: unfortunately nobody here really knows how to use ffserver (or uses it AFAIK). it is basically unmaintained.
[21:24:28 CET] <c_14> fatpelt: the only downsampling methods I know would use libavfilter...
[21:24:48 CET] <fatpelt> :)  heh.  ok.  been there and tried that with resample
[21:32:55 CET] <anotherRandomGuy> c_14: adding 'fflags +bitexact' makes the TSSE tag go away completely
[21:33:23 CET] <J_Darnley> strange, it usually gets degraded to "ffmpeg" when you do that.
[21:34:03 CET] <anotherRandomGuy> this behavior suits me, I didn't want that tag anyway, the stream one is what I need and this one stays so it's perfect
[21:37:05 CET] <JEEB> how muxers decide to follow the bitexact flag depends a lot
[21:37:23 CET] <JEEB> since its main part is that it makes the output exactly the same between runs between versions
[21:37:37 CET] <JEEB> so as long as the output isn't specific to the version, it flies
[21:38:49 CET] <anotherRandomGuy> I also noticed the stream "encoder" field is limited to only 9 characters, so even the default "Lavc57.28.103 libmp3lame" becomes "Lavc57.28"
[22:55:24 CET] <Wader8> hello
[22:58:06 CET] <Wader8> I'm wondering, is merging 2 video streams possible anywhere out there? specifically 2 videos with different aspect ratios, to create a video which has extra pixels from a secondary video; it would work so that a priority has to be selected for which one retains the main part
[22:58:24 CET] <J_Darnley> yes
[22:59:59 CET] <Wader8_> either way you always lose some pixels, whichever one you watch, so it kinda makes it half-baked
[23:00:08 CET] <Wader8> and I didn't get any msgs since my connection went out
[23:00:15 CET] <J_Darnley> You need to be more precise.
[23:00:26 CET] <J_Darnley> "create a video which has extra pixels from a secondary video"
[23:00:41 CET] <J_Darnley> where should those "extra pixels" go?
[23:01:09 CET] <Wader8> well there where the primary source video doesn't have them
[23:01:19 CET] <Wader8> I will explain
[23:01:21 CET] <J_Darnley> Huh?
[23:01:34 CET] <J_Darnley> The "primary" video has pixels everywhere
[23:01:41 CET] <J_Darnley> There are no holes
[23:01:55 CET] <J_Darnley> Yes you should explain.
[23:08:48 CET] <Wader8> actually, i need to do a quick test, I already have it written, might really work, moment
[23:09:14 CET] <Wader8> there's a problem that I didn't foresee in my thinking, only noticed it now, so I need to do a test
[23:11:04 CET] <pzich> do you want to share your command and thinking so we can familiarize ourselves with it while you're testing?
[23:19:29 CET] <Wader8> yeah, the point is, if it doesn't pan out in practice, manually, with just one screenshot, then it won't work even if i do the best explanation
[23:29:24 CET] <Wader8> okay just as I suspected, the zoom issue, so it wouldn't fill the entire top and bottom horizontal space, at least in the 16:9 and 4:3 case
[23:31:11 CET] <Wader8> and the codec would have to do some content analysis to calibrate the enlargement of the secondary video
[23:37:32 CET] <J_Darnley> I still don't get it.
[23:37:52 CET] <J_Darnley> Are you trying to pad with another video rather than black?
[23:38:05 CET] <Wader8> i'm writing the explanation I started earlier ... it's taking some time
[23:38:19 CET] <J_Darnley> okay
[23:38:34 CET] <J_Darnley> I will wait
[23:53:08 CET] <Wader8> Basically, you'd have a 16:9 1080p video, and a 4:3  720x480 video, and you'd want to merge them (needs fancy name), you'd pick the 16:9 as primary in this case (priority) so that would be treated as a front layer, the other one will be in background, the codec would create a new blank video plane with extra vertical pixels in this case as there is more content on top and bottom with 4:3,...
[23:53:10 CET] <Wader8> ...but that would be based on a result from a calculation, first the codec would enlarge the secondary video in a calibrating process with the primary video to establish the center point, so the secondary video is properly enlarged while keeping its aspect intact to fit perfectly with the content in the primary video, after that, the codec would be able to see the difference in how much higher...
[23:53:11 CET] <Wader8> ...the secondary video's in-this-case vertical pixels differ from the first video, so it would create a new video with these dimensions to accommodate both videos, it would add enough height so the secondary video's top and bottom can be included, however this will always result in a video with 4 always-black areas, one in each corner, as the content doesn't exist there so it's just rendered...
[23:53:13 CET] <Wader8> ...as black, the codec will then offer options to enhance secondary video color properties and other things to better blend in with the primary video if the user chooses, the downside is that the video cannot be displayed on a native screen, you'd always need higher resolution monitor to display native 1080p and the extra vertical area, otherwise the video will get downscaled to fit...
[23:53:14 CET] <Wader8> ...perfectly with the added vertical content
[23:53:33 CET] <Wader8> J_Darnley, there we go
[23:53:47 CET] <Wader8> http://i.imgur.com/kdI7VDM.png
[23:54:12 CET] <Wader8> Varied Aspect Ratio Video Content Merging - VARVCM :p
[23:55:29 CET] <J_Darnley> I think I was right when I said "pad with another video rather than black".
[23:55:45 CET] <J_Darnley> But I have no clue how "codec" comes into play.
[23:56:32 CET] <Wader8> whatever, the program that's gonna do this before the codec; I'm not sure if this could be possible without the codec having some kind of idea about it
[23:56:43 CET] <J_Darnley> or maybe this is "pan and scan"
[23:57:20 CET] <Wader8> I was looking for the first term, first i took Dissimilar, but then I recalled it could simply be called  variable VCM
[23:57:43 CET] <Wader8> ala VVCM
[23:59:32 CET] Action: llogan lost some IQ trying to read that
[00:00:00 CET] --- Fri Mar 25 2016

