[Ffmpeg-devel-irc] ffmpeg.log.20170114

burek burek021 at gmail.com
Sun Jan 15 03:05:01 EET 2017


[00:01:02 CET] <JEEB> https://www.ffmpeg.org/ffmpeg-all.html#Examples-139
[00:01:04 CET] <JEEB> see the examples
[00:01:24 CET] <DocMAX> is there a switch in ffplay to send audio to pulse server?
[00:01:32 CET] <BtbN> should be in whatever the timebase is
[00:01:45 CET] <JEEB> and there's settb for setting the timebase
[00:02:02 CET] <JEEB> (also available as "TB" in the setpts filter as seen in the examples)
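A minimal example of the kind of thing those linked docs show for forcing a fixed frame rate via setpts; the input name and the 25 fps value are only placeholders:

    ffmpeg -i in.h264 -vf "setpts=N/(25*TB)" -c:v libx264 out.mkv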
[00:04:36 CET] <pbos> hm, maybe ffplay doesn't eat it
[00:04:50 CET] <pbos> that's enough time spent on that, thanks though JEEB
[00:07:44 CET] <JEEB> np, I recommend something like mkvmerge(gui) for your job then
[00:07:54 CET] <JEEB> that lets you set the frame rate relatively sanely
[00:08:00 CET] <JEEB> when muxing into a container
[00:12:15 CET] <pbos> could also consider patching yamiencode, if it's not a bunch of pain :)
[00:12:21 CET] <pbos> that'd fix the actual issue
[00:13:02 CET] <JEEB> true that
[00:13:20 CET] <JEEB> I was pretty much looking at it from the point of view that a general #ffmpeg user has no idea how to do C
[00:13:34 CET] <JEEB> and/or that you wanted to keep that stream you transcoded
[01:59:38 CET] <faLUCE> In my project I want to write code that creates an HTTP server which streams h264 video over HTTP. I googled for that, but I could not find libraries for creating HTTP streaming servers (in the past I used the live555 library for creating an RTSP streaming server), so I have to do this some other way. I was thinking of using libx264 for encoding frames, and libavformat for encapsulating them into the right
[01:59:39 CET] <faLUCE> container, then pushing them to an HTTP server. Is this all I have to do? I mean, do I just have to add the container to each frame and then push it with a POST request to the HTTP server (which will stream the content to the client), or is there something more complex behind this?
[02:03:24 CET] <JEEB> I recommend separating it into two things
[02:03:43 CET] <JEEB> one doing the transcoding and muxing into something, and another thing that handles the serving
[02:04:18 CET] <JEEB> so basically you feed your stream(s) into a server, and that server just has a buffer and presents the streams to users
[02:05:55 CET] <faLUCE> JEEB: I understand that, but I wonder if, after putting each frame into its container (--> muxing), I only have to push it to the webserver, or do I have to do more complex things
[02:06:45 CET] <JEEB> if the server can then serve that in one way or another, then that would be it
[02:07:27 CET] <faLUCE> JEEB: I don't  know how the server should serve the buffer to the client
[02:08:16 CET] <faLUCE> does it have to give one buffer for each get request from the client?
[02:09:08 CET] <faLUCE> I mean: does the client poll the server continuously, or only once?
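One minimal way to realize the split JEEB describes is to let ffmpeg (or your own libavformat code) push a muxed MPEG-TS stream to an HTTP ingest endpoint via POST and have a separate server buffer and re-serve it to clients; the URL and codec choices below are only placeholders:

    ffmpeg -re -i input.ts -c:v libx264 -c:a aac -f mpegts http://example.com:8080/publish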
[09:32:39 CET] <Seylerius> Okay, this is some weird shit. So, I took the audio of a vp9/vorbis webm into Audacity to reposition and adjust. Everything went fine until I mixed audio from a different ogg file into that audio, and re-exported the audio to mix with the video.
[09:35:57 CET] <Seylerius> So, the audio immediately before mixing in the external file syncs perfectly, but once I mix the external file in using Audacity, all of a sudden the audio winds up a half second or more late in the video.
[09:37:05 CET] <Seylerius> What on earth would cause that?
[09:41:00 CET] <Seylerius> So, the same ffmpeg command, with an audio file that's the same length, and with the music starting at the same point in the audio file, but they're syncing differently to the resulting video by a half-second or more.
[09:41:31 CET] <Seylerius> The only difference is that the audio that syncs wrong has had a fresh copy of the music mixed in.
[09:42:10 CET] <Seylerius> What in the void would cause /that/?
[09:49:27 CET] <Seylerius> Anyone have a guess at it?
[09:58:51 CET] <Seylerius> I'm really stumped.
[09:59:37 CET] <Seylerius> furq: You around?
[14:04:36 CET] <nirvanko> Hey guys, I would like to screencast my screen without sound. At the moment I am doing it like this: ffmpeg -f x11grab -video_size 1920x1080 -i $DISPLAY -f alsa -i default test.mp4
[14:04:46 CET] <nirvanko> How can I mute the sound?
[14:05:11 CET] <c_14> Just remove the -f alsa -i default ?
[14:06:46 CET] <nirvanko> c_14: yeah, that was easy indeed. Thanks
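For reference, the resulting command without the ALSA input is simply:

    ffmpeg -f x11grab -video_size 1920x1080 -i $DISPLAY test.mp4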
[15:29:34 CET] <Threads> any way to normalize audio on live streams?
[15:30:34 CET] <cutesykitties> hello, I'm using the libraries for a project of mine that involves reading frames from video files and I'm having a bit of an issue
[15:32:09 CET] <cutesykitties> http://pastebin.com/C8077vQV
[15:32:28 CET] <cutesykitties> the problem is, regardless of the video format, fmtContext->streams[firstVideoStream]->codec is always null
[15:47:03 CET] <furq> Threads: -af dynaudnorm
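A hedged example of applying that to a live stream while leaving the video untouched; the input/output URLs and codecs are placeholders:

    ffmpeg -i rtmp://example.com/live/in -c:v copy -af dynaudnorm -c:a aac -f flv rtmp://example.com/live/out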
[16:42:46 CET] <faLUCE> How can I read from a v4l2 input with the libraries? For a file I can use: av_file_map(input_filename, &buffer, &buffer_size, 0, NULL); but what about a live v4l2 device?
[16:44:47 CET] <faLUCE> I also have: avio_open2(&input, in_uri, AVIO_FLAG_READ, NULL, NULL)) <--- could I use this instead?
[16:45:28 CET] <faLUCE> in_uri is a const char* string; how can I specify a v4l2 input in this string?
[16:45:34 CET] <klaxa> cutesykitties: you need to copy the codec parameters, add something like: avcodec_parameters_to_context(codecContext, fmtContext->streams[firstVideoStream]->codecpar);
[16:46:15 CET] <klaxa> at least that's what i do and it works in my application
[16:46:29 CET] <klaxa> but i allocate a separate codec context, not sure if that is necessary or if you can do what i just posted
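A minimal sketch of what klaxa describes, reusing the variable names from the pastebin and assuming FFmpeg 3.1 or newer (most error handling omitted):

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Open a file and return an opened decoder context for its first video stream. */
    static AVCodecContext *open_video_decoder(const char *filename,
                                              AVFormatContext **out_fmt)
    {
        AVFormatContext *fmtContext = NULL;
        if (avformat_open_input(&fmtContext, filename, NULL, NULL) < 0)
            return NULL;
        if (avformat_find_stream_info(fmtContext, NULL) < 0)
            return NULL;

        int firstVideoStream = av_find_best_stream(fmtContext, AVMEDIA_TYPE_VIDEO,
                                                   -1, -1, NULL, 0);
        if (firstVideoStream < 0)
            return NULL;

        AVCodecParameters *par = fmtContext->streams[firstVideoStream]->codecpar;
        AVCodec *decoder = avcodec_find_decoder(par->codec_id);
        AVCodecContext *codecContext = avcodec_alloc_context3(decoder);

        /* copy the per-stream parameters into the freshly allocated context */
        avcodec_parameters_to_context(codecContext, par);
        if (avcodec_open2(codecContext, decoder, NULL) < 0)
            return NULL;

        *out_fmt = fmtContext;
        return codecContext;
    }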
[16:47:53 CET] <JEEB> faLUCE: back when I did my own avio thing I just passed a false file name to avformat_open_input
[16:47:56 CET] <JEEB> https://github.com/jeeb/matroska_thumbnails/blob/master/src/matroska_thumbnailer.cpp#L139
[16:48:47 CET] <JEEB> I think if you're using a non-custom avio thing you just use the protocol "URL" header
[16:49:10 CET] <JEEB> or should I say the protocol in the URL :P
[16:49:26 CET] <cutesykitties> klaxa my version does not seem to have that function or AVCodecParameters, but thank you very much, i will update and try that
[16:51:07 CET] <JEEB> faLUCE: http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavdevice/v4l2.c;h=ae51d837af3570d1f9c1640a0aafcced5f30a970;hb=HEAD#l810
[16:51:12 CET] <JEEB> I think that answers something :P
[16:54:00 CET] <faLUCE> JEEB: that doesn't answer it... it only adds confusion... do you mean that I can't use avio_open2() and should use avformat_open_input() instead?
[16:54:13 CET] <JEEB> no
[16:54:36 CET] <JEEB> also take a look at the source link I pasted :P
[16:56:06 CET] <faLUCE> JEEB: please, be more precise or don't answer at all. It's completely useless to give this kind of help in the channel...
[16:56:16 CET] <JEEB> fuck  you
[16:56:33 CET] <JEEB> I gave you an example and since you asked for a file name I gave you what the v4l2 thing probes
[16:58:41 CET] <faLUCE> JEEB: my question was very precise: "which API function do I have to invoke in order to open a v4l2 device?". Then you answered with something like "try to figure it out yourself by looking at a long piece of code". This is completely useless for me. I did not ask for this kind of help, and this kind of help is pointless for a NORMAL API.
[16:58:49 CET] <faLUCE> then, JEEB fuck you as well
[16:58:54 CET] <fritsch> the answer was very precise
[16:59:00 CET] <faLUCE> fritsch: really not.
[16:59:02 CET] <fritsch> if you cannot cope with it, that's your problem and not Jeebs
[16:59:13 CET] <fritsch> everything you asked was answered in the code snippet, even with the correct line
[16:59:27 CET] <fritsch> he could write the code for you, that would be next level
[16:59:42 CET] <faLUCE> fritsch: no. In a normal API, this way of proceeding is completely absurd. And I did not ask anyone to write code for me
[17:00:00 CET] <faLUCE> now, I don't want to discuss about that anymore.
[17:01:54 CET] <fritsch> ignore_list++
[17:02:45 CET] <faLUCE> if this kind of answer is given to this kind of question, then there are two possibilities: 1) the API is obscure and bad 2) the person who answers wants to provoke.
[17:02:59 CET] <faLUCE> now I'll really stop this useless discussion with you
[17:26:23 CET] <faLUCE> well, the answer is here:  https://gist.github.com/shahriman/619fd90ccbd17734089b   <--- without JEEB's stupid way of provoking people in order to show them how difficult it is to use libav for easy stuff.
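For reference, a minimal sketch of opening a v4l2 device through libavformat/libavdevice along the lines of that gist; the device path and option values are placeholders:

    #include <libavdevice/avdevice.h>
    #include <libavformat/avformat.h>

    static AVFormatContext *open_v4l2(const char *device)   /* e.g. "/dev/video0" */
    {
        av_register_all();
        avdevice_register_all();   /* makes the "video4linux2" input format available */

        AVInputFormat *v4l2 = av_find_input_format("video4linux2");
        AVDictionary *opts = NULL;
        av_dict_set(&opts, "video_size", "640x480", 0);
        av_dict_set(&opts, "framerate", "30", 0);

        AVFormatContext *fmt = NULL;
        int ret = avformat_open_input(&fmt, device, v4l2, &opts);
        av_dict_free(&opts);
        return ret < 0 ? NULL : fmt;
    }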
[18:10:05 CET] <georgie> Hi guys,
[18:10:06 CET] <georgie> I'm trying to do screen capture with FFMPEG on Windows.
[18:10:06 CET] <georgie> I'm recording my screen (Chrome browser - Full screen). The browser is capturing an animation (HTML5/Canvas) that has a timecode at 30 frames per second.
[18:10:37 CET] <georgie> At the moment, this PC is the best hardware that I have available to record the screen:
[18:10:37 CET] <georgie> Env: Windows 10
[18:10:37 CET] <georgie> CPU: Intel I5-6400  @2.7 GHz
[18:10:39 CET] <georgie> Graphic card: Radeon RX 480
[18:10:41 CET] <georgie> FFMPEG: Version N-83049-ge71b811
[18:10:52 CET] <georgie> The following command has provided the best result so far (duplicate frames/missing frames):
[18:11:08 CET] <JEEB> not sure how good the best windows screen capture API in libavdevice/ffmpeg is
[18:11:16 CET] <JEEB> I think DShow is the best?
[18:11:23 CET] <JEEB> or gdi? don't remember
[18:11:29 CET] <georgie> ffmpeg -rtbufsize 10000k -f dshow -i video="screen-capture-recorder" -c:v libx264 -r 30 -crf 18 -preset ultrafast -tune zerolatency -pix_fmt yuv420p -y output.mp4
[18:11:36 CET] <georgie> I'm currently using dshow
[18:11:53 CET] <JEEB> there's a very nice API available from I think either windows 7 or 8
[18:12:00 CET] <JEEB> vdub implements it but ffmpeg doesn't as far as I can remember
[18:12:19 CET] <JEEB> this thing I think http://www.virtualdub.org/blog/pivot/entry.php?id=356
[18:12:37 CET] <JEEB> so if you're encoding into a file you might want to look into vdub+ut video's VFW component into AVI
[18:12:42 CET] <JEEB> it's lossless and fast
[18:12:54 CET] <JEEB> you can then use better stuff to encode into something lossy for final uploads etc
[18:13:40 CET] <JEEB> you're on win10 so that API should be available for screen capture
[18:14:34 CET] <JEEB> would be interesting to see that thing implemented in lavd as well, but I think most people doing FFmpeg development are on !windows
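If you go the lossless-AVI route, the later re-encode for uploading could look roughly like this (file names and settings are only placeholders):

    ffmpeg -i capture.avi -c:v libx264 -crf 18 -preset slow -pix_fmt yuv420p -c:a aac output.mp4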
[18:14:40 CET] <georgie> Are Direct3D & dshow the same?
[18:15:20 CET] <JEEB> I think not
[18:15:26 CET] <georgie> ok
[18:15:49 CET] <georgie> Thanks Jeeb
[18:16:18 CET] <georgie> Let me check this "VirtualDub"
[18:21:40 CET] <JEEB> you can also make a feature request on trac linking to that article regarding the new screen capture API on windows
[18:21:50 CET] <JEEB> that way you can note that you'd like it in libavdevice
[19:36:18 CET] <faLUCE> I'm seeing that the encoding function of the lib delivers packets, and not frames:   https://www.ffmpeg.org/doxygen/3.1/group__lavc__encoding.html#ga2c08a4729f72f9bdac41b5533c4f2642  . Now I wonder: what about x264_encoder_encode()? Does the ffmpeg function wrap it only for complete frames, or also for chunks of a frame?
[19:52:18 CET] <_pseudonym> Hi!  I've cross-compiled ffmpeg for the raspberry pi following the guide (https://trac.ffmpeg.org/wiki/CompilationGuide/RaspberryPi), but it's seg-faulting when I try to use v4l2 with a USB webcam: http://pastebin.com/BmaRRhkT
[19:56:44 CET] <klaxa> faLUCE: have you read the description for AVPacket? https://www.ffmpeg.org/doxygen/trunk/structAVPacket.html
[19:57:00 CET] <klaxa> it should (typically) contain one video frame
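A minimal sketch of the call being discussed, using the avcodec_encode_video2() API from the linked 3.1 docs; enc_ctx, ofmt_ctx and frame are assumed to be set up elsewhere:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Encode one raw frame and write the resulting packet, if any. */
    static int encode_one(AVCodecContext *enc_ctx, AVFormatContext *ofmt_ctx,
                          AVFrame *frame)
    {
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = NULL;          /* let the encoder allocate the payload */
        pkt.size = 0;

        int got_packet = 0;
        int ret = avcodec_encode_video2(enc_ctx, &pkt, frame, &got_packet);
        if (ret < 0)
            return ret;
        if (got_packet) {
            /* here one AVPacket holds one complete compressed video frame */
            ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
            av_packet_unref(&pkt);
        }
        return ret;
    }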
[19:58:04 CET] <faLUCE> klaxa: yes, but I suspect it was designed for managing chunks.
[19:58:33 CET] <klaxa> have you tried and tested it before suspecting things?
[20:02:09 CET] <_pseudonym> backtrace from gdb: http://pastebin.com/tKmXcKN8
[20:02:51 CET] <faLUCE> klaxa: yes.  And I'm sure that x264 can manage chunks instead of frames. Then, an appropriate wrapper for x264 should let you put chunks into avcodec_encode_video2()
[20:03:31 CET] <faLUCE> so, the encoder doesn't block, and you can have multiple x264 encoders in the main loop
[20:03:35 CET] <faLUCE> without threads
[20:05:14 CET] <faLUCE> klaxa: am I wrong?
[20:05:26 CET] <klaxa> i don't know
[20:05:55 CET] <klaxa> but i don't see how writing complete frames would make it more blocking than writing chunks
[20:06:13 CET] <faLUCE> klaxa: because chunks are small
[20:06:14 CET] <iH2O> how can i convert a color mp4 video to black and white?
[20:06:23 CET] <faLUCE> then, it reduces latency
[20:06:55 CET] <klaxa> this sounds like premature optimization
[20:07:04 CET] <faLUCE> klaxa: you should read more carefully what someone is trying to say, before using that disturbing tone
[20:07:27 CET] <klaxa> what
[20:07:37 CET] <faLUCE> [19:58] <klaxa> have you tried and tested it before suspecting things?
[20:08:02 CET] <klaxa> well you said you have
[20:08:12 CET] <klaxa> so you have more knowledge than me about this already
[20:08:20 CET] <faLUCE> klaxa: I said that AFTER your sentence
[20:08:37 CET] <klaxa> look
[20:08:45 CET] <klaxa> i'm trying to offer my thoughts for free
[20:08:52 CET] <klaxa> if you don't want help, you can always leave
[20:08:52 CET] <_pseudonym> iH2O: what have you tried so far?
[20:09:00 CET] <iH2O> nothing yet!
[20:09:02 CET] <faLUCE> klaxa: nobody asks for your offer if this is your tone.
[20:09:06 CET] <faLUCE> keep it down.
[20:09:26 CET] <klaxa> i fail to see where i used bad tone
[20:09:41 CET] <_pseudonym> iH2O: have you used ffmpeg before?
[20:09:47 CET] <iH2O> yes, once in a while
[20:09:58 CET] <klaxa> it seems you are not used to discussion through text or deliberately think everyone hates you
[20:10:02 CET] <iH2O> still using it, for example to catenate videos
[20:10:51 CET] <_pseudonym> iH2O: what's your current command line that you want to modify to make the output grayscale?
[20:11:06 CET] <faLUCE> klaxa: really not. There are several people who use good manners when discussing, and there are other people who assume the tone of "gurus"
[20:11:17 CET] <faLUCE> now I'm tired of discussing this
[20:11:23 CET] <faLUCE> let me get on with my stuff
[20:11:41 CET] <iH2O> I'd like to give the name of a video at the command line and get the same video in black and white as output
[20:12:25 CET] <iH2O> I don't know what ffmpeg options to give
[20:12:36 CET] <klaxa> iH2O: i found this, maybe this helps? https://ffmpeg.org/ffmpeg-filters.html#Examples-37
[20:17:05 CET] <_pseudonym> iH2O: `ffmpeg -i <input file> -vf "format=gray" <output file>`
[20:17:30 CET] <iH2O> thx folks
[20:17:48 CET] <klaxa> oh that's a lot easier, lol
[20:18:26 CET] <_pseudonym> from http://video.stackexchange.com/questions/18052/convert-variable-input-formats-to-black-and-white-mp4
[20:19:25 CET] <iH2O> color is hard on my eyes
[20:20:30 CET] <_pseudonym> iH2O: it might be easier to adjust your video player to play in grayscale
[20:20:41 CET] <iH2O> ???
[20:20:44 CET] <iH2O> what about mplayer?
[20:21:05 CET] <klaxa> i think there are saturation settings
[20:21:09 CET] <klaxa> at least in mpv there are
[20:21:22 CET] <iH2O> ok,i'll ask #mplayer
[20:21:42 CET] <_pseudonym> I know you can with VLC
[20:21:55 CET] <iH2O> i have it installed too
[20:23:01 CET] <_pseudonym> vlc > tools > effects & filters > video effects > essential > check "image adjust" and move "saturation" slider all the way left
[20:24:12 CET] <_pseudonym> or you can fine-tune the saturation value to where your eyes are happy
[20:24:24 CET] <iH2O> I'm already happy
[20:36:14 CET] <CoJaBo> cool, encoded half a terabyte of video in ffmpeg, and it didn't crash this time =D
[20:53:54 CET] <grublet> CoJaBo: that's a lot of video D:
[20:54:20 CET] <CoJaBo> Uncompressed 4K
[20:56:38 CET] <faLUCE> which is the function for resampling a frame from YUYV422 to YUV420P ?
[20:56:48 CET] <faLUCE> (in the library)
[20:57:26 CET] <grublet> CoJaBo: reminds me of crashing my system by using imagemagick to downsample 500 megapixel images :)
[20:57:31 CET] <JEEB> you can use either swscale or the zimg video filter if you build with the zimg library
[20:57:40 CET] <grublet> 12GB of ram only stretches so far it seems
[20:58:06 CET] <CoJaBo> I've got 4GB
[20:58:24 CET] <CoJaBo> :/
[20:59:12 CET] <faLUCE> thanks JEEB
[20:59:23 CET] <JEEB> faLUCE: back when I did my stuff there was only swscale so I used it for X->RGB conversion but the difference is that you just set the destination pix_fmt to YUV420P
[20:59:39 CET] <JEEB> here's the main part of what I did in 2013 (APIs could have changed since) https://github.com/jeeb/matroska_thumbnails/blob/master/src/matroska_thumbnailer.cpp#L321
[21:00:14 CET] <JEEB> I think there's an example in the FFmpeg code base for this, too
[21:01:16 CET] <faLUCE> JEEB: scaling_video.c
[21:01:19 CET] <JEEB> yea
[21:01:38 CET] <JEEB> does that use the scale filter or swscale straight?
[21:01:53 CET] <JEEB> because it seems like sws_scale just outputs a buffer :P and not an AVFrame
[21:01:58 CET] <JEEB> which I think you want in most cases
[21:02:36 CET] <JEEB> because AVFrame then goes into avcodec and you get yer packets out of the encoder
[21:03:46 CET] <faLUCE> it uses  sws_scale... then I have to call av_decode in order to create the AVFrame
[21:04:05 CET] <JEEB> nah, av_decode is before
[21:04:12 CET] <JEEB> av_decode decodes coded video
[21:04:17 CET] <JEEB> then you get an avframe
[21:04:36 CET] <JEEB> then you use the buffer in the avframe as input for swscale
[21:04:47 CET] <JEEB> also man, I just found this article and I already love it https://trac.ffmpeg.org/wiki/swscale
[21:05:07 CET] <faLUCE> you are right JEEB,
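A minimal sketch of the YUYV422 -> YUV420P conversion with swscale (strictly a pixel format conversion rather than resampling); error checks omitted:

    #include <libswscale/swscale.h>
    #include <libavutil/frame.h>

    /* Convert one decoded YUYV422 frame into a newly allocated YUV420P frame. */
    static AVFrame *yuyv422_to_yuv420p(const AVFrame *src)
    {
        AVFrame *dst = av_frame_alloc();
        dst->format = AV_PIX_FMT_YUV420P;
        dst->width  = src->width;
        dst->height = src->height;
        av_frame_get_buffer(dst, 32);

        struct SwsContext *sws = sws_getContext(src->width, src->height, AV_PIX_FMT_YUYV422,
                                                dst->width, dst->height, AV_PIX_FMT_YUV420P,
                                                SWS_BILINEAR, NULL, NULL, NULL);
        sws_scale(sws, (const uint8_t * const *)src->data, src->linesize,
                  0, src->height, dst->data, dst->linesize);
        sws_freeContext(sws);
        return dst;
    }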
[22:55:09 CET] <rkantos> Where could I find info for using qsv mjpeg decoder?
[23:19:08 CET] <Seylerius> Okay, I'm having a weird-ass ffmpeg & audacity problem, explained in more detail here: https://www.reddit.com/r/ffmpeg/comments/5nwmnl/mixing_additional_audio_into_videos_audio_track/
[23:20:29 CET] <Seylerius> In short, taking audio that syncs properly with a video, mixing other audio into the middle of it without changing the length, and rerunning ffmpeg: result is that the audio is suddenly a half second or more out of sync.
[23:20:45 CET] <Seylerius> What in the void is going on here?
[23:22:20 CET] <Diag> Seylerius: have you recalibrated the anamorphic dejigger for the audio conversion?
[23:22:30 CET] <Seylerius> ...
[23:22:38 CET] <Diag> sometimes the modulation can cause a desync
[23:22:55 CET] <Diag> Theres a flag for it somewhere
[23:23:08 CET] <Seylerius> Diag: That sounds more like technobabble than recognizable troubleshooting.
[23:25:17 CET] <Diag> Seylerius: damn you got me, im from x264 and im just poisoning the channel, mwuhahaha.
[23:25:26 CET] <Diag> no but seriously, someone smart around here should know whats going on
[23:25:32 CET] <Seylerius> Lulz.
[00:00:00 CET] --- Sun Jan 15 2017


More information about the Ffmpeg-devel-irc mailing list