[Ffmpeg-devel-irc] ffmpeg.log.20130731

burek burek021 at gmail.com
Thu Aug 1 02:05:01 CEST 2013


[00:16] <Jookia> Hello! I've written some code using the decode example, except I'm using huffyuv and RGB24. However, the output file fails to be loaded in to any media player, including ffmpeg. The only error I get from ffmpeg is "Invalid data found when processing input". My code is here: http://sprunge.us/iHcG Any help would be ... helpful?
[00:21] <teratorn> Jookia: without a container format, how would ffmpeg know what codec to use?
[00:22] <Jookia> teratorn: oh. i guess H264/MPG have magic numbers that ffmpeg picks up on?
[00:23] <teratorn> well, it looks at the file extension and tries to guess
[00:23] <teratorn> if you dont specify
[00:24] <teratorn> you *can* create mpeg2 video by concatenating encoded packets together in a file, but I'm not sure that will work for h264
[00:24] <teratorn> well, mpeg1 video anyway, im not sure about mpeg2
[00:24] <teratorn> but you should probably just use a container format like matroska
[00:24] <Jookia> The example encoding/decoding must do that then?
[00:24] <teratorn> or something
[00:24] <teratorn> they are doing mpeg1 right?
[00:24] <Jookia> They're doing h264 and mpg, yeah
[00:25] <teratorn> raw encoded h264 packets concatenated in a file?
[00:25] <teratorn> i'm not sure how it recognizes that on playback. some combination of file extension and guessing
[00:25] <Jookia> yeah
[00:26] <Jookia> Is there an example of how to write the packets into a container? (is that what I'm supposed to do?)
[00:26] <Jookia> muxing.c ?
[00:27] <teratorn> yeah look there
[00:27] <teratorn> you'll open up an AVFormatContext using one of the nice APIs
[00:27] <teratorn> then av_interleaved_write_frame() writes your encoded packets to a stream in that format
[00:28] <Jookia> Ah, thanks so much
[00:28] <teratorn> good luck
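(A minimal sketch of the container step described above, using the 2013-era libavformat API; "enc_ctx" is an assumed, already-opened encoder context, and error handling is omitted:)

    #include <libavformat/avformat.h>

    /* Wrap already-encoded packets in a container so that players can
     * identify the codec. Uses the API of the time, where each stream
     * still carries its own AVCodecContext in st->codec. */
    static AVFormatContext *open_container(AVCodecContext *enc_ctx,
                                           const char *filename)
    {
        AVFormatContext *oc = NULL;
        av_register_all();
        avformat_alloc_output_context2(&oc, NULL, NULL, filename); /* format guessed from extension */
        AVStream *st = avformat_new_stream(oc, NULL);
        avcodec_copy_context(st->codec, enc_ctx);  /* copy codec parameters to the stream */
        avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
        avformat_write_header(oc, NULL);
        return oc;
    }

    /* per encoded packet: pkt.stream_index = 0; av_interleaved_write_frame(oc, &pkt);
     * when finished:      av_write_trailer(oc); avio_close(oc->pb);
     *                     avformat_free_context(oc); */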
[01:18] <kevint_> I'm piping raw images into ffmpeg to create a video. The images are not taken at a constant framerate, so is there any way to timestamp those images as they go in?
[01:18] <kevint_> so as to encode in "realtime" rather than as fast as possible at a specific framerate
[01:24] <klaxa> did you try specifying -r for the input?
[01:32] <kevint_> klaxa: yes, but -r sets a constant framerate while the frames are coming in at a variable frame rate
[01:32] <klaxa> ah... hmm...
[01:33] <saste> kevint_, we added a new image2 option
[01:33] <saste> ts_from_file or something
[01:33] <saste> unfortunately it is undocumented
[01:34] <durandal_1707> ffmpeg -h demuxer=image2
[01:35] <kevint_> that's okay I can walk through the code... how does it detect the timestamp of the incoming images?
[01:35] <kevint_> do I prepend it to the image?
[01:36] <kevint_> "If set to 1, will set frame timestamp to modification time of image file"
[01:36] <kevint_> wow
[01:36] <kevint_> how accurate should that be? milliseconds?
[01:40] <kevint_> looks like 1 second resolution, which isn't good enough unfortunately
[01:41] <t4nk674> Hello @ all
[01:42] <t4nk674> Something curious happened: it seems when I try to pass the "deblockalpha" parameter to the new ffmpeg, it does not recognize the option
[01:42] <t4nk674> Seems it is deprecated; does anybody know if it was replaced by another parameter?
[01:43] <saste> kevint_, no it is a different thing
[01:43] <saste> it is taking the time from the file stats
[01:43] <saste> can't be used to provide timestamps
[01:44] <saste> there are some tickets related to the feature you ask for, please upvote the tickets
[01:44] <saste> i plan to work on it, but I don't know when
[01:45] <t4nk674> Hello saste
[01:45] <t4nk674> this channel almost feels spooky ^^
[01:46] <kevint_> saste - What if the file's last modified field is the timestamp I want to provide the frame with?
[01:46] <kevint_> I am creating the image file at the moment I want the frame to be timestamped as
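(For reference, the image2 option discussed above is ts_from_file; a sketch of its use, with -vsync 0 assumed so the file-derived timestamps are passed through rather than regenerated. As noted, mtime granularity here is one second:)

    ffmpeg -f image2 -ts_from_file 1 -i img%04d.png -vsync 0 out.mkv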
[01:54] <t4nk674> wb saste
[01:54] <t4nk674> Do you happen to be involved with the development of ffmpeg?
[01:58] <saste> t4nk674, yes why?
[01:59] <t4nk674> Maybe I could intrigue you with my question about whether the deblock parameter has been fully deprecated without replacement
[02:03] <saste> t4nk674, what's that an x264 option? I don't know
[02:03] <t4nk674> it is a ffmpeg option
[02:04] <t4nk674> probably part of the x264 lib though
[02:11] <durandal_1707> t4nk674: it was removed years ago....
[02:11] <durandal_1707> you can pass all x264 options via AVOpt
[02:11] <durandal_1707> x264-params
[02:12] <t4nk674> AVOpt?
[02:12] <durandal_1707> if you are only using ffmpeg then you should not need to bother about AVOpt
[02:12] <t4nk674> would it be like ffmpeg -i bla bla.mp4 -AVOpt -someparameterfor-x264-lib output.mp4 ?
[02:12] <durandal_1707> nope
[02:13] <durandal_1707> there is documentation
[02:13] <durandal_1707> and it's -x264-params
[02:13] <t4nk674> ah
[02:13] <durandal_1707> ffmpeg -h encoder=libx264
[02:13] <t4nk674> isn't the encoder already specified with -vcodec?
[02:14] <durandal_1707> above thing will show help
[02:14] <durandal_1707> for encoder libx264
[02:14] <t4nk674> ah
[02:15] <t4nk674> well i have enough of reading documentation
[02:15] <t4nk674> so to make it short, I can pass any x264 option by putting it after -x264-params
[02:15] <durandal_1707> i'm busy
[02:16] <t4nk674> there are a few cases where ffmpeg does not provide an equivalent parameter for an x264 option
[02:17] <llogan> t4nk674: https://trac.ffmpeg.org/wiki/x264EncodingGuide#Overwritingdefaultpresetsettings
[02:21] <t4nk674> thank you llogan
[02:22] <llogan> t4nk674: although you should probably just be using the presets and not monkeying with various x264 options
[02:22] <phr3ak> durandal_1707: thanks
[02:22] <durandal_1707> phr3ak: for what?
[02:23] <t4nk674> llogan why would I want to use the presets
[02:23] <llogan> read the guide
[02:24] <t4nk674> there is no learning curve in using presets
[02:25] <t4nk674> llogan, where are the presets like normal, fast, ultrafast stored?
[02:25] <t4nk674> tried to look for them, but only found presets for ipod and stuff
[02:27] <llogan> they are not stored as files anymore
[02:29] <llogan> http://git.videolan.org/?p=x264.git;a=blob;f=common/common.c;hb=HEAD#l180
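(For reference, the removed deblockalpha/deblockbeta options correspond to x264's own deblock=alpha,beta setting, which can be passed through -x264-params as described above, e.g.:)

    ffmpeg -i input.mp4 -c:v libx264 -preset medium -x264-params "deblock=-1,-1" output.mp4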
[02:41] <t4nk674> thanks llogan
[02:42] <t4nk674> good bye all
[02:42] <t4nk674> thank you for your help
[02:42] <t4nk674> im gonna start the next youtube site now!
[02:43] <t4nk674> lol
[03:16] <gamax92> Hello there, I'm trying to use ffplay on an audio file but it seems to ignore my filters and gives me the showspectrum scale=sqrt video output instead. As a more generic test I tried -vf vflip, but even that wouldn't make my spectrum upside down.
[03:16] <gamax92> The command I'm using is: ffplay -i Test.wav -vf showspectrum=scale=lin -loop 0
[03:18] <gamax92> I've also tried using "-vn -vf showspectrum..." and "-vf nullsink,showspectrum=scale=lin" but that didn't work either.
[03:22] <llogan> gamax92: see the example in the showspectrum documentation: http://ffmpeg.org/ffmpeg-filters.html#showspectrum
[03:25] <gamax92> "Argument 'asplit' provided as input filename, but ''amovie=Test.wav,' was already specified."
[03:27] <gamax92> http://pastebin.com/uPp1E937
[03:30] <llogan> oh, windows. maybe it doesn't like the single quotes.
[03:30] <llogan> change them to "
[03:33] <gamax92> Well, that worked, but why is it that i have to use lavfi to generate the filter?
[03:34] <gamax92> I also just realized that scale refers to color intensity and not frequency scale and so the entire effort is worthless.
[03:34] <llogan> i'm not sure
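(The underlying reason: showspectrum consumes audio and produces video, so it cannot sit in ffplay's -vf chain, which only filters an existing video stream; it has to be driven from a lavfi input graph, as in the documentation example:)

    ffplay -f lavfi "amovie=Test.wav, asplit [a][out1]; [a] showspectrum=scale=lin [out0]"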
[07:20] <Prannon> I am attempting to set up a flash stream using a webcam and my local server. I have configured ffserver.conf and I am running this command to initiate the stream:
[07:21] <Prannon> ffmpeg -f v4l2 -s 640x480 -r 15 -i /dev/video0 http://prannon.net:8090/chopstick.swf
[07:21] <Prannon> I can visit the above URL and it appears that the server is trying to serve me the stream, but I don't see that my webcam is online and I don't see that the system is writing to the chopstick.swf file in /tmp/. I am not sure what I'm doing wrong.
[07:21] <Prannon> Are there any tips on what I should check?
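(One likely issue, as a guess from the command above: ffmpeg should publish to the <Feed> URL defined in ffserver.conf, not to the stream name; ffserver then generates the streams from that feed. A sketch with assumed names:)

    # ffserver.conf fragment (names are assumptions)
    <Feed chopstick.ffm>
      File /tmp/chopstick.ffm
      FileMaxSize 20M
    </Feed>

    <Stream chopstick.swf>
      Feed chopstick.ffm
      Format swf
      VideoSize 640x480
      VideoFrameRate 15
      NoAudio
    </Stream>

    # publish to the feed, not the stream:
    ffmpeg -f v4l2 -s 640x480 -r 15 -i /dev/video0 http://prannon.net:8090/chopstick.ffm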
[10:18] <nlight> how do I get the AV_PIX_FMT_* of a AVCodecContext  ?
[10:20] <nlight> heh, pix_fmt member, ok
[10:29] <luc4> Hi! Is there any way to avoid the need for seeking when muxing using libavformat?
[10:31] <myFfmpeg> hi all, I have a small question. Is there any way to place two input images next to each other using a filter?
[10:32] <myFfmpeg> I have two input sources and I need to place them next to each other
[10:43] <myFfmpeg> can I somehow create a video from two different sources but the resultant video will contain these two video side by side?
[10:44] <nlight> myFfmpeg, couldn't you just memcpy the two sources to a double-sized buffer?
[10:44] <nlight> and then encode with ffmpeg?
[10:45] <nlight> decode to rgb -> memcpy into a single buffer -> encode again
[10:45] <myFfmpeg> yes, but actually I am trying to get my answer by simplifying my main question
[10:45] <myFfmpeg> I have two image sources, one of them is yuv image and the other one is rgb
[10:45] <myFfmpeg> what I want to do is to have one resultant image which contains two input images side by side
[10:45] <myFfmpeg> I want to do it by one call if possible
[10:46] <JEEB> go look at ffmpeg's video filter docs, seriously. Although the fact that you are dealing with two pictures in different colorspaces can make it a bit less simple
[10:46] <myFfmpeg> I have looked at it JEEB
[10:47] <myFfmpeg> I see tile filter but I have no idea if I can solve my problem with it
[10:47] <JEEB> Have you, you know, TRIED it? Although I must agree that such less simple cases of filtering can be rather funky to get right the first time :P
[10:48] <JEEB> I have other things I've gotten used to using, and thankfully those work. But I'm pretty sure you can do what you want to do with just ffmpeg
[10:48] <JEEB> you just have to wrap your head around ffmpeg's filtering syntax
[10:48] <myFfmpeg> ohh, really? Can you give me a hint where to start?
[10:49] <myFfmpeg> "you just have to wrap your head around the filtering syntax of ffmpeg's" that I didn't understand
[10:49] <myFfmpeg> so you are saying that I don't need any filter? I can do it with ffmpeg?
[10:50] <JEEB> ...
[10:50] <myFfmpeg> ok. let me read more
[10:50] <JEEB> you do know the -vf option, right?
[10:50] <myFfmpeg> thanks
[10:51] <myFfmpeg> well I am using API
[10:51] <JEEB> good luck and have fun
[10:51] <JEEB> it's surely possible
[10:51] <JEEB> but you'll still have to deal with the avfilter syntax and stuff
[10:52] <JEEB> hah
[10:52] <JEEB> and I already found you an example
[10:52] <JEEB> http://ffmpeg.org/ffmpeg-filters.html#Examples-34
[10:52] <JEEB> Compose output by putting two input videos side to side
[10:53] <JEEB> first try with command line, then try to poke that into API usage
[10:53] <myFfmpeg> ok
[10:53] <myFfmpeg> I am trying now
[10:53] <myFfmpeg> thanks a lot
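(The linked documentation example, roughly: it builds a blank canvas with nullsrc and overlays the two scaled inputs at different x offsets; the unlabeled output of the last overlay is mapped automatically:)

    ffmpeg -i left.avi -i right.avi -filter_complex \
      "nullsrc=size=200x100 [background];
       [0:v] setpts=PTS-STARTPTS, scale=100x100 [left];
       [1:v] setpts=PTS-STARTPTS, scale=100x100 [right];
       [background][left]       overlay=shortest=1 [background+left];
       [background+left][right] overlay=shortest=1:x=100" \
      out.mkv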
[11:08] <myFfmpeg> another question came to my mind. is it possible to convert yuv to jpeg without converting it to rgb first?
[11:08] <myFfmpeg> i mean is there any direct conversion?
[11:09] <myFfmpeg> or is it even possible
[11:09] <Mavrik> em
[11:09] <Mavrik> JPEG is stored in yuv usually :)
[11:09] <Mavrik> so yes.
[11:09] <myFfmpeg> how do you mean?
[11:09] <myFfmpeg> as far as I know jpeg2000 is a sort of wavelet transformation
[11:10] <myFfmpeg> ahh you mean it is applied on yuv channels separately
[11:10] <myFfmpeg> is there any public algorithm for this?
[11:11] <myFfmpeg> I mean for the conversion?
[11:11] <Mavrik> myFfmpeg, JPEG compression works in the YCbCr color space, not RGB, so it's always converted
[11:12] <myFfmpeg> so please tell me this: I have a stored YUV image and I want a JPEG out of it. what is the process to apply?
[11:12] <Mavrik> myFfmpeg, depends on what you want
[11:13] <Mavrik> myFfmpeg, are you looking for API calls, command-line tools, ffmpeg command-line?
[11:13] <myFfmpeg> any of them :D
[11:13] <myFfmpeg> but I would prefer an API call
[11:13] <myFfmpeg> or even any algorithm that I can apply myself
[11:13] <Mavrik> use swscale to convert image color space to PIX_FMT_YUVJ420P
[11:13] <Mavrik> then just initialize AV_CODEC_ID_MJPEG encoder and pass the image there
[11:14] <Mavrik> :)
[11:14] <myFfmpeg> :D
[11:14] <myFfmpeg> perfect man
[11:14] <myFfmpeg> thanks a lot
[11:14] <Mavrik> when you call avcodec_encode_video2 with mjpeg encoder you'll get a full JPEG image out everytime :)
[11:15] <myFfmpeg> i see
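(A sketch of the swscale + MJPEG route described above, using the avcodec_encode_video2() API of the time (newer FFmpeg uses avcodec_send_frame/avcodec_receive_packet); "src" is an assumed decoded AVFrame and error handling is omitted:)

    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>

    /* Encode one frame of any pixel format to a single JPEG packet. */
    static AVPacket encode_jpeg(AVFrame *src, int w, int h)
    {
        /* 1. convert to the JPEG-range planar format the encoder expects */
        struct SwsContext *sws = sws_getContext(w, h, src->format,
                                                w, h, AV_PIX_FMT_YUVJ420P,
                                                SWS_BILINEAR, NULL, NULL, NULL);
        AVFrame *yuv = av_frame_alloc();
        yuv->format = AV_PIX_FMT_YUVJ420P;
        yuv->width  = w;
        yuv->height = h;
        av_frame_get_buffer(yuv, 32);
        sws_scale(sws, (const uint8_t * const *)src->data, src->linesize,
                  0, h, yuv->data, yuv->linesize);

        /* 2. one avcodec_encode_video2() call yields one complete JPEG */
        AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        ctx->pix_fmt   = AV_PIX_FMT_YUVJ420P;
        ctx->width     = w;
        ctx->height    = h;
        ctx->time_base = (AVRational){1, 25};
        avcodec_open2(ctx, codec, NULL);

        AVPacket pkt = {0};
        av_init_packet(&pkt);          /* data=NULL/size=0: encoder allocates */
        int got = 0;
        avcodec_encode_video2(ctx, &pkt, yuv, &got);
        return pkt;                    /* pkt.data holds a full JPEG image */
    }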
[11:18] <luc4> Hi! Does anybody know what the write callback in AVIOContext is supposed to return? The total number of bytes written maybe?
[11:19] <Mavrik> luc4, yes
[11:20] <luc4> Mavrik: thanks
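(For context, the callback in question has this shape; a sketch with an assumed FILE* as the opaque pointer. write_flag=1 in avio_alloc_context marks the context as writable:)

    #include <stdio.h>
    #include <libavformat/avformat.h>

    /* Write callback for a custom AVIOContext: must return the number
     * of bytes written (or a negative AVERROR), as discussed above. */
    static int write_cb(void *opaque, uint8_t *buf, int buf_size)
    {
        FILE *f = opaque;              /* assumed user-supplied sink */
        return (int)fwrite(buf, 1, buf_size, f);
    }

    static AVIOContext *make_writer(FILE *f)
    {
        unsigned char *iobuf = av_malloc(4096);
        return avio_alloc_context(iobuf, 4096, 1 /* write_flag */,
                                  f, NULL, write_cb, NULL);
    }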
[11:25] <myFfmpeg> Mavrik, I need your help
[11:25] <myFfmpeg> :)
[11:25] <Mavrik> mhm.
[11:25] <myFfmpeg> my main problem was the following. I have two frames. one is in yuv format and the other is rgb.
[11:26] <myFfmpeg> My aim is to join these frames into one frame and encode them with jpeg
[11:26] <myFfmpeg> well, yuv to jpeg is done now, with your help
[11:26] <myFfmpeg> I can also convert rgb to jpeg
[11:26] <myFfmpeg> but do you know how I can join them?
[11:27] <myFfmpeg> or should I join them before encoding?
[11:27] <Mavrik> of course you should join them before encoding, it's the only way you can :)
[11:27] <Mavrik> myFfmpeg, how do you want to join them? put them side by side?
[11:27] <myFfmpeg> yes exactly
[11:27] <myFfmpeg> they have the same res.
[11:29] <Mavrik> well you could initialize a filter but that would be a huge hassle
[11:29] <Mavrik> or you convert them to the same pixel format
[11:29] <Mavrik> then create a new AVFrame with the output resolution
[11:29] <myFfmpeg> so, first I will convert RGB to J420P, and also convert YUV to J420P. Then do memcpy
[11:29] <Mavrik> and copy them in side by side
[11:29] <myFfmpeg> would this work?
[11:29] <Mavrik> yes
[11:29] <myFfmpeg> perfect man
[11:29] <myFfmpeg> God bless you
[11:29] <Mavrik> just remember images are stored line by line
[11:30] <myFfmpeg> so memcpy won't work
[11:30] <myFfmpeg> or line by line memcpy
[11:30] <Mavrik> so if you're putting them side by side in horizontal fashion you'll need to run a loop that'll compose lines
[11:30] <myFfmpeg> ok I got it
[11:30] <Mavrik> and that planar formats (the "P" in format) use data[0] - data[2] for each plane
[11:30] <Mavrik> e.g. data[0] is the Y channel, data[1] is U, etc.
[11:31] <myFfmpeg> that part is the fun part :)
[11:31] <myFfmpeg> I will take care of it
[11:31] <myFfmpeg> thanks a lot
[11:31] <Mavrik> oh and that 420 format has half the resolution in the UV channels compared to Y :)
[11:31] <Mavrik> it's fun :D
[11:31] <Mavrik> but not that hard once you get what's going on
[11:31] <myFfmpeg> yes but I will do memcpy after converting them to 420P
[11:32] <myFfmpeg> so they will have the same resolution
[11:32] <myFfmpeg> I mean each channel
[11:34] <Mavrik> mhm
[11:35] <myFfmpeg> no?
[11:35] <Mavrik> myFfmpeg, Y will have twice as many pixels per line as U and V
[11:35] <Mavrik> since it's a 420 format
[11:35] <Mavrik> so if your image is 1280px wide
[11:35] <myFfmpeg> ahh, ok
[11:35] <Mavrik> data[0] will have 1280 elements, data[1] and [2] will have 640
[11:35] <myFfmpeg> I know that part
[11:35] <myFfmpeg> yes yes.
[11:35] <myFfmpeg> that's no problem :)
[11:35] <Mavrik> sorry, data[0] will have 1280xheight elements ;)
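(A sketch of the per-plane, per-line copy outlined above, assuming both inputs are already YUVJ420P and the same size; linesize is respected because rows may be padded:)

    #include <string.h>
    #include <libavutil/frame.h>

    /* Place two same-sized YUVJ420P frames side by side into dst, which
     * must be allocated 2*w wide. Chroma planes are half-size (the 420
     * subsampling discussed above). */
    static void compose_side_by_side(AVFrame *dst, const AVFrame *a,
                                     const AVFrame *b, int w, int h)
    {
        for (int plane = 0; plane < 3; plane++) {
            int pw = plane ? w / 2 : w;   /* chroma: half width */
            int ph = plane ? h / 2 : h;   /* chroma: half height */
            for (int y = 0; y < ph; y++) {
                memcpy(dst->data[plane] + y * dst->linesize[plane],
                       a->data[plane] + y * a->linesize[plane], pw);
                memcpy(dst->data[plane] + y * dst->linesize[plane] + pw,
                       b->data[plane] + y * b->linesize[plane], pw);
            }
        }
    }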
[13:13] <bAndie9100> hi people
[13:26] <bAndie9100> ffmpeg -f rawvideo -pix_fmt uyvy422 -vtag 2vuy -s 720x576 -r 25 -aspect 4:3 -vsync 2 -i pipe: -f alsa -acodec pcm_s16le -ac 1 -i default  -y output.mpg
[13:27] <bAndie9100> i want to record video and audio from different sources
[13:27] <bAndie9100> raw video on stdin and audio on microphone
[13:28] <bAndie9100> i attempt to do it with the command above
[13:29] <bAndie9100> but audio disappears after a while
[13:41] <nlight> can I pass a utf-8 encoded filename to avformat_open_input?
[14:15] <nlight> anyone?
[14:18] <spaam> it should work
[14:18] <spaam> i have seen some commits about this in the past
[14:24] <spaam> nlight: you can always try and see if it works :)
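(For the record, a sketch: the path is passed through as plain bytes, so UTF-8 works on POSIX systems, and FFmpeg's Windows file protocol converts UTF-8 to wide characters internally; the source file itself must be UTF-8 encoded for the literal below:)

    #include <libavformat/avformat.h>

    int open_utf8_path(AVFormatContext **fmt)
    {
        av_register_all();             /* still required in 2013-era FFmpeg */
        return avformat_open_input(fmt, "видео.mkv", NULL, NULL);
    }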
[18:28] <Netlynx> is it possible to capture a webcam (/dev/video0) and overlay text input from a serial port on the same output?
[18:49] <spreeuw> hi, is it possible to increase buffer on streams played with ffplay?
[20:50] <rexbron> Does anyone know why the ass filter might not burn subtitles into a tif sequence?
[20:52] <t4nk149> I am trying to compile the resampling_audio in doc/examples in version 1.2 and it segfaults. The stack trace is http://pastebin.com/syHK8xkb . Any ideas ?
[21:06] <durandal_1707> t4nk149: what? compilation segfaults?
[21:07] <t4nk149> My bad, when I run it, it segfaults.
[00:00] --- Thu Aug  1 2013

