[Ffmpeg-devel-irc] ffmpeg.log.20140305

burek burek021 at gmail.com
Thu Mar 6 02:05:01 CET 2014


[00:53] <Jack64> jangle any luck ?
[00:53] <jangle> I'm about to find out
[01:00] <Jack64> jangle: did you replace the fread() call for your own memory reader?
[01:02] <jangle> I'm doing a sanity check first.
[01:19] <jangle> nope, didn't work
[01:20] <jangle> I realized I'd been linking the example directory against my macports build of ffmpeg and not the local build I was using for testing.  Corrected this, same problem
[01:22] <jangle> starting with the avcodec.c code
[01:22] <jangle> I changed the decoder to be h264, and included a call to video_decode when using the h264 command line flag for the program, decoder pukes
[01:23] <jangle> so clearly, there's more to it than the fread call, mister programmer.
[01:23] <Jack64> hmm
[01:26] <Jack64> is it possible to change your inputs?
[01:26] <Jack64> not having them in those in-memory buffers
[01:26] <jangle> http://paste.lisp.org/display/141486
[05:03] <orbisvicis> hi, I'm running ffmpeg revision d41efc1f267c1b71d83c8c6dff72eab0967c4365, or thereabouts anyway (mplayer doesn't give a specific version, but the commit corresponds to the date of the svn mplayer version I've built. it does say "libavcodec version 54.92.100" though)
[05:04] <orbisvicis> curious if there have been any significant performance improvements to the x264 decoder since
[05:05] <orbisvicis> (that would be Feb 18 2013)
[05:17] <relaxed> orbisvicis: probably, but they come in small doses over time.
[05:19] <orbisvicis> so nothing headline-worthy | no major todo goals accomplished
[05:20] <orbisvicis> not to say I won't try it. just unlikely that minuscule performance improvements accumulated over 2 years would give my slow system enough of an edge
[05:25] <orbisvicis> s/x264 decoder/ffmpeg h264 decoder/
[08:32] <agentOrange> how would I go about setting up a server to listen for and receive RTMP streams, and hand them off to ffmpeg to transcode and broadcast?
[08:59] <bparker> mathis98: VLC can probably do it, not sure about the rtmp support though
[08:59] <mathis98> yeah the RTMP is the kicker
[08:59] <bparker> but I highly doubt you will get much help with such a huge question
[08:59] <mathis98> I think I'm going to have to write my own server
[08:59] <mathis98> right on
[09:00] <bparker> there is librtmp
[09:00] <mathis98> well I've got most of it figured out, just not the very front edge, the listening socket
[09:00] <mathis98> yeah I'm looking at it now
[09:00] <bparker> I know gstreamer supports it
[09:00] <bparker> (rtmp)
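
A hedged aside on the question above: some ffmpeg builds can themselves listen for an incoming RTMP publisher, which covers the "very front edge" mathis98 is missing, at least for a single stream. A sketch only; it assumes a build whose native RTMP protocol supports the "listen" option, and the addresses, stream names, and encoder settings are placeholders:

    # Accept one incoming RTMP publisher on port 1935, transcode it,
    # and push the result to a downstream RTMP endpoint.
    ffmpeg -listen 1 -f flv -i rtmp://0.0.0.0:1935/live/in \
           -c:v libx264 -preset veryfast -c:a aac -strict experimental \
           -f flv rtmp://downstream.example/live/out

This handles one publisher at a time; anything multi-client still needs a dedicated server process, as discussed above.
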
[10:35] <ranman> I have ~100s of images stored in a database. I'd prefer not to persist all of them to disk before passing them into ffmpeg to render into a video. Is there an example of how to do this? programming language doesn't matter...
[10:36] <JEEB> see the docs/examples/demuxing_decoding one in the git repo
[10:36] <JEEB> and stare at the doxygen
[10:36] <JEEB> good luck
[10:36] <JEEB> there are other examples in there, too
[10:37] <ranman> here? https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/demuxing_decoding.c
[10:37] <JEEB> yes, although you might want to actually clone the git repo
[10:37] <ranman> yup, just glancing through it first, wanted to make sure I had the right file.
[10:38] <ranman> do you think ffmpeg is the right tool for this or should I consider something else?
[10:38] <JEEB> FFmpeg's libraries most probably are the right tool, not sure about ffmpeg the cli tool
[10:39] <JEEB> but if you're already ready to jump into code, then yes -- FFmpeg's libraries are going to do it for you (parsing and decoding the pictures, which you can then feed to an encoder, and then mux into a container)
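
If the stored pictures are already coded as JPEG or PNG, there is also a middle road that avoids both temp files and C code: pipe them into the cli tool. A rough sketch, where dump-images is a hypothetical program that writes the coded pictures back to back to stdout:

    # The image2pipe demuxer picks the individual JPEG/PNG pictures
    # off stdin and treats them as a timed sequence to encode.
    dump-images | ffmpeg -f image2pipe -framerate 25 -i - \
        -c:v libx264 -pix_fmt yuv420p output.mp4
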
[10:40] <ranman> ok, thanks for your help, one last question, do you know of any good python bindings for FFmpeg (I was not able to find any)
[10:40] <JEEB> no
[10:40] <JEEB> even if there were some, they are most probably outdated by now
[10:40] <ranman> to c it is :/
[10:40] <ranman> farewell for now
[11:01] <amigojapan> hey JEEB, are you the same JEEB that used to hang out in Japanese on the rizu/2chan networked channel thingy?
[11:02] <JEEB> yes
[11:02] <amigojapan> JEEB nice to see you again
[11:38] <ranman> another nooby question, the frame dropping when building a video using images doesn't seem evenly distributed
[16:07] <TekniQue> I'm trying to get started using the ffmpeg libraries in application development
[16:07] <TekniQue> one thing confuses me, playing with demuxing_decoding.c from doc/examples, it always claims the video/audio frames have no PTS
[16:08] <TekniQue> how on earth can they not have a PTS?
[16:08] <TekniQue> I'm not playing AVI files
[16:08] <TekniQue> this is coming from mp4 and ts files
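
For the record, the demuxer's timestamps ride on the AVPacket, not on the decoded AVFrame, so frame->pts is frequently AV_NOPTS_VALUE even for mp4/ts input; that is what the example reports. A minimal sketch of pulling the best-effort timestamp instead, assuming the fmt_ctx/dec_ctx/frame/pkt variables set up as in demuxing_decoding.c:

    /* Sketch: variables as set up in doc/examples/demuxing_decoding.c. */
    while (av_read_frame(fmt_ctx, &pkt) >= 0) {
        int got_frame = 0;
        avcodec_decode_video2(dec_ctx, frame, &got_frame, &pkt);
        if (got_frame) {
            /* frame->pts is often unset; this accessor falls back to
             * the packet timestamps, in the stream's time_base. */
            int64_t pts = av_frame_get_best_effort_timestamp(frame);
            if (pts != AV_NOPTS_VALUE) {
                AVRational tb = fmt_ctx->streams[pkt.stream_index]->time_base;
                printf("pts %lld (%.3f s)\n", (long long)pts, pts * av_q2d(tb));
            }
        }
        av_free_packet(&pkt);
    }
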
[19:10] <anth0ny> I'm trying to make a video from a number of images.  When doing this, I set a framerate (say, 10fps).  I then want to merge this video with another video of a different framerate (say, 20fps).  Is it possible to create the first video at 20fps while still displaying the images at 10fps (i.e., displaying each frame twice)?
[19:10] <anth0ny> I could, of course, resample the first video to 20fps and then merge it, but I'm hoping to avoid this step for the sake of efficiency
[19:10] <anth0ny> (merge = concatenate)
[19:24] <JodaZ> anth0ny, well, variable framerate videos might also not go over well with every player you want your videos to play on
[19:25] <anth0ny> JodaZ: I guess I should mention that these are MP4 videos, which avconv/ffmpeg complains about if it tries to concatenate videos of different framerate
[19:25] <anth0ny> hence the need to have a consistent framerate
[19:26] <anth0ny> basically, I want a video to have a twice as high framerate while still having the same duration and number of input frames (if that makes sense)
[19:26] <JodaZ> i understand what you want to do
[19:27] <JodaZ> anth0ny, are those videos at least same size, or do you need to re-encode anyways
[19:27] <anth0ny> yes, they are all the same size
[19:28] <JodaZ> anth0ny, note that the inelegance of resampling shouldn't add much overhead to the resulting files, since modern video codecs encode the change between frames and there is no change between two such duplicate frames
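
What anth0ny asks for can in fact be expressed when the image video is first encoded: declare the input at 10 fps and force 20 fps output, and each source frame is written twice while the duration stays the same. A sketch with placeholder filenames:

    ffmpeg -framerate 10 -i img%04d.png -r 20 -c:v libx264 \
        -pix_fmt yuv420p slow_section.mp4
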
[19:28] <anth0ny> basically, I'm making a vid of images. All images come from the same source. Some sections of imagery should play at a faster rate than others. Imagine a surveillance camera where I want the night time images to play twice as fast as the daytime images.
[19:30] <anth0ny> JodaZ: the inelegance that I mentioned was about adding another step to creating these videos (the resample).  I would rather avoid that if possible.  this is something that is going to run many many times
[19:30] <anth0ny> is that what you were referring to?
[19:32] <JodaZ> man, lots of people are working on security camera systems...
[19:34] <JodaZ> anth0ny, well, i can't really help you: http://www.ffmpeg.org/faq.html#How-can-I-concatenate-video-files
[19:35] <JodaZ> with mp4 you can't use file level concat, so you either have the concat filter or the concat demuxer
[19:35] <anth0ny> JodaZ: hmm... maybe mp4 isn't the right format to be using...
[19:35] <JodaZ> or, I mean if you are making the mp4 yourself from individual frames, you should probably switch to a container allowing file level concat
[19:35] <JodaZ> eh, yes
[19:36] <JodaZ> i think you'd maybe rather use raw .h264 or ts
[19:37] <anth0ny> btw, thanks for this help so far
[19:37] <JodaZ> I myself am having problems with concatenating video (actually my problems are rather with splitting it) currently
[19:39] <JodaZ> i think you should try making your frames into .ts or .h264 and then using the concat demuxer to join em, anth0ny
[19:39] <anth0ny> JodaZ: I'm using libx264 https://trac.ffmpeg.org/wiki/x264EncodingGuide, is this not raw h264?
[19:40] <JodaZ> well, h264 is the codec which gets wrapped in a container usually
[19:40] <JodaZ> mp4 is quite an elaborate container
[19:40] <JodaZ> .ts is simpler
[19:40] <JodaZ> and .h264 is no container (or barely one)
[19:41] <anth0ny> i see...
[19:41] <anth0ny> brb, lunch
[19:41] <JodaZ> well, i guess it doesn't really matter tho, just try the concat demuxer and report back
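
The concat-demuxer route JodaZ suggests looks roughly like this, with placeholder names:

    # mylist.txt contains one directive per segment, in order:
    #   file 'day.ts'
    #   file 'night.ts'
    ffmpeg -f concat -i mylist.txt -c copy joined.mp4
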
[20:03] <Jack64> anyone here a pro video splitter ?
[20:03] <Jack64> I need some help splitting videos
[20:04] <Jack64> here's the command line I'm using
[20:04] <Jack64> (in a script)
[20:04] <Jack64> ffmpeg -y -i /tmp/ram/$infile -t $intime0 -c copy $workingdir/smallfile0b.mp4 -ss $intime0 -c copy $workingdir/smallfile0e.mp4 </dev/null >/dev/null 2>/var/log/ffmpeg.log
[20:05] <Jack64> this splits it from beginning to $intime0 to smallfile0b.mp4 and from $intime0 to the end at smallfile0e.mp4
[20:06] <Jack64> now I should be able to feed the same command smallfile0e.mp4 and $intime1 and get from the end of $intime0 to $intime1 right?
[20:14] <Jack64> fixed it :P
[20:14] <klaxa> was about to ask what exactly you meant
[20:15] CTCP PING: 1394046935 534156 from average (average!~un_golan at wikimedia/Spetrea) to #ffmpeg
[20:18] <jangle> I'm attempting to build ffmpeg with debug info, so that I can step through ffplay.  My configure line includes --enable-debug and --disable-stripping.  The configure help suggests that the --enable-debug line takes a parameter for "debug level".  Internet searches suggest to try =3 and =gdb and to leave it alone, and in all 3 of those cases when I try to step through ffplay built in this way, after I set a breakpoint on main and then hit run and next, gdb
[20:18] <jangle> mentions that there is no line information associated with it.  Can anyone offer suggestions about what I should look at next?
[20:19] <Jack64> well I'm generating this script using php and on the other iterations where it used smallfile0e.mp4 for example, it should use -t $intime1 and -ss $intime1 but it was using -t $intime0 -ss $intime1 , hence the malformed split
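
The corrected second pass Jack64 describes would look something like this (whether $intime1 is absolute or relative to the earlier cut point depends on how the script computes it):

    ffmpeg -y -i $workingdir/smallfile0e.mp4 -t $intime1 -c copy $workingdir/smallfile1b.mp4 \
           -ss $intime1 -c copy $workingdir/smallfile1e.mp4
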
[20:20] <Jack64> jangle: still wrestling with the in memory buffer of encoded nals?
[20:21] <jangle> Jack64:  yes
[20:22] <Jack64> jangle: so you decided to step through ffplay and use it to get your frame to the canvas?
[20:23] <jangle> Jack64: I've decided to step through ffplay to see how it stands up the decoder after opening and reading an annex b file, and when I figure out how that happens, I'll do it myself with direct calls to the library
[20:23] <Jack64> cool
[20:24] <jangle> oddly enough, I have the same debug problems when trying to use the libav tools...
[20:25] <Jack64> exactly the same?
[20:25] <Jack64> maybe there's something wrong with your input ?
[20:26] <jangle> it's not that the files don't play
[20:26] <Jack64> you can play it right?
[20:26] <jangle> it's that I seem to not have been able to build up the libraries and programs with proper debug information, so that when I run them inside of gdb, gdb doesn't get enough information to let me step through
[20:27] <jangle> at least, I think that's what's going on.
[20:28] <jangle> using list in gdb prints source listings, but not at the point of current execution, or where the breakpoint hits
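
For reference, a debug-build recipe that usually gives gdb full line information: turn off optimization as well as stripping, and debug the unstripped *_g binaries the build leaves in the source tree rather than an installed copy. A sketch:

    ./configure --enable-debug=3 --disable-optimizations --disable-stripping
    make
    gdb --args ./ffplay_g input.h264   # ffplay_g, not a stripped/installed ffplay
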
[20:37] <JodaZ> jangle, are you really still at this
[20:38] <Jack64> JodaZ: he's obviously committed :)
[20:38] <Jack64> it's an interesting thing to learn, even if just to know how it works
[20:38] <JodaZ> well, I could have helped him yesterday, but for the sake of less spam here I might as well do it now
[20:39] <Jack64> it's about time actually :P you let him sweat it hard eheh
[20:39] <JodaZ> the problem with the code example he had yesterday was that it needed full frames of input passed, not just arbitrary chunks of input buffer
[20:39] <JodaZ> ... as is said in comments in that code actually
[20:39] <jangle> I've ignored jodaz, he's not helpful.
[20:40] <Jack64> ha, he just was
[20:40] <Jack64> so it's your input after all
[20:40] <Jack64> you simply can't do it like that
[20:40] <jangle> i'm not running these tests on my input
[20:41] <Jack64> yea but the input of the tests was not full frames
[20:41] <Jack64> but arbitrary chunks of input buffer, like JodaZ says
[20:41] <Jack64> remember you said you were generating the frames?
[20:43] <jangle> no, I want to generate frames.  I don't have frames, I have only encoded data.  I assume things like, an sps, pps and one idr nal are required for generating one frame, and a new p frame nal relies on previous idr, sps, pps, nal for each new frame
[20:43] <JodaZ> so to get full frames, av_read_frame has to be used, and for that to work with a stream in memory and not from a file, you would probably use a context set up as shown in avio_reading.c with a custom read_packet callback function
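
A compact sketch of that approach, modeled on doc/examples/avio_reading.c; the 4096-byte I/O buffer and the buffer_data struct are just illustrative:

    #include <string.h>
    #include <libavformat/avformat.h>
    #include <libavutil/mem.h>

    struct buffer_data { const uint8_t *ptr; size_t size; };

    /* Custom read callback: hand libavformat the next chunk of the
     * in-memory stream. */
    static int read_packet(void *opaque, uint8_t *buf, int buf_size)
    {
        struct buffer_data *bd = opaque;
        buf_size = FFMIN(buf_size, bd->size);
        if (!buf_size)
            return AVERROR_EOF;
        memcpy(buf, bd->ptr, buf_size);
        bd->ptr  += buf_size;
        bd->size -= buf_size;
        return buf_size;
    }

    int demux_from_memory(const uint8_t *data, size_t size)
    {
        struct buffer_data bd = { data, size };
        AVFormatContext *fmt_ctx;
        AVPacket pkt;

        av_register_all();
        fmt_ctx = avformat_alloc_context();
        fmt_ctx->pb = avio_alloc_context(av_malloc(4096), 4096, 0, &bd,
                                         read_packet, NULL, NULL);
        if (avformat_open_input(&fmt_ctx, NULL, NULL, NULL) < 0)
            return -1;
        if (avformat_find_stream_info(fmt_ctx, NULL) < 0)
            return -1;
        /* av_read_frame() now returns whole frames' worth of data,
         * which is what the decoder wants to be fed. */
        while (av_read_frame(fmt_ctx, &pkt) >= 0)
            av_free_packet(&pkt);
        avformat_close_input(&fmt_ctx);
        return 0;
    }
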
[20:45] <jangle> I also assume things like ffplay opens an annex b file, finds the nals, and feeds them to a decoder setup to expect h264.  the annex b has an sps and pps as its first 2 nals, so they either get read in however other nals are decoded, or get passed to the decoder in a different special way, and then after that, the user of the library must either simply continue to feed nals and wait for the decoder to return a full frame, or do things like reinject the sps
[20:45] <jangle> pps before certain other nals, or more complicated things. My stream only emits one sps and pps nal,
[20:45] <jangle> so I've decided that even if it takes a while to do, if I step through ffplay as it reads in an annex b, I'll be able to figure out how the decoder is fed nals, and then do that myself
[20:48] <jangle> so now I'm at the point where I'm trying to get the programs compiled with enough information to step through, and once again, since it seems like this isn't something people do all the time, I'm running into problems.  I suspect my toolchain is messed up, so, I have much to learn and I appreciate pointers to help me figure out what is expected to be correct, so that I can determine what parts of this process are broken for me.
[20:52] <JodaZ> Jack64, now i wonder if he is redoubling his spamming efforts just to annoy me :)
[21:43] <Jack64> JodaZ: hah I think he's just excited and wants to learn :) besides, this is a chat platform, he's chatting on topic, so I don't consider that spamming. I'm probably going to do the same as him when I have time, I want to learn that low level video stuff too ..
[23:44] <sybariten> evening
[23:45] <sybariten> i have an .mp4 file with audio that seems to be .aac. I also have an mp3 file of a new soundtrack, that i would like to insert instead
[23:45] <sybariten> can this be done with ffmpeg? and what does the mp4 container think about mp3, will i need to do some sort of conversion?
[23:46] <llogan> you want the video from one file and the audio from another?
[23:47] <sybariten> hm, well yeah i guess... i already have an mp4 file which is video+audio . But the audio there is rather crappy, and has since been remixed
[23:47] <llogan> ffmpeg -i video.mp4 -i audio.mp3 -map 0:v -map 1:a -codec:v copy -codec:a aac -strict experimental output.mp4
[23:48] <sybariten> so now I would like to replace the audio track with a new one. The soundtracks have the same length, but not down to single frames or so but maybe down to 1/4 second
[23:48] <relaxed> mp3 is supported in .mp4, no?
[23:48] <sybariten> aah, so you take the mp3 and give it a -codec:a
[23:49] <sybariten> is -strict experimental part of the options?  :)
[23:50] <llogan> i forgot -shortest
[23:51] <llogan> i added -strict experimental because i have no information about your ffmpeg build so I chose the native AAC audio encoder
[23:51] <sybariten> aah ok ...  and should i also use -shortest?
[23:51] <llogan> probably. it will make the output duration the same duration as the shortest input
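
Putting llogan's pieces together, the full command under discussion becomes:

    ffmpeg -i video.mp4 -i audio.mp3 -map 0:v -map 1:a \
           -codec:v copy -codec:a aac -strict experimental \
           -shortest output.mp4
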
[23:51] <sybariten> also, I noticed now....   I'm a fool, it's an m4v not an mp4. Is m4v a similar container?
[23:52] <relaxed> yes, it's Apple's name for .mp4
[23:52] <llogan> similar enough. ffmpeg might run it through the "ipod" muxer...whatever that is
[23:52] <sybariten> NICE
[23:52] <llogan> i might be incorrect. i can't remember
[23:53] <sybariten> nah my Windows ffmpeg was too old for that  :)   "unrecognized option codec:v"
[23:53] <llogan> http://ffmpeg.zeranoe.com/builds/
[23:53] <sybariten> merci
[23:54] <llogan> if you want to just stream copy the mp3 instead of re-encoding: ffmpeg -i video -i audio -map 0:v -map 1:a -codec copy -shortest output.mp4
[23:54] <llogan> i can't remember if mp3 is officially supported in mp4 and i'm too lazy to read specs.
[23:55] <llogan> i think it is though
[23:59] <sybariten> llogan: I can't take the risk a.t.m.
[00:00] --- Thu Mar  6 2014


More information about the Ffmpeg-devel-irc mailing list