[Ffmpeg-devel-irc] ffmpeg.log.20150626

burek burek021 at gmail.com
Sat Jun 27 02:05:02 CEST 2015


[00:16:47 CEST] <feliwir> hey, what could be the reason for platform-specific bugs?
[00:26:34 CEST] <feliwir> avformat_open_input fails on windows, but not on linux, with the exact same file and the exact same version
[00:29:42 CEST] <feliwir> how is that possible?
[00:41:22 CEST] <feliwir> does someone have the latest ffmpeg (from github) on windows who's willing to test something for me?
[00:48:33 CEST] <Exagone313> Hello, how do I use multiple outputs? I tried "first_output|second_output" as described here: https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs but it does not work, it only tries one output
[00:49:33 CEST] <Exagone313> ffmpeg -i rtmp://127.0.0.1/live/stream "http://127.0.0.1:8182/ogg.ffm|http://127.0.0.1:8182/webm.ffm"
[00:50:28 CEST] <Exagone313> ok, it works without the quotes
[02:35:40 CEST] <DHE> Exagone313: it says you must use "-f tee" as the output format
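For reference, the tee muxer syntax DHE is pointing at looks roughly like this (filenames and the second output URL are illustrative, not from the log; per-output options go in square brackets):

```shell
# Encode once and duplicate the result to several outputs with -f tee.
# Per-output options, such as forcing a container format, go in [brackets].
ffmpeg -i input.mkv -map 0 -c:v libx264 -c:a aac -strict -2 \
       -f tee "local_copy.mkv|[f=mpegts]udp://10.0.0.1:1234"
```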
[07:49:38 CEST] <gurpartap1> I want to log pts_time along with the clock time for that frame, and also the segment name for that part. I am making segments of 20 seconds.
[07:50:11 CEST] <gurpartap1> Here is commandline : ffmpeg   -rtsp_transport tcp -re -i rtsp://192.168.0.127 -itsoffset 00:00:1.2 -i stream.sdp -async 1 -tune zerolatency -v verbose -vstats_file log27.txt -vcodec libx264  -acodec  aac -strict -2  -s 600x480 -t 00:01:00 -f segment -segment_time 20 -segment_format mp4 -y "live_capture_%03d.avi"
[07:56:10 CEST] <gurpartap1> Here is code snippet from ffmpeg.c http://pastebin.com/ZVgZ8Dcf
[07:59:24 CEST] <gurpartap1> This is progress till now http://pastebin.com/Na1SBHgU
[09:46:41 CEST] <feliwir> someone has the latest ffmpeg version (master branch) on windows?
[10:35:57 CEST] <feliwir> no one?
[10:37:46 CEST] <BtbN> https://github.com/FFmpeg/FFmpeg/archive/master.zip
[11:29:52 CEST] <gurpartap1> BtbN: I am using the segment muxer to create chunks of 10 seconds, but it is creating chunks of the wrong size.
[11:30:36 CEST] <gurpartap1> BtbN: Here is commandline:  ffmpeg -i rtsp://192.168.0.127 -map 0 -an -v "debug" -flags -global_header  -vcodec libx264  -f segment -segment_time 10 out%03d.mp4
[12:24:24 CEST] <Abhijit> how can i make sure that a given mp4 video has its metadata at the beginning and not at the end?
[12:40:32 CEST] <hrw> is there a way to add language tags to audio streams?
[12:44:12 CEST] <chungy> -metadata:s:a:0 language=eng
[12:45:34 CEST] <hrw> chungy: thanks
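Spelled out as a full command (input/output filenames are made up), chungy's option remuxes without re-encoding and tags the first audio stream:

```shell
# Copy all streams untouched and label the first audio stream as English.
# "s:a:0" addresses stream type audio, index 0; language takes ISO 639-2 codes.
ffmpeg -i in.mkv -map 0 -c copy -metadata:s:a:0 language=eng out.mkv
```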
[12:46:10 CEST] <circ-user-NSwOK_>  /msg nickserv help register
[12:56:03 CEST] <Abhijit> circ-user-NSwOK_, without space before "/"
[12:56:25 CEST] <Abhijit> circ-user-NSwOK_, and do it in server info tab. not here in any channel
[12:56:54 CEST] <circ-user-NSwOK_> thanks, abhijit
[13:20:17 CEST] <ashish_3805>  hello, I am new to open-source. I am a second-year engineering student in India. I know c++, c and web stuff. I want to get my hands into development by contributing to ffmpeg.. please help
[13:30:24 CEST] <durandal_1707>  ashish_3805 : better ask on -devel channel and mailing list
[13:32:01 CEST] <ashish_3805> thanks
[13:43:41 CEST] <Abhijit> i created 8-second segments of a given mp4 video. how can i make the html video tag play all of them as a single video?
[14:56:34 CEST] <feliwir> can i set the looping behaviour of ffplay somehow? it should not try to seek but rather restart
[14:56:44 CEST] <feliwir> because the file is not seekable
[14:58:40 CEST] <sine0> in windows if i add a path variable i could use ffmpeg from any directory using cmd correct ?
[15:04:37 CEST] <durandal_1707> feliwir: what container?
[15:05:23 CEST] <feliwir> durandal_1707: ea
[15:06:30 CEST] <durandal_1707> you could try -loop for ffplay
[15:06:43 CEST] <feliwir> durandal_1707: i used that
[15:06:48 CEST] <feliwir> it tries to seek and fails
[15:07:41 CEST] <feliwir> i get this in console: ./data/movies/LoadingRing.vp6: error while seeking
[15:07:48 CEST] <feliwir> (after it played once successfully)
[15:08:03 CEST] <durandal_1707> then either implement seeking in ea or do not use ffplay
[15:08:41 CEST] <feliwir> durandal_1707: this file isn't meant to be seeked. No player can seek it
[15:08:48 CEST] <feliwir> but i know that it is loopable
[15:09:50 CEST] <sine0> if I am going to do a batch convert of 8 files from m4a to wav, do i have to create a variable
[15:12:53 CEST] <sine0> ffmpeg -i *.m4a *.wav
[15:13:09 CEST] <sine0> (conserve filenames)
[15:13:36 CEST] <sine0> has to be a $.wav or something!
[15:17:19 CEST] <DelphiWorld> yo all
[15:17:30 CEST] <DelphiWorld> i added cover art to an mp3 file to be uploaded to youtube
[15:17:53 CEST] <DelphiWorld> ffmpeg -i mp3file.mp3 -i cover.jpg -loop 0 -acodec copy out.mp4
[15:17:58 CEST] <DelphiWorld> but the player dont play it
[15:19:38 CEST] <c_14> you probably want ffmpeg -i mp3file.mp3 -loop 1 -i cover.jpg -shortest -c:a copy out.mp4
[15:20:10 CEST] <DelphiWorld> c_14: ok, i'lle try
[15:20:24 CEST] <DelphiWorld> how to add ffmpeg to my path on windows...
[15:20:33 CEST] <DelphiWorld> i dont see where i could add to the path
[15:21:00 CEST] <c_14> You have to go to the system settings dialog and then click your way through n+1 different dialogs until you get to the PATH setting. Can't remember exactly where it is though.
[15:21:06 CEST] <c_14> That or put the binary somewhere already in your PATH
[15:22:56 CEST] <DelphiWorld> trying the cmd:P
[15:29:13 CEST] <DelphiWorld> ok c_14 so the cover became a video
[15:29:16 CEST] <DelphiWorld> from 5MB to 11MB:P
[15:29:51 CEST] <c_14> You can try playing around with lowering the framerate, but I'm not sure if YouTube needs a minimum framerate.
[15:30:09 CEST] <c_14> You can also set the quality settings for the video encoder (in this case probably libx264).
[15:30:10 CEST] <sine0> c_14: done it
[15:30:56 CEST] <DelphiWorld> will use mpeg4 i think:P
[15:31:27 CEST] <c_14> DelphiWorld: If your build has libx264 support .mp4 defaults to libx264. It defaults to mpeg4 if you don't though.
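Putting c_14's suggestions together, a hedged variant (the 2 fps framerate and the crf value are guesses, and whether YouTube accepts such a low rate is unverified, as c_14 notes):

```shell
# Loop the cover image at a low framerate, keep the audio untouched,
# and stop when the shorter stream ends (the image loop is infinite).
# yuv420p is needed for broad player compatibility.
ffmpeg -loop 1 -framerate 2 -i cover.jpg -i mp3file.mp3 \
       -c:a copy -c:v libx264 -crf 28 -pix_fmt yuv420p \
       -shortest out.mp4
```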
[16:28:18 CEST] <RobertNagy> I'm trying to convert a m4v to mp4 but I keep getting the error 'pts has no value': 'ffmpeg -f h264 -i 1.h264 -c copy 1.mp4'.
[16:28:40 CEST] <RobertNagy> The only way I've found around it is the following: 'ffmpeg -f h264 -i 1.h264 -c copy -f avi - | ffmpeg -i - -flags +global_header -c copy 1.mp4'
[16:28:47 CEST] <RobertNagy> which is rather silly... is there a better way?
[16:30:01 CEST] <Eduardo_1> it should be ffmpeg -i 1.h264 -c copy -f mp4 1.mp4
[16:30:09 CEST] <DHE> is it an error? when I try it it's just a warning and I still get my video
[16:30:28 CEST] <RobertNagy> it's just a warning, however, some players can't play it correctly
[16:30:32 CEST] <RobertNagy> e.g. chrome
[16:30:55 CEST] <RobertNagy> @Eduardo_1: No difference.
[16:30:56 CEST] <DHE> my cmd: fmpeg -r 60 -i input.264 -codec copy output.mp4
[16:31:01 CEST] <DHE> chrome plays it for me...
[16:31:11 CEST] <RobertNagy> yes, it assumes it's 60 fps
[16:31:17 CEST] <RobertNagy> but if your source material is 24 fps
[16:31:19 CEST] <RobertNagy> it will play it
[16:31:26 CEST] <RobertNagy> but it will be very unsmooth
[16:31:46 CEST] <RobertNagy> and it's Chrome + MSE
[16:31:47 CEST] <DHE> my commandline specifies the framerate. specify your own
[16:32:07 CEST] <RobertNagy> you specify 60, which chrome seems to assume if it doesn't have the pts
[16:32:54 CEST] <RobertNagy> either way, the resulting mp4 is not fully compliant, and I can't get ffmpeg to generate the pts without piping it through avi first.
[16:34:41 CEST] <mpearrow> Are there any recommended tools for debugging corrupted/broken h.264 ?
[16:35:01 CEST] <DHE> there is an option, -fflags genpts   which MIGHT help...
[16:35:08 CEST] <DHE> just skimming the ffmpeg help
[16:35:22 CEST] <RobertNagy> yes, but that only works when inputting non-raw streams
[16:35:42 CEST] <RobertNagy> which is why, if you look at my workaround, I first have to convert to avi and then pipe it again to ffmpeg
[16:35:51 CEST] <RobertNagy> ooh, sorry, wrong workaround string
[16:36:08 CEST] <RobertNagy> this works: ffmpeg -f h264 -i 1.h264 -c copy -f avi - | ffmpeg -fflags +genpts -i - -flags +global_header -c copy 1.mp4 -y
[16:36:44 CEST] <RobertNagy> this doesn't: ffmpeg -fflags +genpts -f h264 -i 1.h264 -flags +global_header -c copy 1.mp4 -y
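As DHE's command hints, a raw Annex-B H.264 stream carries no timestamps at all, so another option is to declare the frame rate on the input side and let the muxer synthesize pts values (24 fps here is an assumption about the source material):

```shell
# -r before -i applies to the input: ffmpeg assigns timestamps at 24 fps
# while reading the raw stream, so the mp4 muxer gets valid pts.
ffmpeg -r 24 -f h264 -i 1.h264 -c copy 1.mp4
```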
[16:53:30 CEST] <Filarius> hello, what software can I use to graph crf/qp/bitrate for a video file with h264 video? Target platform is Win
[16:56:13 CEST] <BtbN> a graphic?
[16:57:06 CEST] <mccoy_slack> hello, i'm using libavformat from ffmpeg to demux the output of the video card, which gives me an mpeg transport stream with mpeg2video and mp2 audio. everything works, but ffmpeg is always logging "first_dts not matching first dts in queue". what does that message mean?
[16:57:12 CEST] <jarr0dsz3> hi everyone, im trying to download a rtmp stream as mp4
[16:57:21 CEST] <jarr0dsz3> although it fails. could any experts advise on the "Cannot read RTMP handshake response" error?
[16:58:58 CEST] <jarr0dsz3> using something like  ffmpeg -re -i rtmp://127.0.0.1/stream/test -c:a mp3 -c:v copy -copyts test.mp4 but it fails though
[16:59:26 CEST] <jarr0dsz3> also i see a lot of ffmpeg output but nothing gets written; when i ctrl+c to exit ffmpeg i get the handshake errors. tried debugging this for over an hour, to no avail
[17:01:11 CEST] <BtbN> i'd recommend just doing a full stream copy to flv
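BtbN's suggestion as a command (the URL is jarr0dsz3's): since RTMP carries FLV internally, a pure stream copy to .flv avoids re-encoding and most timestamp trouble, and the capture can be remuxed afterwards:

```shell
# Grab the RTMP stream verbatim into an flv container.
ffmpeg -i rtmp://127.0.0.1/stream/test -c copy test.flv
# Optionally remux (still no re-encode) to mp4 once the capture is done.
ffmpeg -i test.flv -c copy test.mp4
```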
[17:07:37 CEST] <Mista_D> any way to change the forced x264 level in the file's header without transcoding?
[17:08:35 CEST] <c_14> Your friendly hex editor?
[17:08:59 CEST] <klaxa> i think someone did that and then it was suggested to write a bitstream filter
[17:09:05 CEST] <klaxa> not sure if anything came of that though
[17:09:39 CEST] <c_14> Ye, if you want to do it with ffmpeg, you'd need a bitstream filter. Don't think one exists that does that though.
[17:50:47 CEST] <nashgul> hi, good evening
[17:52:03 CEST] <nashgul> i have a question about this expression: ffmpeg -i input.avi -i longo.png -filter_complex "[0:v][1:v] overlay=x=25:y=25" output.mp4
[17:52:27 CEST] <nashgul> what does '[0:v][1:v]' do?
[17:53:05 CEST] <nashgul> i understand the rest of the expression
[17:53:26 CEST] <nashgul> but not that pair of bracket expressions
[17:54:59 CEST] <klaxa> overlay takes two inputs, [0:v][1:v] specifies those inputs
[17:55:18 CEST] <klaxa> [0:v] means from the first input (0) take the video stream (v)
[17:55:25 CEST] <klaxa> [1:v] means from the second input (1) take the video stream (v)
[17:55:55 CEST] <klaxa> your first input (0) in this case is input.avi and the second one (1) is longo.png
[17:56:36 CEST] <nashgul> klaxa: but if i don't put that pair of brackets, logo.png appears just the same
[17:56:55 CEST] <nashgul> i think it is not necessary, no?
[17:57:10 CEST] <klaxa> i'm guessing the default behavior is to take the video streams of the first two inputs
[17:58:12 CEST] <klaxa> if you are building more complex filters, referencing like [0:v] makes it easier to keep track of things and is probably necessary even depending on the kind of filtering you do
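A sketch of the kind of graph klaxa means, where explicit labels become necessary (the 100 px logo width is arbitrary):

```shell
# Scale the logo first, name the result [logo], then overlay it.
# Without labels there would be no way to refer to the scaled stream.
ffmpeg -i input.avi -i longo.png -filter_complex \
  "[1:v]scale=100:-1[logo];[0:v][logo]overlay=x=25:y=25" output.mp4
```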
[17:59:08 CEST] <nashgul> klaxa: umm, ok, i think i've understood now
[17:59:22 CEST] <nashgul> klaxa: thanks  :-D
[17:59:34 CEST] <klaxa> glad i could help :)
[18:09:29 CEST] <Mista_D> is there a Linux version of the "direct264" bitstream editor tool adapted to a more recent FFmpeg version (need ISMV support)?
[19:19:54 CEST] <tomM___> so i'm looking for a way to create video from still images, but with time durations specified for each image
[19:19:57 CEST] <tomM___> can ffmpeg do that?
[19:20:27 CEST] <tomM___> (i.e. a "slideshow" but without each image being on the screen for the same amount of time)
[19:21:12 CEST] <relaxed> ffmpeg and scripting can do it
[19:21:48 CEST] <pzich> hmm, there's probably a way to with some complex filters, or you can duplicate/symlink the images and assemble that as a movie at whatever framerate you want
[19:22:03 CEST] <tomM___> relaxed: i'm comfortable scripting -- how would i go about it?
[19:22:58 CEST] <relaxed> I think the easy way would be to pipe rawvideo from multiple ffmpegs to one ffmpeg that's encoding the video
[19:24:34 CEST] <tomM___> relaxed: so one ffmpeg for each image in the video?
[19:24:47 CEST] <tomM___> pzich: hm, that's an interesting idea. would that make it very slow?
[19:25:00 CEST] <pzich> only one way to find out ;)
[19:25:02 CEST] <pzich> (I'm not sure)
[19:25:36 CEST] <relaxed> how many different images are there?
[19:26:46 CEST] <tomM___> relaxed: well, i was hoping to be able to make it as long as i want
[19:26:53 CEST] <tomM___> e.g. up to 45 mins
[19:27:00 CEST] <tomM___> with maybe an image every 5 secs
[19:28:10 CEST] <nashgul> hi again, i'm using -filter_complex with some expressions. for a drawtext scroll i need the frame or the time at which the filter is launched; 'n' and 't' return the frame number and the total time
[19:29:21 CEST] <nashgul> exactly i'm doing this: drawtext=text='string1':x=w-(n*5):y=(h/PHI):enable=between(t,30,50)
[19:30:01 CEST] <nashgul> this line works if i launch the filter at the beginning, but if the filter starts at 30s it doesn't work
[19:30:58 CEST] <pzich> so subtract that from n?
[19:31:58 CEST] <nashgul> ummm, n grows with the time, the solution would be A=n, x=w-A-(n*5)
[19:32:47 CEST] <nashgul> can i use variables in ffmpeg?
[19:36:01 CEST] <nashgul> sorry, would be: x=w+A-(n*5)
[19:40:05 CEST] <relaxed> how are you quoting the filter? your example isn't working here
[19:42:01 CEST] <nashgul> ffmpeg -i input.avi -filter_complex "drawtext=text='string':x=w-(5*n):y=(h/PHI)+th:enable='between(t,20,30)'" -strict -2 output.mp4
[19:48:10 CEST] <nashgul> i can calculate an offset: framerate=25 -> A=25*(10 secs)=250 => x=w+250-(n*5) and put it in the script, but it is not exact
[19:48:24 CEST] <tomM___> ok, i'm running some timing tests with the symlinks now. if it's way too slow, can anyone recommend any other tools than ffmpeg (though i know i'm on #ffmpeg :) )
[19:50:45 CEST] <relaxed> nashgul: it works here if I add setpts=PTS-STARTPTS in the beginning of the filter chain.
[19:51:45 CEST] <relaxed> wait, it works without it too :)
[19:52:13 CEST] <pzich> tomM___: you may be better off using image2pipe then you can read in the image once in your script and replay it to ffmpeg as many times as you want
[19:53:15 CEST] <relaxed> nashgul: which version are you using?
[19:53:32 CEST] <nashgul> relaxed: 2.7.1
[19:54:01 CEST] <nashgul> relaxed: setpts doesn't work
[19:54:34 CEST] <relaxed> so nothing happens?
[19:54:49 CEST] <nashgul> the string does not appear at 10'
[19:56:23 CEST] <nashgul> relaxed: is there any way to set n=0 when the filter is launched?
[20:10:39 CEST] <nashgul> bah, i'm not very clever; if i use t instead of n i can do the scroll more easily
[20:10:42 CEST] <nashgul> :-D
[20:13:09 CEST] <nashgul> enable='between(t,10,20)':x=w+(10*45)-(45*t)
[20:21:51 CEST] <samons> hi
[20:22:34 CEST] <samons> unrecognized option c:v
[20:24:04 CEST] <samons> http://pastie.org/10260630
[20:24:31 CEST] <llogan> FFmpeg version SVN-r23418, Copyright (c) 2000-2010
[20:24:34 CEST] <llogan> absolutely ancient
[20:25:41 CEST] <samons> i don't know how i got that version; i have the k-lite media codec pack. I just grabbed the latest version of FFMPEG
[20:25:56 CEST] <llogan> what is your OS?
[20:26:00 CEST] <samons> XP
[20:26:08 CEST] <llogan> http://ffmpeg.zeranoe.com/builds/
[20:26:23 CEST] <samons> i have the 20150626 version
[20:26:23 CEST] <llogan> hopefully it will work on your old OS
[20:27:14 CEST] <samons> which date should i look at?
[20:27:35 CEST] <samons> 2013 or 2014?
[20:27:39 CEST] <llogan> none. just use the latest available
[20:27:52 CEST] <samons> i'm using latest :)
[20:28:04 CEST] <samons> maybe i need to reset the ffmpeg path
[20:28:16 CEST] <llogan> you're using something from 2010
[20:28:43 CEST] <samons> yeah, that's what i figured: to use c:v i need a newer version
[20:29:16 CEST] <tomM___> soo... i tried pzich's suggestion of using symlinks to create a photo slideshow, setting e.g. the framerate to 24fps, and what happens is strange. i tried alternating 2 images and switching between them, and the timing is not regular
[20:29:47 CEST] <tomM___> so like it plays back a flickering video that doesn't flicker evenly
[20:29:55 CEST] <chungy> Hmm. Too bad it doesn't seem any of the stable versions are available for windows
[20:30:09 CEST] <tomM___> ( pzich relaxed ^^)
[20:30:13 CEST] <llogan> current git master is considered stable
[20:30:21 CEST] <llogan> releases are for distros
[20:31:03 CEST] <llogan> and Zeranoe also provides most releases if you need that for whatever reason
[20:31:04 CEST] <tomM___> the command i'm using is: "ffmpeg -framerate 24 -i /tmp/foo/img%06d.jpg -c:v libx264 -r 30 -pix_fmt yuv420p out.mp4"
[20:32:02 CEST] <tomM___> it's also saying it's got 126 frames, and i'm using 100 images at 24fps... which seems very wrong
[20:32:25 CEST] <tomM___> (in the progress bar it says "frame=  126")
[20:36:43 CEST] <samons> i used the absolute path of where ffmpeg is extracted and it seems to work
[20:37:29 CEST] <pzich> tomM___: you're specifying that the input is 24fps and encoding at 30fps, which is probably why your frames aren't even
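pzich's point also explains the frame count: 100 input frames at 24 fps last about 4.17 s, which at the 30 fps output rate comes to roughly 125 frames, so the reported 126 is expected duplication rather than an error. Keeping the two rates equal avoids the uneven duplication:

```shell
# Input and output both at 24 fps: each source image maps to exactly
# one output frame, so alternating images flicker evenly.
ffmpeg -framerate 24 -i /tmp/foo/img%06d.jpg \
       -c:v libx264 -r 24 -pix_fmt yuv420p out.mp4
```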
[20:38:00 CEST] <samons> thanks :)
[20:38:41 CEST] <samons> it shows N-73165 2000-2015
[20:38:49 CEST] <llogan> that's better
[20:39:27 CEST] <samons> i don't know how the older ffmpeg version got installed; i need to google it :)
[20:43:26 CEST] <samons> thanks for help :)
[20:54:06 CEST] <pzich> tomM___: so I wrote this script which repeats images based on the filename, seems to be working in my tests: http://pastebin.com/q5cnq889
[20:58:26 CEST] <tomM___> pzich: thanks
[21:05:24 CEST] <tomM___> so pzich : $C in your script represents the number of frames to play the image for?
[21:11:32 CEST] <tomM___> pzich: hm, it also seems not to work -- i just get a 00:00 video
[23:06:21 CEST] <Exagone313> Hello, I'm trying to convert an RTMP stream (h.264 & mp3) to ogg + webm for live streaming. But it does not work and I get one of these errors: "av_interleaved_write_frame(): Connection reset by peer" "Missing video stream which is required by this ffm". ffmpeg version 2.7.1-static http://johnvansickle.com/ffmpeg/
[23:06:31 CEST] <Exagone313> What can I do? I can link my cfg
[23:07:51 CEST] <pzich> I don't know the answer, but they're going to want to see a paste with the whole command and its output
[23:09:18 CEST] <Exagone313> for the second error - is there an option so that if there is no video stream, it uses a black frame while waiting for the video?
[23:09:33 CEST] <Exagone313> or the last frame
[23:14:23 CEST] <Exagone313> http://pastie.org/private/14l6bgaksot3inrsvyng
[23:19:57 CEST] <nashgul> hi again, ffmpeg -i input.avi -f tee "[f=ogg]salida.ogg|[strict:2]salida.mp4"  =>> output file #0 does not contain any stream
[23:20:07 CEST] <nashgul> what's the problem with that line?
[23:20:26 CEST] <llogan> Exagone313: you trimmed the console output
[23:20:40 CEST] <nashgul> ok, sorry
[23:20:58 CEST] <Exagone313> llogan: you need the headers?
[23:21:03 CEST] <llogan> of course
[23:21:07 CEST] <Exagone313> the footers said the message is repeated
[23:21:24 CEST] <llogan> yes, you can trim the multiple repeating lines
[23:21:32 CEST] <Exagone313> http://pastie.org/10260867
[23:23:06 CEST] <nashgul> this is my script: http://termbin.com/keg2
[23:23:30 CEST] <nashgul> this is the output of ffmpeg: http://termbin.com/b0pc
[23:24:20 CEST] <llogan> please provide an unscripted command
[23:24:34 CEST] <nashgul> :-D, ok ok, sorry again
[23:26:27 CEST] <nashgul> this is my line and the output: http://pastie.org/10260871
[23:48:14 CEST] <llogan> nashgul: the tee muxer is used to output the same data to various formats, without needing to perform multiple encodings
[23:49:05 CEST] <llogan> what video and audio formats are you trying to place in both ogg and mp4?
[23:49:51 CEST] <nashgul> llogan: ok, i'm reading about that now; i'm beginning to think i'm using tee in the wrong way
[23:50:05 CEST] <nashgul> llogan: i'm trying with 'split'
[23:50:54 CEST] <llogan> that will be a better option
[23:51:02 CEST] <nashgul> llogan: ok
[23:53:33 CEST] <llogan> nashgul: for video only, it could be like: split[v0][v1]" -map "[v0]" -c:v libx264 out.mp4 -map "[v1]" -c:v libtheora out.ogg
[23:54:00 CEST] <nashgul> llogan: http://pastie.org/10260904
[23:54:14 CEST] <nashgul> but that line does not stream audio  :\
[23:54:32 CEST] <nashgul> i want to stream video+audio
[23:55:45 CEST] <llogan> you need to map the audio too.
[23:56:00 CEST] <nashgul> llogan: i don't know how i do that
[23:56:06 CEST] <llogan> add "-map 0:a" for each output
[23:56:20 CEST] <nashgul> umm
[23:57:28 CEST] <nashgul> llogan: yeah! thanks a lot!  :-D
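llogan's pieces combined into one full command (the audio codec choices are assumptions; ogg normally wants theora video and vorbis audio):

```shell
# Split the decoded video once, encode each branch separately,
# and map the source audio into both outputs.
ffmpeg -i input.avi -filter_complex "[0:v]split[v0][v1]" \
  -map "[v0]" -map 0:a -c:v libx264 -c:a aac -strict -2 out.mp4 \
  -map "[v1]" -map 0:a -c:v libtheora -c:a libvorbis out.ogg
```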
[23:59:28 CEST] <Exagone313> llogan: do you know something about my errors?
[00:00:00 CEST] --- Sat Jun 27 2015


More information about the Ffmpeg-devel-irc mailing list