[Ffmpeg-devel-irc] ffmpeg.log.20131118

burek burek021 at gmail.com
Tue Nov 19 02:05:01 CET 2013


[02:20] <ItsMeLenny> what was the command just to convert the video without the audio?
[02:21] <ItsMeLenny> i guess just one stream
[02:24] <klaxa> add -an to the command line
[02:25] <klaxa> that will drop audio
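For reference, the minimal form of what klaxa suggests looks like this (filenames are placeholders; -c:v copy keeps the video untouched rather than re-encoding it):

    # drop all audio streams, pass the video through unchanged
    ffmpeg -i input.mp4 -an -c:v copy output.mp4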
[02:59] <ItsMeLenny> klaxa, thanks, i found that, it wasnt the one i was thinking of tho, but is probably much better for what i needed to do
[02:59] <klaxa> that's nice to hear :)
[07:18] <prkhr4u> Could not find codec parameters for stream 0 (Video: mpeg4, yuv420p): unspecified size - exact output @ http://pastebin.com/xpyYYP7V
[10:14] <Peace-> hi is there a way to NOT show the window when i play sounds with ffplay?
[10:15] <ubitux> -nodisp
[10:18] <Peace-> ubitux: mm ok this could work ... but for video it disable video too
[10:19] <ubitux> yes
[10:19] <Peace-> i am trying to patch dolphin to play audio and video when the mouse is hovering over a file on kde
[10:19] <Peace-> well i could do a little script that checks if it's a video or an audio
[10:20] <Peace-> let me try
[10:21] <ubitux> in combination with ffprobe that should be easily doable, but ofc you can also patch ffplay easily
[10:24] <Peace-> ubitux: i have this idea i check the extension of a file if the extension is  flac oga wav etc then ... nodisp else
[10:24] <Peace-> ...
[10:25] <Peace-> so no ffprobe but ... maybe your idea could be better
[10:25] Action: Peace- investigate
[10:26] <ubitux> Peace-: ./ffprobe -v 0 -show_entries stream=codec_type -of flat input
[10:26] <ubitux> (you can output json or whatever)
[10:27] <ubitux> you will get output like that:
[10:27] <ubitux> streams.stream.0.codec_type="data"
[10:27] <ubitux> streams.stream.1.codec_type="video"
[10:27] <ubitux> streams.stream.2.codec_type="audio"
[10:27] <ubitux> you have all kind of writers (xml, json, compact, csv, ..)
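A rough sketch of the ffprobe-plus-ffplay idea discussed above, assuming a POSIX shell wrapper gets the hovered file as its first argument and that ffplay's -autoexit behaviour is acceptable:

    #!/bin/sh
    # if the file has at least one video stream, show the window; otherwise play audio only
    if ffprobe -v 0 -show_entries stream=codec_type -of csv "$1" | grep -q video; then
        ffplay -autoexit "$1"
    else
        ffplay -nodisp -autoexit "$1"
    fi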
[10:29] <ItsMeLenny> does anybody know why MTS files won't play correctly in VLC on ubuntu? they worked fine on my friend's mac. what seems to happen is it plays the top 1/8 of the screen then smears the rest of the image downwards
[10:30] <JEEB> unless you are using some special VLC package that's newer than what the general ubuntu repos contain, both the VLC and the libav* libraries are old
[10:31] <ItsMeLenny> oh
[10:31] <ItsMeLenny> which is a shame because nobody will be able to see what this amazing dancer is doing https://drive.google.com/file/d/0B_kjh0L1etroRzFMMV9Db0xjNXc/edit?usp=sharing <-- image of the problem
[10:31] <JEEB> since VLC 2.1 was released seemingly too close to the last Ubuntu release
[10:32] <JEEB> and because debian/ubuntu was still within a libav (the project debian/ubuntu is using instead of the ffmpeg project) version transition, the libraries are quite old as well
[10:32] <JEEB> on OS X VideoLAN builds the libraries themselves, and the person probably grabbed the newest version?
[10:33] <ItsMeLenny> ahh
[10:33] <ItsMeLenny> yeah i think they had close to the original version
[10:33] <ItsMeLenny> but i have vlc 2.2.0, and the thing is, movie player plays the video without the audio, whereas vlc plays the audio with the video skewed
[10:34] <JEEB> so you built your own VLC?
[10:34] <JEEB> some random nightly
[10:34] <JEEB> in any case, this probably is more related to #videolan than to #ffmpeg
[10:37] <ItsMeLenny> yeah nobody responds in videolan :P but yeah a random nightly, their nightly im pretty sure
[10:37] <ItsMeLenny> ahh maybe i was in #vlc before not #videolan
[10:38] <JEEB> never been to #vlc, #videolan is the general videolan-related channel
[10:52] <tachyean> hello
[10:54] <tachyean> i'm trying to capture jpeg images from an rtmp stream and save them to disk. i've managed to do it, but when i run the ffmpeg command i must wait between 1 and 2 minutes for the capture to start. is there any way i can make it start sooner than 1 minute?
[10:56] <tachyean> this is the command i'm running: http://pastebin.com/wKBwTPB6
[10:56] <tachyean> any1?
[11:29] <tachyean> -analyzeduration 0 was the problem, thank anyway
[11:30] <tachyean> thanks*
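The startup delay on live inputs is mostly governed by how much data ffmpeg probes before it starts; a hedged sketch of lowering that (the rtmp URL, probe values and jpeg settings are made-up examples, and values this low can make stream detection fail on some sources):

    # probe as little as possible before starting to grab frames
    ffmpeg -probesize 32768 -analyzeduration 0 -i rtmp://example.com/live/stream \
           -q:v 2 -f image2 img%04d.jpg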
[13:03] <vl4kn0> Hi, I've got a single filter link: buffersrc -> fps -> buffersink, where fps is initialized with the argument "fps=4". Now I feed the filter graph exactly as in doc/examples, but I always get more AVFilterBufferRefs coming out of the filtergraph than going in (about 167 per decoded frame). Is there anything I should look out for?
[13:41] <Mavrik> vl4kn0, proper PTS
[14:09] <sspiff> hmmm, I did ../configure --disable-everything --enable-demuxer=mpegts --enable-decoder=dvbsub, but I can't seem to open a TS file anymore, any suggestions?
[14:10] <sspiff> avformat_open_input fails, I'm still trying to find out why :)
[14:11] <JEEB> hint: you might need a protocol
[14:11] <JEEB> (file is a protocol as well)
[14:13] <sspiff> JEEB: thanks!
[14:16] <sspiff> JEEB: even without --enable-protocol=file, it gives a list of enabled protocols at the end of ./configure though, including file
[14:45] <DrSlony> Hi! Please help, why do my screencasts end up playing 2-3 times as fast? Watch the seconds in the clock in the taskbar. Screencast here http://rawtherapee.com/bugs/issue795/rt795_zoom-center.mp4  Log file here http://paste2.org/fwhUxZKc
[14:46] <DrSlony> When I found this a week or two ago I was using ffmpeg-1.0.7 or 1.2.4, so I installed 2.1 and no change.
[14:47] <klaxa> using -r for the input causes that
[14:48] <klaxa> x11grab is not able to grab frames fast enough, so it waits until it has accumulated enough frames for the desired framerate and puts those frames into one second
[14:48] <vl4kn0> Mavrik: I set frame->pts like this: frame->pts = av_frame_get_best_effort_timestamp(frame); just like the doc/examples says
[14:48] <klaxa> that's why it is playing back too fast
[14:48] <DrSlony> so how do i overcome that?
[14:48] <klaxa> try either using a lower framerate or omitting it completely, i think that will result in a variable framerate
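A hedged example of that suggestion (display name, capture size and encoder settings are assumptions; older builds may spell the options -r and -s instead of -framerate and -video_size):

    # ask x11grab for a modest rate instead of forcing a high -r it cannot deliver
    ffmpeg -f x11grab -framerate 15 -video_size 1920x1080 -i :0.0 \
           -c:v libx264 -preset ultrafast -crf 23 screencast.mkv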
[14:49] <Mavrik> vl4kn0, make sure timebases are properly set on filters
[14:49] <vl4kn0> Mavrik: wil ldo
[14:49] <vl4kn0> will do*
[14:49] <Mavrik> fps filter needs a valid timebase since it works according to realtime :)
[14:55] <vl4kn0> Mavrik: I'm using AVStream->codec->time_base for filter initialization which reports: time_base=1001/48000, but AVStream->time_base reports: 1/48000. Which one should I use?
[14:57] <Mavrik> vl4kn0, the time base your packet PTS are in 
[14:57] <Mavrik> that's usually the stream TB
[15:06] <vl4kn0> Mavrik: I set the time_base for buffersink to AVStream->time_base and now it works. But the doc/examples/filtering_video.c uses AVStream->codec->time_base and the same function for calculating frame->pts. Why is that? Can I safely assume that the program will work everywhere with every codec as expected?
[15:07] <Mavrik> vl4kn0, hmm, I don't really understand the question
[15:07] <Mavrik> time_base sets the units in which timestamps on packets are expressed in the stream
[15:08] <Mavrik> if you have timebase of 1/48000 that means that packets with timestamp "1" and "2" are 1/48000th of a second apart
[15:08] <Mavrik> and packets "1" and "48000" are 1 second apart
[15:09] <Mavrik> so if you want to check how many frames you have in 1 second, you need to know the timebase the packet timestamps are expressed in - that's why you need the value from AVStream
[15:11] <vl4kn0> Mavrik: let me ask again then. doc/examples uses time_base from AVCodecContext instead of AVStream. For me, time_base from AVCodecContext does not work. Why is that?
[15:12] <Mavrik> weeelll.....
[15:12] <Mavrik> lemme see how to explain that
[15:13] <Mavrik> you can have several streams in one container... and those streams can contain different type of frames with different requirements... so timestamps for each stream can be represented in different timebase
[15:13] <Mavrik> (e.g. usually video frames are timestamped in 1/90000 timebase, audio in 1/sample_rate timebase)
[15:14] <Mavrik> however, when you store them into a container (e.g. MKV), you still need to sync them there somehow
[15:15] <Mavrik> so the container may have its own timebase in which packet timestamps (which are different from video/audio frame timestamps, but related) are stored
[15:15] <Mavrik> and AVCodecContext has a "global" container timebase, while each stream's AVStream structure has timebase for that stream
[15:15] <Mavrik> decoded frames always carry PTS in the stream timebase, while encoded AVPackets have PTS in the CodecContext's timebase
[15:15] <Mavrik> IIRC.
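When in doubt about which timebase a stream's timestamps use, ffprobe can print it per stream; a small sketch (input.mkv is a placeholder and the exact set of printable fields varies a little between versions):

    # show each stream's index, type and time_base
    ffprobe -v error -show_entries stream=index,codec_type,time_base -of compact input.mkv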
[15:17] <vl4kn0> Mavrik: cool, I did not know that. Thanks for explanation.
[15:21] <Mavrik> hmm, I might have been wrong about AVPacket timebase - it seems it gets adjusted
[15:33] <ddsss> is there any way to test .wav files for correctness?
[15:36] <JEEB> how would you? other than for the headers, it's just raw pcm audio
[15:37] <ddsss> JEEB, yeah, I thought as much. The headers are "audio/x-wav; charset=binary" - so I assume the magic bytes are fine.
[15:39] <JEEB> that looks like http headers, not headers of the actual wav file
[15:39] <JEEB> as it should contain the type of audio, amount of tracks etc.
[15:39] <ddsss> JEEB, I just ran: file  -ifile.wav
[15:39] <ddsss> JEEB, I just ran: file  -i file.wav
[15:40] <JEEB> ffmpeg -i welp.wav or ffprobe welp.wav and see what it can output
[15:40] <JEEB> if it gets the audio type and amount of channels right
[15:42] <ddsss> JEEB, avprobe w8wn8.wav,  gets this: http://paste.ubuntu.com/6437787/
[15:43] <ddsss> JEEB, looks good to me?
[15:43] <ddsss> JEEB, but for some reason iOS 7 playback doesn't always work with that file.
[15:44] <JEEB> have fun reading through the API :)
[15:44] <JEEB> regarding various limitations of what that API can take in
[15:45] <ddsss> JEEB, API of what?
[15:45] <JEEB> whatever that thing you just noted is/uses
[15:47] <sspiff> JEEB: turned out, I needed --enable-parser=dvbsub, in case you're interested :)
[15:50] <JEEB> sspiff, well yes -- kind of thought it was something like that :P
[15:53] <sspiff> JEEB: thanks for the hint, I wouldn't have found it as fast as I did without it :)
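Putting the pieces from this exchange together, the configure line would presumably end up looking something like this (untested sketch; component names can be checked with ./configure --list-demuxers and friends):

    ./configure --disable-everything \
        --enable-protocol=file \
        --enable-demuxer=mpegts \
        --enable-decoder=dvbsub \
        --enable-parser=dvbsub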
[16:00] <MPLAYER_> I followed the centos compilation guide and it is working great, however i was wondering if it is a standalone binary or does it use external binaries or libraries?
[16:02] <MPLAYER_> for example the yasm binary which is compiled during the installation guide
[16:02] <MPLAYER_> does the ffmpeg binary use this external binary or not?
[16:02] <MPLAYER_> or is this yasm binary included in the ffmpeg binary
[16:08] <JEEB> yasm is an x86 assembler
[16:08] <JEEB> it compiles asm to object files
[16:08] <JEEB> which can then be linked
[16:09] <JEEB> also what the binary needs can be seen with ldd
[16:09] <JEEB> as far as libraries go
[16:36] <MPLAYER_> what i just mean
[16:36] <MPLAYER_> is the binary of ffmpeg a standalone binary
[16:37] <sacarasc> Depends if you compiled it as static or not.
[16:37] <MPLAYER_> as in, it doesn't depend on the location of the yasm binary
[16:37] <klaxa> i don't think yasm is required after compiling?
[16:37] <MPLAYER_> and the x264 command
[16:38] <relaxed> yasm is used to build the binary, that is all.
[16:38] <klaxa> ffmpeg uses libx264 either as a shared library or compiled in a static binary
[16:38] <MPLAYER_> so the ffmpeg binary in the centos compilation guide is really standalone
[16:38] <MPLAYER_> i can move it along in the distro
[16:39] <klaxa> if you compiled it with --extra-ldflags=-static it should be
[16:39] <MPLAYER_> and delete all the other files
[16:39] <MPLAYER_> the guide lists --extra-ldflags="-L$HOME/ffmpeg_build/lib"
[16:40] <relaxed> then no
[16:40] <klaxa> then it will not be a static build
[16:42] <JEEB> <MPLAYER_> is the binary of ffmpeg a standalone binary <- just use ldd to see what libraries the ffmpeg binary needs
[16:42] <JEEB> as I already said
[16:42] <JEEB> and then start looking into what of those you can and which of those you want to remove
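JEEB's ldd suggestion in concrete form (the path is an assumption):

    # list the shared libraries the binary needs at runtime;
    # a fully static build prints "not a dynamic executable" instead
    ldd ./ffmpeg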
[16:45] <relaxed> You can use yum to list the package to install based on a specific file's name and location. It would be trivial to write a small shell script that iterates over each lib from `ldd`'s output and install everything you need on each host.
[16:47] <relaxed> or this http://johnvansickle.com/ffmpeg/
[17:12] <MPLAYER_> If I have a server with 24 cores and I have nodejs running on it and nginx and ffmpeg. Do I have to divide the number of cores for each process or can I set for each process that there are 24 cores available?
[17:14] <relaxed> I think libx264 will only use 12, but you can force it to use <=12 with -threads $number
[17:15] <relaxed> actually you might be able to force >12, I've never been in the position to try :)
[17:16] <MPLAYER_> but do I also have to set the number of cores for nginx at 24, or is it better to divide them, for example 12 for ffmpeg and 12 for nginx
[17:17] <MPLAYER_> or both at 24
[17:18] <relaxed> That might be wise. Try some different loads and monitor `htop` to see how effectively they're used.
[17:22] <relaxed> maybe start with -threads 6
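For reference, a minimal sketch of setting the encoder thread count as relaxed suggests (filenames are placeholders; -threads 0, the default, lets libx264 pick automatically):

    ffmpeg -i input.mp4 -c:v libx264 -threads 6 -crf 23 output.mp4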
[17:46] <DrSlony> klaxa you were right, without -r the fps matches real time, though more jaggy
[19:05] <cccp3> windows problem! - if i use, say, ffmpeg -i img%06d.ppm e.avi -r 500 in a batch file, instead of simply passing %06d through to ffmpeg, it substitutes the batch file's path (i know it's windows' fault, just spreading the word)
[19:05] <cccp3> well, its path and then a "d"
[19:06] <cccp3> say, the path is d:\batch.bat, then it becomes d:\batch.batd instead of %06d.ppm
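The usual workaround is to double the percent sign inside .bat/.cmd files so cmd.exe passes it through literally; a sketch, with the frame rate moved to an input option on the assumption that was the intent:

    rem inside the batch file, %% reaches ffmpeg as a literal %
    ffmpeg -framerate 500 -i img%%06d.ppm e.avi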
[19:07] <cccp3> hellooo?
[19:07] Action: cccp3 rings a bell that sounds all over the earth
[19:07] Action: cccp3 shouts bye and leaves
[20:15] <slackerr> hi all. where can i find some encoding settings for ffmpeg to obtain youtube-like videos in quality from 144p, 240p .. to 1080p
[20:15] <slackerr> for network streaming
[20:17] <llogan> slackerr: are you hosting these videos? is it live streaming or progressive download?
[20:18] <slackerr> llogan: both cases. rtmp hosting for streaming live videos, and also youtube-like viewing of placed media
[20:19] <slackerr> i tried to find settings on the web, but so far without success
[20:23] <llogan> slackerr: ffmpeg -i input -c:v libx264 -crf 28 -preset slow -vf scale="1280:trunc(ow/a/2)*2" -c:a libfdk_aac -vbr 4 -movflags faststart output.mp4
[20:23] <llogan> for progressive download that could be a good start
[20:24] <llogan> https://trac.ffmpeg.org/wiki/x264EncodingGuide
[20:25] <llogan> if there is a minimum client transfer rate you want to support then use -bufsize and -maxrate
[20:25] <llogan> http://mewiki.project357.com/wiki/X264_Encoding_Suggestions#VBV_Encoding
[20:25] <llogan> Example 3
[20:27] <llogan> -vf "scale=1280:trunc(ow/a/2)*2,format=yuv420p" would be better depending on input format
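A hedged illustration of the -maxrate/-bufsize idea on top of the earlier example (the 1M/2M figures are invented and should be matched to the slowest connection you want to support):

    ffmpeg -i input -c:v libx264 -crf 28 -preset slow -maxrate 1M -bufsize 2M \
           -vf "scale=1280:trunc(ow/a/2)*2,format=yuv420p" \
           -c:a libfdk_aac -vbr 4 -movflags faststart output.mp4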
[20:27] <slackerr> llogan: did you mean 'progressive download' is a typical youtube video placed somewhere on a server?
[20:28] <llogan> "streaming" generally means the encoder is working in real time and "progressive download" means the viewer is downloading a video from the server and playing it. the video was previously encoded.
[20:31] <slackerr> ok, i understand. i will begin with progressive download first
[20:34] <slackerr> is it necessary to use a youtube-like x264 codec for progressive download, or is the easier xvid fine too?
[20:35] <slackerr> is it possible to see standard specs for streaming in the youtube style? do they exist?
[20:35] <slackerr> by specs i mean screen size, audio codec, video codec, bitrates
[20:37] <llogan> no and no
[20:38] <llogan> as in use H.264, not "xvid" (MPEG-4 part 2 video)
[20:38] <llogan> and there are no standard specs.
[20:41] <slackerr> ok. what about containers? youtube places its videos in .flv, .mp4 and .webm containers, and for audio it uses mp3 with a lower frequency in the smaller formats. is that ok? i will not use webm vp8/vorbis in my tasks
[20:42] <JEEB> just use AAC or AAC derivatives if you won't be using the vorbis-specific stuff
[20:42] <JEEB> you have no real reason to use MP3 at all
[20:42] <llogan> you don't have to try to copy youtube exactly
[20:43] <slackerr> i don't want to copy, but i can't imagine better examples of video codec/format/size etc
[20:43] <JEEB> ugh, you are completely missing the fact that whatever 'tube does is just what they've decided to do, what you should do is up to YOUR needs and YOUR audiences
[20:44] <llogan> why don't you just use youtube then if it is the best
[20:44] <slackerr> as you said, there are no specs, so i should do it myself with youtube examples and the instructions you gave me, or some other hosting as an example. thanks guys very much for the useful info
[20:45] <llogan> i just gave you a "generic" example. but explaining more about your requirements and the audience will help
[20:51] <slackerr> my global target is to create profiles for encoding video uploaded to a server into several qualities, as youtube has. it's for progressive download - quality in 144p, 240p, 360p, 480p, 720p, 1080p. To do it, i need knowledge of encoding 'standards'. But if, as you said, they don't exist, i will learn it from your docs
[20:52] <llogan> just use my example and change the "1280" to each of your desired widths
[20:53] <llogan> include the format=yuv420p part
[20:54] <llogan> you will have to compile ffmpeg to use libfdk_aac though
[20:54] <llogan> http://trac.ffmpeg.org/wiki/CompilationGuide
[20:54] <llogan> https://trac.ffmpeg.org/wiki/AACEncodingGuide
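One way to turn that example into a set of renditions is a small loop over target widths; a sketch assuming 16:9 sources, the listed widths (roughly 144p through 1080p) and an ffmpeg built with libfdk_aac:

    #!/bin/sh
    # one progressive-download rendition per width; input.mp4 is a placeholder
    for w in 256 426 640 854 1280 1920; do
        ffmpeg -i input.mp4 -c:v libx264 -crf 28 -preset slow \
               -vf "scale=${w}:trunc(ow/a/2)*2,format=yuv420p" \
               -c:a libfdk_aac -vbr 4 -movflags faststart "out_${w}.mp4"
    done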
[20:59] <slackerr> what is libfdk_aac needed for?
[20:59] <slackerr> is it better than the typical aac?
[21:04] <llogan> slackerr: see the AAC link
[21:08] <_methods> can anyone point me to a good tutorial for setting up ffserver to use a remote ipcamera as an input?
[21:08] <_methods> i'm having some difficulty figuring out the config params
[21:09] <llogan> _methods: might help -> http://trac.ffmpeg.org/wiki/Streaming%20media%20with%20ffserver
[21:09] <llogan> ffserver is basically unmaintained
[21:09] <_methods> llogan: oh
[21:10] <_methods> why? it seems awesome
[21:10] <llogan> because nobody has volunteered to do so
[21:10] <_methods> oh hahah
[21:11] <_methods> well i don't have the skills reqd for that
[21:11] <_methods> but i will say it's an awesome capability i think
[21:11] <llogan> that wasn't a joke though. maybe the code is a mess, but i don't know
[21:12] <llogan> it might work for you though
[21:12] <_methods> yeah i kinda got it to work
[21:12] <_methods> but i think i'm giving my cameras configs wrong
[21:12] <_methods> i'll try feeding with straight ffmpeg feed and modify from there i guess
[21:12] <llogan> maybe you should test the input with ffmpeg or ffplay first
[21:13] <_methods> yeah
[21:13] <slackerr> how to see list of supported codecs for  my binary ffmpeg?
[21:14] <_methods> well the ipcam is outputting .asf i think
[21:14] <llogan> slackerr: ffmpeg -codecs, ffmpeg -encoders, ffmpeg -decoders
[21:14] <_methods> i still need to do some checking
[21:14] <_methods> i was just wondering if there was some more documentation somewhere
[21:14] <_methods> i don't mind doing the RTFM thing
[21:15] <_methods> i'm just trying to find a good manual lol
[21:15] <llogan> what is the protocol the camera uses to output to (udp, http, etc)?
[21:15] <_methods> well it's http
[21:15] <_methods> but i think it will do rtsp
[21:15] <llogan> http://ffmpeg.org/ffmpeg-protocols.html
[21:15] <_methods> so i'll probably get it working with a simple webcam ffmpeg capture first
[21:15] <_methods> then try to use the ipcamera
[21:16] <_methods> cause the ipcamera could be the problem
[21:16] <llogan> http://ffmpeg.org/ffmpeg-devices.html#video4linux2_002c-v4l2
[21:16] <llogan> webcam
[21:17] <_methods> yeah
[21:17] <_methods> thank you
[21:25] <Samus_Aran> anyone know why ffmpeg sometimes doesn't start encoding, and just eats CPU for several minutes or longer?
[21:26] <Samus_Aran> # ffmpeg -i 00010.MTS -ss 00:01:02 -to 00:01:05 -acodec copy -vcodec libx264 -preset fast -crf 12 -vf super2xsai out-2xsai.avi
[21:27] <Samus_Aran> that for instance has been eating 106% CPU (dual-core) for 10 minutes so far, without having started encoding any frames
[21:27] <relaxed> Samus_Aran: -ss after the input decodes the video untill that point. If you want fast seeking move it before the input.
[21:28] <relaxed> until*
[21:28] <relaxed> also, you don't want h264 in avi. Use mkv or mp4.
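A hedged rewrite of the command along those lines, keeping the original filter and quality settings, using -t for the 3-second duration and mkv as the container:

    # seeking before -i avoids decoding everything up to 00:01:02 first
    ffmpeg -ss 00:01:02 -i 00010.MTS -t 3 -c:a copy -c:v libx264 \
           -preset fast -crf 12 -vf super2xsai out-2xsai.mkv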
[21:30] <Samus_Aran> mp4 doesn't accept pcm, mkv was producing dts errors if clips were cut, but AVI is fine.
[21:30] <relaxed> warnings or errors?
[21:33] <Samus_Aran> it was mangling timestamps, and causing video freezing, but that's not what I'm trying to get help with.  just trying to find out why it can sit there for 10 minutes
[21:34] <llogan> you should always include the complete console output
[21:35] <Samus_Aran> it doesn't complete...
[21:35] <Samus_Aran> that was the point
[21:36] <llogan> relaxed already told you why, but also see "-ss" option in http://ffmpeg.org/ffmpeg.html
[21:46] <Samus_Aran> relaxed, llogan: thank you.  I had read that recently and forgot about the position mattering.  sorry.
[22:18] <Samus_Aran> when I cut segments with -ss and -t, and then play it, it displays a single frame for around a half second before the video starts
[22:18] <Samus_Aran> is there any way to make the audio match?
[22:21] <Samus_Aran> with -t 3 the video is exactly 3 seconds long, but the audio is longer
[22:23] <Samus_Aran> these would be the same issues I was having with mkv timestamps
[22:24] <Samus_Aran> I assume
[22:27] <Samus_Aran> with -t 3 it produces a 3.72 second audio clip, and a 3.72 second video clip, but the first 0.72 seconds of the video are unchanging
[22:27] <Samus_Aran> and very low bitrate in the h.264 encoding part
[22:40] <Samus_Aran> http://pastie.org/pastes/8490848/text
[22:40] <Samus_Aran> llogan: pasted
[22:43] <relaxed> Samus_Aran: try again with -bf 0
[22:47] <brontosaurusrex> what would be a "tv to pc" scale filter?
[22:47] <Samus_Aran> relaxed: no luck (still longer than 3 seconds, with the gap in video)
[22:47] <brontosaurusrex> basically i'd like to clap or compress 0-16 and 230-255 values
[22:49] <llogan> give it "the clap"
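One possible answer, heavily hedged: a lutyuv expression that expands TV-range luma (16-235) to full range, leaving chroma alone; filenames are placeholders and out-of-range values are simply clipped by the filter:

    ffmpeg -i input.mp4 -vf "lutyuv=y=(val-16)*255/219" -c:a copy output.mp4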
[22:52] <llogan> Samus_Aran: is this from an HMC105?
[22:52] <llogan> *150
[22:53] <Samus_Aran> Sony NEX-5 (AVCHD)
[22:54] <Samus_Aran> I tested with avi, mkv and mp4 containers, and avi and mp4 both have the .72 second gap before the video starts moving.  the mkv starts immediately, due to timestamps
[22:55] <Samus_Aran> but the mkv still contains 3.72 seconds of audio and video streams
[22:55] <Samus_Aran> it just hides it
[22:56] <Samus_Aran> and then breaks once I start combining several mkvs, as long gaps and freezes appear
[23:05] <Samus_Aran> this issue isn't specific to the .mts format, I re-encoded a whole .mts file to .mkv and then tried to cut a piece out with -codec copy, same problem
[23:05] <Samus_Aran> it produces a 4 second clip instead of 3 seconds
[23:21] <bewilled> what is better flv or mp4?
[23:21] <bewilled> for a video sharing site
[23:28] <Samus_Aran> bewilled: neither is better
[23:29] <Samus_Aran> flv and mp4 are container formats, they can both contain h.264 video
[23:29] <Samus_Aran> what is your goal?  high quality, small size, compatibility, etc.?
[23:37] <bewilled> Samus_Aran, compatibility, and looking for the perfect ratio between small size and quality
[23:43] <brontosaurusrex> bewilled, mp4 is more compatible with different players than flv; quality, as already pointed out, can be the same.
[23:54] <Samus_Aran> bewilled: mp4 (avc + aac), flv (avc + aac), and webm (vp8 + vorbis) can all be good quality:size ratio
[23:55] <Samus_Aran> for encoding h.264/avc, use a slower preset for smaller files with the same quality
[00:00] --- Tue Nov 19 2013

