[Ffmpeg-devel-irc] ffmpeg.log.20150905

burek burek021 at gmail.com
Sun Sep 6 02:05:01 CEST 2015


[00:44:04 CEST] <y2k> hi! in a demuxer, I don't know how many mp3 frames I have written out. now I am trying to figure out the time offsets for certain points in the mp3 data stream (for adding chapter support). is there a way to do this easily?
[01:23:59 CEST] <y2k> https://trac.ffmpeg.org/ticket/3216 <- I am running into this too.
[01:27:01 CEST] <y2k> can I implement something like "ffprobe -dump_packets" in my own demuxer? the -dump_packets is able to see correct pts, dts, and duration.
[02:48:05 CEST] <lzhou> i have no ffplay
[02:48:10 CEST] <lzhou> why?
[02:48:19 CEST] <c_14> You didn't install it?
[02:48:39 CEST] <lzhou> build ffmpeg from source, there is no ffplay
[02:48:50 CEST] <c_14> Do you have libsdl-dev installed?
[02:49:03 CEST] <lzhou> need that?
[02:49:08 CEST] <c_14> For ffplay, yes
[02:49:14 CEST] <lzhou> oh
[02:49:42 CEST] <lzhou> let me try again.
[02:49:47 CEST] <lzhou> thanks man.
[02:58:45 CEST] <Polochon_street> hi! Is there a link somewhere of some list of metadata that I can retrieve from an avdict?
[03:03:34 CEST] <klaxa> you could just iterate through the dict
[03:03:46 CEST] <klaxa> and get all metadata
[03:05:38 CEST] <Polochon_street> but what if I want to take only one in particular? Like, the track name?
[03:09:33 CEST] <Polochon_street> I found this https://www.ffmpeg.org/doxygen/2.5/group__metadata__api.html but I want the track number, and when I use "track" tag, all I got is the total number of tracks...
[03:14:25 CEST] <klaxa> it really depends on how whoever tagged it chose the tags
[04:08:28 CEST] <Azkort> Got some of the bash strangeness, set var1="-ss 8"; then execute $ffmpeg $ss ...; it runs as `ffmpeg '-ss 8' ...` and fails. http://pastebin.com/CxCDhvBG
[04:09:04 CEST] <Azkort> not var1=, but ss=^
[04:09:21 CEST] <c_14> Azkort: try storing it as an array instead
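A minimal sketch of the word-splitting behaviour behind c_14's suggestion, using a stand-in function instead of ffmpeg (`count_args` is hypothetical, not part of any tool):

```shell
#!/bin/bash
# count_args is a stand-in for ffmpeg: it just reports how many
# command-line arguments it received.
count_args() { echo $#; }

ss="-ss 8"
count_args "$ss"          # quoted scalar: ffmpeg sees ONE argument, '-ss 8'

args=(-ss 8)
count_args "${args[@]}"   # array expansion: ffmpeg sees TWO arguments, -ss and 8
```

The quoted scalar prints 1, the array prints 2; the array form is what keeps `-ss` and `8` as separate arguments without relying on unquoted expansion.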
[04:45:04 CEST] <fling> ffmpeg -i "concat:204_0039.MP4|20410039.MP4" -c copy out.mkv
[04:45:16 CEST] <fling> But I'm getting only the first part in out.mkv otoh no error message.
[04:45:26 CEST] <c_14> That doesn't work with mp4
[04:45:32 CEST] <c_14> try the demuxer
[04:45:35 CEST] <c_14> or remux to .ts
[04:46:10 CEST] <fling> or cat?
[04:46:23 CEST] <c_14> Cat won't work with mp4
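A sketch of the concat-demuxer route c_14 suggests (filenames taken from fling's command; the ffmpeg line is commented out since it needs the actual files):

```shell
#!/bin/bash
# The concat demuxer reads a list file with one 'file' directive per input.
printf "file '%s'\n" 204_0039.MP4 20410039.MP4 > list.txt
cat list.txt

# With the inputs present, stream-copy them into a single container:
# ffmpeg -f concat -i list.txt -c copy out.mkv
```

Unlike the `concat:` protocol, the demuxer works at the packet level, so it handles MP4 inputs that cannot simply be byte-concatenated.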
[04:46:46 CEST] <fling> ok.
[04:47:01 CEST] <fling> How to also add a thm thumbnail file?
[04:47:09 CEST] <fling> I want to mux it into the container…
[04:47:39 CEST] <c_14> containers don't have thumbnail files? (that i know of)
[04:49:06 CEST] <fling> youtube-dl is inserting thumbnails somehow into files…
[04:49:31 CEST] <c_14> Most likely it's just your operating system generating them.
[04:49:34 CEST] <c_14> file manager usually
[04:51:57 CEST] <fling> c_14: no, they are downloaded from youtube and inserted into the container somehow… I will investigate.
[04:52:27 CEST] <fling> there is also atomicparsley for mp4
[04:52:41 CEST] <c_14> If it's just a secondary video stream, that's easy.
[04:52:56 CEST] <astroty> Do any of you know how to multicast a stream on a local network? I am using a raspberry pi to record the image of a webcam and was curious if it was possible
[04:53:31 CEST] <c_14> Just push it out over udp?
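A sketch of what "push it out over udp" can look like for a Raspberry Pi webcam (the device path, multicast address, and port are all assumptions; the run line is commented out since it needs ffmpeg and a camera):

```shell
#!/bin/bash
# Encode the camera feed and send it as MPEG-TS to a multicast group;
# any host on the LAN can then play udp://239.255.0.1:1234.
# Building the command as an array keeps each option a separate argument.
cmd=(ffmpeg -f v4l2 -i /dev/video0
     -c:v libx264 -preset ultrafast -tune zerolatency
     -f mpegts 'udp://239.255.0.1:1234?ttl=1')
echo "${cmd[@]}"
# "${cmd[@]}"   # uncomment on a machine with ffmpeg and a camera
```

`ttl=1` keeps the multicast traffic on the local segment; raise it only if routers between subnets should forward the stream.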
[04:55:34 CEST] <fling> or ffserver or erlyvideo or vlc…
[05:56:11 CEST] <Azkort> c_14: thanks
[06:16:02 CEST] <Demon_Fox> Does webp use Y420 color space on lossless images?
[06:16:14 CEST] <Demon_Fox> Well
[06:16:23 CEST] <Demon_Fox> Y'CbCr 420
[07:59:31 CEST] <Azkort> Been trying to get text with spaces into "-vf drawtext" filter in the script. http://pastebin.com/YKBUFnyR
[08:50:44 CEST] <Max-P> Azkort: I'm not sure, but I think -vf and the drawtext= should be two different parameters. Maybe try moving the -vf out of the $filter variable? What's the error ffmpeg gives you?
[08:51:33 CEST] <Max-P> erm, almost an hour late, oops
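One way to get text with spaces through a shell variable into drawtext is to keep only the filter graph in the variable and double-quote it at the use site (filenames are assumptions; on builds without fontconfig, drawtext also needs an explicit fontfile= path; the ffmpeg line is commented out):

```shell
#!/bin/bash
text="two words"
# Single quotes inside the graph protect the text from drawtext's own
# ':' option parsing; double quotes at expansion stop word splitting.
vf="drawtext=text='$text':x=10:y=10:fontsize=24:fontcolor=white"
echo "$vf"
# ffmpeg -i in.mp4 -vf "$vf" out.mp4
```

The key point is that `-vf` and the graph reach ffmpeg as two arguments, with the spaces intact inside the second one.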
[13:40:19 CEST] <Mr-Jonze> any cli gurus about who could possibly help me get an overlay video working
[13:40:54 CEST] <Mr-Jonze> tried  so-o-o-o many permutations of example codes .....
[14:34:25 CEST] <Mr-Jonze> sure i would be happy to
[14:35:26 CEST] <Mr-Jonze> brb
[14:58:38 CEST] <Mr-Jonze> my current working script in this gist --> https://gist.github.com/bill-auger/9480205a38d9d00d2fa3
[14:59:49 CEST] <Mr-Jonze> it currently captures screen via X11 and sound via alsa or jack and successfully logs in and streams to a media server
[15:00:33 CEST] <Mr-Jonze> i would like to add my webcam overlay in the bottom corner
[15:01:38 CEST] <Mr-Jonze> the $OVERLAY param in that script is the bugger
[15:15:00 CEST] <durandal_1707> Mr-Jonze: and whats ffmpeg output?
[15:15:19 CEST] <durandal_1707> from console?
[15:15:21 CEST] <Mr-Jonze> sure i could add that if you like
[15:18:09 CEST] <Mr-Jonze> as i do, i can say right off that the first output line is a syntax error in the -filter_complex segment
[15:18:20 CEST] <Mr-Jonze>  line 28: =-i 'me.png' -filter_complex 'overlay=10:main_h-overlay_h-10': command not found
[15:24:16 CEST] <Mr-Jonze> ok the full console dump is added to the gist
[15:40:02 CEST] <Mr-Jonze> sry just realized there were some errors in that version - i have updated it
[15:42:46 CEST] <Mr-Jonze> the original error is back now that i fixed the typos - "'me.png': No such file or directory" - but the png is there, readable, in the same directory as the script - i have also tried an absolute path with the same result
[15:43:41 CEST] <Mr-Jonze> again tho i really want a webcam as overlay, i was simply trying a static image thinking it may be simpler to start with
[15:45:31 CEST] <anYc> Hi, if I store the AVPackets I pass to avcodec_decode_video2(), is there a way to identify the AVPacket which contains a resulting AVFrame? I tried to compare the DTS with frame.pkt_dts but the DTS of the resulting frames start with 4 while packet.dts start with 0.
[15:45:55 CEST] <anYc> it's a h264 video stream
[15:48:04 CEST] <satiender> Hey guys please give me suggestion for which tool I can use for video filtering "ffmpeg" or "OpenGL"
[15:48:21 CEST] <satiender> which more fast for video processing and efficient
[15:48:27 CEST] <satiender> please help ??
[15:48:51 CEST] <Mr-Jonze> ffmpeg has a wealth of filters and is probably greatly more efficient than opengl
[15:50:25 CEST] <durandal_1707> satiender: depends
[15:50:56 CEST] <satiender> thank you sir for reply
[15:51:11 CEST] <durandal_1707> Mr-Jonze: have you checked documentation?
[15:51:31 CEST] <Mr-Jonze> i read man ffmpeg-filters
[15:51:36 CEST] <satiender> please give me your some contact info I want discuss my problem with you please
[15:51:58 CEST] <Mr-Jonze> and a bunch of forum posts with examples
[15:52:49 CEST] <satiender> Mr-Jonze : please give your contact info I want discuss my project with you
[15:53:11 CEST] <Mr-Jonze> Mr-Jonze is my contact info - i am right here
[15:53:38 CEST] <satiender> durandal_1707 : sir I want live filtering on video
[15:54:01 CEST] <satiender> Mr-jonze: I want live processing on video
[15:54:17 CEST] <durandal_1707> Mr-Jonze: perhaps you need to change order of inputs
[15:54:36 CEST] <Mr-Jonze> that is exactly what i want too - i am here now asking a question for help with my video  just like you are
[15:54:36 CEST] <durandal_1707> satiender: what kind of?
[15:55:30 CEST] <satiender> durandal_1707: I am using ffmpeg on android that working perfectly
[15:55:31 CEST] <Mr-Jonze> that what i was thinking at first but the docs seem to indicate the main (back layer) should be declared first then the overlay layer
[15:56:02 CEST] <satiender>  durandal_1707: now I tell you about my project
[15:56:36 CEST] <Mr-Jonze> the example from man ffmpeg-filters that is in my gist has it in that order - but i shall try just cause
[15:56:55 CEST] <satiender>  durandal_1707: In my project first I record a video from mobile camera in android mobile
[15:57:39 CEST] <satiender>  durandal_1707: after recording completed then my app play that video
[15:58:00 CEST] <satiender> durandal_1707: now I want blur that video
[15:58:14 CEST] <satiender>  durandal_1707: I already done that
[15:58:41 CEST] <satiender>  durandal_1707: but problem is that , that is not live
[15:59:27 CEST] <satiender>  durandal_1707: means I want touch the screen and see the video with blur
[15:59:41 CEST] <satiender>  durandal_1707:  So please how I can do that
[15:59:48 CEST] <satiender> help
[16:00:27 CEST] <satiender> Mr-jonze : can we talk on phone
[16:00:43 CEST] <Mr-Jonze> odd result with the static image first - the error went away and both streams appear in the cli output, but it is no longer streaming to the server
[16:03:38 CEST] <satiender> Mr-jonze : I think we both want achieve same thing
[16:05:32 CEST] <satiender> Mr-jonze : we have 8 member team in my company , Some people from my team implement shadow , cartoon , color changing filters in OpenGl
[16:05:59 CEST] <satiender>  Mr-jonze : And sir that is very fast and smooth
[16:06:11 CEST] <satiender>  Mr-jonze : I want use ffmpeg for that
[16:06:36 CEST] <Mr-Jonze> ive added the new output to my gist with
[16:07:18 CEST] <Mr-Jonze> filename 'ffmpeg-cli-output-with-static-img-first'
[16:08:40 CEST] <Mr-Jonze> i suspect the problem now is that the static png is treated as the main source and is not allowing the time transport to progress
[16:10:36 CEST] <Mr-Jonze> but i am very new to ffmpeg and there are so many switches to pull, it is indeed daunting
[16:11:08 CEST] <satiender> Hii , please help anyone if that achieved
[16:11:23 CEST] <durandal_1707> so what you want to overlay
[16:12:04 CEST] <durandal_1707> satiender: ask on the mailing list
[16:13:51 CEST] <satiender> durandal_1707: Sir I don't know about mailing list
[16:14:01 CEST] <satiender> please help
[16:15:06 CEST] <durandal_1707> look at ffmpeg web page
[16:15:12 CEST] <satiender> ok
[16:15:45 CEST] <satiender> durandal_1707: you mean http://ffmpeg.gusari.org/
[16:17:46 CEST] <durandal_1707> nope, thats forum
[16:22:28 CEST] <Mr-Jonze> oh sry was tweaking
[16:23:04 CEST] <Mr-Jonze> yes i want a webcam to overlay small in the bottom corner with screencap fullscreen behind
[16:23:59 CEST] <Mr-Jonze> i noticed if i set the second input to null it throws a different error "[AVFilterGraph @ 0x8fcfae0] No such filter: 'overlay=10:main_h-overlay_h-10'
[16:23:59 CEST] <Mr-Jonze> Error configuring filters.
[16:23:59 CEST] <Mr-Jonze> "
[16:26:46 CEST] <Mr-Jonze> and the filter syntax of "-vf \"movie=me.png " gives error  "[NULL @ 0xa0a9440] Unable to find a suitable output format for '[watermark];' [watermark];: Invalid argument"
[16:27:22 CEST] <durandal_1707> dont use movie, use -i
[16:28:23 CEST] <Mr-Jonze> @  satiender: this the link to join the mailing list http://ffmpeg.org/mailman/listinfo
[16:29:05 CEST] <Mr-Jonze> the -i is the input - the 'movie' param is the filter argument to -vf
[16:29:28 CEST] <Mr-Jonze> line 30 in my gist
[16:29:41 CEST] <satiender> Mr-Jonze : Hii , Thank you sir !
[16:30:32 CEST] <Mr-Jonze> again that script is an amalgamation of every example ive found - i am not sure which is correct as none seem to work for me
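What durandal_1707's "use -i" means in practice: give the watermark as a second input and reference it from -filter_complex, instead of loading it with the movie source filter (grab source and filenames are assumptions; the ffmpeg line is commented out since it needs X11 and the image):

```shell
#!/bin/bash
# Main input first (the screen grab), overlay input second (the png);
# overlay=x:y places the second input on top of the first, here 10px
# from the left edge and 10px up from the bottom.
fc='overlay=10:main_h-overlay_h-10'
echo "$fc"
# ffmpeg -f x11grab -video_size 1280x720 -i :0.0 -i me.png \
#        -filter_complex "$fc" -c:v libx264 out.mkv
```

The same graph works unchanged if the second `-i` is a webcam device instead of a png.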
[16:30:41 CEST] <satiender> but which channel I can join for suitable result related to my problem
[16:30:56 CEST] <satiender> There have no. of channels
[16:31:03 CEST] <satiender> like : ffmpeg-devel
[16:31:26 CEST] <Mr-Jonze> no that is for ffmpeg developers
[16:31:50 CEST] <Mr-Jonze> you probably want to ask on ffmpeg-user and libav-user
[16:33:00 CEST] <satiender> ok
[16:33:14 CEST] <Mr-Jonze>  satiender:  if you have not already - i suggest you read the manpage for ffmpeg-filters
[16:33:29 CEST] <satiender> ok
[16:33:52 CEST] <Mr-Jonze> that was the biggest help for me
[16:33:54 CEST] <satiender> You see my discussed problem
[16:34:29 CEST] <satiender> According to you , Is ffmpeg helpful for me and which I want achieve
[16:34:32 CEST] <Mr-Jonze> yes but i am no expert either i probably can not help you much
[16:35:00 CEST] <Mr-Jonze> yes i am sure it can make you movie blurry
[16:35:14 CEST] <satiender> But you give very useful links
[16:35:19 CEST] <satiender> thanks for that
[16:35:49 CEST] <satiender> Actually I want blurry movie on live
[16:36:28 CEST] <satiender> means you click on blur button and your player play that blurry movie
[16:37:47 CEST] <Mr-Jonze> yes np ffmpeg will use live or recorded inputs just the same
[16:38:06 CEST] <satiender> ok
[16:38:26 CEST] <satiender> I hope that solve my issue
[16:38:29 CEST] <satiender> thanks
[16:42:27 CEST] <satiender> Mr-jonze : Please, one more thing - I am very proficient in C programming
[16:43:09 CEST] <satiender> Can we write our own single filter for video blurring , I mean without using ffmpeg library
[16:43:41 CEST] <Mr-Jonze> thats more what opengl is for i think
[16:44:15 CEST] <Mr-Jonze> if you want to get really dirty you could learn how to program the pixel shaders inside the GPU
[16:45:22 CEST] <Mr-Jonze> ffmpeg is a binary not a library as far as i know - so from C code you would call it via the `system` command or ffi
[16:45:59 CEST] <Mr-Jonze> but i would not be surprised if you could pass it a memory address of a library you wrote as a sort of 'plugin'
[16:47:11 CEST] <Mr-Jonze> if you only want blurry you could probably write that in nasty old school C if you wanted to
[16:47:50 CEST] <Mr-Jonze> just look up the formula for "gausian blue" and implement it in raw C bit-mangling
[16:48:11 CEST] <Mr-Jonze> sry "gausian blur" lol - me getting tired
[16:50:59 CEST] <satiender> Any resource for that or any link
[16:51:46 CEST] <satiender> Mr-jonze : nice if you also trying
[16:52:04 CEST] <Mr-Jonze> search the web for "gaussian blur" formula
[17:12:48 CEST] <AreaScout> hi all, which -tune options belong to the decoder and which to the encoder, or do they apply to both?
[17:14:37 CEST] <AreaScout> oops sorry, this param is x264 only
[17:49:16 CEST] <Mr-Jonze> i think ive had a breakthrough here
[17:49:55 CEST] <Mr-Jonze> it seems that the static "logo" image was the problem - when i put the webcam first it seems to be working
[17:59:46 CEST] <fred1807> : I have this line to convert a mp4 video to h264, it outputs the same filename (with new extension) in the same dir as the input file. I need to change to an output dir (relative to where the script runs, not relative to the input file).. How can I do it?  Example folder name: output  , relative to script location.  My cmd is: ffmpeg -i "$i" ${i%.*}.h264
[18:03:50 CEST] <Mr-Jonze> fred - that is really a bash scripting question - id need to look it up but theres a way to get the script dir like __FILE__ - look it up on the web - it has probably been asked 100 times on SO alone
[18:04:15 CEST] <fred1807> ok
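A sketch of the usual bash answer to fred1807's question: resolve the script's own directory once, then build the output path from the input's basename (the sample input path is an assumption; the ffmpeg line is commented out):

```shell
#!/bin/bash
# Directory containing this script, regardless of where it is invoked from.
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
outdir="$script_dir/output"
mkdir -p "$outdir"

i="/some/input/dir/clip.mp4"
base="$(basename "${i%.*}")"   # strip extension, then strip directories
echo "$outdir/$base.h264"
# ffmpeg -i "$i" "$outdir/$base.h264"
```

`${i%.*}` alone keeps the input's directory, which is why the original command wrote next to the source file; `basename` is what detaches the name from the input's location.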
[18:11:57 CEST] <Mr-Jonze> ok i think i found that major problem - the segment -filter_complex 'overlay=10:main_h-overlay_h-10' i had assigned to a bash var quoted and the script was failing, but when i pasted the raw text into the ffmpeg command it succeeds
[18:12:26 CEST] <Mr-Jonze> although the command line dumped via echo is identical - was a pesky bug that one
[18:14:05 CEST] <Mr-Jonze> assigning to bash var unquoted also works
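The bug Mr-Jonze describes can be reproduced without ffmpeg: quoting the whole option string, quotes included, hands the program literal quote characters inside one argument, while keeping only the graph in the variable works (`show_args` is a hypothetical stand-in):

```shell
#!/bin/bash
show_args() { printf '<%s>' "$@"; echo; }  # prints each argument in <>

# Broken: the single quotes become PART of the argument text.
bad="-filter_complex 'overlay=10:main_h-overlay_h-10'"
show_args $bad
# -> <-filter_complex><'overlay=10:main_h-overlay_h-10'>

# Works: store only the graph, quote the expansion at the use site.
fc=overlay=10:main_h-overlay_h-10
show_args -filter_complex "$fc"
# -> <-filter_complex><overlay=10:main_h-overlay_h-10>
```

Quote characters inside a variable's value are data, not syntax; the shell only treats quotes specially when they appear literally in the command line.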
[18:14:59 CEST] <Mr-Jonze> one site i read warned about the pickiness of escaping, double and even triple escaping in different environments
[20:03:01 CEST] <Mr-Jonze> everything working nearly as i had hoped - but this small snag has got me in a pinch now
[20:04:38 CEST] <Mr-Jonze> it seems i must specify -filter_complex literally with its params in single quotes - this is not only very ugly but it means i can not do any programmatic substitution
[20:07:47 CEST] <Mr-Jonze> ok got it i can split it - picky stuff :)
[20:08:19 CEST] <durandal_1707> you can write scripts
[20:18:38 CEST] <Mr-Jonze> i am trying but it is very tricky because i can not use spaces
[20:19:51 CEST] <Mr-Jonze> also the text width is not known at launch time so i can not center the text
[20:21:00 CEST] <Mr-Jonze> vlc would accept new commands at runtime - does ffmpeg have such a feature ?
[20:21:26 CEST] <Mr-Jonze> all i really want now is centered text that is changed (and re-centered) programmatically
[20:30:34 CEST] <Mr-Jonze> is this where i need  expansion=strftime
[20:30:51 CEST] <Mr-Jonze> the post im reading says its deprecated
[21:45:41 CEST] <Nanashi> What did I do wrong? ffmpeg -ss 00:17:11 -t 00:00:51 -i source.mp4 blah blah
[21:45:50 CEST] <Nanashi> It came out as an 8:29 long video clip.
[21:45:55 CEST] <Nanashi> instead of 51 seconds
[21:46:47 CEST] <Nanashi> oh, -t after -i
[22:20:39 CEST] <Nanashi> Last question. When you encode with 2-pass, how does it "use"/"reference" the first when that was outputted to NUL?
[22:47:59 CEST] <DHE> Nanashi: the first pass will leave some log files in the current working directory (unless overridden) which will be re-read
[22:48:26 CEST] <DHE> to that end, do not run several multipass instances at the same time in the same working directory
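A sketch of the two-pass flow DHE describes, with -passlogfile making the stats-file location explicit so parallel jobs in one directory don't collide (filenames and bitrate are assumptions; the helper only prints the commands rather than running them):

```shell
#!/bin/bash
# two_pass INPUT OUTPUT BITRATE LOGNAME -> prints the two commands to run.
# Pass 1 analyzes and writes LOGNAME-0.log (x264 also writes .mbtree);
# pass 2 re-reads those stats to allocate bitrate, then writes OUTPUT.
two_pass() {
  local in=$1 out=$2 rate=$3 log=$4
  echo ffmpeg -y -i "$in" -c:v libx264 -b:v "$rate" -pass 1 \
       -passlogfile "$log" -an -f null /dev/null
  echo ffmpeg -i "$in" -c:v libx264 -b:v "$rate" -pass 2 \
       -passlogfile "$log" -c:a aac "$out"
}
two_pass in.mp4 out.mp4 1M job1
```

The NUL/`/dev/null` output of pass 1 only discards the encoded video; the stats files on disk are the actual product of that pass.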
[22:52:55 CEST] <DHE> is there a preference for the best looking scale algorithm? I'm mainly looking to downsample video. The bilinear default seems okay but is there better?
[22:52:59 CEST] <Nanashi> Oh right. I see the log file.
[22:59:01 CEST] <DHE> actually maybe 2 recommendations... one where CPU usage is a non-concern, one where I need top performance while still being presentable
[23:12:35 CEST] <klaxa> afaik lanczos is considered one of the best downscaling algorithms
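klaxa's suggestion as a command: the scaler algorithm can be chosen per scale filter via its flags option (filenames, target width, and CRF are assumptions; the ffmpeg lines are commented out):

```shell
#!/bin/bash
# lanczos for quality; -2 computes a height that keeps aspect ratio
# while staying divisible by 2, as most encoders require.
vf='scale=1280:-2:flags=lanczos'
echo "$vf"
# ffmpeg -i in.mp4 -vf "$vf" -c:v libx264 -crf 18 out.mp4
# When CPU is the constraint, a cheaper scaler trades quality for speed:
# ffmpeg -i in.mp4 -vf 'scale=1280:-2:flags=fast_bilinear' -c:v libx264 out.mp4
```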
[00:00:00 CEST] --- Sun Sep  6 2015


More information about the Ffmpeg-devel-irc mailing list