[Ffmpeg-devel-irc] ffmpeg.log.20160405

burek burek021 at gmail.com
Wed Apr 6 02:05:01 CEST 2016


[00:08:41 CEST] <nadermx> I have a bizarre issue: I have ffmpeg on two servers, installed the same on both, but when I try to run a command using http_proxy it works on one while the other gives a 403 error, so I figure the other isn't using the proxy
[00:12:41 CEST] <nadermx> the only thing i see different when running in debug mode is [https @ 0x5d030a0] Setting default whitelist 'http,https,tls,rtp,tcp,udp,crypto,httpproxy'
[00:13:05 CEST] <nadermx> vs the one that works seems to not have 'httpproxy' as a whitelist option
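A hedged sketch of what nadermx is describing: on newer ffmpeg builds the `httpproxy` protocol must appear on the whitelist for the `http_proxy` environment variable to take effect (the proxy host/port and URLs below are placeholders, not from the log):

```shell
# Export the proxy, then whitelist httpproxy explicitly so the http
# protocol handler is allowed to route through it.
export http_proxy=http://proxy.example.com:3128
ffmpeg -protocol_whitelist 'http,https,tls,tcp,udp,crypto,httpproxy' \
    -i 'http://example.com/input.mp4' -c copy output.mp4
```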
[04:18:40 CEST] <circ-user-wCvyX> does ffmpeg  support persistent connections when playing HLS?
[04:52:00 CEST] <circ-user-wCvyX> hello
[05:56:12 CEST] <taliatina> Hi all
[05:56:44 CEST] <taliatina> Could u please guide me ...
[05:57:16 CEST] <taliatina> I wish to know: is ffmpeg used to encode to h264?
[05:57:53 CEST] <taliatina> does ffmpeg encode to h264?
[05:58:02 CEST] <petecouture> yes
[05:58:09 CEST] <petecouture> but only if you wish it hard enough
[05:59:20 CEST] <petecouture> taliatina: I'd recommend starting with a google search for beginners guide +h264. You'll find what you're looking for.
[05:59:47 CEST] <taliatina> I am reading a paper that used ffmpeg. this is the sentence of the paper: "The sequences were first compressed with ffmpeg 30 times (static quantizer scale values ranging 2-31 are supported by ffmpeg)."
[06:00:31 CEST] <taliatina> I cant understand what does it mean?
[06:01:10 CEST] <petecouture> Ya that's beyond my knowledge at the moment. I would look for a breakdown of the protocol or look up the RTC spec on it if you need to get dirty.
[06:03:06 CEST] <taliatina> I am very new in this field
[06:03:52 CEST] <taliatina> paper called the result of ffmpeg "MPEG-4 files"
[06:04:48 CEST] <taliatina> does it mean that ffmpeg encodes the video to mpeg-4?
[06:06:39 CEST] <taliatina> which encoded file formats is it possible to get by converting the video with ffmpeg?
[06:08:19 CEST] <taliatina> I would appreciate if you can guide me
[06:09:08 CEST] <FlorianBd> Hi there! Just encoded an avi to mp4 for html5 use, and the audio is messed up, here's the output. Anything wrong you notice? Any suggestion? Thanks!  http://pastebin.com/iiZ1MV1J
[06:11:02 CEST] <taliatina> I am reading an article  about Evalvid-RA
[06:11:26 CEST] <taliatina> it is written that as a pre process they gave video to ffmpeg
[06:11:55 CEST] <taliatina> the result is mpeg-4 files
[06:12:28 CEST] <taliatina> then they gave the mpeg-4 files to mp4.exe to have ASCII trace files
[06:12:44 CEST] <taliatina> the ASCII trace files goes through the ns2
[06:13:04 CEST] <taliatina> i was looking at the site of ffmpeg
[06:14:17 CEST] <taliatina> to find any information about that. to understand what is really the goal of ffmpeg? and is it really possible to change video to mpeh-4?
[06:14:22 CEST] <taliatina> if yes
[06:14:33 CEST] <taliatina> mpeg-4*
[06:14:54 CEST] <taliatina> what else is possible to make by ffmpeg
[06:35:05 CEST] <petecouture> How do I tell when a feature was added to ffmpeg? I'm getting option not found for use_localtime
[08:32:04 CEST] <momomo> kepstin, i think i will keep this version and try to update in maybe a month if this patch might get in later on?
[08:33:00 CEST] <momomo> even better would be an option for being able to set a target duration .. which could prove useful to force prefetching for slow networks
[09:20:59 CEST] <AleXoundOS> Hi. Does it matter if I apply -fflags to input or output?
[10:45:58 CEST] <cowai> Hi, does anybody know how I can add more caching time for hls as an input?
[10:46:33 CEST] <cowai> the hls server that I use as input is very slow and sometimes leads to timeouts
[10:52:46 CEST] <DHE> a bigger -hls_time may help. Maybe 6-10 seconds?
[10:53:08 CEST] <cowai> my output is udp mpegts
[10:53:18 CEST] <cowai> isn't hls_time only for hls output?
[10:53:31 CEST] <DHE> oh, you using wowza or something?
[10:54:03 CEST] <cowai> I don't know what the hls is produced by
[10:54:06 CEST] <cowai> its a live stream
[10:54:40 CEST] <cowai> I just want to have like a 30-second buffer to let it get all segments in time if there are any timeouts
[10:54:49 CEST] <cowai> I am restreaming it as udp mpegts on my local lan
[10:55:06 CEST] <DHE> oh, you mean HLS to UDP?
[10:55:51 CEST] <cowai> yes exactly
[10:55:56 CEST] <cowai> input is hls, output is udp
[10:56:03 CEST] <DHE> normally I go the other way...
[10:56:20 CEST] <cowai> yes I know its a little unusual
[10:56:50 CEST] <cowai> I am monitoring a hls channel but the equipment I am using only supports udp
[11:29:56 CEST] <momomo> cowai, i have been going through similar issues
[11:30:05 CEST] <momomo> it's not an easy problem to resolve
[11:31:06 CEST] <momomo> but basically, once the first hls segment and playlist is ready, you need to block your users for some time .. the easiest is to block their access for the same amount of time as the hls segment length + 1s .... that way, when they get the playlist file it will contain at least 2 segments
[11:31:13 CEST] <momomo> but start playing on the first
[11:32:06 CEST] <momomo> if you want your wait to be lower then you need a more advanced approach and to manipulate the EXT-X-TARGETDURATION value accordingly, but only the first time.
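The delay momomo describes can be sketched in shell. The segment length of 4 seconds is an assumption for illustration; the point is only the arithmetic: hold clients off for one segment length + 1 s so the first playlist they fetch already lists at least two segments.

```shell
# Hold off serving the playlist for one segment length + 1 second after the
# first segment is written, so clients see at least 2 segments on first fetch.
SEGMENT_LEN=4                  # assumed hls segment length, in seconds
DELAY=$((SEGMENT_LEN + 1))
sleep "$DELAY"                 # then start handing out the .m3u8
```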
[14:24:04 CEST] <meldron> Hi guys, is it possible to cut an x264 video without reencoding?
[14:40:30 CEST] <BurnerGR> yes
[14:42:28 CEST] <furq> meldron: yes, if you cut on IDR frames
[14:43:36 CEST] <meldron> furq: is there a simple approach to do so with ffmpeg?
[14:43:39 CEST] <thunfisch> is it possible to record an rtsp stream, which is already h264 encoded, and add audio from an alsa source without reencoding? https://paste.xinu.at/Ihkk/ using this right now, but it's reencoding. :(
[14:44:42 CEST] <BurnerGR> meldron, something like this will usually work: ffmpeg -ss <seconds> -t <seconds> -i input.mp4 -c:a copy -c:v copy cut_scene.mp4, but as furq says, you will have to find the IDR frames in order to do it properly
[14:45:35 CEST] <furq> that will work, it'll just seek to the nearest (next?) IDR frame
[14:45:48 CEST] <meldron> BurnerGR: this was my first approach, but without regard to the IDR frames and very bad results
[14:45:53 CEST] <furq> if you want to cut on any other frame you need to reencode
[14:48:00 CEST] <meldron> BurnerGR: if I do it your way, the first x seconds are counted as negative numbers
[14:50:28 CEST] <BurnerGR> meldron, it seeks -ss seconds into the file, finds the next IDR frame, and starts copying -t seconds from there
[14:50:51 CEST] <BurnerGR> I'm not sure what you mean by negative numbers?
[14:52:10 CEST] <meldron> BurnerGR: http://storage6.static.itmages.com/i/16/0405/h_1459860759_1217091_2c37086a52.png
[14:52:37 CEST] <BurnerGR> thunfisch, you need to add -c:v copy in order to tell ffmpeg to copy the video codec instead of reencoding
[14:52:38 CEST] <meldron> so i seeked in 4 seconds, but the cut started at 0 and these 4 seconds were shown as negative numbers
[14:53:41 CEST] <thunfisch> BurnerGR: tried that at first, but got the error "Unknown decoder 'copy'"
[14:53:51 CEST] <furq> thunfisch: put it after -i
[14:53:52 CEST] <BurnerGR> meldron, I suppose frame 0 was the closest IDR frame then
[14:54:16 CEST] <thunfisch> furq: still won't work
[14:54:30 CEST] <furq> after the second -i
[14:55:01 CEST] <thunfisch> oh, wow, yes.
[14:55:05 CEST] <thunfisch> that works. thanks a lot!
[14:56:04 CEST] <furq> meldron: you can use -avoid_negative_ts 1 if the cut is close enough for you
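Putting this exchange's advice into one command (file names and times are illustrative): a stream-copy cut that seeks before the input, as BurnerGR showed, plus the timestamp fix furq mentions:

```shell
# -ss before -i snaps the cut to a nearby IDR/keyframe; -c copy avoids
# reencoding; -avoid_negative_ts 1 shifts timestamps so the output
# doesn't start with negative values.
ffmpeg -ss 4 -t 10 -i input.mp4 -c copy -avoid_negative_ts 1 cut_scene.mp4
```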
[16:02:59 CEST] <meldron> furq: thanks, i think i will reorder my general approach
[17:39:46 CEST] <sb_> is there a way using the public api (av_opt_set_int) to set an MOV flag in AVOutputFormat's priv_data?
[17:59:11 CEST] <ac_slater> hey all. I want to use ffmpeg as input to another application (an external h.264 encoder). I fear that `ffmpeg -i ... | my_enc` won't work as my_enc needs to know the size of the input. Is there a way for ffmpeg to spit out output size?
[17:59:25 CEST] <ac_slater> size of each frame*
[18:00:46 CEST] <paule32> hello
[18:01:10 CEST] <paule32> i have following config: http://fpaste.org/349918/98720251/
[18:01:27 CEST] <paule32> what do i have to do to see the stream in a browser or vlc?
[18:02:30 CEST] <ac_slater> paule32: is this an ffserver config?
[18:02:39 CEST] <paule32> nginx
[18:02:49 CEST] <paule32> do i have to run ffserver?
[18:02:50 CEST] <ac_slater> ah, good luck there.
[18:02:56 CEST] <ac_slater> paule32: google it mate
[18:03:10 CEST] <ac_slater> https://ffmpeg.org/ffserver.html
[18:03:17 CEST] <furq> don't use ffserver
[18:03:25 CEST] <furq> paule32: https://github.com/dailymotion/hls.js/
[18:03:34 CEST] <furq> you should be able to play the rtmp url in vlc, or the hls .m3u8
[18:03:44 CEST] <paule32> thx
[18:20:09 CEST] <john_doe_jr> I'm trying to record my audio using the following command, but when I play the out.mpg with ffmpeg it is not very loud and of poor quality... any ideas why? Here is the command I am using: ffmpeg -f avfoundation -i ":0" out.mpg
[18:20:52 CEST] <furq> it's probably encoding it with the builtin mp2 encoder using the default settings
[18:21:04 CEST] <furq> do you have some good reason for using mpg
[18:21:38 CEST] <john_doe_jr> furq: Nope... I'm a newbie trying to learn ffmpeg... what's the best quality and loudness I can get then?
[18:21:48 CEST] <furq> the best quality would be wav or flac
[18:21:53 CEST] <furq> or some other lossless codec
[18:22:11 CEST] <furq> you'd probably need some kind of audio filter for loudness
[18:23:29 CEST] <bencoh> not sure what you mean by "best loudness" though
[18:23:42 CEST] <bencoh> EBU-R128 compliance?
[18:24:01 CEST] <bencoh> or just "louder"?
[18:24:09 CEST] <durandal_170> dynaudnorm,volume,compand,acompressor,alimiter,
[18:24:29 CEST] <john_doe_jr> bencoh: just louder..I can hardly hear it
[18:24:38 CEST] <furq> john_doe_jr: -af volume=volume*2 will double the volume
[18:24:56 CEST] <furq> there are more scientific ways to do it
[18:26:27 CEST] <john_doe_jr> furq: so this would be a better command: ffmpeg -f avfoundation -i ":0" -af volume=volume*2 out.wav
[18:26:29 CEST] <furq> er, -af volume=volume=2
[18:26:38 CEST] <furq> but yeah
[18:27:31 CEST] <furq> if you don't care about listening while it's being recorded you'd be better off normalising it after the recording is done
[18:27:38 CEST] <john_doe_jr> furq: it still sounds far away
[18:27:59 CEST] <john_doe_jr> furq: normalizing it?
[18:28:30 CEST] <furq> normalising it will bring the peak volume to 0.0dB
[18:28:44 CEST] <furq> you obviously don't know what the peak is until you can process the whole thing
[18:28:48 CEST] <john_doe_jr> furq: I've found this link: https://trac.ffmpeg.org/wiki/Encode/HighQualityAudio
[18:29:14 CEST] <john_doe_jr> furq: so what would be the command line to do that then?
[18:30:43 CEST] <john_doe_jr> furq: it's recording through the microphone... maybe I should redirect output using Soundflower or something
[18:31:01 CEST] <bencoh> john_doe_jr: see what d.urandal_170 said
[18:31:33 CEST] <bencoh> and read corresponding audio filters documentation :)
[18:31:53 CEST] <furq> http://sprunge.us/EANi
[18:31:56 CEST] <furq> something like that if you're on *nix
[18:32:04 CEST] <john_doe_jr> furq: I'm on mac
[18:32:10 CEST] <furq> that should work then
[18:34:14 CEST] <john_doe_jr> fuq	
[18:34:46 CEST] <john_doe_jr> furq: why does the command have its input as "-i test.flac" when I'm attempting to record the sound card on my mac... just wondering
[18:35:02 CEST] <furq> run those after you've finished recording
[18:35:12 CEST] <furq> otherwise you'll need to use a more complicated filter like dynaudnorm
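The two-pass peak normalisation furq's (now dead) paste presumably contained can be sketched like this; file names and the measured value are illustrative:

```shell
# Pass 1: measure the peak without writing any output.
ffmpeg -i test.flac -af volumedetect -f null /dev/null 2>&1 | grep max_volume
# Suppose it reports: max_volume: -6.0 dB
# Pass 2: apply the inverse gain so the peak lands at roughly 0 dB.
ffmpeg -i test.flac -af volume=6dB normalised.flac
```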
[18:36:22 CEST] <john_doe_jr> furq: so the test.flac is actually my output.wav that I'm currently outputting
[18:36:37 CEST] <furq> right
[18:37:13 CEST] <john_doe_jr> furq: should I just call it test.flac now instead of output.wav?
[18:37:21 CEST] <furq> it doesn't really make any difference
[18:39:17 CEST] <john_doe_jr> furq: First command, "ffmpeg -i test.flac -af volumedetect -f null /dev/null 2>&1 | grep max_volume" just hangs
[18:43:48 CEST] <paule32> ok
[18:43:59 CEST] <paule32> ffserver is running
[18:44:00 CEST] <paule32> http://fpaste.org/349946/74588145/
[18:45:30 CEST] <zamba> hi.. i'm in the process of digitizing some old vhs tapes.. they have some distortion on them, maybe some frames here and there that need to be just removed from the stream.. can ffmpeg help out here? or do i have to use some other tool for this?
[18:45:36 CEST] <zamba> if so, which tool do you recommend?
[18:45:57 CEST] <zamba> i basically just need to replace these frames with a frame before or after, to maintain the sync
[18:46:12 CEST] <zamba> but it'll be a manual process, i fully understand that
[18:46:17 CEST] <furq> john_doe_jr: what happens if you remove | grep max_volume
[18:46:31 CEST] <furq> zamba: avisynth has some good filters for cleaning stuff like that up
[18:46:53 CEST] <kepstin> zamba: it's doable with e.g. the 'select' filter, then put an 'fps' filter after it to duplicate frames to replace the ones you dropped.
[18:47:24 CEST] <zamba> furq: it can do it automatically as well?
[18:47:31 CEST] <furq> it depends
[18:47:36 CEST] <zamba> kepstin: ffmpeg?
[18:48:11 CEST] <kepstin> zamba: well, the tricky bit is figuring out whether or not you want to keep a frame
[18:48:12 CEST] <zamba> because it's vhs, it's basically just some frames that are totally off, so if you analyze the video sequentially, you should be able to spot them pretty easily
[18:53:10 CEST] <paule32> can you help?
[18:53:55 CEST] <furq> paule32: i thought you were using nginx
[18:54:03 CEST] <furq> ffserver is basically abandoned and nobody uses it
[18:54:34 CEST] <paule32> furq: out of date?
[18:54:42 CEST] <paule32> what can i use?
[18:54:53 CEST] <furq> nginx?
[18:55:19 CEST] <paule32> i think nginx is not so good, in your post?
[18:55:31 CEST] <furq> ?
[18:55:57 CEST] <paule32> i showed you a config, and someone said good luck
[18:56:33 CEST] <furq> i said "don't use ffserver"
[18:57:25 CEST] <paule32> ok, i misread
[19:04:01 CEST] <momomo> furq, good news ...  i am manipulating the  target duration on the fly now .. and it works just flawless :D
[19:04:14 CEST] <zamba> furq & kepstin: do any of you have any more details about either method? :)
[19:04:16 CEST] <momomo> was pretty complicated though
[19:05:39 CEST] <paule32> what does this mean: Tue Apr  5 12:04:57 2016 xx.xx.xx.xx - - [POST] "/video.flv HTTP/1.1" 404 146
[19:17:38 CEST] <paule32> http://fpaste.org/349964/14598766/
[19:17:46 CEST] <paule32> while idle?
[19:18:11 CEST] <paule32> i must say i sit behind a nat
[19:18:27 CEST] <paule32> do i have to use a vpn? another port?
[19:18:37 CEST] <furq> if this is with nginx-rtmp then that's the wrong url
[19:18:58 CEST] <furq> you want -f flv rtmp://abc.de/live/xyz
[19:19:53 CEST] <paule32> its with ffserver
[19:20:02 CEST] <paule32> http://<ffserver_ip_address_or_host_name>:<ffserver_port>/<stream_name>
[19:20:52 CEST] <paule32> https://trac.ffmpeg.org/wiki/ffserver
[19:23:18 CEST] <shincodex> anyone know if jsoncpp is a leaky junk library?
[19:33:45 CEST] <paule32> what port should i use to get back the upstream flv video?
[19:33:56 CEST] <paule32> i want to use port 80
[19:34:06 CEST] <paule32> because it s not blocked
[19:34:18 CEST] <paule32> but it seems it does not work
[19:44:06 CEST] <petecouture> Arrgg, I recently upgraded to the latest version of ffmpeg and now my script doesn't work. ffmpeg runs like it's encoding but it doesn't show any framecount increasing. Anyone help :-(    http://pastebin.com/8szPhCuE
[19:44:45 CEST] <paule32> http://fpaste.org/349985/59878274/
[19:45:04 CEST] <paule32> what is wrong with ffmpeg/server
[19:47:37 CEST] <melkor> I have an image, I want to create an animation where I start at the bottom and pan to the top. Is ffmpeg good for that, or should I write a quick program in java?
[19:47:41 CEST] <tuelz> I've been playing around with this idea in my head, that I'll likely never get around to implementing, but it's still fun to think about...but basically having a p2p live streaming service where the central server is only handling presence and registry problems, while each viewer of a channel downloads and then uploads the video stream to a few people themselves to help it scale without incurring too much
[19:47:43 CEST] <tuelz> latency from hops
[19:48:04 CEST] <petecouture> melkor: I think you'd want Flash or ImageMagick
[19:48:11 CEST] <pzich> melkor: that is possible to do, but if you want things like animation easing and easy controls, you're probably better off with something else
[19:48:31 CEST] <tuelz> the biggest problem I don't know how to solve and I'm curious if it's possible...is I would like for the tree to heal itself via logic on the presence server and then via logic on the presence server to heal the video buffer that you lose while healing the p2p tree
[19:48:35 CEST] <melkor> pzich: it is going to be a scroll up and then dwell a bit on the last image.
[19:48:49 CEST] <tuelz> is stitching together two video streams in order to build back up a buffer difficult?
[19:48:57 CEST] <petecouture> melkor: ffmpeg isn't the type of tool you use for animations like that.
[19:49:15 CEST] <petecouture> I'd recommend Flash and learning how to tween on the timeline
[19:49:27 CEST] <pzich> I really wouldn't recommend Flash if Flash isn't what he wants
[19:49:36 CEST] <melkor> I don't do flash.
[19:50:16 CEST] <melkor> I can do a different way pretty quickly, but if there was a convenient ffmpeg way to do it I was going to learn.
[19:50:18 CEST] <petecouture> Flash would be the easiest to understand for someone just starting animation rendering.
[19:52:04 CEST] <tuelz> so, a use case: a first tier viewer is downloading and serving one person video... everyone starts with ~5 second buffer. When the tier one person leaves, logic upgrades that tier 2 viewer to a tier 1 who is now bringing in data from the broadcaster directly, but it took 1 second to do that and now that viewer has 2 seconds worth of buffer... is it possible to use another tier 1 viewer to upload to our focus viewer
[19:52:06 CEST] <tuelz> and heal his buffer back up to 3 seconds?
[19:52:34 CEST] <tuelz> (I think my numbers are off there, pretend I said 3 second buffer at the start)
[19:53:10 CEST] <petecouture> melkor: I'm not an advanced user of ffmpeg so I don't know if object timing and position can be rendered on an image the way you're looking for. But if you're looking to create some sort of automated animation creation system you could create the animation itself in x11 and then have ffmpeg record that screen.
[19:53:46 CEST] <pzich> I also wouldn't recommend screen recording if you want it to be decent quality and no potential for dropped frames.
[19:54:22 CEST] <melkor> I can pretty quickly create the sequence of images I want, and then use ffmpeg to turn it into an animated gif or short video.
[19:54:32 CEST] <tuelz> essentially my presence server would know which part of the buffer my focus viewer needed and could send only that video, so my problem, and I'm hoping ffmpeg has ways to deal with it, is stitching together video data seamlessly
[19:54:39 CEST] <petecouture> pzich: but that would be the way to do it within ffmpeg without creating an application that uses the source though right?
[19:54:41 CEST] <pzich> There are ways to control the filters via variables and other input values (e.g. time and frame number), so you could do something with crop similar to what they're doing with zoom here: http://stackoverflow.com/questions/23240841/zooming-animation-in-ffmpeg
[19:55:18 CEST] <pzich> if you have a way to create the images you want and turn them into a sequence, that may be better. Particularly if you aren't already familiar with variable and function usage in ffmpeg filters
[19:55:26 CEST] <petecouture> ^
[19:56:18 CEST] <petecouture> pzich: I get why you could use filters to do it but without some sort of tween interface to design the transitions it's a nasty way to get the job done.
[19:56:51 CEST] <petecouture> Also highly limited in animation capabilities. Assuming there's no easing packages
[19:56:52 CEST] <pzich> agreed, if I really needed to do something like that in ffmpeg I'd probably write a script to generate the filter string for me.
[19:57:15 CEST] <pzich> I mean you can go hardcore if you want, I believe there's sin(), cos() and friends :D
[19:57:42 CEST] <melkor> I put my buddies head on a hot body, and I want it to scroll up to his face.
[19:57:47 CEST] <pzich> but if it's an option, I'd definitely recommend animating elsewhere, Flash and screengrabbing just wouldn't be my first choice (or personally, my choice over raw ffmpeg)
[19:58:51 CEST] <furq> melkor: you should be able to do it with the crop filter
[19:58:56 CEST] <pzich> yeah
[19:58:57 CEST] <furq> https://ffmpeg.org/ffmpeg-filters.html#crop
[19:59:05 CEST] <pzich> and the time or frame number as the input value
[19:59:17 CEST] <melkor> I can definitely get 1 image the way I want with the crop filter.
[19:59:22 CEST] <furq> see the second and third from last examples
[19:59:35 CEST] <furq> they show how to do a dynamic crop based on frame count
[20:00:28 CEST] <paule32> furq: http://fpaste.org/349985/59878274/
[20:00:47 CEST] <furq> why did you highlight me with that
[20:00:51 CEST] <furq> i don't know how to use ffserver
[20:00:59 CEST] <paule32> sorry
[20:01:24 CEST] <furq> the only ffserver advice i can offer is "don't use ffserver"
[20:01:29 CEST] <pzich> :D
[20:01:33 CEST] <furq> i doubt anyone else in here will be able to improve on that
[20:01:35 CEST] <petecouture> this seems like sooo much work for just panning a simple image
[20:01:51 CEST] <petecouture> It can be done in 2 seconds in Flash and rendered to MOV
[20:01:59 CEST] <pzich> but that's Flash
[20:02:34 CEST] <petecouture> He's not looking for a long-term solution. It sounds like it's just a joke
[20:02:38 CEST] <melkor> furq: So I would change the input framerate and the output framerate and then 1 image would become N images and I can specify the location of the crop box.
[20:02:45 CEST] <paule32> me?
[20:02:53 CEST] <furq> why would you need to change the input framerate
[20:03:00 CEST] <petecouture> not u paule
[20:03:04 CEST] <melkor> 1 image -> many images.
[20:03:20 CEST] <pzich> I think you need -loop 1
[20:03:58 CEST] <furq> if you just want to pan up a static image then yeah, use -loop or -loop_input or whatever it is these days
[20:03:59 CEST] <pzich> I think ffmpeg -loop 1 input-image.jpg -t <duration> -vf ...
[20:04:07 CEST] <pzich> err, -i should be in there
[20:04:35 CEST] <pzich> looks like -loop 1 is the new -loop_input: https://ffmpeg.org/pipermail/ffmpeg-user/2011-October/002531.html
[20:04:57 CEST] <furq> -vf crop=w=320:h=240:x=0:y=-n
[20:05:00 CEST] <furq> or something along those lines
[20:05:14 CEST] <pzich> tweak and iterate as needed
[20:05:22 CEST] <furq> you might need to figure out how many frames you need and pass that to -loop
[20:05:24 CEST] <pzich> math is at your disposal!
[20:05:31 CEST] <melkor> Ill give it a shot and use -stream_loop
[20:06:51 CEST] <furq> -vf crop=w=in_w:h=320:x=0:y=(in_h-320-n)
[20:06:52 CEST] <furq> or that, rather
[20:11:03 CEST] <pzich> I'm not sure what happens when that hits a negative number, but you should be able to do something like min(0, in_h-320-n) if you need to
[20:11:22 CEST] <pzich> err, actually max() in this case
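Combining the pieces from this exchange into one hedged command (image name, duration and the 320-pixel window height are placeholders): loop the still image, pan a crop window from the bottom of the frame towards the top, and clamp `y` at 0 as pzich suggests:

```shell
# y starts at in_h-320 (bottom of the image) and decreases by 1 per frame n;
# max(...,0) stops the window at the top instead of going negative.
ffmpeg -loop 1 -i input.jpg -t 10 \
  -vf "crop=w=in_w:h=320:x=0:y=max(in_h-320-n\,0)" \
  -pix_fmt yuv420p pan.mp4
```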
[20:15:00 CEST] <melkor> Soo close.
[20:18:03 CEST] <rjp421> anyone have luck outputting ffmpeg to castnow? http://pastie.org/pastes/10786517/text
[20:18:48 CEST] <rjp421> if i dont pipe and just put file.mp4, castnow will play the file
[20:19:04 CEST] <melkor> Awesome, got it working, it seems like the overflow just sets the crop to the last good one.
[20:19:07 CEST] <rjp421> can i do it live?
[20:23:25 CEST] <furq> rjp421: does -f flv work
[20:24:48 CEST] <rjp421> furq, just ...'-f flv - | castnow --quiet -'?
[20:24:56 CEST] <furq> yeah just replace -f mp4 with -f flv
[20:25:05 CEST] <rjp421> sec
[20:25:08 CEST] <furq> failing that you can try -f mp4 -movflags frag_keyframe+empty_moov
[20:26:33 CEST] <rjp421> furq, it started to encode but gave "Error: Load failed".. ill try with those ty
[20:27:22 CEST] <furq> that looks like a castnow error
[20:27:28 CEST] <furq> the line above that will probably be the ffmpeg error (if there is one)
[20:29:52 CEST] <rjp421> furq awesome the movflags worked ty :D
[20:29:59 CEST] <pzich> sweet
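The working pipeline, reconstructed from the log; the input file and encoder choices here are assumptions, the `-movflags` value is the part that mattered:

```shell
# frag_keyframe+empty_moov produces fragmented MP4, so the muxer never has
# to seek back to patch the moov atom -- necessary when writing to a pipe.
ffmpeg -i input.mp4 -c:v libx264 -c:a aac \
  -f mp4 -movflags frag_keyframe+empty_moov - | castnow --quiet -
```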
[20:50:11 CEST] <thunfisch> i'm getting a 403 error when trying to stream to a ffserver on the same machine. acl allows 127.0.0.1. any ideas what's going wrong there?
[20:58:13 CEST] <john_doe_jr> I would like to stream directly to another computer... the following is not working, but why? ffmpeg -re -f avfoundation -i ":0" -acodec libmp3lame -f rtp rtp://192.168.12.55
[21:08:18 CEST] <john_doe_jr> anybody?
[21:14:26 CEST] <melkor> john_doe_jr: how do you know it is not working?
[21:24:05 CEST] <john_doe_jr> melkor: I get the following error: "av_interleaved_write_frame(): Can't assign requested address
[21:24:06 CEST] <john_doe_jr> Error writing trailer of rtp://192.168.12.55: Can't assign requested address"
[21:25:07 CEST] <furq> you probably need to assign a port
[21:26:08 CEST] <john_doe_jr> furq: that worked!
[21:27:09 CEST] <john_doe_jr> furq: well on the client I'm getting: "Guessing on RTP content - if not received properly you need an SDP file describing it"
[21:46:21 CEST] <john_doe_jr> ok now I'm getting "Protocol not on whitelist 'file'"
[21:55:57 CEST] <petecouture> john_doe_jr: I can help with that
[21:56:07 CEST] <petecouture> on a call one sec
[21:56:43 CEST] <petecouture> So the new ffmpeg needs to have some things whitelisted for security reasons I guess. Here's what i have: ffmpeg -loglevel debug -protocol_whitelist file,udp,rtp,crypto -re -y -probesize 2147483647 -analyzeduration 2147483647 -i
[21:56:54 CEST] <petecouture> that cuts off right before my sdp file load
[21:57:06 CEST] <petecouture> but you can see the -protocol_whitelist parameter
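petecouture's whitelist attached to an SDP input, for completeness (the file names are illustrative, not from the log):

```shell
# file, udp and rtp must all be whitelisted: the .sdp description is read
# via the 'file' protocol, and the streams it describes arrive over udp/rtp.
ffmpeg -protocol_whitelist file,udp,rtp,crypto -i stream.sdp -c copy out.mkv
```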
[21:59:41 CEST] <petecouture> I'm actually having a lot of issues with RTP input on the latest version of ffmpeg
[22:25:57 CEST] <paule32> i get "av_interleaved_write_frame(): Connection reset by peer" how to fix?
[22:28:56 CEST] <petecouture> looks like the rtp client is reseting?
[22:30:11 CEST] <paule32> http://fpaste.org/350070/45988818/
[22:31:38 CEST] <petecouture> this comes first [libx264 @ 0x2e3be60] non-strictly-monotonic PTS
[22:32:33 CEST] <paule32> what does it mean, how to fix?
[22:32:53 CEST] <john_doe_jr> petecouture: I figured it out
[22:33:11 CEST] <petecouture> cool, that flag wasn't documented when I had to look into it
[22:33:19 CEST] <petecouture> it took 2 days to figure it out
[22:33:32 CEST] <john_doe_jr> petecouture: I just googled the error
[22:35:05 CEST] <petecouture> There wasn't anything in searches when I looked a month ago. Nothing worthwhile at least.
[22:37:34 CEST] <john_doe_jr> This is the command line that I have currently, but the performance isn't that good... How can I make this better?
[22:37:38 CEST] <john_doe_jr> ffmpeg -re -f avfoundation -i ":0" -acodec libmp3lame -ar 11025 -f rtp rtp://192.168.8.143:1234
[22:38:17 CEST] <petecouture> -vn ?
[22:38:25 CEST] <petecouture> your just streaming audio yes?
[22:38:29 CEST] <john_doe_jr> yes
[22:38:37 CEST] <BtbN> why are you using such a crappy samplerate?
[22:39:02 CEST] <john_doe_jr> BtbN: did not notice that... what would be a good sampling rate?
[22:39:07 CEST] <BtbN> the default
[22:39:19 CEST] <john_doe_jr> BtbN: so don't even define it
[22:39:32 CEST] <BtbN> unless you need a specific one, no
[22:40:17 CEST] <john_doe_jr> BtbN: so this is what I have so far: ffmpeg -re -f avfoundation -vn -i ":0" -acodec -f rtp rtp://192.168.8.143:1234
[22:40:31 CEST] <BtbN> well, you should still specify your audio codec.
[22:41:56 CEST] <john_doe_jr> BtbN: ok..that was just a copying mistake, this is what I have: ffmpeg -re -f avfoundation -vn -i ":0" -acodec libmp3lame -f rtp rtp://192.168.8.143:1234 ... is that the best?
[22:42:30 CEST] <BtbN> looks ok, acodec isn't up to date anymore though, just use c:a
[22:42:45 CEST] <BtbN> but what exactly is the problem you are trying to solve? That command looks like it works?
[22:42:47 CEST] <furq> -vn is an output option
[22:42:52 CEST] <furq> move it after -i ":0"
[22:43:35 CEST] <furq> i also don't think you should be using -re with a capture source
[22:43:56 CEST] <petecouture> I had that issue myself BtbN, I've been scratching my head all day and removed -re and now my script works
[22:44:17 CEST] <petecouture> Though the documentation on -re is confusing. It says not to use it for live streams but it should be used for live streaming....
[22:44:28 CEST] <furq> don't use it if your input is a live stream
[22:44:34 CEST] <furq> use it if your output is a live stream
[22:44:44 CEST] <furq> the first one takes precedence
[22:45:08 CEST] <john_doe_jr> furq: so maybe I should output my audio to a file and then live stream that file
[22:45:20 CEST] <petecouture> Honestly I just reread what I wrote and realized why I got it wrong
[22:45:21 CEST] <furq> ?
[22:45:26 CEST] <paule32> 2	video.flv	x.y.z	HTTP/1.1	WAIT_FEED
[22:45:33 CEST] <paule32> what that?
[22:45:56 CEST] <petecouture> ? That looks like some sort of caching script for that file
[22:45:56 CEST] <paule32> the browser pops out a dialog - but vlc is not display/started
[22:47:27 CEST] <john_doe_jr> This is what I have so far: ffmpeg -re -f avfoundation -vn -i ":0" -acodec libmp3lame -f rtp rtp://192.168.8.143:1234....how do I remove the -acodec and use c:a?
[22:47:40 CEST] <furq> you do the thing you just said
[22:47:44 CEST] <furq> replace -acodec with -c:a
[22:48:12 CEST] <furq> acodec is just the old name for that option
[22:48:26 CEST] <john_doe_jr> furq: When I do that I get, "Unable to find a suitable output format for 'rtp'"
[22:48:40 CEST] <furq> paste the command
[22:48:53 CEST] <john_doe_jr> furq: ffmpeg -re -f avfoundation -i ":0" -vn c:a -f rtp rtp://192.168.8.143:1234
[22:49:03 CEST] <furq> 21:47:44 ( furq) replace -acodec with -c:a
[22:49:03 CEST] <petecouture> -c:a libmp3lame
[22:49:05 CEST] <furq> do precisely that
[22:50:15 CEST] <petecouture> furq: What if the input and output is a live stream. Do you use -re for that?
[22:50:20 CEST] <furq> no
[22:50:43 CEST] <furq> well, it depends on the type of live stream, but usually no
[22:50:55 CEST] <furq> if it encodes at the correct framerate for the output stream then don't use -re, it just complicates everything
[22:51:15 CEST] <john_doe_jr> furq: so should I use it?
[22:51:24 CEST] <furq> probably not
[22:51:37 CEST] <furq> if it starts encoding much too fast then put it back
[22:52:11 CEST] <john_doe_jr> furq: so this is the best I can do on the streaming command: ffmpeg -f avfoundation -i ":0" -vn -c:a libmp3lame -f rtp rtp://192.168.8.143:1234
[22:52:20 CEST] <furq> i guess
[22:52:37 CEST] <furq> you probably want to specify a bitrate or quality level for mp3
[22:52:42 CEST] <furq> or use a newer codec like aac
[22:52:44 CEST] <petecouture> ^
[22:52:57 CEST] <furq> i think libmp3lame defaults to 128k which should be fine
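The final shape of the command with an explicit bitrate, as suggested; the address is from the log, the 128k value is furq's stated default made explicit:

```shell
# -vn after -i drops video from the capture; -b:a pins the mp3 bitrate
# rather than relying on the encoder default.
ffmpeg -f avfoundation -i ":0" -vn -c:a libmp3lame -b:a 128k \
  -f rtp rtp://192.168.8.143:1234
```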
[22:56:37 CEST] <john_doe_jr> I'm getting the following error message now: "A description in SDP format is required to receive the RTP stream. Note that rtp:// URIs cannot work with dynamic RTP payload format (97)"... any ideas what that means?
[23:03:32 CEST] <petecouture> can you paste your sdp
[23:03:46 CEST] <petecouture> in a link
[23:07:59 CEST] <paule32> ok, i am happy, all works
[23:08:08 CEST] <paule32> great software
[23:11:37 CEST] <petecouture> Anyone ever get this error when trying to load an m3u8 in VLC: "VLC can't recognize the input's format"
[23:11:46 CEST] <petecouture> It finally plays in iOS
[23:11:55 CEST] <petecouture> But it crashes VLC when I try to load it
[23:15:49 CEST] <paule32> wrong format that vlc does not support?
[23:21:18 CEST] <petecouture> paule32: It's encoded to h264 and aac, which it should open.
[23:21:36 CEST] <petecouture> this didn't use to work on iOS, but I'm using the HLS muxer now and that seems to have taken care of the problem.
[23:22:16 CEST] <petecouture> But now I'm trying to get VLC to open the live stream and no dice. It can play the feed coming from the satellite going through a professional encoding service. I'm trying to match that.
[23:29:17 CEST] <sb_> Anyone know if there's a way to configure the build to include all headers in the include path?
[23:32:12 CEST] <JEEB> the public ones are always installed unless you disable header installation. the private ones will never be installed because they are not and will not be part of the public APIs and if you use them your stuff can blow up at any point
[23:34:23 CEST] <sb_> so JEEB: if I'm looking to set a flag on the private muxer, `av_opt_set_int(outputFormatContext->priv_data, "flag_i_want", 1, 0);` hasn't worked
[23:34:44 CEST] <sb_> was hoping to cast to an MOVMuxContext and pack the flag in myself
[23:38:58 CEST] <JEEB> ok, so you're like wanting to set a thing for f.ex. the dash muxer's internal mov muxer?
[23:41:16 CEST] <sb_> right, so specifically I've added a custom flag in AVOptions in the MOV/MP4 muxer (works using -movflags from the cli), now I want to set that flag programmatically using libavformat
[23:42:03 CEST] <JEEB> you want the DASH muxer to set it or you just want to set a flag for movenc itself?
[23:42:13 CEST] <sb_> to set one for movenc
[23:42:21 CEST] <JEEB> ok, then no private APIs needed there
[23:43:32 CEST] <JEEB> as far as I can tell av_dict_set(POINTER, "movflags", "one+two+three", 0); should do it
[23:43:57 CEST] <sb_> oh that's interesting
[23:44:19 CEST] <JEEB> in a similar way to https://ffmpeg.org/pipermail/libav-user/2013-June/004817.html
[23:44:20 CEST] <sb_> would AVFormatContext->priv_data be an AVDictionary in addition to AVOptions?
[23:44:54 CEST] <sb_> oh using the second arg in avformat_write_header..
[23:45:00 CEST] <JEEB> pretty sure you don't have to touch the priv_data
[23:45:27 CEST] <sb_> I had looked at the source for that and it didn't appear to pass through the options when calling in to the private muxer
[23:45:34 CEST] <sb_> thanks, I'll give that a go
[23:45:39 CEST] <JEEB> what do you mean with "private muxer"
[23:45:59 CEST] <JEEB> if it's a muxer that calls a muxer (like dashenc), it gets funkier
[23:47:23 CEST] <sb_> I meant to say the way av_write_header calls through to moveenc
[23:47:41 CEST] <sb_> mov_write_header takes a single AVFormatContext arg
[23:48:33 CEST] <JEEB> I hope you took a look at the muxing example
[23:49:51 CEST] <JEEB> muxing for an API user in general is a more high level thing than looking at the avformat internals
[23:51:40 CEST] <sb_> yeah I've already built a muxer
[23:52:05 CEST] <sb_> I've got a fork that I'm adding functionality to, for an eventual contribution back to core
[00:00:00 CEST] --- Wed Apr  6 2016

