[Ffmpeg-devel-irc] ffmpeg.log.20150316

burek burek021 at gmail.com
Tue Mar 17 02:05:01 CET 2015


[00:06:26 CET] <Amethist> again I need help
[00:06:35 CET] <Amethist> I need to execute the following command: "ffmpeg -i C:\Users\bil\Videos\Evanescence - My Immortal.mp4"
[00:06:51 CET] <Amethist> but it gets an error because it executes just ffmpeg -i C:\Users\bil\Videos\Evanescence and stops at the -
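The error above comes from unquoted spaces in the path: the shell splits the argument at each space, so ffmpeg only sees the path up to "Evanescence". A minimal demonstration (using the filename from the question):

```shell
# The shell splits unquoted arguments at spaces, so ffmpeg only sees the
# path up to "Evanescence". Quoting keeps the whole path as one argument.
f="Evanescence - My Immortal.mp4"
set -- $f            # unquoted: splits into 4 words
echo "unquoted: $# arguments"
set -- "$f"          # quoted: 1 word, as intended
echo "quoted: $# argument"
# So the fix is simply:
#   ffmpeg -i "C:\Users\bil\Videos\Evanescence - My Immortal.mp4"
```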
[00:21:34 CET] <eCronik> Hey, guys! Is somebody here?
[00:22:18 CET] <eCronik> I have a problem which I can't figure out - maybe one of you can help me. Hopefully! :)
[00:23:07 CET] <eCronik> I am running ffmpeg on Ubuntu 14.04 in a screened process - it is a player that reads video files out of a folder, randomly.
[00:24:29 CET] <eCronik> The whole process goes through a while loop. In the screen session the switching between files works fine - but I can't detach with ctrl+a and d or with q - it is starting the process again. If I just close the terminal, ffmpeg crashes when the next file change comes.
[00:25:00 CET] <eCronik> Same when I try starting the screen already detached, or if I forward the process to /dev/null and into a log.
[00:25:26 CET] <eCronik> How can I have a working, attachable process of ffmpeg running in the background?
[00:25:37 CET] <eCronik> Hope somebody can help me out. :)
[00:26:12 CET] <eCronik> Btw, am happy that still so many people use IRC obviously. Good old days _
[00:26:27 CET] <eCronik> Even though most are bouncers, probably. :D
[00:29:05 CET] <eCronik> I am using Jon Severinsson PPA btw
[00:30:51 CET] <eCronik> So many users and not a single one actively chatting? :/
[00:31:10 CET] <eCronik> Shit, I need to get myself a bouncer too
[00:32:35 CET] <eCronik> The thing is, when I try to detach from screen with ctrl+a d, it waits a while and kicks me back into the ffmpeg process.
[00:32:48 CET] <eCronik> I can't even exit that loop with ctrl+c or so
[00:33:07 CET] <eCronik> I wish I had more knowledge about Ubuntu
[00:49:08 CET] <relaxed> eCronik: what's the command?
[00:49:38 CET] <eCronik> This is my bash script
[00:49:39 CET] <eCronik> :
[00:49:51 CET] <relaxed> pastie.org
[00:49:53 CET] <eCronik> K
[00:50:10 CET] <relaxed> also show how you're starting screen
[00:50:17 CET] <eCronik> Bash: http://pastie.org/10028641
[00:50:43 CET] <eCronik> Error from the log when sending it to /dev/null: http://pastie.org/10028642
[00:51:16 CET] <eCronik> Hi relaxed, btw :)
[00:54:31 CET] <relaxed> it seems pointless to run a screen session for each loop there
[00:54:57 CET] <relaxed> just run the entire bash script in one session
[00:54:59 CET] <eCronik> Yeah, I realized that when I added the detach option to screen - a lot are open
[00:55:09 CET] <eCronik> How can I do that?
[00:55:57 CET] <relaxed> screen -S vod -d -m /path/to/script
[00:56:19 CET] <relaxed> would start the script and detach from it
[00:57:09 CET] <eCronik> Yeah, that is what I've tried already
[00:57:19 CET] <eCronik> But that doesn't work either, unfortunately
[00:57:21 CET] <relaxed> "for file in $(ls *)"
[00:57:25 CET] <relaxed> is a no-no
[00:57:59 CET] <eCronik> Oh, why? When I start it attached, it seems to work to pick a file randomly
[00:58:13 CET] <relaxed> http://mywiki.wooledge.org/ParsingLs
[01:00:08 CET] <relaxed> http://mywiki.wooledge.org/BashGuide
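The ParsingLs page linked above boils down to using globs instead of `$(ls *)`; a minimal sketch (the directory and extension are assumptions for the example):

```shell
# A glob expands to one word per file, so names with spaces survive intact,
# unlike $(ls *), which gets word-split by the shell.
for file in /path/to/videos/*.mp4; do
    [ -e "$file" ] || continue   # skip the literal pattern if nothing matches
    echo "would play: $file"
done
```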
[01:01:47 CET] <eCronik> Sorry, like I said - I am a bloody Ubuntu beginner.
[01:01:53 CET] <eCronik> So, would that work:
[01:01:54 CET] <eCronik> http://pastie.org/10028652
[01:01:57 CET] <eCronik> ?
[01:02:15 CET] <eCronik> Or replace Ubuntu with Linux in general :D
[01:02:39 CET] <relaxed> that script is no different
[01:03:28 CET] <eCronik> Ok, another try
[01:04:27 CET] <relaxed> remove using screen within the script
[01:05:52 CET] <eCronik> That would be great. And how can I leave it running in the background then, as I have no possibility to detach or get out of it again?
[01:07:04 CET] <relaxed> by running the script like this, screen -S vod -d -m /path/to/bashscript
[01:07:39 CET] <eCronik> Ohh
[01:07:48 CET] <relaxed> then you can attach with screen -dr -S vod
[01:07:59 CET] <eCronik> Misunderstood u when u said it earlier. Let me try this.
[01:08:48 CET] <relaxed> now remove the $(ls * ...) and replace it with an array of the files instead
[01:09:21 CET] <eCronik> Isn't it possible to let it randomly pick a file?
[01:09:49 CET] <eCronik> http://pastie.org/10028660
[01:09:55 CET] <relaxed> pick a file from the array
[01:09:57 CET] <eCronik> Probably not like that, but another try :)
[01:10:42 CET] <relaxed> http://mywiki.wooledge.org/BashGuide/Arrays
[01:11:20 CET] <relaxed> you're on the right track though
[01:11:43 CET] <eCronik> Hihi, challenge accepted
[01:11:45 CET] <eCronik> :D
[01:14:21 CET] <eCronik> Hm, I have no clue tbh, as I don't know about the parameters. Not even the "-e" as a starting point :/
[01:15:15 CET] <eCronik> Maybe I need to use it without the "-e"?
[01:15:27 CET] <meego> Hi, I'm researching how to best downmix 5.1->stereo. It seems a lot of media these days as most if not all dialogues in C(front center) channel. These can result in very low volume for dialogues. I found the
[01:15:39 CET] <meego> aaah typed enter too fast sorry
[01:22:37 CET] <eCronik> Any better? http://pastie.org/10028696
[01:23:39 CET] <meego> Hi, I'm researching how to best downmix 5.1->stereo. It seems a lot of media have dialogues mixed heavily/exclusively in C(front center) channel. These can result in very low volume for dialogues when downmixing to stereo because of the formula used by ffmpeg for downmix (FL = FL + 0.707FC + 0.707BL | FR = FR + 0.707FC + 0.707BR). Does anybody know
[01:23:39 CET] <meego>  the thought process behind these values, and if they can be improved? I understand no formula can be one-size-fits-all and as a matter of fact, even the AC-3 spec (http://www.atsc.org/cms/standards/a_52-2010.pdf) does not recommend precise, unique values for downmixing, but I'm still curious.
[01:26:12 CET] <eCronik> Or http://pastie.org/10028703
[01:26:21 CET] <eCronik> Relaxed, still there?
[01:26:36 CET] <relaxed> yes
[01:26:58 CET] <eCronik> Does it look better/closer?
[01:28:09 CET] <Amethist> anyone here understand php?
[01:30:08 CET] <relaxed> eCronik: http://pastie.org/pastes/10028711/text
[01:30:58 CET] <relaxed> but you probably want files=(/path/to/the/dir/*)
[01:31:55 CET] <Pyro_Killer> hello everyone, I'm kind of new to this entire conversion thing, I see a lot of people using 2 pass, what is the function of enconding something twice, and what are the benefits of doing so, VP8 specifically in this case
[01:33:21 CET] <relaxed> eCronik: run this a few times,  files=(1 2 3 4 5 6 7 8 9 10); echo "${files[$RANDOM % ${#files[@]}]}"
[01:33:28 CET] <eCronik> No, the script is in the same directory as the files. Let me try this.
[01:35:36 CET] <eCronik> What is the purpose of that?
[01:35:56 CET] <relaxed> to show it's random
[01:36:22 CET] <eCronik> Hey, it indeed does work :)
[01:36:25 CET] <eCronik> Ok
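relaxed's one-liner above generalizes from the numbers 1-10 to real files; a sketch (the directory is an assumption):

```shell
# Build the array from a glob, then index it with a random offset.
# bash-specific: arrays and $RANDOM are not in plain POSIX sh.
files=(/path/to/videos/*.mp4)
pick="${files[RANDOM % ${#files[@]}]}"
echo "picked: $pick"
```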
[01:36:43 CET] <eCronik> The video plays, is there a possibility now to attach and detach to it?
[01:37:08 CET] <relaxed> screen -S vod -dr
[01:37:57 CET] <eCronik> What does -dr do? Reattach a session and if necessary detach it first?
[01:38:12 CET] <relaxed> yes
[01:38:13 CET] <eCronik> Even though that would be -d -r
[01:38:24 CET] <eCronik> Is it possible to use them together like this?
[01:38:39 CET] <relaxed> try it
[01:40:06 CET] <eCronik> So I guess the r is for reattach, and d for detach
[01:40:12 CET] <eCronik> Why does the -S have to be in there?
[01:40:20 CET] <relaxed> Pyro_Killer: 2 pass is used to target a specific size
[01:41:38 CET] <relaxed> eCronik: read man screen
[01:41:56 CET] <Pyro_Killer> sounds good, okey, I'll take an example, because no comprende: http://wiki.webmproject.org/ffmpeg/vp9-encoding-guide
[01:42:37 CET] <Pyro_Killer> in this example it runs an encoding and sends the file to /dev/null
[01:43:21 CET] <eCronik> I did, relaxed. But isn't it to specify a name when creating it, not when reattaching?
[01:43:32 CET] <Pyro_Killer> so, if it just sends the output to null, what is the point? If it retains it and keeps the same bitrate, then the quality won't be any better?
[01:43:49 CET] <relaxed> the first pass creates a log file that the second pass uses to distribute the bits in the second encode. The video/audio from the first pass isn't needed.
[01:45:10 CET] <relaxed> that's why they pipe it to /dev/null. Make sense?
[01:45:24 CET] <Pyro_Killer> that is very clever
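The two passes relaxed describes look roughly like this for VP9 (filenames and the bitrate are assumptions for the sketch; see the linked webm wiki guide for real settings):

```shell
# Pass 1: analysis only; the encoded output is discarded, the stats log is kept
ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:v 1M -pass 1 -an -f null /dev/null
# Pass 2: the real encode, using the log to distribute bits where they're needed
ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:v 1M -pass 2 -c:a libopus output.webm
```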
[01:46:21 CET] <relaxed> eCronik: you can have multiple screen sessions running at the same time
[01:47:51 CET] <eCronik> I know, but as screen -r vod brings it back as well, I guess I can just use screen -dr name ?
[01:47:53 CET] <Pyro_Killer> another question, in the first pass it "sacrifices quality for speed" - does that translate to the final product at all? I can understand why in both cases, but curious
[01:48:14 CET] <eCronik> Seems to work, but is it basically not right?
[01:48:31 CET] <relaxed> eCronik: use -S sessionname
[01:50:00 CET] <eCronik> Ok, thanks a lot, relaxed! Appreciate the constructive help and generosity
[01:50:19 CET] <relaxed> you're welcome
[01:50:23 CET] <Pyro_Killer> eCronik: https://www.youtube.com/watch?v=hB6Y72DK8mc
[01:50:35 CET] <Pyro_Killer> helped me a lot
[01:50:39 CET] <eCronik> Is it somehow possible to reduce the gap time between the files or even make it seamless with ffmpeg? Probably not, right?
[01:51:17 CET] <eCronik> Thanks, Pyro_Killer! Two minutes seems a bit short, am curious what it's about! :)
[01:51:31 CET] <Pyro_Killer> it's about screen
[01:51:50 CET] <eCronik> Yeah, I know - listening to it now
[01:54:17 CET] <eCronik> Cool, I didn't know that I can go back and forth between the windows :)
[01:58:30 CET] <eCronik> I think maybe sending something like a black screen in between or so - even though this is probably like changing to another file as well. Or is there a way?
[01:59:47 CET] <relaxed> eCronik: you could concat the videos https://trac.ffmpeg.org/wiki/Concatenate
[02:02:58 CET] <eCronik> Sounds promising - so if I create a playlist and use "ffmpeg -f concat -i mylist.txt -c copy ServerAddress", it copies everything (audio and video) from the files and sends them exactly like they are?
[02:04:58 CET] <eCronik> Or do I need to add acodec and vcodec as well?
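A minimal concat-demuxer sketch (filenames are placeholders); with -c copy nothing is re-encoded, which answers the acodec/vcodec question, but it also means all listed files must share the same codecs and parameters:

```shell
# mylist.txt: one "file" directive per line, paths relative to the list file
cat > mylist.txt <<'EOF'
file 'clip1.mp4'
file 'clip2.mp4'
EOF
# -c copy stream-copies both audio and video as-is
ffmpeg -f concat -i mylist.txt -c copy output.mp4
```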
[02:09:16 CET] <Pyro_Killer> Thank you for your help
[02:13:19 CET] <eCronik> Nvm, I guess I can figure that out with trial and error. Thanks. I have one last question, and then I am done bothering you. :)
[02:13:35 CET] <eCronik> My videos are 720p
[02:14:08 CET] <eCronik> And I have a .png with transparent background I wanna add - it is 1080p and needs to be downsized to 720p as well then of course. How can I do that?
[02:14:57 CET] <eCronik> So the GFX is preserving its original placement, even though it's downsized
[02:15:50 CET] <eCronik> Or downscaled... :)
[02:17:49 CET] <relaxed> eCronik: you should be able to stream copy with concat so the png won't be needed
[02:18:52 CET] <eCronik> Oh, the question about the .png has no connection to the seamless or reduced gap between the videos
[02:19:00 CET] <eCronik> I need it as an overlay
[02:19:07 CET] <relaxed> ah
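For the PNG overlay question, a sketch that scales the 1080p image down to the video size before overlaying (filenames are assumptions, and 1280x720 is assumed for "720p"); scaling the full-frame image keeps the graphic's relative placement:

```shell
# [1:v] = the PNG; shrink it to 1280x720, then lay it over the video at 0:0
ffmpeg -i video_720p.mp4 -i logo_1080p.png -filter_complex \
  "[1:v]scale=1280:720[ovl];[0:v][ovl]overlay=0:0" \
  -c:a copy out.mp4
```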
[02:31:07 CET] <eCronik> Sorry, more questions popped up. When I am reattached to the screen - how can I detach it then, as this always restarts with a new file and leaves me in the session? And how can I combine your "${files[$RANDOM % ${#files[@]}]}" with the mylist.txt? Sorry :/
[02:39:29 CET] <eCronik> Relaxed, I apologize that I am so clueless - hope you can help me out on this one as well
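One way to combine the random pick with mylist.txt is to generate the list before each concat run; a sketch (the directory, the count of 3, and the output are assumptions):

```shell
# Write a few random "file" directives, then hand the list to the concat demuxer
files=(/path/to/videos/*.mp4)
: > mylist.txt
for i in 1 2 3; do
    printf "file '%s'\n" "${files[RANDOM % ${#files[@]}]}" >> mylist.txt
done
# then: ffmpeg -f concat -safe 0 -i mylist.txt -c copy <output>
# (-safe 0 lets the concat demuxer accept absolute paths)
```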
[02:52:33 CET] <Randomcouch> Hey relaxed
[02:52:37 CET] <Randomcouch> are you still there?
[02:56:36 CET] <Randomcouch> I need help with rtsp_transport protocol please
[02:56:46 CET] <Randomcouch> I'm using rtsp input from youtube video rtsp links, and streaming that to an rtmp server. It works, but there are some issues: 1. At the beginning of each video, there's a big frame drop, and then everything continues, video resyncs, and plays normally after
[02:57:08 CET] <Randomcouch>  -max_delay 0 -initial_pause 0 -rtsp_transport udp -i " + inputLink + " -vcodec libx264 -acodec mp3 -ab 48k -bit_rate 450 -r 25 -s 640x480 -f flv "  + stream
[03:04:03 CET] <Randomcouch> Otherwise does anyone know if there's a way to have ffmpeg with quvi / libquvi support on Windows?
[04:04:34 CET] <eCronik> Well, thanks for your help, relaxed. I guess with the other stuff I am on my own then - but thanks to you I've made great progress already. So thanks!
[04:16:24 CET] <eCronik> cya
[06:22:13 CET] <magickpal> hi, i'm trying to supply an alpha channel to a video stream so i can place a PNG underneath it and have it show through, not sure what codec to use, i assume i can just make 2 files some how, i'm currently outputting a quicktime animation codec rgba and that works fine but the file is 2.4 gigs, i figure i can get away with a lot better compression for what i'm doing
[06:24:31 CET] <magickpal> basically i want a sane way (smaller file size) to provide a key to my actual video
[06:27:34 CET] <magickpal> assuming i can probably use -filter_complex join some how but not sure what format my key should be output from after effects as
[10:07:34 CET] <baran> hello guys
[10:08:23 CET] <baran> i try to extract an image from a video, and i get a grey picture (jpeg2000 format)
[10:08:25 CET] <baran> http://pastebin.com/vKmk7Jhp
[10:08:35 CET] <baran> here's input/output
[10:08:54 CET] <baran> maybe someone can point out the problem?
[10:37:58 CET] <Ders> Hi. Can anyone tell me if there's a way to tell when ffmpeg is done processing? (when it's done with the command I gave it)
[10:38:46 CET] <Ders> I want to delete some files after ffmpeg is done with them
[10:45:59 CET] <saste> Ders, you can use a script for that: ffmpeg -i blah ...; rm blah.avi
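A small refinement on saste's suggestion: `&&` instead of `;`, so the input is only deleted if the encode actually succeeded (filenames are placeholders):

```shell
# rm only runs when ffmpeg exits with status 0
ffmpeg -i blah.avi out.mp4 && rm blah.avi
```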
[10:46:31 CET] <Ders> a script? To give some more information, I'm using ffmpeg inside a c++ project
[10:46:35 CET] <t4nk159> Hello, how can I set an AVDictionary option? How do I know the exact key?
[10:46:53 CET] <Ders> I'm calling a command that is executed like in CMD
[10:49:57 CET] <saste> Ders, exec? that is blocking
[10:50:04 CET] <saste> you just have to wait for it to finish
[10:50:23 CET] <saste> or you execute it in a separate thread and call a callback when it's done
[10:51:28 CET] <Ders> @saste something like this: http://stackoverflow.com/questions/5631229/using-exec-to-execute-a-system-command-in-a-new-process
[10:51:43 CET] <Ders> with execl?
[10:53:43 CET] <Ders> I'm currently using popen() to create a command if that's helpful to know
[10:54:09 CET] <saste> Ders, more like this: http://stackoverflow.com/questions/3736210/how-to-execute-a-shell-script-from-c-in-linux
[10:54:28 CET] <Ders> I'm on windows saste
[10:54:34 CET] <saste> exec was not a good idea
[10:54:48 CET] <saste> Ders, then I'm sorry for you ;-)
[10:54:56 CET] <Ders> haha :) damn
[10:55:34 CET] <saste> what's the equivalent of system in windows?
[10:55:57 CET] <Ders> system should work I think: http://www.cplusplus.com/reference/cstdlib/system/
[10:56:07 CET] <Ders> I've just found that that is a blocking command
[10:56:20 CET] <Ders> or it should be at least
[11:18:02 CET] <magickpal> i'm trying to do an alpha merge and an overlay and getting a buffer queue overflow, command is  -i output.mkv -i alphaextracted.mov -i composite.png -filter_complex "[0:v] [1:v] alphamerge [comp]; [2:v][comp] overlay=eof_action=endall"
[11:18:33 CET] <magickpal> heres the mapping and error, not sure why it isn't working http://dpaste.com/27QZETE
[11:19:08 CET] <magickpal> oh maybe i need to loop the composite.png
[11:21:51 CET] <saste> magickpal, what happens if you place a fifo filter before the overlay
[11:21:54 CET] <saste> ?
[11:23:43 CET] <magickpal> will check it out, just running a test render
[11:32:42 CET] <magickpal> saste: like this?  "[0:v] [1:v] alphamerge [comp]; [2:v] scale=w=852:h=480 [bg];[bg] fifo [bg]; [bg][comp] overlay=eof_action=endall"
[11:33:04 CET] <saste> magickpal, that won't work
[11:33:24 CET] <saste> "[0:v] [1:v] alphamerge [comp]; [2:v] scale=w=852:h=480, fifo [bg]; [bg][comp] overlay=eof_action=endall"
[11:33:59 CET] <magickpal> ahh okay cheers
[11:34:27 CET] <magickpal> looks like that doesn't fix it anyway, just adding -loop 1 before the -i composite.png does
[11:34:52 CET] <magickpal> i guess the png stream needs repeating so it can be overlayed eh
[11:35:11 CET] <saste> magickpal, uh
[11:35:28 CET] <saste> dunno, in case you want to loop the gif, yes
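Putting the thread's conclusion together: -loop 1 goes in front of the PNG input so the still image keeps producing frames for the overlay (input paths are from the discussion; the output name is an assumption):

```shell
# -loop 1 applies to the following input (composite.png) only
ffmpeg -i output.mkv -i alphaextracted.mov -loop 1 -i composite.png -filter_complex \
  "[0:v][1:v]alphamerge[comp];[2:v]scale=w=852:h=480,fifo[bg];[bg][comp]overlay=eof_action=endall" \
  merged.mkv
```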
[11:35:49 CET] <wh-hw> hi, could I input multiple files and output multiple files in one cmd?
[12:13:40 CET] <techtopia> o/
[12:14:05 CET] <techtopia> hello guys, I have a japanese transport stream cap that I'm having trouble with
[12:14:28 CET] <techtopia> i can't cut it with VideoReDo, and if I try to encode it with ffmpeg it errors out
[12:19:13 CET] <techtopia> http://pastebin.com/xaWNYZbS
[12:19:18 CET] <techtopia> here is the console log
[12:20:01 CET] <BtbN> looks like the video stream, or the whole file, is corrupted
[12:21:55 CET] <techtopia> :( it plays fine in a media player, and i ran it through ts doctor and it found no errors
[12:22:14 CET] <techtopia> is it possible that japanese characters in the header info could be throwing it off?
[14:05:01 CET] <sly007> Hello, I am trying to transcode a TS file which I grabbed from my PVR, I have a « non-existing PPS 0 referenced » error. What shall I do?
[14:24:43 CET] <d3fault> given a directory of audio and video files with loosely overlapping times, does ffmpeg (or any open source tool) provide a way to automatically mux/sync the two streams' overlapping time-spans together based on their timestamps and durations alone?
[14:25:50 CET] <d3fault> i know it can be done with ffprobe and duct tape, i'm about to code it... but thought i'd check to see if anyone else has done it already
[14:36:00 CET] <juzzlin_> Hi, I'm trying to cross compile ffmpeg for Android on Ubuntu 14.04. My problem is that the configure says "ERROR: x265 not found using pkg-config". But the thing is that the Android version of x265 library is installed under the NDK and if I run pkg-config in Ubuntu terminal it is found correctly.
[14:36:29 CET] <juzzlin_> And yes, I need x265..
[14:38:05 CET] <juzzlin_> I have set PKG_CONFIG_LIBDIR environment variable to point to the correct location under the NDK.
[14:41:24 CET] <termos> How would I go about getting a static image as input to my filter graph using the FFmpeg library?
[14:44:29 CET] <cousin_luigi> Greetings.
[14:45:08 CET] <cousin_luigi> How come I'm seeing a significant quality loss from h264 to h265 even with crf=28 ?
[14:46:55 CET] <techtopia> you're encoding from h264 to h265?
[14:48:01 CET] <techtopia> 1, that's dumb. 2, crf=28 is low quality
[14:48:23 CET] <techtopia> go for crf18 if you want good quality
[14:48:32 CET] <techtopia> but encoding from x264 is pointless imo
[14:49:08 CET] <techtopia> if it were from a better source to x265 it would make sense
[14:49:16 CET] <techtopia> but what you're doing is gigo
[14:50:07 CET] <d3fault> unless you factor in bitrates
[15:13:08 CET] <cousin_luigi> techtopia: "gigo"?
[15:13:36 CET] <cousin_luigi> techtopia: I was under the impression crf=28 was the best possible on h265
[15:14:29 CET] <cousin_luigi> I'm trying to resize stuff. Also from wmv to h265.
[15:14:49 CET] <cousin_luigi> (although the quality loss is apparently insignificant)
[15:15:20 CET] <JEEBsv> there's no "best", and the CRF ranges etc are completely different between x264 and x265
[15:15:58 CET] <JEEBsv> also the presets may share names between the two encoders, but you should not think they are comparable
[15:16:25 CET] <JEEBsv> in other ways than "placebo is the slowest" and "ultrafast is the fastest"
[15:17:32 CET] <cousin_luigi> I see.
[15:18:04 CET] <cousin_luigi> How do you suggest I proceed if I wanted to reduce the size of h264 videos without ruining them too much?
[15:18:56 CET] <JEEBsv> the same way you do with x264. find the highest crf value that still looks good, pick a preset, adjust the crf according to the preset if needed, encode
[15:19:55 CET] <cousin_luigi> JEEBsv: does the preset also affect quality?
[15:21:03 CET] <JEEBsv> they affect the results of CRF; different settings affect various parts of CRF calculations
[15:21:48 CET] <JEEBsv> which is why I have the adjustment there after picking the preset, although you can also do the preset picking first, and then with that preset find your CRF value
[15:21:55 CET] <JEEBsv> (for that kind of content)
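JEEBsv's workflow can be sketched as encoding a short sample at a few CRF values and keeping the highest one that still looks good (the CRF values, offsets, and filenames here are assumptions for the sketch, not recommendations):

```shell
# Encode 30 seconds starting at 1:00 at several CRFs, then compare by eye;
# higher CRF = smaller file, so keep the highest value that still looks good.
for crf in 20 23 26 28; do
    ffmpeg -ss 60 -t 30 -i input.mkv -c:v libx265 -preset medium -crf "$crf" \
        "sample_crf${crf}.mkv"
done
```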
[15:47:32 CET] <eCronik> Hey, guys! I have a question regarding Concatenate. I am playing video files out of a folder - one by one. With Concatenate, would it be possible, to not have any gaps between the file switches anymore?
[16:59:56 CET] <Fr4nkster> Hi - I've a problem adding an overlay with transparency to my video. Using ffmpeg 1.2.6 - what is the right command to add a .png without any alignment or scaling (720p - 720p overlay)? Thanks! :)
[17:11:22 CET] <brontosaurusrex> any clues what I could try here http://paste.debian.net/plain/161569 ? "Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument"
[17:14:53 CET] <techtopia> [mpeg2video @ 0x7fabb9836000] Invalid frame dimensions 0x0.
[17:14:58 CET] <techtopia> think thats your issue
[17:15:12 CET] <brontosaurusrex> techtopia: and?
[17:15:33 CET] <techtopia> do you know the source resolution?
[17:15:38 CET] <brontosaurusrex> yes
[17:15:42 CET] <brontosaurusrex> 1920x1080
[17:17:08 CET] <skyroveRR> Hello
[17:22:46 CET] <__jack__> does someone here know a tool for creating mpeg-dash mpd? I've a bunch of chunks and I want to stream them with dash
[17:23:03 CET] <__jack__> (maybe someone has mpeg-dash mpd docs!)
[17:59:45 CET] <eCronik> Guys, could somebody assist me please for a simple overlay solution (no alignment, no scaling - just adding it 1:1)?
[18:04:01 CET] <eCronik> I have 720p footage and a 720p .png with partial transparency - so I really just need to lay it over. But I tried -vf, -filter_complex and a lot of options already. Some without any errors - but it doesn't show up.
[18:09:27 CET] <kmax> Hello, I am trying to stream RTSP from an IP camera to flash in a web browser. I know ffmpeg is working because I was able to use it to create an image from my rtsp stream every 1 second.  I am using the ffserver.conf file I found in the test directory of the install  *Here is the ffmpeg command I have been trying: ffmpeg -i "rtsp://admin:admin@192.168.1.214:554/cam/realmonitor?channel=1&subtype=1" -f flv -r 25 -s 640x480 -an "rtmp:
[18:10:52 CET] <kmax> here is the console output
[18:11:15 CET] <kmax> http://pastebin.com/eFqD4w7N#
[18:13:05 CET] <techtopia> well
[18:14:06 CET] <techtopia> as for your encoding line it should be -r 15 not 25 and -s 704x480 not 640x480
[18:14:21 CET] <techtopia> to match the source
[18:14:22 CET] <techtopia> Stream #0:0: Video: h264 (Main), yuvj420p(pc), 704x480, 15 fps, 100 tbr, 90k tbn, 30 tbc
[18:14:31 CET] <techtopia> but the issue is a connection issue
[18:16:00 CET] <techtopia> the rtsp stream is forwarding you to tcp://localhost:1935 which is probably the service it's running
[18:16:12 CET] <techtopia> and ffmpeg cannot access that
[18:21:17 CET] <kmax> thanks for the info about the command having to match the source parameters.
[18:21:21 CET] <kmax> hummmm... I am just doing this on the local LAN... so I did not think I would have to open up port 1935
[18:21:39 CET] <kmax> its on a CENTOS box
[18:26:44 CET] <kmax> does the ffserver.conf have to contain the port 1935?
[18:28:29 CET] <hassoon> is avconv's syntax the same as ffmpeg's ?
[18:29:19 CET] <__jack__> hassoon: nope, not always
[18:29:39 CET] <__jack__> well, avconv's may work with ffmpeg, ffmpeg's may not work with avconv
[18:30:56 CET] <hassoon> 'thing here is that ffmpeg is missing in debian 8 ya knaw
[18:31:07 CET] <hassoon> but there is avconv around :s
[18:31:19 CET] <hassoon> should i download ffmpeg and install it manually then ?
[18:31:51 CET] <__jack__> install it from unstable
[18:33:18 CET] <__jack__> just configure apt not to upgrade all packages
[18:55:59 CET] <kmax> anyone have any tips with a connection error on a local host port 1935?  i should not need to open ports on my router because its local
[21:03:46 CET] <spaam> ubitux: nice guide :)
[21:03:58 CET] <ubitux> e
[21:04:10 CET] <ubitux> feel free to make a more appropriate trac page
[21:04:41 CET] <ubitux> might be better to redirect users to the trac than a random blog post
[21:07:25 CET] <spaam> yes. easier to find :)
[22:27:59 CET] <goulard> running a transcode/cut of an encoding using libffmpeg, and the output video is brighter than the input video.  Is there something I need to do to maintain the color brightness characteristics between decode and reencode?
[22:44:11 CET] <Mavrik> not really.
[22:44:26 CET] <Mavrik> It seems something is going wrong in your library or your settings.
[22:49:21 CET] <goulard> Mavrik - Can you think which setting I might have wrong?
[22:49:42 CET] <Mavrik> I'd say brightness no? :P
[22:49:52 CET] <Mavrik> You didn't really provide any useful information beyond that :/
[23:35:15 CET] <jgh-> Is there any way of synchronizing multiple outputs?  I.e. I am ingesting a live stream, transcoding to multiple layers, and then outputting to rtmp but i'm finding the source quality level is a couple seconds ahead of the transcoded layers
[23:35:58 CET] <kmax> am i still connected?
[23:40:45 CET] <kmax> can someone help me with a connection error?
[23:41:06 CET] <kmax> i am just doing this on the local LAN so there should be no ports to open etc
[23:41:08 CET] <kmax> http://pastebin.com/eFqD4w7N#
[23:56:28 CET] <pqatsi> Compiling software on my Gentoo system, I can choose to use fdk or libav-aac to encode aac. What should I prefer nowadays?
[23:59:29 CET] <relaxed>  fdk
[00:00:00 CET] --- Tue Mar 17 2015


More information about the Ffmpeg-devel-irc mailing list