[Ffmpeg-devel-irc] ffmpeg.log.20160526

burek burek021 at gmail.com
Fri May 27 02:05:01 CEST 2016


[00:07:43 CEST] <andross> okay this encoding example got weird
[00:08:00 CEST] <andross> why couldnt the example just encode an input wav rather than a test tone
[00:09:05 CEST] <JEEB> doesn't require an input file I guess
[00:09:21 CEST] <JEEB> there's other examples doing demuxing and decoding into avframes
[00:09:37 CEST] <JEEB> and then you just feed the encoder those avframes, om nom nom
[00:12:45 CEST] <andross> think ill leave it there for now
[00:44:51 CEST] <SeanM_> Hi, was wondering if anyone could help me. I know that -ss 00:00:4.000 is the command to start ffmpeg at a specific time (4 seconds in), but is there a way to do that for the ending? I want to remove (outdated) outros from a large group of videos. Basically cut off the last 5 seconds of a movie, where all the lengths vary but the part being removed is always the same length (5 sec).
[00:45:39 CEST] <JEEB> not that I know, you can just set a seek and a duration
[00:52:24 CEST] <SeanM_> Thanks. Was hoping for an easier method though, but I guess it might not be possible to do it any other way.
[00:52:26 CEST] <BenMcLean> Hey there folks
[00:53:53 CEST] <BenMcLean> Does anyone know how I'd convert from h264 mp4 to mpeg or whatever else Sony Vegas would accept without any stupid synchronization issues ?
[00:55:09 CEST] <BenMcLean> Or on second thought, the fact that Sony Vegas won't import h264 mp4 means it is obsolete. Can anyone recommend an inline video editor that doesn't suck? Preferably a free one?
[00:55:57 CEST] <furq> BenMcLean: http://www.openshotvideo.com/
[00:56:09 CEST] <furq> i have no idea if that sucks, but it uses the ffmpeg libs so it can actually import useful formats
[00:56:18 CEST] <furq> and export, for that matter
[00:57:47 CEST] <BenMcLean> furq, thanks! I have been needing to cut dependencies on software ... that I can't actually afford
[00:58:41 CEST] <BenMcLean> I'm hoping this will let me cut and arrange videos, sounds, images and text in HD without a crap ton of technical mumbo jumbo
[00:58:51 CEST] <furq> i think vegas uses directshow, so it probably can import mp4 if you install the jimmy codec pack or whatever
[00:58:59 CEST] <furq> but i'd steer clear of anything that uses directshow anyway
[00:59:49 CEST] <BenMcLean> I'm gonna restart so I can try out OpenShot. BRB
[01:05:54 CEST] <BenMcLean> OK well, OpenShot video editor opened up, imported my clip, I dragged it onto the timeline, then pressed play to preview, and it immediately had an epic fail crash.
[01:06:16 CEST] <llogan> there is also shotcut
[01:06:19 CEST] <BenMcLean> This suggests to me that this is an app which is probably not ready for primetime.
[01:06:40 CEST] <BenMcLean> Since, y'know, even the most basic of operations resulted in an instant crash
[01:07:16 CEST] <llogan> and lightworks
[01:08:44 CEST] <BenMcLean> I'll give shotcut a try
[01:08:46 CEST] <llogan> or buy a one month plan for premiere pro cc for $20
[01:09:19 CEST] <llogan> oh, it's $30 unless you buy annual plan
[01:09:38 CEST] <BenMcLean> Hell no, I want a program that I know is gonna work every time forever no matter whether Adobe likes me or not
[01:09:50 CEST] <BenMcLean> I want Audacity with video, in other words.
[01:09:59 CEST] <furq> lightworks has the same issues with importing/exporting common formats iirc
[01:10:11 CEST] <llogan> i avoid these issues by no longer doing any editing
[01:10:15 CEST] <BenMcLean> well I am gonna try shotcut
[01:10:40 CEST] <BenMcLean> I am not trying to do anything that is even close to complicated here, that's the crazy part
[01:11:08 CEST] <furq> if you just want a visual editor rather than an NLE then you could try avidemux
[01:11:45 CEST] <BenMcLean> I need an NLE
[01:11:54 CEST] <furq> shotcut looks quite neat actually
[01:14:01 CEST] <furq> although the fact that the first question in the FAQ is "Why does it crash on Windows upon launch?" doesn't inspire confidence
[01:14:14 CEST] <furq> followed by "Why does it frequently crash on Windows?"
[01:16:30 CEST] <BenMcLean> Shotcut seems to be working perfectly for me so far.
[01:17:34 CEST] <BenMcLean> I love the fact that its project format is based on XML
[01:17:51 CEST] <BenMcLean> It's about time somebody started using a video editing save file format that makes sense
[01:18:30 CEST] <BenMcLean> I was looking at some tutorials the other day for how to use Blender as a video editor ...it's an ugly way to work
[01:18:44 CEST] <BenMcLean> the program is just totally not intended for that use case
[01:19:57 CEST] <BenMcLean> ooh ... it has overlay HTML filter? Like, I can plop a web page right into my videos?? i'm hoping that's what it means. that would be SUPER useful!
[01:20:31 CEST] <BenMcLean> as a test, I am making a short video where I hold up Star Trek action figures in front of my camera phone.
[01:20:44 CEST] <BenMcLean> it's called "Odo & Quark Discuss Election 2016"
[01:25:41 CEST] <farfel> good evening
[01:26:01 CEST] <farfel> I have built ffmpeg with --enable-libopenjpeg
[01:26:17 CEST] <farfel> but, when I decompress an image, it seems that ffmpeg uses its native decoder
[01:26:22 CEST] <BenMcLean> farfel where does your nickname come from?
[01:26:42 CEST] <farfel> The Inspector General.....
[01:26:49 CEST] <farfel> with Danny Kaye
[01:27:14 CEST] <llogan> Not the off-screen dog in a Seinfeld episode?
[01:27:28 CEST] <farfel> :) no
[01:27:58 CEST] <llogan> ffmpeg -c:v libopenjpeg -i input ...
[01:28:06 CEST] <farfel> farfel is a jewish pasta dish from eastern europe
[01:28:11 CEST] <farfel> so I can see seinfeld using it
[01:28:51 CEST] <farfel> https://www.youtube.com/watch?v=RuU9gtsjzww
[01:29:41 CEST] <farfel> but, enough about me
[01:30:24 CEST] <farfel> llogan: awesome, thanks
[01:31:01 CEST] <farfel> so, it looks like the libopenjpeg decoder could be improved on
[01:31:17 CEST] <farfel> determining pixel format
[01:31:50 CEST] <farfel> is there an interest in adding support for broadcast profiles in the encoder ?
[01:32:13 CEST] <farfel> then it can be muxed into mpeg ts
[01:32:30 CEST] <farfel> broadcast profiles and elementary stream headers
[01:33:07 CEST] <kyleogrg> hello
[01:33:51 CEST] <kyleogrg> I'd like to use a video duration as a factor in a bitrate calculation, automatically
[01:34:47 CEST] <kyleogrg> In a command line.  So how can I do something like: -b:v (duration*x)/y
[01:35:15 CEST] <kepstin> kyleogrg: i.e. you're trying to target an exact output filesize?
[01:35:41 CEST] <kyleogrg> yes, calculate the size of a whole batch of videos
[01:36:11 CEST] <kepstin> kyleogrg: as far as I know, there's no way to do that in a single ffmpeg command. You'd probably have to write a script that uses e.g. ffprobe to find the duration, then calculates the bitrate to give to ffmpeg.
[01:36:31 CEST] <pzich> what's the end goal? there's a lot more than duration to factor in to calculating a good bitrate
[01:37:14 CEST] <kyleogrg> hmm, just something i've wondered
[01:37:51 CEST] <kepstin> yeah, the *only* reason to use duration as a factor is if you have a fixed max filesize, for example for a video upload site or physical media.
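
A minimal sketch of that ffprobe-then-calculate approach, assuming bash and bc are available; the 700 MiB budget and the 128 kbit/s audio figure are just placeholder numbers:

    duration=$(ffprobe -v error -show_entries format=duration -of default=nw=1:nk=1 input.mp4)
    target_kbit=$((700 * 8192))    # rough total budget: ~700 MiB expressed in kilobits
    audio_kbit=128
    video_kbit=$(echo "$target_kbit / $duration - $audio_kbit" | bc)
    ffmpeg -i input.mp4 -c:v libx264 -b:v "${video_kbit}k" -c:a aac -b:a "${audio_kbit}k" output.mp4
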
[01:38:03 CEST] <BenMcLean> llogan thank you SO MUCH for recommending Shotcut. It seems to do absolutely everything I want real fast and intuitive with no nonsense for FREE
[01:38:22 CEST] <llogan> good to hear, but thank Dan Dennedy, the author.
[01:39:00 CEST] <kyleogrg> okay
[01:39:04 CEST] <BenMcLean> I'm not sure how well it will handle larger scale projects though, when I have to edit half an hour or more of footage from many sources. Will also need to figure out how to slice up clips out of DVDs at some point
[01:39:09 CEST] <kyleogrg> thanks for the help
[01:39:55 CEST] <BenMcLean> my little brothers were bugging me a couple weeks ago about what program to use that's free and I felt bad I couldn't recommend anything but now I know! :D
[01:40:05 CEST] <furq> kyleogrg: don't use -b:v in general unless you have a hard filesize constraint
[01:40:20 CEST] <furq> with x264, that is
[01:41:06 CEST] <kyleogrg> yeah.  i usually use crf
[01:51:23 CEST] <DHE> you'd also want 2-pass to ensure good quality and that your goal is actually met
[01:59:20 CEST] <BenMcLean> Well, all my problems are totally solved. Thanks everybody! :D
[02:00:35 CEST] <emitchell> Hey all, got a question. Trying to decide how to split a live video stream into files based off changes on an API and I am not sure if splitting the video live will cause issues with a) the incoming stream that we are recording and b) the resulting split video file.
[02:03:09 CEST] <thebombzen> emitchell: "based off changes on an API" is really vague
[02:03:31 CEST] <thebombzen> but if you want to double the video stream, you could try mapping it twice. i.e. -map 0:v -map 0:v
[02:03:50 CEST] <thebombzen> if you want to duplicate it to two files just provide two outputs
[02:04:47 CEST] <thebombzen> you might be looking for ffmpeg <input options> -i live_stream -c copy stream_dump1.mkv -c copy stream_dump2.mkv
[02:05:40 CEST] <emitchell> so we are polling an API looking for an end time, and would like to cut the stream into a file (this is for slicing up a sporting event by matches)
[02:05:58 CEST] <Bermond> Congratulations for the nice code of conduct that has just been added to the ffmpeg project documentation. Wise words. http://git.videolan.org/?p=ffmpeg.git;a=blobdiff;f=doc/developer.texi;h=4d3a7aef941368a55a166814c92049a4c99e6a8f;hp=6db93cef707076b511b57f52e50656a64e86c154;hb=89e9393022373bf97d528e6e9f2601ad0b3d0fc1;hpb=defab0825f416c665b0ba55cdcb9f39bc14a1dfa
[02:06:27 CEST] <emitchell> but ill look into the double mapping
[02:08:08 CEST] <emitchell> thanks!
[02:12:12 CEST] <emitchell> to clarify, we have a sporting event that streams for 10+ hours a day.  in each stream are multiple 2m45s matches.  there's an api that tells us when each match starts and ends, and we're able to map that to timestamps within the video.  we'd like to split that match out of the recording that we're saving via ffmpeg and then upload it to youtube, the only issue is that the file is being written to since the stream is still happening when we're trying to split it
[02:32:46 CEST] <SeanM_> Wondering if anyone could help me with this - I found this command (start from end of file, in this case 5 seconds from the end):    ffmpeg -sseof -5 -i Input.mp4      that works but I want to cut/remove everything from that point on. Any ideas how to do this?
[03:12:21 CEST] <thebombzen> emitchell: if you're on linux, you can use a named pipe
[03:12:26 CEST] <thebombzen> or *nix
[03:12:47 CEST] <thebombzen> instead of writing to a file, run mkfifo <filename> to make a named pipe
[03:13:12 CEST] <thebombzen> then, you can write to the pipe by opening it for output. any program that opens it for input will see what you write
[03:13:37 CEST] <thebombzen> so, say you have a stream grabber. you can have that streamgrabber write the output to mypipefile
[03:13:44 CEST] <thebombzen> and then run ffmpeg -i mypipefile
[03:14:02 CEST] <thebombzen> using ffmpeg, you save it to the actual file on the drive, and have the split go whereever you want.
[03:14:44 CEST] <thebombzen> alternatively, you can have your command write the streaming file to Standard Out, and pipe the output to FFmpeg, with ffmpeg -i - to read from standard in.
[03:15:02 CEST] <thebombzen> that second option works on all platforms. dunno if mkfifo works on a mac. def not on Windows tho.
[03:16:49 CEST] <thebombzen> by that second one, I mean instead of something like "videostreamlistener filename_to_save_to" try "videostreamlistener - | ffmpeg -i - <other stuff>"
[03:17:03 CEST] <thebombzen> and have it interpret - to be stdout or stdin, depending on context.
[03:17:51 CEST] <thebombzen> where you'd do "videostreamlistener - | ffmpeg -i - -c copy stream_dump.mkv -c copy to_upload_to_youtube.mkv"
[03:18:03 CEST] <thebombzen> if you get my drift
[03:18:41 CEST] <thebombzen> SeanM_: not sure what you're asking
[03:18:56 CEST] <thebombzen> because there's nothing after the end of the file, how could you remove everything from that point on?
[03:19:06 CEST] <thebombzen> If you're trying to truncate a file, the -t option does that.
[03:20:02 CEST] <SeanM_> Basically I am trying to remove the last five seconds (outro) from a file. And I thought that's the best way to jump to the exact point, but I want to cut everything from that point on.
[03:20:02 CEST] <thebombzen> that is, ffmpeg -i input_file -t 00:06:10.5 <codec options> output_file will truncate the file to 6 minutes and 10.5 seconds in length. i.e. after it has encoded that much time it will end.
[03:21:21 CEST] <SeanM_> That is helpful but is there a way to subtract the last five seconds only, without providing a specific time stamp like that?
[03:25:08 CEST] <thebombzen> SeanM_: I don't think there's a way to do it easily.
[03:27:44 CEST] <SeanM_> Thanks bombzen, that's too bad, I've been searching for like a week on how to do this and can't find anything easy.
[03:28:33 CEST] <SeanM_> I basically want to do the complete opposite of the -sseof command, to always end it five seconds from the end of file rather than starting it there... figured there would be a way to do that easily but it seems not.
[03:28:56 CEST] <thebombzen> SeanM_: you can use BASH to do it though
[03:28:58 CEST] <thebombzen> printf '%s - 5\n' "$(ffprobe decisive_battle.mkv -show_format -of flat 2>/dev/null | grep format.duration | sed 's/format.duration=//')" | tr -d '"' | bc
[03:29:21 CEST] <thebombzen> that was an example I used, but replace "5" at the beginning with the duration to crop and decisive_battle with the inputfile
[03:29:45 CEST] <thebombzen> so you could put this in get_cropped_duration.sh:
[03:29:47 CEST] <thebombzen> printf "%s - ${CROP_TIME}\n" "$(ffprobe ${INPUT} -show_format -of flat 2>/dev/null | grep format.duration | sed 's/format.duration=//')" | tr -d '"' | bc
[03:30:12 CEST] <thebombzen> or $1 and $2 if you don't want to do any work
[03:30:54 CEST] <SeanM_> Thanks very much, that helps a lot!
[03:31:40 CEST] <thebombzen> then you could do ffmpeg -i input.mkv -t $(get_cropped_duration.sh 5 input.mkv)
[03:31:55 CEST] <thebombzen> you're welcome :D
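
A slightly tidier sketch of the same idea (assumes bash and bc; note that with -c copy the cut may land on the nearest keyframe rather than the exact second):

    #!/bin/bash
    # trim_tail.sh -- drop the last N seconds of a file
    # usage: ./trim_tail.sh input.mp4 5 output.mp4
    in=$1; tail_secs=$2; out=$3
    dur=$(ffprobe -v error -show_entries format=duration -of default=nw=1:nk=1 "$in")
    ffmpeg -i "$in" -t "$(echo "$dur - $tail_secs" | bc)" -c copy "$out"
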
[05:05:25 CEST] <drazin> so i'm running a script that uses ffmpeg to convert some videos and there seems to be a bug where it's selecting both audio tracks as default when there are more than one as described here -- https://github.com/mdhiggins/sickbeard_mp4_automator/issues/360
[05:05:32 CEST] <drazin> they claim its a FFMPEG bug
[05:05:37 CEST] <drazin> anyone know anything about this?
[07:43:03 CEST] <thebombzen> drazin: it's not a bug
[07:43:20 CEST] <thebombzen> drazin: you can do ffmpeg -map
[07:44:34 CEST] <thebombzen> suppose you do ffmpeg -i input and you see Stream 0:0, video. Stream 0:1, audio, 5.1. Stream 0:2, audio, stereo. You can do ffmpeg -i input -map 0:0 -map 0:1 to deselect the second audio stream
[07:44:47 CEST] <thebombzen> (when you transcode it that is)
[07:45:23 CEST] <thebombzen> if your audio player isn't automatically selecting one of them then it's poorly written, as this is a common practice (5.1 audio and stereo audio in the same container)
[07:45:41 CEST] <thebombzen> either get a better player like MPV or use -map and copy it to a new one.
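
Two hedged variants of that: drop the extra track outright with -map, or keep both tracks and only change which one carries the default flag with -disposition:

    # keep just the video and the 5.1 track
    ffmpeg -i input.mkv -map 0:0 -map 0:1 -c copy surround_only.mkv
    # keep everything, but flag only the first audio track as default
    ffmpeg -i input.mkv -map 0 -c copy -disposition:a:0 default -disposition:a:1 0 fixed_default.mkv
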
[08:05:52 CEST] <odigem> hi
[08:21:49 CEST] <Micke__> Hi! I'm trying to use ffserver and ffmpeg to transcode a video stream to a different format. Things start up the way I want them to but the video served from ffserver is only a static image. Does anyone have any clues as to what I might have configured wrongly?
[09:36:06 CEST] <coolandsmartrr> Can anyone help me with this issue? http://superuser.com/q/1081011/598139
[09:47:33 CEST] <yagiza> Hello!
[09:56:34 CEST] <c_14> coolandsmartrr: use -preset, not -vpre
[09:57:07 CEST] <coolandsmartrr> @c_14: give me this:
[09:57:08 CEST] <coolandsmartrr> Unrecognized option 'preset'.
[09:57:08 CEST] <coolandsmartrr> Error splitting the argument list: Option not found
[09:57:24 CEST] <c_14> Does your ffmpeg build have libx264 support?
[09:57:33 CEST] <coolandsmartrr> I think so, how do I check?
[09:58:11 CEST] <c_14> ffmpeg -encoders | grep h264
[09:58:27 CEST] <c_14> Or check in the configuration line printed by the ffmpeg binary
[09:59:49 CEST] <coolandsmartrr> Gives me the configuration, but I dont see h264
[10:00:19 CEST] <c_14> Then your build doesn't have support for libx264
[10:01:16 CEST] <coolandsmartrr> okay, then are there recommended configurations to encode videos superfast?
[10:03:52 CEST] <c_14> If you're talking speed/quality/filesize nothing's really better than x264
[10:04:39 CEST] <coolandsmartrr> okay, so to get it, I recompile?
[10:04:39 CEST] <yagiza> Can any1 tell me, what to do with this error message:
[10:04:39 CEST] <yagiza> Custom AVIOContext makes no sense and will be ignored with AVFMT_NOFILE format.
[10:05:41 CEST] <c_14> coolandsmartrr: yes, or use a static build or something from your distro (if you have a decent distro)
[10:06:04 CEST] <coolandsmartrr> Im using Sun OS, not sure if there is a static build out there
[10:06:51 CEST] <c_14> There might be, but none I know of. You'll have to compile yourself in that case.
[10:07:08 CEST] <phreezie> Hi, I'm trying to convert about 2000 jpeg images to a video stream with one frame for each jpeg. The jpeg images are generated in realtime and piped into ffmpeg. The problem is that only about 250 of the 2000 frames end up in the video. Is it possible that ffmpeg receives the images faster than it can process it and aborts after a while? The command line: ffmpeg -f image2pipe -r 25 -vcodec mjpeg -i pipe:0 -y -r 24 -vcodec libx264 -
[10:07:42 CEST] <phreezie> When adding the -re input option, I get better results, but still only about 500 frames.
[10:09:29 CEST] <c_14> don't use -r 25, use -framerate 25
[10:09:57 CEST] <phreezie> @c_14: For both input and output?
[10:10:01 CEST] <c_14> just input
[10:10:18 CEST] <phreezie> Alright, let me try that!
[10:10:28 CEST] <c_14> that will still drop frames though
[10:10:38 CEST] <c_14> should drop about 1 frame per second
[10:15:35 CEST] <phreezie> @c_14: So with "ffmpeg -f image2pipe -vcodec mjpeg -framerate 25 -i pipe:0 -y -r 25 -vcodec libx264 -f mp4 frames.mp4", I'm still piping in 2118 frames and getting only 417 in the video
[10:15:58 CEST] <c_14> can you upload the complete console output to a pastebin service?
[10:16:05 CEST] <phreezie> sure
[10:21:07 CEST] <phreezie> @c_14 gimme a minute, I'm using a node wrapper for ffmpeg and i need to figure out the -framerate thing properly (didn't work before actually)
[10:24:49 CEST] <phreezie> @c_14: There we go: http://pastebin.com/uEjGv9UG
[10:27:11 CEST] <c_14> hmm, it doesn't look like it's dropping frames
[10:27:51 CEST] <c_14> What if you remove the -r 25 output option? Have you tried setting -preset to fast/veryfast/ultrafast ?
[10:28:00 CEST] <phreezie> "ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 frames.mp4" gives 209
[10:29:31 CEST] <phreezie> ok lemme try the preset (without -r 25 I'm getting 202 frames in the video).
[10:32:57 CEST] <phreezie> -preset ultrafast didn't change much (202 frames in video).
[10:33:48 CEST] <phreezie> I actually think ffmpeg tries to keep up in the beginning, fails and stops encoding
[10:38:12 CEST] <c_14> So I just created 2118 jpegs to test with and then ran cat *jpg | ffmpeg <arguments>. and I'm getting 2118 frames
[10:39:24 CEST] <c_14> Maybe try that as well? To make sure it's not how you're piping them that's the issue
[10:39:26 CEST] <phreezie> When using the -re option and delaying the frame generation for 30ms each frame, I actually got *more* frames in the video (around 2500). Looks like a synchronization issue?
[10:40:50 CEST] <c_14> What about delaying the frames without using -re?
[10:41:41 CEST] <phreezie> I thought about that as well. It's difficult to debug, I tried piping them to the file system instead of ffmpeg and the jpegs get generated as they should..
[10:42:03 CEST] <phreezie> That's the funny thing. That doesn't change anything.
[10:42:19 CEST] <phreezie> only -re makes the video longer when delaying
[10:42:40 CEST] <c_14> try updating your copy of ffmpeg?
[10:43:07 CEST] <phreezie> You mean to the latest version?
[10:43:48 CEST] <c_14> to a more recent git release, yes
[10:44:36 CEST] <phreezie> Yeah I've tried that too yesterday on my home machine. But lemme be sure I'm up to date here as well.
[10:48:05 CEST] <coolandsmartrr> @c_14: I didnt have permissions to gmake install for x264, so I placed it in ~/bin. When I ./configure --enable-libx264, it tells me Error: libx264 not found. Is there a way to tell configure where I have x264 installed?
[10:48:44 CEST] <phreezie> ok, running on version N-80097-g89e9393 now
[10:49:15 CEST] <c_14> coolandsmartrr: --extra-ldflags=-L/path/to/libraries --extra-cflags=-I/path/to/header/files
[10:49:50 CEST] <c_14> coolandsmartrr: though it might be easier to change the libx264 preset and then export PKG_CONFIG_PATH to the lib/pkgconfig path
[10:52:50 CEST] <coolandsmartrr> c_14: Sorry, dont understand what a libx264 preset is?
[10:53:06 CEST] <c_14> prefix
[10:53:08 CEST] <c_14> not preset
[10:53:33 CEST] <c_14> multitasking is bad
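
The prefix + PKG_CONFIG_PATH route c_14 is describing looks roughly like this, assuming a user-writable prefix such as ~/ffbuild (on SunOS substitute gmake for make):

    cd x264
    ./configure --prefix="$HOME/ffbuild" --enable-static
    make && make install
    cd ../ffmpeg
    PKG_CONFIG_PATH="$HOME/ffbuild/lib/pkgconfig" ./configure --enable-gpl --enable-libx264 \
        --extra-cflags="-I$HOME/ffbuild/include" --extra-ldflags="-L$HOME/ffbuild/lib"
    make
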
[10:53:43 CEST] <phreezie> c_14: I've piped all files to the file system and ran: "cat frame*.jpg | ffmpeg -f image2pipe -vcodec mjpeg -i - -vcodec libx264 -f mp4 frames.mp4". That worked. But it sucks as solution :)
[10:54:49 CEST] <c_14> Maybe there's something strange about how you're writing to the pipe? Maybe an eof sneaks in there somewhere?
[10:54:50 CEST] <phreezie> So I guess something with my pipe stream is borked..
[10:56:14 CEST] <phreezie> I'm using a PassThrough stream in NodeJS where I pipe the data to and that goes into the ffmpeg wrapper I'm using (fluent-ffmpeg).
[10:58:55 CEST] <c_14> Have you tried manually executing the ffmpeg binary?
[11:00:52 CEST] <c_14> (ie instead of using fluent-ffmpeg)
[11:05:18 CEST] <yagiza> Is there any way to decode a UDP stream with RTP data using FFMpeg?
[11:05:57 CEST] <c_14> ffmpeg -i rtp:// ?
[11:08:29 CEST] <yagiza> c_14, I mean using existing RTP stream, not establishing a new connection.
[11:09:19 CEST] <c_14> You mean, you have RTP data and want ffmpeg to decode it?
[11:09:32 CEST] <yagiza> c_14, yes
[11:10:08 CEST] <phreezie> @c_14: Only with the cat *.jpg as source stream. But I could try outputting the jpg generation stream to stdout and pipe it into ffmpeg.
[11:11:04 CEST] <yagiza> c_14, I'm handling the UDP connection myself. And I need to demux/decode it using the FFMpeg libraries.
[11:13:09 CEST] <c_14> maybe using custom io
[11:15:46 CEST] <c_14> https://ffmpeg.org/doxygen/trunk/avio_reading_8c-example.html
[11:15:48 CEST] <c_14> ^kinda like that
[11:15:59 CEST] <c_14> You'll also have to implement write though
[11:16:14 CEST] <c_14> at very least for rtcp
[11:18:49 CEST] <yagiza> c_14, no
[11:19:15 CEST] <c_14> hmm?
[11:19:20 CEST] <yagiza> c_14, that thing doesn't work for RTP because of AVFMT_NOFILE flag.
[11:27:11 CEST] <c_14> Ah, well
[11:27:16 CEST] <c_14> In that case I don't think you can
[11:28:11 CEST] <c_14> You could setup a local udp bridge, but can't think of anything besides that and doing rtp yourself. You could try asking on the libav-users mailing list
[11:28:25 CEST] <c_14> *libav-user@
[11:29:47 CEST] <yagiza> c_14, ok, thanx
[12:48:59 CEST] <spooooon> is there a quick and easy way to save or display an AVFrame?
[12:51:32 CEST] <DHE> like the image within? there are rendering functions in the API (eg: using SDL) but keep in mind that AVFrame isn't necessarily in a user-friendly format. pixel format encodings apply, like yuv420p
[12:52:02 CEST] <spooooon> yea...
[12:52:04 CEST] <spooooon> that's my trouble
[12:52:41 CEST] <spooooon> I'm converting and displaying the data myself, and it not quite what I am expecting
[12:52:50 CEST] <spooooon> thought it would be nice to see if the input is correct
[12:52:58 CEST] <spooooon> and in this case it is yuv420p
[12:53:31 CEST] <spooooon> sorry I made a mistake, the conversion is being done with sws_scale
[12:54:29 CEST] <DHE> nevertheless, there is still encoding involved. the software scaler is usually used to do encoding conversions anyway
[12:55:06 CEST] <spooooon> yes, I'm using sws_scale to convert, but the output is not quite correct
[12:55:25 CEST] <spooooon> at least my interpretation is
[12:55:42 CEST] <spooooon> I'm almost certain it is a problem with my code, but it would be helpful to know if I could see the input is correct
[12:55:54 CEST] <spooooon> because I have a lot of layers above and below ffmpeg api
[12:56:37 CEST] <DHE> My own issue: while streaming live TV I got this error: [mpegts @ 0x27d0340] Invalid timestamps stream=0, pts=3007556839, dts=11597482422, size=59101
[12:57:08 CEST] <BtbN> so your TV-Provider sent invalid timestamps
[12:57:26 CEST] <DHE> what's interesting is it happened exactly after a certain amount of time the program is running - about 26.5 hours - which is the timestamp wraparound time of mpegts
[12:57:41 CEST] <DHE> and it's been happening on more than one channel
[12:58:49 CEST] <DHE> could be the original source but I found that to be quite a coincidence
[13:26:22 CEST] <spooooon> I was passing incorrect parameters to sws_scale
[13:39:56 CEST] <termos> I'm getting "Schematron validation not successful  DASH is not valid" when generating DASH with FFmpeg, is this a known problem?
[13:43:23 CEST] <JEEB> termos: might want to actually post the validation errors on the trac issue tracker with a way to regenerate similar ones
[13:43:36 CEST] <JEEB> then I can poke wbs about it and he can note if they're valid errors or not
[13:43:53 CEST] <JEEB> (wbs is the movenc/dashenc maintainer)
[13:49:51 CEST] <termos> I'll do that, seems like the problem is that the <Period> element in the MPD does not have an id attribute. <Period id="p0" start="PT0.0S"> works.
[13:53:27 CEST] <JEEB> I would recommend taking a look at the DASH spec regarding things like that
[15:41:14 CEST] <Leo__> hello
[15:41:48 CEST] <Leo__> any one here???
[15:42:33 CEST] <__jack__> sure
[15:43:29 CEST] <Leo__> can you help me?
[15:43:38 CEST] <__jack__> maybe
[15:44:04 CEST] <arehman> Hi, I have an RTMP stream currently running, and I just want to turn it to HLS
[15:44:08 CEST] <arehman> any pointers?
[15:44:09 CEST] <Leo__> you have a code for audio?
[15:44:48 CEST] <__jack__> arehman: ffmpeg -i source output.m3u8, probably with more codec and/or mapping options
[15:44:54 CEST] <__jack__> Leo__: huh ?
[15:46:03 CEST] <DHE> arehman: are you targetting certain devices like android? HLS calls for specific codecs and if your source doesn't meet the criteria then you should either convert or test the media first on your own phone
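
A fleshed-out (untested) version of __jack__'s one-liner, with placeholder application/stream names and segment settings:

    ffmpeg -i rtmp://localhost/live/streamkey \
        -c:v libx264 -preset veryfast -c:a aac -b:a 128k \
        -f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments \
        /var/www/hls/stream.m3u8
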
[15:48:05 CEST] <furq> what a nice young man
[15:50:17 CEST] <__jack__> :)
[15:50:57 CEST] <arehman> I currently have it setup with the Nginx RTMP module to stream
[15:51:07 CEST] <arehman> and so i just entered that command
[15:51:13 CEST] <furq> arehman: nginx-rtmp already supports hls output
[15:51:36 CEST] <furq> https://github.com/arut/nginx-rtmp-module/wiki/Directives#hls
[15:53:43 CEST] <BtbN> I'd recommend using ffmpeg for the hls output though, I had weird issues with hls made via nginx-rtmp.
[15:54:00 CEST] <furq> really?
[15:54:03 CEST] <furq> i've not noticed any
[16:02:20 CEST] <arehman> essentially the streams working
[16:02:38 CEST] <arehman> but when you access it via phone to the m3u8, it's a 40sec video rather than the livestream
[16:02:47 CEST] <arehman> heres the nginx.conf
[16:02:49 CEST] <arehman> http://pastebin.com/rcuq70RA
[16:18:13 CEST] <arehman> anyway, cheers guys, got it working
[16:18:26 CEST] <arehman> big thanks to __jack__
[16:18:30 CEST] <arehman> have a good day
[16:50:30 CEST] <Admin__> hey guys.. good day.. very strange problem.. i am running some ffmpeg scripts that capture and segment some videos ... it is a live stream.... why after about 24- 27 hours all of the scripts get killed on my ubuntu system ? then i restart and they again go for 24 hours.. then stop again .. sooooo ODDD!!
[16:50:41 CEST] <Admin__> anyone have a clue what could be happening
[16:51:38 CEST] <__jack__> Admin__: dmesg says something ?
[16:51:42 CEST] <jkqxz> What do you mean by "get killed"?  Do they run out of memory, say?
[16:53:56 CEST] <thebombzen> usually on the commandline if a process ends and it just says "killed" that means the kernel killed it because you ran out of memory
[16:54:16 CEST] <thebombzen> wow remind me not to type in the morning. typos galore
[16:54:40 CEST] <Admin__> nothing at all in dmesg
[16:55:03 CEST] <DHE> Admin__: live stream meaning over the air ATSC receiver?
[16:56:34 CEST] <Andross> Hey guys
[16:56:39 CEST] <thebombzen> ohaider
[16:57:08 CEST] <Andross> my c++ program is using so many external libraries it's now more a c program
[16:57:19 CEST] <Andross> and i never learned c!
[16:57:36 CEST] <Admin__> both OTA and SAT receiver.. the source doesn't matter
[16:57:45 CEST] <Admin__> it seems to be regardless of the source
[16:58:09 CEST] <DHE> which means it's an mpegts source. and mpegts has timestamps that loop around every ~26.5 hours. that sound right?
[16:59:46 CEST] <Admin__> i don't think so
[16:59:48 CEST] <kepstin> On a live tv broadcast, I'd expect them to actually loop at 24h, but could be either :)
[17:00:04 CEST] <Admin__> you mean the loop is causing it to stop ?
[17:00:24 CEST] <Admin__> yes 26.5 hours .. yes
[17:00:35 CEST] <Admin__> you are sounding very accurate.. and are on to something right now
[17:01:03 CEST] <DHE> I've run something on an OTA receiver and had things go south after 26.5 hours
[17:01:12 CEST] <Admin__> right now do i get around it ?
[17:01:13 CEST] <Admin__> genpts ?
[17:01:25 CEST] <Admin__> how do i start off with my own timestamps on the stream that go on forever
[17:01:29 CEST] <Admin__> as to avoid this
[17:01:32 CEST] <DHE> dunno. if you solve it, let me know
[17:02:45 CEST] <thebombzen> Admin__: I've never gotten -fflags +genpts to do something
[17:03:03 CEST] <thebombzen> but you might
[17:03:33 CEST] <Admin__> hum.. here is another brain twister
[17:03:52 CEST] <Admin__> my source has Closed caption on it... the data is missing on the output after i transcode the video.... i did scopy
[17:04:06 CEST] <Admin__> weird thing is.. a few weeks ago i had it working just fine.. .now no :(
[17:04:12 CEST] <DHE> using what codec on the output? copy mode will work
[17:04:35 CEST] <thebombzen> closed captions might be in a non-subtitle stream so using -c:s copy might not work.
[17:05:00 CEST] <DHE> if it's OTA signals then it's a native metadata payload in the video stream
[17:05:55 CEST] <thebombzen> what I would do is -map 0 -c copy -c:v videocodec -c:a audiocodec
[17:06:21 CEST] <thebombzen> which will copy all streams with c copy and then override the video and audio streams
[17:18:26 CEST] <Andross> im looking for an example program that gets a wav and decodes it
[17:18:36 CEST] <Andross> the current example generates a test tone instead of using a source file
[17:21:50 CEST] <vade> you want demuxing_decoding example
[17:22:02 CEST] <vade> just stip the video stuff out
[17:25:19 CEST] <vade> speaking of demuxing and decoding, i'm porting my code from 3.0.2 to current git master so I can have the H264 HW accel encoder - and I notice that accessing stream->codec is deprecated in favour of codecpar - however, setting things like sample format for a stream's codec? How do I do this? codecpar doesn't have those variables
[17:25:56 CEST] <vade> oh. format. duh. I see
[17:32:46 CEST] <yagiza> Reading about RTP URI scheme: https://ffmpeg.org/ffmpeg-protocols.html#rtp
[17:33:01 CEST] <yagiza> What's the meaning of UDP socket?
[17:33:41 CEST] <yagiza> AFAIK, there's no such thing as "socket" in UDP.
[17:33:58 CEST] <yagiza> How can I connect to it?
[17:44:05 CEST] <Admin__> hum.. anyone know a good way to take the timestamp from a source mpegts and then create new timestamps, so if the source restarts timestamps it doesn't matter.. the encoder will keep going and the timestamps will keep going too
[17:45:43 CEST] <Andross> who wrote the examples in the source code?
[17:56:02 CEST] <Andross> god im so lost
[17:57:16 CEST] <Andross> all the good tutorials seem to be based on video demuxing
[18:01:01 CEST] <Admin__> [mpegts @ 0x2aba300] Non-monotonous DTS in output stream 0:1; previous: 9169, current: 9000; changing to 9170. This may result in incorrect timestamps in the output file. .. how do i setup the ffmpeg so timestamp starts at 0 and then increments forever regardless of what the timestamp is
[18:01:21 CEST] <Admin__> basically just want to sync the audio/video and then not change the timestamp... these are live streams
[18:02:51 CEST] <vade> ok so - with the new FF_API_LAVF_AVCTX API - how does one actually open a codec for a stream without it being a deprecated call?
[18:07:49 CEST] <Andross> can someone tell me what exactly a 'frame' is in the context of audio?
[18:11:19 CEST] <DHE> Andross: just a sequence of audio samples, usually sized to meet codec requirements
[18:11:53 CEST] <DHE> which is why there's a FIFO implementation designed specifically for audio samples. that way you can deal with mismatches between codecs
[18:15:27 CEST] <vade> yea audio has been slightly more nuanced than video in my experience
[18:15:45 CEST] <vade> DHE: are you familiar with the new FF_API_LAVF_AVCTX API?
[18:15:47 CEST] <Andross> so annoyed, was getting along so well working through the encoding_decoding example until the author decided to encode a test tone, which nobody would ever use, rather than an input file
[18:15:57 CEST] <Andross> now im finding it a nightmare trying to figure out how to get ffmpeg to read an input file
[18:16:23 CEST] <vade> you want to av_find_input_format
[18:16:40 CEST] <vade> avformat_alloc_context for an AVFormatContext
[18:16:49 CEST] <vade> call avformat_open_input
[18:16:57 CEST] <vade> probably avformat_find_stream_info
[18:17:05 CEST] <vade> you can then call av_find_best_stream
[18:17:20 CEST] <DHE> vade: you mean the new codecpar fields?
[18:17:24 CEST] <vade> it will tell you a stream and codec for a AVMEDIA_TYPE_AUDIO
[18:17:28 CEST] <Andross> an example would be very helpful
[18:17:42 CEST] <vade> yea DHE :) im migrating my code to codecpar and getting a lot of errors
[18:17:49 CEST] <vade> use github and search
[18:17:52 CEST] <vade> it was helpful for me
[18:18:13 CEST] <vade> there is no good example that's fairly modern. i've considered writing some because it's kind of a freaking nightmare.
[18:18:52 CEST] <DHE> I found the examples, while a bit minimalistic and not covering all scenarios, to be good enough
[18:19:19 CEST] <vade> depends on your level and understanding of AV in general
[18:19:28 CEST] <vade> not everyone knows about muxers / demuxers and sample sizes :)
[18:19:38 CEST] <vade> but fair enough, it wasnt horrible. I got stuff working hehe :)
[18:19:55 CEST] <DHE> the audio re-encoder example shows how to deal with that using the av_fifo_* interface
[18:22:01 CEST] <Andross> it also doesnt help that i dont know c much, only c++
[18:23:04 CEST] <vade> with codecpar, since a streams codec is now deprecated, how do I set a specific codec context ive set up to be associated with a stream? do I use av_format_set_video_codec ? that takes a format, not a format / stream ID though
[18:23:20 CEST] <vade> i guess im confused how I know what stream I set / access a codec from if -> codec is deprecated
[18:28:00 CEST] <DHE> yeah I'm a bit confused myself. I've got a local copy of the doxy docs from the git repo for this exact reason
[18:28:12 CEST] <vade> yea, this seems weird.
[18:28:28 CEST] <vade> because avcodec_send_packet requires a codecContext but I cant get one from the stream
[18:28:33 CEST] <vade> &. wat?
[18:35:42 CEST] <f00bar80> Please I need a pointer to a newbie guide for mpegts to m3u8 transcoding
[18:39:04 CEST] <Andross> i cannot fathom why the person that wrote the examples didnt just make a generic function that accepts an input file and output file, so that anyone could use that function to encode audio
[18:42:32 CEST] <Andross> i think it might be better to use the binary and commandline
[18:43:01 CEST] <c_14> f00bar80: ffmpeg -i mpegts out.m3u8 ? Not sure what you're asking for
[18:43:22 CEST] <vade> aha I see
[18:43:30 CEST] <vade> you make your own codec context and initialize it with avcodec_parameters_to_context
[18:43:47 CEST] <vade> so stream codecparam -> your own  codec context via -> avcodec_parameters_to_context
[18:45:42 CEST] <vade> https://wiki.libav.org/Migration/12 from libav seems reasonably helpful
[18:48:07 CEST] <Admin__> hey guys.. check this out
[18:48:08 CEST] <Admin__> http://pastebin.com/raw/QYPRh7St
[18:48:25 CEST] <Admin__> as you can see hte input has closed caption.. the output doesnt.. and the copy is set...
[18:48:43 CEST] <Admin__> ${ffmpeg} -thread_queue_size 4096 -analyzeduration 5M -probesize 5M -i "$stream" $mapping -codec copy -copyts -copytb 1 -frame_drop_threshold 1.0 -dts_delta_threshold 0 -f mpegts -
[18:48:52 CEST] <Admin__> that is the ffmpeg command ... what am i doing wrong here.. ?
[18:50:27 CEST] <f00bar80> c_14: this is the basic transcoding, but i'm more into the stream controlling during the transcoding and which approache or a profile can be used .. regarding quality , bitrate , cpu consumption ..etc
[18:51:12 CEST] <c_14> f00bar80: depends on what codec you use. I'm going to assume H.264 for now https://trac.ffmpeg.org/wiki/Encode/H.264
[18:53:31 CEST] <DHE> Admin__: did you actually watch the video?
[18:53:42 CEST] <Admin__> i did.. no subtitles
[18:54:13 CEST] <Admin__> sorry.. closed caption
[19:17:13 CEST] <Admin__> any ideas ?
[20:01:57 CEST] <Andross> back
[20:02:22 CEST] <Andross> so what are the potential issues if i choose to just distribute my program with binaries and feed it command line?
[20:02:41 CEST] <Andross> i guess it's just the one binary
[20:07:53 CEST] <kepstin> Andross: most of the stuff in https://ffmpeg.org/legal.html still applies, except that you can use a GPL version of FFmpeg instead of LGPL since it's not being linked into your code.
[20:15:55 CEST] <Andross> well i already have an lgpl binary
[20:16:07 CEST] <Andross> i looked at a couple of other programs and they seem to come with the binary too
[20:16:30 CEST] <Andross> so i think it might be more common, at least for audio applications, to just use the binary and feed it command line
[20:16:30 CEST] <_Vi> Should the "psnr" filter's output appear on the console even with "-v warning"?
[20:17:03 CEST] <f00bar80> is there any guide on cpu consumption optimization when H.264 encoding is used ?
[20:17:41 CEST] <furq> what do you mean by optimisation
[20:18:14 CEST] <furq> the defaults for x264 already do a pretty good job of using as much cpu as possible
[20:18:27 CEST] <kepstin> f00bar80: make sure you're on 64-bit x86 with a modern processor, that x264 was built with assembly, and then use the slowest -preset value that's fast enough for you.
[20:19:03 CEST] <furq> does 64-bit really make a difference
[20:19:49 CEST] <kepstin> it has double the registers compared to 32-bit, I suspect the difference is noticeable in x264
[20:20:12 CEST] <kepstin> (although I suppose the avx stuff is the same between both?)
[20:20:14 CEST] <furq> i should probably benchmark it
[20:20:22 CEST] <furq> i'm stuck using a 32-bit ffmpeg for avisynth input
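
One way to act on the "slowest preset that's fast enough" advice (and to settle the 32- vs 64-bit question) is a quick benchmark over a short sample; a sketch using GNU time:

    for p in ultrafast veryfast medium slow; do
        /usr/bin/time -f "$p: %e s" \
            ffmpeg -v error -t 60 -i input.ts -c:v libx264 -preset "$p" -f null -
    done
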
[20:22:49 CEST] <f00bar80> how to disable/enable the subtitles and where can I find a clue on the optimum level for audio tracks, EPG info , video subtitles ?
[20:23:17 CEST] <furq> -c:s copy to enable subs, -sn to disable them
[20:23:27 CEST] <furq> i don't know what "optimum level" means in this context
[20:25:07 CEST] <f00bar80> furq: How can I choose what to keep: EPG data, some/all audio tracks, some of the subtitles, etc.?
[20:25:13 CEST] <furq> -map
[20:25:21 CEST] <furq> https://www.ffmpeg.org/ffmpeg.html#Advanced-options
[20:25:51 CEST] <relaxed> https://trac.ffmpeg.org/wiki/Map
[20:29:05 CEST] <f00bar80> As I understand it, -map allows selecting which streams from which inputs will go into which output; does that answer my question?
[20:37:57 CEST] <f00bar80> furq: so how do I check all the audio, EPG, and subtitle streams available in the input file, in order to be able to map the required streams?
[20:39:12 CEST] <furq> ffprobe
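
For example (a sketch; the stream indices will differ per file), list every stream with its type, codec and language tag, then pick the ones you want with -map:

    ffprobe -v error -show_entries stream=index,codec_type,codec_name:stream_tags=language -of csv input.ts
    # e.g. keep the first video stream, the second audio stream and the first subtitle stream
    ffmpeg -i input.ts -map 0:v:0 -map 0:a:1 -map 0:s:0 -c copy output.ts
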
[20:46:44 CEST] <kyleogrg> hello
[20:47:24 CEST] <f00bar80> furq: this is the output of ffprobe for one stream http://pastebin.com/L2dhnA0d , correct me if I'm wrong, there are no subtitles here, right? also, can you point me to how to identify how many audio and video streams are in here, and if I'm totally wrong, please point me to some clarifying resources
[20:47:37 CEST] <kyleogrg> I ripped a bluray recording to mkv.  now I'm trying to use ffmpeg to put it into a m2ts container so sony vegas can open it.  so far, only the video will show up in vegas.
[20:48:11 CEST] <kyleogrg> command line: ffmpeg.exe -y -i "H:\title02.mkv" -c:v copy -c:a copy "C:\Users\me\Desktop\title02.m2ts"
[20:49:43 CEST] <c_14> What audio codec? <- kyleogrg
[20:49:48 CEST] <kyleogrg> This actually outputs a video-only file.
[20:49:49 CEST] <kyleogrg> PCM
[20:50:12 CEST] <c_14> kyleogrg: ffprobe title02.m2ts <- shows only the video stream?
[20:50:24 CEST] <kyleogrg> according to mediainfo
[20:50:44 CEST] <c_14> kyleogrg: 0:0 is video 0:1 is audio, there are no other streams listed. You can tell because it says Video and Audio after the stream identifier
[20:51:13 CEST] <c_14> ^of the ffmpeg.exe -y -i command
[20:54:42 CEST] <kyleogrg> http://pastebin.com/pXpNDcQj
[20:55:42 CEST] <f00bar80> ppl any comment ?
[20:56:33 CEST] <c_14> f00bar80: eh, the comment directly after kyleogrg said "according to mediainfo" was aimed at you. I just highlighted the wrong person
[20:57:18 CEST] <c_14> kyleogrg: the output file should have audio&video. Can you check with ffprobe/ffmpeg -i instead of with mediainfo?
[20:58:28 CEST] <kyleogrg> FFprobe output: http://pastebin.com/avDeJwtJ
[20:58:51 CEST] <c_14> aaah >    Stream #0:1[0x101]: Data: bin_data ([6][0][0][0] / 0x0006)
[20:59:10 CEST] <kyleogrg> What does that mean...
[20:59:40 CEST] <c_14> headers weren't muxed correctly probably, can you update your version of ffmpeg, it's rather old
[21:00:12 CEST] <kyleogrg> hmm, okay, i'll quickly download a zeranoe ffmpeg
[21:00:19 CEST] <Andross> by the way is ffmpeg's native aac encoder perfectly fine?
[21:00:34 CEST] <furq> it's better than all the other open-source encoders except fdk-aac
[21:00:45 CEST] <c_14> Assuming at least FFmpeg 3.0
[21:00:49 CEST] <furq> yeah
[21:00:53 CEST] <c_14> Though it was fine before
[21:01:03 CEST] <c_14> It didn't destroy anything most of the time
[21:01:09 CEST] <c_14> And it probably won't kill your cat
[21:01:14 CEST] <furq> Andross: you've not really got any other choice if you want to distribute binaries
[21:01:23 CEST] <furq> fdk and faac aren't gpl compatible
[21:01:43 CEST] <furq> and lame is lgpl
[21:01:43 CEST] <Andross> what about lgpl though?
[21:01:48 CEST] <furq> same thing
[21:01:58 CEST] <Andross> so fdk isnt even lgpl compatible?
[21:02:01 CEST] <furq> no
[21:02:18 CEST] <Andross> what is the difference in quality, is it noticeable?
[21:02:26 CEST] <furq> try it and find out
[21:02:39 CEST] <furq> there hasn't been a listening test done with a large enough sample size to draw any decent conclusions
[21:02:46 CEST] <kyleogrg> c_14: repeated the ffmpeg mux and ffprobe check, and I get the same ffprobe message you pasted
[21:02:48 CEST] <furq> i think the HA guys were planning on doing one
[21:03:13 CEST] <furq> at 128kbps you'll struggle to tell the difference between any modern-ish audio encoder really
[21:03:28 CEST] <Andross> what about 192 or 320
[21:03:58 CEST] <kepstin> at 192+, you will have difficulty telling the difference between old formats like mp3 and modern codecs
[21:05:12 CEST] <c_14> kyleogrg: It's definitely a bug, at the very least ffmpeg should complain on muxing and state it's not supported
[21:05:26 CEST] <c_14> the same thing happens if you run `ffmpeg -f lavfi -i sine=1000 -c:a pcm_s16le out.m2ts'
[21:05:41 CEST] <c_14> probably missing a tag or something
[21:05:58 CEST] <kyleogrg> okay...
[21:06:08 CEST] <f00bar80> what do the following refer to: ([2][0][0][0] / 0x0002), yuv420p(tv)?
[21:06:22 CEST] <kyleogrg> I wouldn't know
[21:07:02 CEST] <kyleogrg> do you have a suggestion as to what tags to try?
[21:07:46 CEST] <c_14> Try getting your hands on an MPEG TS specification...
[21:07:51 CEST] <c_14> Just convert the pcm to flac or something
[21:08:16 CEST] <kyleogrg> would flac be supported by sony vegas?
[21:08:22 CEST] <c_14> f00bar80: yuv420p is the pixel format (tv) states that it's limited range, the ([2] stuff means something but I forget what
[21:08:29 CEST] <furq> or just decrypt the ts from the blu-ray
[21:08:36 CEST] <kyleogrg> furq: how so?
[21:08:40 CEST] <furq> i say "just" as if i know how to do that
[21:09:16 CEST] <furq> i assume there are tools which just decrypt the m2ts files on the disc instead of remuxing to mkv
[21:09:21 CEST] <kyleogrg> The end result I need is to simply take the Bluray MKV (from MakeMKV) and put it in some kind of container that Sony Vegas supports.
[21:09:31 CEST] <c_14> kyleogrg: I have no idea what codecs sony vegas supports
[21:09:44 CEST] <kepstin> makemkv supports simply copying the decrypted ts files rather than remuxing to mkv. Look for the backup mode.
[21:09:44 CEST] <kyleogrg> yeah
[21:09:47 CEST] <c_14> Knowing proprietary video editing solutions, probably not much and nothing well
[21:10:31 CEST] <furq> i think it uses directshow
[21:11:42 CEST] <furq> but yeah just do what kepstin said
[21:12:11 CEST] <kyleogrg> kepstin: would this be any different from extracting the ts file from the mkv now
[21:13:36 CEST] <furq> well one would hope it'd have the right audio header
[21:14:19 CEST] <kyleogrg> okay
[21:15:04 CEST] <f00bar80> does anything in the paste ... identify multiple audio tracks?
[21:15:32 CEST] <furq> no
[21:18:32 CEST] <f00bar80> furq: if there were any, how could i identify them?
[21:19:29 CEST] <furq> http://vpaste.net/Ee9JI
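
The gist of it, as a one-line sketch: ask ffprobe for just the audio streams and count them:

    ffprobe -v error -select_streams a -show_entries stream=index -of csv=p=0 input.ts | wc -l
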
[21:21:11 CEST] <linux_aficionado> How would I set format options in an ffserver.conf file?
[21:28:33 CEST] <kyleogrg> I found a solution copying the video codec and encoding the audio to aac, and putting it into an mp4 container
[21:29:08 CEST] <furq> that works but you'll get generation loss when you export from vegas
[21:29:39 CEST] <kyleogrg> yes, for the audio.  but i'm doing very high quality.
[21:36:33 CEST] <kyleogrg> okay, thanks for the help everyone
[21:36:37 CEST] <kyleogrg> bye
[21:39:31 CEST] <Andross> so
[21:39:48 CEST] <Andross> if im just going to use the binary instead
[21:39:59 CEST] <Andross> it's better i use a static build of the binary right?
[21:43:25 CEST] <kepstin> probably, yeah. simpler to distribute then. zeranoe distributes a static binaries build, for example.
[21:52:42 CEST] <linux_aficionado> In ffserver, how are format context options set? i couldn't find anything in the documentation
[21:58:14 CEST] <f00bar80> normally how long does it take to h.264 encode an mpegts stream?
[22:00:38 CEST] <kepstin> f00bar80: depends how long the video is, how fast your computer is, and what encoder options you're using.
[22:00:46 CEST] <kepstin> f00bar80: question basically can't be answered
[22:03:11 CEST] <Andross> so ive noticed some programs, that come with ffmpeg.exe, somehow have a progress bar when encoding
[22:03:25 CEST] <Andross> how is this possible? is it possible to get ffmpeg.exe to emit a progress signal?
[22:03:27 CEST] <furq> they're probably parsing the output
[22:03:48 CEST] <furq> i've done the same thing in the past with moderate success
[22:04:06 CEST] <furq> by which i mean the progressbar worked but the gui sometimes decided to segfault
[22:04:15 CEST] <furq> i don't think that was ffmpeg's fault though
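
Besides scraping the stderr status line, ffmpeg can emit machine-readable progress itself via -progress; a sketch (out_time is one of several key=value pairs it writes):

    ffmpeg -i input.wav -c:a aac output.m4a -progress pipe:1 -nostats 2>/dev/null |
        grep --line-buffered '^out_time'
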
[22:05:17 CEST] <Andross> additionally, im having a problem with my aac encode
[22:05:26 CEST] <Andross> for some reason when i load it up in foobar
[22:05:33 CEST] <Andross> i cant change its time
[22:05:37 CEST] <Andross> skip ahead etc
[22:05:46 CEST] <f00bar80> kepstin: these are the options http://pastebin.com/FWUMMaK4
[22:06:38 CEST] <kepstin> f00bar80: right, so you've run this on a fast machine, watched cpu usage and output fps, and seen if it was fast enough?
[22:07:26 CEST] <furq> does -deinterlace even work any more
[22:07:29 CEST] <furq> i thought it was long since deprecated
[22:08:02 CEST] <kepstin> if it does work, I assume it just sticks some random deinterlacing filter into the video filter chain? :/
[22:08:07 CEST] <furq> probably
[22:10:53 CEST] <kepstin> f00bar80: also, you're setting -threads 0? what is your intent with that?
[22:11:23 CEST] <kepstin> (if you want it to use only 1 thread, use -threads 1; if you want it to use all cpu cores available, omit -threads)
[22:13:26 CEST] <Admin__> hey guys .. can anyone point me in the right direction .... so i am encoding a live stream.. every 26.5 hours my PTS wrap occurs it seems.... this is related to my encoding not the source... is there some way to stop this from doing that... i don't want my stream to end
[22:17:35 CEST] <kepstin> Admin__: PTS wrap at 26.5 hours is an inherent part of mpeg-ts, that's simply the max time it can hold (it was designed to hold 24 hours actually; the 26.5 was the closest they could get with the binary numbers used).
[22:18:10 CEST] <Admin__> but its killing my stream somehow since i am capturing the live stream
[22:18:10 CEST] <kepstin> players that are designed for playing continuous mpeg-ts should just handle the wrap and keep going...
[22:18:22 CEST] <kepstin> but iirc, ffmpeg has some issues with it
[22:18:26 CEST] <Admin__> after 26.5 the whole thing requires a restart :(
[22:18:43 CEST] <Admin__> its like the wrap doesn't actually happen or something..
[22:23:54 CEST] <Andross> so this other program, it seems to distribute a full GPL binary with it, is that legal?
[22:24:25 CEST] <furq> as long as the program is released under a gpl-compatible licence and they distribute the sources then sure
[22:24:41 CEST] <furq> the ffmpeg and library sources, that is
[22:24:47 CEST] <furq> obviously they need to distribute their own source code
[22:25:41 CEST] <furq> actually i forget whether that constitutes a derivative work. licensing is boring
[22:25:51 CEST] <furq> they definitely need to distribute the ffmpeg sources though
[22:27:23 CEST] <Andross> here is said program furq: http://www.mediahuman.com/audio-converter/
[22:29:04 CEST] <furq> yeah they're just linking to ffmpeg.org for the sources
[22:29:06 CEST] <furq> that's a gpl violation
[22:29:33 CEST] <furq> if you distribute binaries you have to distribute all the sources yourself
[22:30:10 CEST] <Andross> okay but, do they also need to distribute source code to their own program?
[22:30:16 CEST] <furq> i'm not entirely sure
[22:30:23 CEST] <furq> they would if they were linking to the ffmpeg libs
[22:31:08 CEST] <Andross> being able to use the static GPL build would be pretty great
[22:31:42 CEST] <furq> i don't think there's any difference between an lgpl and a gpl build if you're just calling the binary
[22:34:47 CEST] <furq> afaik if your program doesn't work without ffmpeg then it counts as a derivative work and it must be GPL licensed
[22:35:10 CEST] <furq> you can still charge for it if you want to, but you have to distribute the source
[22:36:09 CEST] <Andross> define "doesn't work"
[22:36:33 CEST] <furq> you'd need to ask a lawyer to define that
[22:37:57 CEST] <Andross> something wrong with my internet brb
[22:42:51 CEST] <linux_aficionado> is it even possible to use hls with ffserver?
[22:45:28 CEST] <Admin__> hey maybe my wrap around is an issue because i am doing +genpts
[22:45:36 CEST] <Admin__> maybe if i don't do that it should take the PTS from the actual live stream no ?
[22:45:41 CEST] <Admin__> could that be the cause ?
[22:47:40 CEST] <ferdna> i erased /tmp/feed_cam0.ffm... now it complains its not found... how do i recreate this file?
[22:47:47 CEST] <ferdna> isnt it automatically created?
[22:58:18 CEST] <vade> DHE: you around? Have you migrated to codecpar / send packet, receive frame yet? I just did, and while my decode works, encode seems wonky
[23:06:36 CEST] <Andross_> alright im back
[23:07:04 CEST] <Andross_> can someone explain the difference, if there is one, to "-ab" and "-b:a"
[23:09:52 CEST] <furq> there isn't one
[23:09:56 CEST] <furq> -ab is the old name, -b:a is the new one
[23:10:18 CEST] <furq> -ab will presumably be removed at some point but it's unlikely as long as 99% of people copy their ffmpeg commands off stackoverflow
[23:11:40 CEST] <Andross_> hehe
[23:11:57 CEST] <Andross_> i just made a command line reader and am using it to read the commands sent by that program i linked earlier
[23:12:25 CEST] <Andross_> i think the reason this program must be much faster than mine is because it uses libfdk_aac
[23:12:46 CEST] <Andross_> (which im again guessing is illegal)
[23:13:06 CEST] <furq> it sure is
[23:14:06 CEST] <furq> all the "ultra magic super turbo xyz converter" freeware is a bit of a cesspool really
[23:14:19 CEST] <furq> i wouldn't look to them for examples of what to do
[23:14:47 CEST] <Andross_> well i find it easier than reading the documentation
[00:00:00 CEST] --- Fri May 27 2016

