[Ffmpeg-devel-irc] ffmpeg.log.20170228

burek burek021 at gmail.com
Wed Mar 1 03:05:01 EET 2017


[00:01:59 CET] <mdavis> Yeah, I just got it to work. Replaced all "unsigned long" with "uint32_t", but your way is cleaner :)
[00:02:54 CET] <mdavis> Right off the bat, there seems to be an issue with benchmarking using "-f null -"
[00:03:02 CET] <mdavis> [null @ 0x600246920] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 4294967296 >= 314
[00:03:20 CET] <mdavis> increment that last number for each frame
[00:04:25 CET] <mdavis> BtbN: I gotta go, but thanks for working through this with me!
[00:04:42 CET] <xtina> hmmm.. is anyone in here familiar with setting timestamps? i'm trying to mix a video stream that has:  -filter:v "settb=1000000,setpts=RTCTIME-RTCSTART"  and an audio stream that is at 44.1khz
[00:04:53 CET] <xtina> only one video packet ever comes in
[00:04:57 CET] <xtina> and hundreds of audio streams
[00:05:08 CET] <xtina> this didn't happen until i set the video filter
[00:09:57 CET] <thebombzen_> xtina: consider -af asetpts=RTCTIME-RTCSTART
[00:11:39 CET] <xtina> thebombzen_: OK, i guess i'll rebel against kepstin :P
[00:11:49 CET] <xtina> i'll give it a shot
[00:12:10 CET] <thebombzen_> rebelling?
[00:12:45 CET] <xtina> i mean, see his repeated messages above not to touch the audio timestamps..
[00:12:53 CET] <xtina> '<kepstin> xtina: don't change the audio timestamps! they're fine!'
[00:13:23 CET] <xtina> '<kepstin> xtina: the audio timestamps in your case are set by the wav demuxer when it reads the audio from the pipe, and they're set to the exact correct values based on the sample rate'
[00:13:35 CET] <xtina> (my audio is mp3 if it matters)
[00:16:42 CET] <xtina> based on your suggestion, both asetpts and setpts = RTCTIME-RTCSTART
[00:17:39 CET] <xtina> thebombzen_: if I try this, i get 350 audio frames and 1 video frame, and tons of lines that look like this:
[00:17:41 CET] <xtina> [Parsed_asettb_0 @ 0x1cc8570] tb:1/44100 pts:290304 -> tb:1000000/1 pts:0
[00:18:00 CET] <xtina> i'm doing     -filter:v "settb=1000000,setpts=RTCTIME-RTCSTART" \     -filter:a "asettb=1000000,asetpts=RTCTIME-RTCSTART" \
[00:18:00 CET] <thebombzen_> well then add asettb beforehand as well
[00:18:11 CET] <thebombzen_> shouldn't you use
[00:18:17 CET] <xtina> oh let me try 44100
[00:18:23 CET] <thebombzen_> settb and asettb 1/1000000
[00:18:27 CET] <thebombzen_> not 1000000
[00:18:30 CET] <xtina> er
[00:19:12 CET] <xtina> right you are :)
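Putting the corrected timebase together, the command under discussion would look roughly like this (an untested sketch with placeholder filenames; settb/asettb take a fraction, so 1/1000000 means a microsecond timebase, not the bare integer 1000000):

```shell
# Sketch of the fix discussed above: timebases as fractions, then wall-clock PTS.
# input.mkv/output.mkv are placeholders for the actual streams.
ffmpeg -i input.mkv \
  -filter:v "settb=1/1000000,setpts=RTCTIME-RTCSTART" \
  -filter:a "asettb=1/1000000,asetpts=RTCTIME-RTCSTART" \
  output.mkv
```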
[00:22:05 CET] <xtina> thebombzen_: alright, now i'm getting some hard to understand errors
[00:22:06 CET] <xtina> DTS 1488237535833151, next:1933314 st:0 invalid dropping
[00:22:10 CET] <xtina> [h264 @ 0x2fe46d0] nal_unit_type: 1, nal_ref_idc: 1
[00:22:25 CET] <thebombzen_> I have no idea what that one means, sorry
[00:22:51 CET] <xtina> wow these logs exploded with graph and resampler settings
[00:22:55 CET] <xtina> this is more complex than i thought
[00:23:02 CET] <xtina> thebombzen_: OK, no probs
[00:24:29 CET] <xtina> i think not using asettb/asetpts was the way to go
[00:24:52 CET] <xtina> i removed the audio ts stuff and kept the 1/10^6 tb and setpts, now i'm getting some functioning stream, though at 1.5fps
[00:27:02 CET] <xtina> but what on earth does this mean...
[00:27:03 CET] <xtina> "DTS 1488237827654290, next:2766639 st:0 invalid dropping  "
[00:27:12 CET] <xtina> invalid what? what's st?
[00:32:57 CET] <xtina> if it's a clue, this line is also everywhere during my stream
[00:32:58 CET] <xtina> [h264 @ 0x21706d0] nal_unit_type: 1, nal_ref_idc: 1
[00:33:03 CET] <xtina> i don't normally see this
[00:36:18 CET] <xtina> well when i added timestamps to the video, my CPU usage shot from 20% to 90%
[00:36:31 CET] <xtina> and my FPS dropped from 20 to 1
[00:37:17 CET] <xtina> so i guess i'm screwed :(
[00:39:57 CET] <thebombzen_> have you also tried generating PTS in realtime?
[00:40:17 CET] <thebombzen_> like assuming 10 fps, you could use -vf setpts=N/10/TB
[00:49:16 CET] <xtina> thebombzen_: thx for the suggestion, tho my stream isn't stable at 10fps, especially in the beginning
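thebombzen_'s suggestion above, sketched as a full command (the 10 fps and the filenames are assumptions; N is the frame number, so N/10/TB spaces frames exactly 0.1 s apart regardless of the wall clock):

```shell
# Synthesize constant-frame-rate timestamps from the frame number
# instead of RTCTIME. Assumes a steady 10 fps; names are placeholders.
ffmpeg -i input.mkv -vf "setpts=N/10/TB" -r 10 output.mkv
```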
[01:16:32 CET] <jpabq> What is the current state of support for CEA-608/708?   I have a patch to add muxing of captions into transport streams, but it is against an OLD version of ffmpeg.  It was written by an ex-coworker, and I am far from an expert on the code.  I am willing to work on getting it committed, if ffmpeg is still lacking that functionality.
[01:59:47 CET] <llogan> jpabq: i don't know the status, but that would be a question for #ffmpeg-devel
[02:00:09 CET] <jpabq> llogan: thanks, I will ask over there
[02:04:50 CET] <xtina_> i've done more sleuthing on my audio/video livestream desync :)
[02:05:08 CET] <xtina_> i've found that the start of my stream is perfectly synced, then i accumulate about 250ms of desync every 10 minutes of stream
[02:05:22 CET] <xtina_> the audio is a tiny bit faster than the video and that gradually leads to the desync
[02:05:36 CET] <xtina_> this is causing me to suspect even more that somehow the Pi camera and mic's clocks are mismatched
[02:05:47 CET] <xtina_> but i'm not sure how to check or what to do about it. my system is too weak to add video timestamps and re-encode
[02:07:09 CET] <xtina_> if my camera is a tiny bit slow, should i be using like 25.1 fps instead of 25 or so?
[02:07:57 CET] <llogan> jpabq: and if nobody ever answers just send the patch to ffmpeg-devel at ffmpeg dot org and list any caveats. even better if you can rebase the patch to the current git master branch and use "git format-patch" to format it properly. at the very worst someone will say no or it may get ignored/forgotten.
[02:30:55 CET] <xtina_> i tried replacing -c:v copy with -vf "setpts=1.00042*PTS" but this shoots my FFMPEG cpu usage from 5% to 80%
[02:31:09 CET] <xtina_> is there any way for me to speed up my video for the stream by a tiny bit?
[02:43:01 CET] <xtina_> alternatively is there any way i can .. drop 1 frame every 2 minutes?
[02:43:26 CET] <furq> did you specify -c:v h264_omx
[02:44:06 CET] <xtina_> yep i did
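xtina_'s "drop 1 frame every 2 minutes" idea could be sketched with the select filter (untested; assumes 25 fps, so 2 minutes is 3000 frames, and note that this path re-encodes the video, which may be too heavy for a Pi even with h264_omx):

```shell
# Keep every frame whose number is not a multiple of 3000, i.e. drop one
# frame per 2 minutes at 25 fps. -vsync vfr stops ffmpeg from re-duplicating
# frames to fill the gap. Filenames are placeholders; requires re-encoding.
ffmpeg -i input.mkv -vf "select='not(eq(mod(n,3000),0))'" -vsync vfr output.mkv
```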
[02:58:35 CET] <zZap-X> how would one know if ones ffmpeg supports hls / aacp ?
[03:22:13 CET] <benschwarz> Does anyone know how I could stick timing onto a video? (in seconds and one decimal place? - eg. 5.7), centered in the middle of the video
[04:46:09 CET] <benschwarz> or where I could start on such an endeavour?
[04:51:19 CET] <furq> !filter drawtext @benschwarz
[04:51:19 CET] <nfobot> benschwarz: http://ffmpeg.org/ffmpeg-filters.html#drawtext-1
[04:56:53 CET] <benschwarz> furq: have you seen this done successfully before?
[05:11:33 CET] <benschwarz> furq: I'm reading the section about text expansion - but I can't see anything about getting a timecode (although, frame number is there)
[05:14:17 CET] <furq> %{expr\:t} or %{pts\:flt} will get you a timecode
[05:14:24 CET] <furq> i don't see a way to truncate it to one decimal place though
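One possible (untested) workaround for a single decimal place is to assemble the timecode from two integer eif expansions, whole seconds plus the tenths digit; font availability and drawtext support depend on how ffmpeg was built:

```shell
# Draw "5.7"-style timing centered in the frame: trunc(t) gives whole
# seconds, mod(trunc(t*10),10) gives the tenths digit. Colons inside the
# %{...} expansions must be escaped as \: for drawtext.
ffmpeg -i input.mp4 \
  -vf "drawtext=text='%{eif\:trunc(t)\:d}.%{eif\:mod(trunc(t*10),10)\:d}':x=(w-tw)/2:y=(h-th)/2" \
  output.mp4
```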
[05:22:24 CET] <benschwarz> furq: Yeah :/
[05:50:39 CET] <benschwarz> furq: I got something that kind of works, but it only draws the filter on each new frame of the video
[05:50:48 CET] <benschwarz> is it possible to have it write new frames?
[05:50:57 CET] <benschwarz> so that the time count is smooth?
[06:35:43 CET] <Aelius> ffmpeg -ss 00:00:25 -i "video.mp4" -to 00:00:34.050 -an mmd.mp4
[06:35:51 CET] <Aelius> why does -to not work?
[06:36:23 CET] <Aelius> not just for that number- it won't work at all
[06:44:57 CET] <lotus> hey there, I'm getting an interesting error from a command I'm running -- stream specifier ":a" matches no streams.  One sec, will gist the command...
[06:45:38 CET] <lotus> https://gist.github.com/chadfurman/8759b32e053d3ad3e121a9651f3d3757
[06:46:26 CET] <lotus> Stream specifier ':a' in filtergraph description ... matches no streams
[06:46:57 CET] <lotus> My understanding is that [0:a] corresponds to the first audio input stream
[06:53:17 CET] <thebombzen_> lotus: 0:a corresponds to all audio input streams for input #0
[06:53:20 CET] <thebombzen_> which is usually only one
[06:53:31 CET] <thebombzen_> but if the first -i input has two audio streams, 0:a will select both of them
[06:53:58 CET] <thebombzen_> and if the first -i input has no audio streams, it won't work
[06:54:31 CET] <thebombzen_> my guess is you're doing ffmpeg -i video_input -i audio_input, in which case 0:a selects no streams, because input number 0 (video_input) has no audio
[06:54:48 CET] <thebombzen_> in which case you want to use "1:a" or just "a"
[06:55:29 CET] <lotus> Hmm interesting
[06:55:36 CET] <lotus> maybe the files are corrupt
[06:55:43 CET] <lotus> I have 8 inputs, all of them opus files
[06:57:05 CET] <lotus> thebombzen_: this is what the command outputs for the metadata of the files:  https://paste.ofcode.org/e6uib79biJE2KUJ8hbfCp9
[06:57:22 CET] <thebombzen_> also keep in mind that if you're using -map don't use square brackets
[06:57:24 CET] <lotus> oh weird, one of them claims to be a video file!
[06:57:37 CET] <lotus> -video.opus
[06:57:41 CET] <lotus> oh man, that's the problem!
[06:57:43 CET] <lotus> Ugh.
[06:57:46 CET] <lotus> Thanks, thebombzen_ !!!
[06:58:15 CET] <thebombzen_> yea it's a bit unintuitive but with -map you just use 0:a and with filtergraphs you'd use "[0:a]"
[06:58:24 CET] <thebombzen_> I agree it's weird, but that's what it is
[06:58:27 CET] <thebombzen_> and you're welcome :)
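The distinction thebombzen_ describes, sketched with placeholder filenames: bracketed labels inside a filtergraph, bare specifiers with -map. The volume filter here is just an arbitrary example of processing the audio input:

```shell
# Input 0 is video-only, input 1 carries the audio. Inside -filter_complex
# the stream is addressed as [1:a] and given the label [aout]; with -map,
# specifiers like 0:v are written without brackets, but a filtergraph
# output label keeps its brackets.
ffmpeg -i video.mp4 -i audio.opus \
  -filter_complex "[1:a]volume=0.8[aout]" \
  -map 0:v -map "[aout]" output.mkv
```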
[07:02:36 CET] <Tatsh> i captured raw video to mkv, and i forgot to specify -pix_fmt; is there a way to force this metadata in?
[07:02:46 CET] <Tatsh> when i try to use my captured data, i get '[matroska,webm @ 0x19284e0] Could not find codec parameters for stream 0 (Video: h264, none(progressive), 640x480): unspecified pixel format'
[07:03:00 CET] <Tatsh> [buffer @ 0x19c3600] Unable to parse option value "-1" as pixel format
[07:43:59 CET] <thebombzen_> Tatsh: you can use the -pixel_format input option to tell the demuxer/decoder to assume it's that pixel format
[07:44:30 CET] <thebombzen_> ffmpeg -pixel_format yuv420p -i input.mkv, that would provide the information
[07:50:14 CET] <Dennis_> Hello, I have a Discord Music Bot running with npm forever to keep it open if it crashes. The problem I have though is that whenever a new song plays in Discord, a command prompt with ffmpeg.exe pops up which is annoying. How can I fix this?
[08:16:03 CET] <Tatsh> thebombzen_, Option pixel_format not found.
[08:16:18 CET] <thebombzen_> try using -pix_fmt then
[08:16:19 CET] <Tatsh> ffmpeg 3.2.4
[08:16:39 CET] <Tatsh> same error
[08:16:51 CET] <Tatsh> the reason is because -pix_fmt/-pixel_format is only for rawvideo
[08:16:59 CET] <thebombzen_> well not only raw video
[08:17:06 CET] <Tatsh> as in `-f rawvideo -pix_fmt yuv420p`
[08:17:13 CET] <Tatsh> when i captured these, i forgot to specify :(
[08:17:13 CET] <thebombzen_> well not quite
[08:17:37 CET] <thebombzen_> I'm actually fairly surprised that you managed to encode an h.264 stream without the muxer embedding the pixel format
[08:17:48 CET] <Tatsh> it has a pixel format of -1
[08:17:54 CET] <thebombzen_> ._.
[08:17:56 CET] <Tatsh> i can't hex edit this in?
[08:18:13 CET] <thebombzen_> how'd you get the h264 embedded into matroska without the encoder/muxer specifying that in the first place?
[08:18:24 CET] <Tatsh> i dunno, ffmpeg did it
[08:18:27 CET] <Tatsh> 3.2.4 built on gentoo
[08:18:42 CET] <thebombzen_> sounds fishy
[08:18:54 CET] <Tatsh> my command is like `ffmpeg -f v4l2 -i /dev/video1 -vf scale=640:480 out.mkv`
[08:19:05 CET] <Tatsh> when it should have had -pix_fmt yuv420p in there
[08:19:09 CET] <thebombzen_> well no
[08:19:33 CET] <thebombzen_> because it's using libx264 as the default, the libx264 encoder has a list of pixel formats it accepts
[08:19:55 CET] <thebombzen_> and if you do not specify one, it will usually default to yuv420p
[08:20:17 CET] <thebombzen_> although in this case it might have just used what you got from /dev/video1, which is possibly yuyv422 instead
[08:20:38 CET] <thebombzen_> either way, did you try extracting the raw h.264 bitstream from the container and then remuxing it?
[08:20:41 CET] <Tatsh> yes
[08:20:51 CET] <thebombzen_> as in ffmpeg -i input.mkv -c copy -f h264 raw.h264
[08:20:53 CET] <thebombzen_> and what did that tell you?
[08:22:10 CET] <thebombzen_> and I still find it hard to believe that a bug that obvious in the matroska muxer would make it into a release like 3.2.4. however if you really want you can try to edit it in with mkvmerge
[08:22:17 CET] <Tatsh> i get a lot of lines like [h264 @ 0x14072e0] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 1610078 >= 1610045
[08:22:17 CET] <thebombzen_> or some similar tool
[08:22:21 CET] <thebombzen_> ignore those
[08:23:06 CET] <Tatsh> once i try the remux, [matroska @ 0xdbb4a0] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[08:23:17 CET] <thebombzen_> well yes
[08:23:24 CET] <Tatsh> ffmpeg -y -fflags +genpts -i track1.h264 -analyzeduration 10000 -probesize 10G -i old-with-audio.mkv -c copy -map 0:0 -map 1:1 -shortest fixed.mkv
[08:23:33 CET] <thebombzen_> well don't do that
[08:24:03 CET] <Tatsh> what's the better way to remux?
[08:24:13 CET] <thebombzen_> just do ffmpeg -i raw.264 -c copy video.mkv
[08:24:38 CET] <thebombzen_> no need to futz with the probesize and analyzeduration
[08:24:46 CET] <thebombzen_> those just tell it to look at more data for the missing data
[08:24:55 CET] <thebombzen_> but it's not that the pixel format is missing and it needs to find it
[08:24:59 CET] <thebombzen_> it's that it's somehow invalid
[08:25:35 CET] <Tatsh> here's the output http://dpaste.com/1TKPW6T
[08:27:17 CET] <Tatsh> or, the command you gave me, it does the same thing
[08:42:41 CET] <thebombzen_> you need to specify the framerate
[08:42:52 CET] <thebombzen_> before -i put -framerate R where R is your framerate
[08:43:07 CET] <thebombzen_> Tatsh: also, consider using -vsync drop
[08:43:33 CET] <thebombzen_> ffmpeg -framerate r -i input.264 -c copy -vsync drop output.mkv
[08:43:35 CET] <Tatsh> what does -vsync drop do?
[08:44:12 CET] <thebombzen_> "As passthrough but destroys all timestamps, making the muxer generate fresh timestamps based on frame-rate."
[08:44:20 CET] <Tatsh> ok
[08:45:33 CET] <Tatsh> for the framerate should that be like 30000/1001 or decimal?
[08:47:22 CET] <thebombzen_> if you know the fraction use that
[08:47:41 CET] <thebombzen_> because internally ffmpeg uses the fraction
[08:47:55 CET] <thebombzen_> if you say "29.97" it'll just convert that internally to 2997/1000
[08:48:04 CET] <thebombzen_> which I assume is not what you're looking for
[08:48:58 CET] <Tatsh> output http://dpaste.com/3EB6S86
[08:49:02 CET] <Tatsh> this is with git master now
[08:49:17 CET] <Tatsh> ffmpeg -y -framerate 30000/1001 -vsync drop -i raw.h264 -c copy -vsync drop output.mkv
[08:51:30 CET] <thebombzen_> try adding -fflags +genpts as an output option
[08:51:37 CET] <thebombzen_> and remove -vsync drop as an input option
[08:51:40 CET] <thebombzen_> that doesn't do anything
[08:54:45 CET] <Tatsh> nah, same error
[08:54:53 CET] <Tatsh> i was able to re-create a MKV with mkvmerge
[08:55:01 CET] <Tatsh> and ffmpeg accepts this as input
[08:55:38 CET] <thebombzen_> well there you go
[08:56:01 CET] <thebombzen_> I just find it weird that you got that corrupt file in the first place
[09:00:46 CET] <Tatsh> all because lack of `-pixel_format yuv420p` when capturing
[09:00:52 CET] <Tatsh> i forgot :/
[09:03:27 CET] <thebombzen_> well that shouldn't matter
[09:03:35 CET] <thebombzen_> libx264 knew the pixel format when it encoded
[09:03:47 CET] <thebombzen_> it knew the pixel format
[09:03:51 CET] <thebombzen_> you didn't need to tell it just in case
[09:04:01 CET] <thebombzen_> because in order to encode it, the encoder had to know
[09:04:51 CET] <thebombzen_> so you're saying that in ffmpeg.c's single execution, libavcodec told libx264 what the pixel format was, but libavformat's matroska muxer didn't write that
[09:04:54 CET] <thebombzen_> that seems very strange
[09:05:44 CET] <thebombzen_> if that's true, you forgetting -pixel_format yuv420p as an input option led to a corrupt muxed file, even though it correctly guessed the pixel format
[09:05:49 CET] <thebombzen_> that's a bug
[09:05:55 CET] <thebombzen_> that's not your fault, if it's the case
[09:05:58 CET] <thebombzen_> it's called a regression
[09:06:40 CET] <Tatsh> only problem with that mkvmerge method
[09:06:45 CET] <Tatsh> lost a/v sync
[09:06:51 CET] <Tatsh> i guess i'm going to just recapture this stuff
[09:08:37 CET] <thebombzen_> Tatsh: you should have gotten this warning when you forgot to specify:
[09:08:39 CET] <thebombzen_> No pixel format specified, yuv422p for H.264 encoding chosen.
[09:08:39 CET] <thebombzen_> Use -pix_fmt yuv420p for compatibility with outdated media players.
[09:08:42 CET] <thebombzen_> or something like that
[09:08:53 CET] <Tatsh> i probably did but didn't spot it
[09:09:04 CET] <thebombzen_> well note that it autoselected the pixel format
[09:09:11 CET] <thebombzen_> so writing it as "-1" won't happen
[09:10:50 CET] <thebombzen_> and also, when I ran "ffmpeg -f v4l2 -i /dev/video0 -y test.mkv" I didn't get the same error
[09:11:34 CET] <thebombzen_> so either you've got a bugged version of ffmpeg which is weird, because I ran that with 3.2.4, or you're not being truthful
[09:12:01 CET] <thebombzen_> because when I tried on 3.2.4: "ffmpeg -f v4l2 -i /dev/video0 -y test.mkv" I could not reproduce
[09:45:17 CET] <forgon> ffmpeg -i input.mkv -ss 00:05 -to 00:10 part1.mkv -ss 00:10 -to 00:14 part2.mkv -ss 00:14 -to 00:27 part3.mkv
[09:45:49 CET] <forgon> Is there a way to write this with less repetition?
[09:46:35 CET] <forgon> Also, I want to use the same -vcodec and -acodec for each output file.
[09:46:43 CET] <forgon> Do I have to state every single one of them?
[09:55:23 CET] <thebombzen_> forgon: what are you trying to do?
[09:57:06 CET] <thebombzen_> if you're trying to split the file temporally, with uneven segments, then it's not really going to be easy to do that
[09:57:20 CET] <thebombzen_> the easiest way I can think of is something like
[09:57:47 CET] <forgon> Splitting temporally is the best I can do; I don't know how to split the frames better than FFmpeg does.
[09:57:56 CET] <thebombzen_> what are you actually trying to do
[09:58:11 CET] <thebombzen_> because if you're actually trying to take the first 5 seconds, then the next four seconds, and the following 13 seconds
[09:58:19 CET] <thebombzen_> that's irregular enough that what you have is pretty short
[09:58:34 CET] <forgon> Yes, it's irregular
[09:58:40 CET] <thebombzen_> although it'll be faster if you use three commands
[09:58:51 CET] <forgon> thebombzen: Significantly faster?
[09:58:58 CET] <thebombzen_> for 13 seconds, no
[09:59:15 CET] <thebombzen_> you just won't have to decode the file several times
[09:59:39 CET] <thebombzen_> but either way, if you want to do that, 5 seconds, then 4, then 13, what you have isn't repetitive at all.
[10:00:18 CET] <forgon> So I run three separate ffmpeg commands, correct?
[10:00:33 CET] <thebombzen_> well that won't make it less redundant
[10:00:39 CET] <thebombzen_> if your goal is to make the command shorter, you're out of luck
[10:00:44 CET] <thebombzen_> cause that's already really short lol
[10:01:24 CET] <thebombzen_> what you have there is really irregular
[10:01:43 CET] <thebombzen_> if you want to do weird things like have uneven chunks, then what you have there is really good
[10:01:51 CET] <thebombzen_> if you want to make the chunks even, then it is easier
[10:02:20 CET] <forgon> Another question
[10:02:55 CET] <forgon> If I use '-vcodec copy and -acodec copy' or, to the same effect '-c copy' my file gets corrupted.
[10:03:35 CET] <forgon> I receive an error message 'Could not find codec parameters for stream 0 (Video: h264, none(progressive), 1280x800): unspecified pixel format'
[10:03:59 CET] <forgon> Why would that happen?
[10:04:08 CET] <forgon> Does copying skip reencoding?
[10:04:09 CET] <thebombzen_> because you can't codec copy and truncate the video at the same time
[10:04:15 CET] <thebombzen_> yes copying skips re-encoding
[10:04:20 CET] <thebombzen_> that's literally the whole point of it
[10:04:33 CET] <thebombzen_> is that -c copy doesn't decode/encode
[10:04:50 CET] <forgon> And I foolishly trusted the Archwiki!
[10:04:50 CET] <thebombzen_> which means that if you try to do anything interesting to the video stream you should not expect it to work
[10:05:03 CET] <thebombzen_> don't trust archwiki for ffmpeg
[10:05:08 CET] <thebombzen_> it's good for actual system components
[10:05:19 CET] <thebombzen_> but nobody's bothered to update it for ffmpeg
[10:06:12 CET] <Tatsh> well
[10:06:24 CET] <Tatsh> the bug i had before might be caused by cutting an mkv with -ss -t -codec copy
[10:06:34 CET] <Tatsh> not sure yet but it seems likely
[10:07:20 CET] <Tatsh> well, no; other mkvs i cut are fine
[10:07:22 CET] <forgon> Tatsh: Just cut 3 seconds or so with different settings and run ffprobe.
[10:07:26 CET] <thebombzen_> well
[10:07:41 CET] <thebombzen_> if you try to cut encoded video with -ss or -t, and -c copy, then you should not expect it to work
[10:08:00 CET] <thebombzen_> in general, if you do anything interesting to the stream at the same time as -c copy, you should not expect it to work
[10:08:08 CET] <thebombzen_> there are some exceptions but those are known
[10:08:08 CET] <forgon> I only found out something was wrong when YouTube could not generate proper thumbnails.
[10:08:19 CET] <Tatsh> the point is to cut at the nearest keyframe thebombzen_
[10:08:24 CET] <Tatsh> and have a lossless cut
[10:08:42 CET] <Tatsh> nobody should expect -vf or -af to work there
[10:08:53 CET] <forgon> Tatsh: So just repeat your original codecs?
[10:09:03 CET] <Tatsh> (except -af pan= stuff)
[10:10:12 CET] <Tatsh> btw forgon on your command the times you specify won't work
[10:10:12 CET] <thebombzen_> try using -noaccurate_seek -ss foo -t bar -i input
[10:10:33 CET] <thebombzen_> Tatsh: yes they will
[10:11:00 CET] <Tatsh> they will work but he might get 00:09.566 for 00:10
[10:11:22 CET] <thebombzen_> yes, that's what happens if you try to skip to the keyframe
[10:11:56 CET] <thebombzen_> the keyframe interval is probably 250 frames
[10:12:03 CET] <thebombzen_> so 9.566 seconds is generous
[10:12:21 CET] <forgon> In practice my results were much closer iirc.
[10:12:43 CET] <Tatsh> i'm usually a few seconds behind
[10:12:51 CET] <Tatsh> and then the time to start gets written into the container
[10:12:51 CET] <forgon> 40 fps.
[10:13:00 CET] <forgon> But let me see whether I can get more precise.
[10:13:04 CET] <thebombzen_> the keyframe interval of 250 frames? that's like six seconds
[10:13:21 CET] <thebombzen_> you should not expect to be able to extract 4 seconds without re-encoding
[10:13:30 CET] <thebombzen_> given that you could go four seconds without a keyframe
[10:13:46 CET] <thebombzen_> if you're trying to extract something on a seconds-basis, don't bother
[10:13:53 CET] <Tatsh> at that point make a gif :)
[10:13:58 CET] <thebombzen_> no
[10:14:10 CET] <thebombzen_> only make a gif if that's what you want lol
[10:14:27 CET] <thebombzen_> forgon: why do you want those strange intervals anyway
[10:14:31 CET] <thebombzen_> are you actually trying to make a gif
[10:20:00 CET] <forgon> thebombzen: I have a game recording, a limit of 15 minutes per video and both start and end need to be trimmed.
[10:24:27 CET] <llamapixel> At some stage you will want to edit up the content forgon so something like virtual dub might be easier.
[10:25:49 CET] <llamapixel> Avidemux might be better if you are on linux based systems. http://alternativeto.net/software/virtualdub/
[10:30:33 CET] <forgon> thebombzen: I get a syntax error for your suggestion, apparently -i needs to precede -ss
[10:32:43 CET] <thebombzen_> forgon: no it doesn't
[10:32:58 CET] <thebombzen_> ffmpeg -ss 5 -i input
[10:33:00 CET] <thebombzen_> that will work
[10:33:53 CET] <thebombzen_> also forgon if you're trying to trim a game recording down to 15 minutes
[10:34:05 CET] <thebombzen_> why are you chunking off 5 seconds, then 4 seconds, then 13 seconds
[10:34:11 CET] <thebombzen_> you still haven't answered what you want to do
[10:36:26 CET] <thebombzen_> if you have a huge game recording and want to take 15 minutes from, say, an hour and a half in, you can do something like:
[10:36:46 CET] <forgon> thebombzen: I have several videos.
[10:37:26 CET] <thebombzen_> sure, but you still haven't answered why you want to take a 22-second segment five seconds in and break it into three parts, one which is 5s, one which is 4s, and one which is 13s
[10:37:30 CET] <forgon> I just prefer cutting at 13:52 instead of 15:00 if it makes more sense for the "plot".
[10:38:02 CET] <thebombzen_> again, what does that have anything to do with the 22-second divided into three unequal chunks
[10:38:16 CET] <thebombzen_> five seconds in
[10:38:34 CET] <forgon> It's an example.
[10:38:53 CET] <thebombzen_> oh so it's not at all related to what you want to do
[10:39:05 CET] <thebombzen_> protip: ask what you want to do, not something that's supposedly demonstrative of your issue
[10:39:07 CET] <thebombzen_> because it's not
[10:39:12 CET] <thebombzen_> what are you actually trying to do
[10:40:05 CET] <forgon> First I'll figure out what exactly this -noaccurate_seek does.
[10:40:27 CET] <thebombzen_> forgon: no, first you should say what you actually want to do
[10:40:50 CET] <thebombzen_> are you trying to chunk a large (hour+) gameplay recording into approximately fifteen minute segments?
[10:41:23 CET] <thebombzen_> back up, because if you try to get too absorbed in the details there might be an easy fix outside of the details you're looking at
[10:41:35 CET] <thebombzen_> it's possible that -noaccurate_seek is irrelevant
[10:42:01 CET] <forgon> thebombzen: I want to cut into parts roughly 15 minutes long, according to what I see.
[10:42:08 CET] <thebombzen_> what does that mean
[10:42:12 CET] <thebombzen_> "according to what I see"
[10:42:49 CET] <forgon> I have about 1:06 hours footage. If a big battle starts after 14 minutes, I put it in video 2 even if each video is about 15 minutes long.
[10:43:28 CET] <thebombzen_> well try using the segment muxer
[10:43:31 CET] <thebombzen_> and just tweak it
[10:43:44 CET] <forgon> Mmh?
[10:43:47 CET] <thebombzen_> if you have an hour and six minutes, you can do that manually
[10:44:10 CET] <thebombzen_> the segment muxer will split the output into approximately equal times but cut on keyframes
[10:46:23 CET] <thebombzen_> so if you run ffmpeg -i recording.mkv -c copy -f segment -segment_time 15:00 part%03d.mkv
[10:46:41 CET] <thebombzen_> that will cut it into 15 minute pieces (or close to 15 minutes)
[10:46:57 CET] <jaggz> how do I make 3 images into a video?
[10:47:01 CET] <jaggz> they're not named sequentially
[10:47:05 CET] <thebombzen_> depends on how
[10:47:13 CET] <jaggz> just foo.png bar.png am.png
[10:47:23 CET] <thebombzen_> jaggz: do you want a slideshow of them?
[10:47:34 CET] <jaggz> I'm not 100% settled.. was thinking an mp4
[10:47:39 CET] <jaggz> slowly playing the frames over time
[10:47:50 CET] <thebombzen_> well the easiest way to do that is to rename them sequentially
[10:47:51 CET] <jaggz> something I can put in an email
[10:48:02 CET] <thebombzen_> if you want to email it to someone then don't email them an mp4 video
[10:48:10 CET] <jaggz> it's for a doctor's ease of use
[10:48:18 CET] <jaggz> could maybe do gif anim
[10:48:24 CET] <thebombzen_> well if they really care then do something like
[10:49:03 CET] <thebombzen_> ffmpeg -framerate 1 -i foo.png -framerate 1 -i bar.png -framerate 1 -i baz.png -lavfi concat=n=3:v=1:a=0 slideshow.mp4
[10:49:13 CET] <thebombzen_> but the slideshow probably won't autoloop unless their player wants it to
[10:49:39 CET] <thebombzen_> although to be honest, it's far easier to just name them sequentially
[10:49:43 CET] <thebombzen_> another option is to concatenate them
[10:49:51 CET] <jaggz> oh interesting.. framerate before each -i
[10:49:58 CET] <jaggz> I named them sequentially now
[10:50:04 CET] <thebombzen_> well it'll assume they're 25 fps unless you specify otherwise
[10:50:17 CET] <thebombzen_> if they're named 1.png 2.png 3.png (etc)
[10:50:37 CET] <thebombzen_> you can do: ffmpeg -framerate 1 -i %d.png output.mp4
[10:50:44 CET] <thebombzen_> you can also concatenate them with "cat"
[10:50:45 CET] <thebombzen_> like
[10:50:59 CET] <thebombzen_> cat foo.png bar.png baz.png | ffmpeg -f image2pipe -framerate 1 -i - output.mp4
[10:51:28 CET] <jaggz> I didn't know you could do that!
[10:51:34 CET] <thebombzen_> that doesn't require you to name them sequentially but it does require you to have a shell to set up pipes and stuff
[10:51:40 CET] <jaggz> (makes sense for these formats with headers and lengths I guess :)
[10:52:30 CET] <thebombzen_> yea most image formats let you know where it will end from the file format
[10:52:35 CET] <thebombzen_> if they don't then it won't work with image2pipe
[10:52:47 CET] <thebombzen_> but jpeg and png do, so what more do you need
[10:52:57 CET] <thebombzen_> now if you want it to loop, sending them an mp4 video won't do that
[10:53:24 CET] <thebombzen_> but you can consider uploading it to a website like http://gfycat.com/ and sending them the links
[10:53:35 CET] <thebombzen_> although tbh that website is not very professional
[10:53:37 CET] <thebombzen_> but it's an example
[11:09:53 CET] <thebombzen_> forgon: you also can specify times manually
[11:10:04 CET] <thebombzen_> given that it's an hour of footage and you only need to cut it into five or so chunks
[11:10:23 CET] <forgon> thebombzen: Yep, five chunks.
[11:10:29 CET] <thebombzen_> you can also specify the times you want to cut it
[11:10:38 CET] <thebombzen_> with -segment_times (comma-separated)
[11:10:49 CET] <forgon> I experimented with -noaccurate_seek, it seems to make no difference
[11:11:52 CET] <thebombzen_> well yea
[11:12:03 CET] <thebombzen_> there's a reason I said "say what you're trying to do"
[11:12:14 CET] <thebombzen_> because I could have told you that -noaccurate_seek is implied by -c copy
[11:12:24 CET] <thebombzen_> it's also written in the documentation
[11:12:27 CET] <thebombzen_> see https://ffmpeg.org/ffmpeg-formats.html#segment_002c-stream_005fsegment_002c-ssegment
[11:13:57 CET] <thebombzen_> but, say if you want to cut it at 14:00, 28:30, 42:00 and 55:00
[11:14:34 CET] <thebombzen_> you could do ffmpeg -i recording.mkv -c copy -f segment -segment_times 14:00,28:30,42:00,55:00 out%03d.mkv
[11:15:09 CET] <thebombzen_> forgon: that would cut the input times (approximately) there. it will actually cut it not at those times but at the first keyframe after those times
[11:15:39 CET] <thebombzen_> which should be less than 10 seconds, unless your framerate is very low
[11:16:15 CET] <thebombzen_> if it's too late you can nudge it back a bit, this isn't automated or anything
[11:17:13 CET] <thebombzen_> when someone is asking "what are you actually trying to do" they say that because there's frequently a way to do what you want without answering your exact question
[11:17:24 CET] <thebombzen_> your question about -ss and -to involves re-encoding, which is really not what you want to do here
[11:17:37 CET] <thebombzen_> that's why I was pushing so hard on "what do you actually want"
[11:18:24 CET] <thebombzen_> because there was an easy solution to your problem, but it wasn't a solution to the question  you were asking
[11:18:30 CET] <thebombzen_> it was a solution to the actual problem you were trying to solve
[11:19:06 CET] <thebombzen_> anyway I really have to get sleep lol but I hope that helped
[11:26:47 CET] <jaggz> Zen, all done and sent to doc :)
[11:43:02 CET] <jaggz> thebombzen, thanks a bunch :)
[12:15:13 CET] <Phrk_> Hello, i'm trying to use libssh inside ffmpeg. The doc says "This protocol accepts the following options"; how do i specify an option on the url?
[12:15:29 CET] <Phrk_> i tried adding -private_key=xx but it doesn't work
[12:19:00 CET] <dreampeppers99> Hi there, how are you? I'm writing up an intro-level documentation for digital video developers https://github.com/leandromoreira/digital_video_introduction#temporal-redundancy-inter-prediction in which I use lots of hands-on, most of them using ffmpeg. Thanks for this amazing tool. I'm using ffmpeg for everything from the usual encoding tasks (transrating, transmuxing and etc) to debug codec (showing inter and intra prediction
[12:19:33 CET] <dreampeppers99> Is it possible to generate a video with all the macroblock decisions (a grid-like overlay)?
[12:20:51 CET] <dreampeppers99> 2) Is it possible to gather the bitrate used throughout the entire video? (in order to build something like a bitrate viewer)
[12:48:25 CET] <dongs> dreampeppers99: i think you mean motion vector?
[12:48:36 CET] <dongs> also, half your shit got cut off by irc
[12:48:38 CET] <dongs> get a better client
[12:48:50 CET] <dongs> this is not blogspot.com, lines are limited to 500-something chars
[12:49:38 CET] <dreampeppers99> @dongs thanks , for the motion vector I could generate a video with arrows pointing to the MV
[12:50:15 CET] <dreampeppers99> For the macroblocks predictions too, I get all this from https://trac.ffmpeg.org/wiki/Debug/MacroblocksAndMotionVectors
[12:50:52 CET] <dreampeppers99> but it would be very nice if I could generate a video with grid-like style to represent the partitions decisions
[12:51:50 CET] <dreampeppers99> I'm writing this article to introduce video technology for developers https://github.com/leandromoreira/digital_video_introduction#1st-step---picture-partitioning
[12:52:49 CET] <dreampeppers99> and most of the time I can show a hands-on with ffmpeg, only on this case (until now) where I want to show grid-like view for partitions decisions I couldn't!
[12:53:15 CET] <dreampeppers99> @dongs thanks, I hope all the text was sent now.
[13:18:39 CET] <hron84> Hi! I have a problem with streaming RTMP with ffmpeg. If I stream a video, it randomly cuts the last few seconds (terminates the video before the end) and starts the next one. I use the -re switch when playing a video; I am not totally sure whether it can cause this issue.
[13:19:32 CET] <hron84> The main problem is I frequently stream 20-30 sec advertisement/trailer spots too, and technically ffmpeg cuts off half of the content.
[14:31:57 CET] <forgon> Which signal is analogous to pressing 'q' to stop ffmpeg recording?
[14:34:42 CET] <c_14> INT or TERM
[16:16:41 CET] <rebel2234> so I have a Hauppauge ATSC card for receiving OTA television broadcasts.  I am wondering what tool/program I can use to select a channel/frequency and use after the -i switch in ffmpeg.  Any ideas?
[16:19:44 CET] <bencoh> rebel2234: well tbh I'd rather use dvblast with your dvb device and make it output the stream to a localbound multicast address, but ...
[16:21:09 CET] <mdavis> I'm back with more Cygwin silliness!
[16:23:22 CET] <rebel2234> bencoh: I need to be able to pipe the ota channel into ffmpeg and then compress it with h264 and ship it over a slow link.
[16:25:54 CET] <mdavis> The configure script's Cygwin settings specify "-D_POSIX_C_SOURCE=200112". Is there any problem with changing that to 200809?
[16:27:10 CET] <bencoh> rebel2234: sure, but this doesn't mean everything has to be done by ffmpeg
[16:31:19 CET] <rebel2234> bencoh: I'm trying to build it into Tvheadend muxes so that I can have a command that encodes (h264) a particular channel, then a user can switch channels.
[16:39:02 CET] <rebel2234> so essentially each mux would look something *like* this in tvh: pipe:///usr/bin/ffmpeg -loglevel fatal -i /dev/dvb/adapter0/dvr0 -vcodec libx264 -preset superfast -crf 28 -maxrate 2600k -bufsize 3500k -vf yadif=0,scale=-2:720 -acodec aac -b:a 128K -f mpegts pipe:1
[16:40:07 CET] <rebel2234> it's the -i /something/something that needs the ability to tune to a specified frequency on a Hauppauge card
[16:49:21 CET] <m_rch> Hi everyone! Just looking for some clarification on direction. I'm attempting to redact some PII from audio files. I have the original full audio file, a list of timestamps that should be redacted, and ffmpeg ;). My thought is that I could use aevalsrc to generate a file which is a series of beeps occurring between the list of timestamps then merge that file with the original. I don't know a whole hell of a lot about audio codecs - would I first need to mute
[16:49:21 CET] <m_rch> same timestamps in my original file before the merge? Or can I effectively tell the file of beeps to write on top of the original file? Further, I think I'm at a bit of a loss as to how to generate a series of beeps. I've used "aevalsrc="sin(444*2*PI*t):s=8000":d=30" to create a reasonable sounding 30 second beep, but I'm not sure how to do the same with dead air between the beeps - so clarification on that would be super helpful as well. Thanks!
[16:50:22 CET] <mdavis> I think your message got chopped off a bit
[16:50:32 CET] <m_rch> Ah, where at? I'll get the rest in.
[16:50:51 CET] <mdavis> Would I first need to mute ... same timestamps in my original
[16:51:07 CET] <m_rch>  file before the merge? Or can I effectively tell the file of beeps to write on top of the original file? Further, I think I'm at a bit of a loss as to how to generate a series of beeps. I've used "aevalsrc="sin(444*2*PI*t):s=8000":d=30" to create a reasonable sounding 30 second beep, but I'm not sure how to do the same with dead air between the beeps - so clarification on that would be super helpful as well. Thanks!
[16:51:23 CET] <m_rch> That do?
[16:51:47 CET] <mdavis> The ... is what I think got cut off
[16:51:52 CET] <m_rch> oh weird
[16:52:12 CET] <m_rch> "would I first need to mute the same timestamps in my original file before the merge?"
[16:52:33 CET] <mdavis> Oh OK, my bad.
[16:53:23 CET] <m_rch> I just mean I'm not sure if it would end up muddled, I basically need the PII - social security numbers, things like that, to be completely 0d out from the file. Or at least scrambled so badly they can't be understood.
[16:54:24 CET] <kepstin> right, adding a tone over them won't do that, you'll really want to mute them in the file.
[16:54:30 CET] <mdavis> What you might want to do is segment up your audio around those timestamps, then concat everything back together but with beeps in place of the PII
[16:55:07 CET] <kepstin> or, really, just open the audio in audacity, select the bits at the timestamps you want, and hit 'delete' :/
[16:55:24 CET] <mdavis> That would honestly be a lot easier, imo
[16:55:26 CET] <m_rch> Not an option unfortunately kepstin, we're talking millions of calls :(
[16:55:52 CET] <mdavis> oh. You have timestamps for all of those, though?
[16:55:53 CET] <kepstin> how are you figuring out which section of the audio you need to remove?
[16:56:02 CET] <m_rch> I have timestamps from transcriptions yeah
[16:56:37 CET] <m_rch> I've got the logistical bits worked out, I've just never worked with audio before
[16:57:20 CET] <mdavis> I've seen references in the docs to filter timelines, kepstin do you know anything about that?
[16:58:04 CET] <kepstin> mdavis: some filters have a parameter named 'enable' that takes an expression, which can calculate whether to enable/disable the filter based on time
[16:58:09 CET] <kepstin> not all filters support that
[16:58:23 CET] <m_rch> mm, yeah, I had been using something like "volume=enable='between(t,5,25):volume=0'"
[16:58:31 CET] <m_rch> to mute the file at certain points
[16:58:50 CET] <kepstin> m_rch: yeah, that's probably about as good as you'll get with ffmpeg
[16:58:50 CET] <mdavis> m_rch: If you can write a script to generate that enable expression, that might work
[16:58:53 CET] <m_rch> I guess I could just generate one long file of beeps
[16:59:00 CET] <m_rch> and then run that same command as the inverse
[16:59:07 CET] <m_rch> to mute the beeps where there should be no beeps
[16:59:09 CET] <mdavis> Do you really need beeps?
[16:59:10 CET] <m_rch> and then merge the two?
[16:59:15 CET] <m_rch> yes, unfortunately
[16:59:17 CET] <m_rch> business directive
[16:59:30 CET] <mdavis> That complicates things
[16:59:55 CET] <m_rch> I know, that's how I ended up here haha
[17:00:07 CET] <kepstin> m_rch: you might also consider using sox, it has a fairly good filter system that lets you specify timelines, and it's more suited to audio stuff
[17:00:22 CET] <m_rch> Yeah, my next question was going to be if there was something else I should look at
[17:00:28 CET] <m_rch> I'll take a look at sox, thanks
[17:01:46 CET] <mdavis> If you have a method to generate the 'enable' expression, you could have two streams. One is your original file, one is aevalsrc. Then you have them both with a volume filter, where one has the same enable expression, but negated
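The two-stream approach mdavis describes can be scripted: build one timeline expression from the per-file (start, end) pairs, mute the original inside it, and mute the beep source outside it. A sketch (the spans below are made-up examples; ffmpeg's expression language provides between() and not()):

```python
# Generate the paired 'enable' timeline expressions from a list of
# (start, end) second pairs.
def mute_expr(spans):
    # between(t,a,b) is 1 inside the span; summing ORs the spans together
    return "+".join(f"between(t,{a},{b})" for a, b in spans)

spans = [(5, 25), (40, 42)]                  # hypothetical PII spans
enable = mute_expr(spans)
# Silence the original wherever PII occurs:
original_filter = f"volume=enable='{enable}':volume=0"
# Silence the beep source everywhere else (not() negates the expression):
beep_filter = f"volume=enable='not({enable})':volume=0"
print(original_filter)
print(beep_filter)
```

The two filtered streams would then be mixed (e.g. with amix) so beeps land exactly over the muted spans.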
[17:02:08 CET] <mdavis> This is kinda getting into https://xkcd.com/378/ territory
[17:02:12 CET] <m_rch> yeah I can generate that
[17:02:40 CET] <kepstin> m_rch: you could also do something with ffmpeg using the 'aselect' filter, having two inputs - the original file and a solid tone. It then picks between the two inputs at any given time.
[17:02:41 CET] <m_rch> was my only thought given that not all of the filters use enable
[17:02:55 CET] <m_rch> hm, that sounds like something that might work
[17:03:04 CET] <mdavis> Yeah, forgot about aselect
[17:03:20 CET] <kepstin> er, no, select splits between multiple outputs
[17:03:24 CET] <kepstin> i'm getting all mixed up
[17:03:56 CET] <mdavis> So then, something like:
[17:04:27 CET] <kepstin> aselect isn't what you want, I had it backwards :/
[17:04:45 CET] <m_rch> is there anything about
[17:04:51 CET] <m_rch> generating a 20 minute long beep sound
[17:04:55 CET] <m_rch> for a 20 minute long call
[17:05:11 CET] <m_rch> then muting the call where the beep should be, muting the beep where the call audio should be, and merging the two?
[17:05:21 CET] <m_rch> other than it being a horrendously complicated and stupid sounding thing?
[17:05:42 CET] <mdavis> m_rch: It will, no matter what, be horrendously complicated and stupid sounding
[17:06:00 CET] <mdavis> ffmpeg really isn't ideal for this kind of work.
[17:07:04 CET] <m_rch> what drew me to it is that it supports basically every codec im going to encounter out of the box
[17:07:23 CET] <m_rch> because I'm working in a minefield of streams from a ton of different disparate sources
[17:08:15 CET] <m_rch> I'd rather have to do this once for everything than over and over whenever i run into an unsupported file in this pile of data
[17:08:34 CET] <mdavis> Yes, it does that well. Maybe you could use ffmpeg to transcode to a low-loss, intermediate format, then use other tools to do the actual editing work?
[17:09:59 CET] <m_rch> not a bad idea
[17:10:06 CET] <m_rch> im poking around the sox docs right now
[17:17:21 CET] <madno> Hello. Does ffmpeg support recording a desktop using .gif format? I mean, of course it does, but the generated files are huge. Around 30-40 MB for 3 seconds with just 10 frames.
[17:18:25 CET] <durandal_1707> are you serious?
[17:19:05 CET] <kepstin> madno: yes, gif is a really inefficient format that will give you huge files.
[17:19:32 CET] <kepstin> you're probably better off encoding an h264 mp4 if you want to play it in a browser...
[17:20:10 CET] <chuckleplant> Hi, I cross-compiled ffmpeg from ubuntu 14.04 for Windows, with cuda & nvenc support. What troubles me is that I didn't install cuda at all... not really sure how it was built and I'm really curious. Do you know if ubuntu has cuda pre-installed?
[17:20:13 CET] <jkqxz> gif is not made as a video codec at all - if you want to make the output smaller, you need to make your desktop more friendly to it.
[17:20:15 CET] <madno> But I mean, other solutions out in the wild give a smaller GIF size. Very much smaller.
[17:20:16 CET] <jkqxz> That means a minimal number of distinct colours and no shading on anything.
[17:20:37 CET] <madno> And I was just recording a small area of the screen, like a 300x300 box.
[17:24:33 CET] <kepstin> chuckleplant: I think it loads the libraries to use nvenc at runtime, so they don't actually need to be on the build system
[17:24:35 CET] <jkqxz> chuckleplant:  ffmpeg includes the cuda headers and auto-enables it in every build; it then dynamically loads the libraries at runtime when you try to use it (or fails if they aren't there).
[17:25:40 CET] <chuckleplant> That makes a lot of sense; thing is, the Nvidia documentation https://developer.nvidia.com/ffmpeg points out that you need to specify the libraries... I guess this is outdated then?
[17:32:44 CET] <xtina> hey guys, i'm not sure if this is the right place to ask this but
[17:32:50 CET] <xtina> i'm recording audio with Sox that is 1.0004x too fast
[17:32:58 CET] <xtina> i'm passing it to Ffmpeg for streaming
[17:33:06 CET] <xtina> passing it as mp3
[17:33:12 CET] <xtina> i'd like Ffmpeg to resample it at 0.9996x of the speed
[17:33:14 CET] <furq> what was wrong with arecord
[17:33:24 CET] <xtina> furq: arecord does not encode
[17:33:30 CET] <xtina> sox is capable of recording+encoding in one step
[17:33:39 CET] <furq> but you're passing it to ffmpeg anyway
[17:33:40 CET] <xtina> with arecord i had to pass it to LAME for encoding and that passing overruns the arecord alsa buffer
[17:33:47 CET] <xtina> i'm passing it to ffmpeg as a mp3, it's tiny
[17:33:57 CET] <xtina> i can't pass raw audio in realtime, it's too big
[17:34:57 CET] <xtina> sox is recording at 44100 Hz; i'd like to resample at 44082 Hz, encode to mp3, then pass to ffmpeg
[17:35:02 CET] <xtina> but mp3 only encodes at fixed sample rates
[17:35:07 CET] <xtina> so i can't do this. does anyone have other ideas?
[17:38:05 CET] <xtina> i know Sox has a tempo effect that can change the speed but it uses a complex algo and i don't want to do that processing
[17:40:12 CET] <xtina> there has to be a way to resample or playback at 0.9996x speed without reprocessing the audio, right?
[17:41:17 CET] <DHE> you could drop frames. you'll get little audio pops every now and then. they would likely be audible
[17:42:57 CET] <xtina> DHE: dropping frames would be great
[17:43:04 CET] <xtina> do you know how?
[17:43:17 CET] <xtina> if i could drop 1 audio frame every 2 minutes i would be golden
[17:44:05 CET] <xtina> DHE: i tried setpts but it uses SO much CPU
[17:44:51 CET] <DHE> hmm... come to think of it I'm not actually sure how. it's not done nearly as often as framerate adjustments
[17:45:30 CET] <xtina> hmm
[17:45:45 CET] <xtina> well, it is possible for me to read in the audio at 0.9996x framerate
[17:45:57 CET] <xtina> like, read it in at 44082 Hz..?
[17:46:59 CET] <DHE> I'm pretty sure the sample rate is part of the encoded output and can't be so easily changed
[17:47:20 CET] <DHE> clearly I've gotten myself in over my head so I'm going to stop here.
[17:49:49 CET] <xtina> DHE: no proble
[17:49:52 CET] <xtina> problem
[17:49:53 CET] <xtina> :)
[17:50:05 CET] <xtina> does anyone else know how i can resample audio at 0.9996x speed in ffmpeg or any other audio library?
[17:50:12 CET] <xtina> (mp3)
[17:50:42 CET] <atomnuker> libswresample will do it
[17:51:14 CET] <DHE> without reprocessing the audio? that's the tricky bit.  if you're okay with reencoding, it's quite easy
[17:51:24 CET] <xtina> definitely without reprocessing the audio
[17:51:29 CET] <xtina> i'm on a Pi Zero, i can't afford more processing
[17:51:39 CET] <xtina> unless it's *really* computationally simple
[17:52:06 CET] <xtina> i can try re-encoding the audio, but idk if the Zero can handle that :(
[17:52:35 CET] <DHE> I suspect it could. though with high-quality resampling algorithms, that's another concern
[17:52:59 CET] <xtina> quality's not a concern at all really
[17:53:04 CET] <atomnuker> libspeex then
[17:53:06 CET] <xtina> as long as the audio doesn't turn into total garbage, i'm happy
[17:53:25 CET] <xtina> i tried re-encoding video (with setpts=1.0006x*PTS) and it totally froze up my Pi
[17:53:28 CET] <xtina> maybe audio is easier?
[17:54:16 CET] <bencoh> it should be
[17:54:23 CET] <xtina> atomnuker: libspeex or libswresample?
[17:54:51 CET] <bencoh> speex is a codec, swresample is a resampling lib
[17:55:51 CET] <xtina> if spx is a format, i can't stream anything but mp3 or aac
[17:56:28 CET] <xtina> i will try compiling ffmpeg with libswresample
[17:56:44 CET] <xtina> i'm passing ffmpeg mp3, so i'd have to decode mp3, resample at 0.9996x, then re-encode
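One cheap way to express that chain in ffmpeg is to relabel the sample rate with asetrate (nearly free, just retags the samples so they play 0.9996x slower) and then aresample back to a rate mp3 supports. A sketch only; the 0.9996 factor and the 44100 Hz nominal rate are the figures from this discussion:

```python
# Slow audio by a constant factor without a tempo algorithm:
# asetrate relabels the stream's sample rate (cheap), aresample then
# converts back to the nominal rate for the mp3 encoder.
nominal = 44100
factor = 0.9996                       # audio clock runs ~1.0004x fast
relabeled = round(nominal * factor)   # 44082, matching the rate above
af = f"asetrate={relabeled},aresample={nominal}"
print(af)   # asetrate=44082,aresample=44100
```

The aresample step still does real resampling, so it costs some CPU, but it avoids the heavier time-stretching that tempo effects perform.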
[17:58:29 CET] <xtina> i still don't understand why i can't resample in Sox though
[17:58:36 CET] <xtina> if i can't resample in Sox, why could i resample with ffmpeg
[17:59:14 CET] <atomnuker> no, libspeex has a resampling library which is very simple
[17:59:28 CET] <kepstin> xtina: you can resample in sox... you probably just want to use the "speed" effect, then it'll automatically resample as needed
[17:59:34 CET] <atomnuker> https://speex.org/docs/manual/speex-manual/node7.html#SECTION00760000000000000000
[18:00:08 CET] <tmatth> atomnuker: they are now split into libspeex and libspeexdsp btw
[18:00:32 CET] <tmatth> and I think most people are only using the resampler in libspeexdsp
[18:02:41 CET] <xtina> kepstin: oh, hey, -speed only uses 40% of my CPU
[18:02:43 CET] <xtina> not bad at all, lol
[18:03:05 CET] <xtina> let me see how well this works
[18:03:10 CET] <kepstin> yeah, that's the resampling that's cpu intensive there
[18:03:32 CET] <xtina> yea, but 40% for audio + 20ish% from ffmpeg is not awful i suppose
[18:03:39 CET] <kepstin> that's why it's usually better to change the video speed, it's just editing frame timestamps, where audio requires resampling
[18:04:25 CET] <xtina> kepstin: hmm. i've heard raspivid has a pts option
[18:04:33 CET] <xtina> if i could change the timestamps in raspivid before encoding, i'd be golden
[18:04:41 CET] <xtina> changing timestamps and re-encoding during ffmpeg was too much
[18:05:20 CET] <xtina> i don't see any pts option in raspivid myself though
[18:05:42 CET] <kepstin> what format is raspivid giving you? I thought it was a raw h264 stream, which doesn't really have timestamps
[18:05:55 CET] <kepstin> which is why ffmpeg is requiring you to specify what framerate it's supposed to be
[18:07:05 CET] <xtina> kepstin: yea, i'm handing it h264-encoded video. i read the following: "Every camera frame has a timestamp on it taken from the SoC STC (System Time Clock)."
[18:07:21 CET] <xtina> if the video has no timestamps, how are audio/video muxed?
[18:07:44 CET] <kepstin> xtina: you're using the "-framerate 30" input option on ffmpeg, i think?
[18:07:52 CET] <xtina> kepstin: that's right
[18:07:55 CET] <xtina> oh i see
[18:07:57 CET] <xtina> so that's how
[18:08:08 CET] <xtina> hmm, can i just specify framerate 29.99 or something?
[18:08:15 CET] <kepstin> that tells the ffmpeg h264 demuxer to simply have the frame timestamps increase by 1/30s each frame
[18:08:27 CET] <kepstin> you can set that to whatever you want, yeah
[18:08:37 CET] <xtina> hmmm i'll try that out now
[18:11:06 CET] <xtina> kepstin: hmm, can't tell right off the bat if this works. i suppose i'll let this stream run for 30min and see if there's desync :P
[18:11:26 CET] <xtina> i've done -framerate 20.008 for 1.0004x speed on the video
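The arithmetic behind that flag, as a sketch (the 20 fps nominal rate and the 1.0004 clock ratio are the figures from this discussion):

```python
# Compensate a constant clock-rate mismatch by inflating the
# declared input framerate so video "runs fast" by the same ratio
# as the audio clock.
nominal_fps = 20
clock_ratio = 1.0004                 # audio clock runs this much faster
adjusted = nominal_fps * clock_ratio
print(f"-framerate {adjusted:.3f}")  # -framerate 20.008
```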
[18:12:48 CET] <kepstin> keep in mind that this is still just a hack which might get it to run longer, a proper solution wouldn't care what the input framerate is, and just handle anything :/
[18:13:00 CET] <xtina> kepstin: yea, this is definitely not pretty
[18:13:12 CET] <xtina> but if the issue is that one clock is 1.0004x the speed of the other
[18:13:23 CET] <xtina> then, at least on my device, this should solve the issue 100% shouldn't it?
[18:13:39 CET] <xtina> isn't this robust on my specific devices
[18:13:44 CET] <kepstin> yeah, but that's not something that's really constant. clocks vary with temperature, and different systems will have different variation
[18:14:25 CET] <xtina> hmm, so the 1.0004x ratio isn't constant on my 2 particular devices either
[18:14:48 CET] <xtina> i know that the audio clock is the one that's faster
[18:14:53 CET] <xtina> because every time this happens, the audio gets ahead
[18:15:05 CET] <xtina> i wonder how the variance in clocks compares with the mean difference
[18:15:17 CET] <xtina> the variance in my particular camera/mic clocks, i mean
[18:15:39 CET] <Phrk_> Hello, how do i use these "following options" https://ffmpeg.org/ffmpeg-protocols.html#libssh
[18:16:36 CET] <Phrk_> I need to use a private_key file
[18:18:34 CET] <xtina> kepstin: btw, does ffmpeg really handle fractional -framerate inputs?
[18:18:41 CET] <xtina> i think i read some stuff to the contrary
[18:20:28 CET] <kepstin> xtina: the 'framerate' input option actually takes a fractional value in the code, I think the parser for it accepts either fractions or decimal values
[18:20:44 CET] <xtina> ah, gotcha :) thanks
[18:24:55 CET] <dv_> is it correct that MP4 cannot contain MJPEG streams, but quicktime MOV can?
[18:33:04 CET] <Duality> hi
[18:34:47 CET] <Duality> I am trying to convert an incoming h264 stream in an mpegts container to rgb24 output on stdout but all i am getting is some logging messages http://pastebin.com/WgXEYmsk
[18:34:57 CET] <Duality> or do i need all my options before the input ?
[19:07:24 CET] <kepstin> Duality: first of all, this is the ffmpeg help channel; the options on avconv are different enough from ffmpeg that we might get something wrong...
[19:07:38 CET] <kepstin> Duality: but it looks like your problem is just that you forgot to give it an output file
[19:09:42 CET] <kepstin> Duality: you'll have to be more specific about what exactly you want in the output - raw rgb frames, rgb-format h264?
[19:10:25 CET] <kepstin> assuming you add an output file to that, your output would still be yuv h264, you have to use 'libx264rgb' to get rgb h264.
[19:20:16 CET] <Duality> kepstin: rgb24 data raw non encoded :)
[19:20:35 CET] <Duality> kepstin: it's the only thing i have on my embedded device, sorry, i didn't know that ffmpeg and avconv would be so different.
[19:21:33 CET] <kepstin> Duality: "ffmpeg -i udp://10.42.11.153:12345 -pix_fmt rgb24 -f rawvideo -" would probably work, the same command might also work with avconv
[19:22:17 CET] <kepstin> if the input is mpegts in udp, ffmpeg should be able to probe it automatically, no need to specify codec and whatnot
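On the consuming side, the rawvideo output has no headers, so the reader must know the frame geometry up front. A sketch of reading those frames from ffmpeg's stdout (the URL comes from the discussion; the 1280x720 size in the usage note is an assumption):

```python
import subprocess

def frame_size(width, height, bytes_per_pixel=3):
    """Bytes per frame; rgb24 packs 3 bytes per pixel."""
    return width * height * bytes_per_pixel

def read_frames(url, width, height):
    """Yield one raw rgb24 frame at a time from ffmpeg's stdout."""
    size = frame_size(width, height)
    cmd = ["ffmpeg", "-loglevel", "error", "-i", url,
           "-pix_fmt", "rgb24", "-f", "rawvideo", "-"]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    while True:
        frame = proc.stdout.read(size)
        if len(frame) < size:
            break          # stream ended (short read at EOF)
        yield frame

print(frame_size(1280, 720))   # 2764800 bytes per frame
```

Usage would look like `for frame in read_frames("udp://10.42.11.153:12345", 1280, 720): ...`, with each iteration handing over one complete width x height x 3 buffer.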
[19:27:29 CET] <Duality> kepstin: nice it seems to work :)
[19:28:44 CET] <Duality> i notice one thing though, the data takes a while to appear on the stdout, any ideas why that could be ?
[19:29:22 CET] <kepstin> Duality: you have to wait for a keyframe to arrive on the input before it can start decoding
[19:29:53 CET] <Duality> can i change how often a keyframe appears ?
[19:30:09 CET] <kepstin> Duality: if you have control of the remote side that's sending the video, yes
[19:30:43 CET] <Duality> yea it's just ffmpeg streaming :D
[19:30:48 CET] <Duality> but i see i can use -g
[19:31:23 CET] <kepstin> yep, setting -g to a lower values means keyframes more often, so less delay. But it will mean lower video quality (or higher bitrate), since keyframes are large to encode
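The tradeoff kepstin describes is easy to quantify: worst-case startup delay is roughly one GOP, since the decoder must wait for the next keyframe. A sketch (the 25 fps figure is an assumption, not from the log):

```python
def startup_delay(gop_size, fps):
    """Worst-case seconds until the next keyframe arrives."""
    return gop_size / fps

print(startup_delay(250, 25))   # a 250-frame GOP at 25 fps: 10.0 s
print(startup_delay(25, 25))    # with -g 25: 1.0 s
```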
[19:33:00 CET] <uu8> how would you debug a packet like ffmpeg?
[19:33:19 CET] <uu8> for example with a utility like gdb
[19:33:59 CET] <kepstin> uu8: gdb works on ffmpeg just fine... but you'd want to use it on a copy that doesn't have the symbols stripped.
[19:36:24 CET] <uu8> kepstin: so you're saying to use the --disable-stripping option, right?
[19:36:49 CET] <kepstin> uu8: if you're building it yourself, you don't have to, just run the executables with the _g suffix instead of the stripped ones
[19:39:32 CET] <uu8> kepstin: ok. thanks. And, what about compiler optimizations?
[19:40:11 CET] <kepstin> uu8: you probably don't want to adjust them unless they're really interfering with what you're trying to figure out
[19:40:11 CET] <uu8> kepstin: Wouldn't it be easier to locate my running point if the compilation is done without optimizations?
[19:45:56 CET] <Duality> kepstin: i am sending a keyframe often now, but it still takes about 8 seconds for the data to appear on stdout, is it because of buffering ?
[19:47:15 CET] <kepstin> Duality: hmm, not entirely sure. Do keep in mind that ffmpeg (and avconv) tools aren't *really* designed for realtime stuff, so they're not optimized for low latency.
[19:47:43 CET] <Duality> it's not really a big deal but was just wondering :)
[19:49:11 CET] <llogan> i keep seeing ffmpeg and avconv in the same sentence which makes me wonder if you're using the counterfeit ffmpeg from Libav
[19:50:28 CET] <llogan> liblave
[19:50:36 CET] <llogan> which means "to bluff"
[19:53:53 CET] <xtina> kepstin: and whoever else has been helping - i'm using -framerate 20.008 in ffmpeg and so far i've been streaming A/V from my Zero to youtube for 1.5hrs without any desyncing or buffering at all!
[19:54:02 CET] <xtina> all on battery too :)
[19:54:37 CET] <xtina> we'll see how consistent the clock speed ratio is between the camera and mic, but so far this is working great
[19:54:44 CET] <xtina> <30% CPU too
[19:55:01 CET] <xtina> thanks for all the help, everyone
[19:55:31 CET] <Duality> llogan: it's on a raspbian image that has avconv installed
[19:55:38 CET] <Duality> i am sorry for the blasphemy
[19:55:39 CET] <Duality> :D
[19:55:48 CET] <llogan> then you're using the fake "ffmpeg"
[19:56:00 CET] <llogan> you'll have to go to #libav for help with that
[19:56:11 CET] <Duality> but i like this channel
[19:56:16 CET] <Duality> and the people in it :D
[19:56:32 CET] <llogan> thanks, but we can't support third party tools here
[19:58:14 CET] <Duality> llogan: but it works thanks to kepstin :D
[19:58:28 CET] <Duality> no but seriously i understand :)
[19:58:52 CET] <llogan> if the raspbian is based on jessie there may be a real ffmpeg in jessie-backports
[19:59:39 CET] <llogan> or get a binary for ARM at https://johnvansickle.com/ffmpeg/
[20:00:57 CET] <llogan> the fake ffmpeg is really old and shitty. you can check to see if it is fake ffmpeg by looking at the first line of the console output. FFmpeg stuff will say "Copyright... the FFmpeg developers" while fake will say Libav
[20:06:12 CET] <furq> just run "whereis ffmpeg"
[20:06:21 CET] <furq> if it's a symlink to avconv then it'll print avconv
[20:07:00 CET] <llogan> IIRC, it wasn't always a symlink in oldbian
[20:19:02 CET] <Duality> llogan: there might be, but it seems that i have to compile for hardware decoding support
[20:19:28 CET] <Duality> and this embedded device is sloooow :) you don't want to compile on it :D
[20:20:10 CET] <llogan> mmal and/or omx? i forgot about those.
[20:20:38 CET] <llogan> well, i guess not omx because you're not encoding to a format that it supports
[20:21:27 CET] <llogan> and it doesn't look like you're using mmal either
[20:34:50 CET] <xn0r> I have a ~30 fps (ntsc?) video where every 5th frame seems to be a duplicate of the previous one. I have tried to get rid of this with -vf pullup -r 24000/1001 but the duplicates remain, why?
[20:47:05 CET] <mdavis-test> Still trying to decide on an IRC client...
[20:48:04 CET] <furq> irssi
[20:48:23 CET] <mdavis-test> That's what I'm using at the moment, idk about it
[21:02:07 CET] <arog> hi
[21:02:24 CET] <mdavis> hello
[21:02:56 CET] <arog> I am having a problem with my program that is recording using nvenc. It works fine for 2 streams, but for more than 2 streams I get errors about out of memory and no NVENC capable devices found
[21:03:16 CET] <arog> a quick google search shows that most of the graphics cards by nvidia are limited to 2 simultaneous encoding sessions
[21:03:26 CET] <mdavis> Yep
[21:03:27 CET] <furq> do you have a high-end quadro
[21:03:31 CET] <arog> i have a TitanX
[21:03:38 CET] <arog> 3 of them
[21:03:47 CET] <arog> but still 6 encoding sessions won
[21:03:48 CET] <mdavis> Then you need to specify the gpu
[21:03:49 CET] <arog> won't be enough
[21:03:55 CET] <arog> mdavis, I have a better idea
[21:04:05 CET] <arog> what if I have a binary that can do some sort of round robin
[21:04:05 CET] <kepstin> xn0r: you probably want to use the 'decimate' filter to fix that.
[21:04:10 CET] <arog> between each stream
[21:04:24 CET] <arog> so each stream will push 1 frame at a time to their file then repeat
[21:04:27 CET] <arog> would that work?
[21:04:48 CET] <mdavis> So, something like a pool for gpu encoders?
[21:04:58 CET] <arog> yes
[21:05:39 CET] <arog> that might be a good v1.1 option, but for now I can just specify the gpu
[21:05:43 CET] <arog> just so it is working somewhat
[21:06:04 CET] <arog> http://pastebin.com/i8X3P17f  -- that is my code right now
[21:06:08 CET] <arog> how do I specify the gpu?
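The round-robin idea arog floats earlier can be sketched with a simple cycling counter. This assumes nvenc's per-session 'gpu' option is available for device selection (worth verifying against your ffmpeg build), and the 3-GPU count is from the discussion:

```python
import itertools

# Round-robin assignment of encode sessions across GPUs.
NUM_GPUS = 3
_gpu_cycle = itertools.cycle(range(NUM_GPUS))

def next_gpu():
    """Return the device index for the next encode session."""
    return next(_gpu_cycle)

# Five streams land on devices 0, 1, 2 and then wrap around:
assignments = [next_gpu() for _ in range(5)]
print(assignments)   # [0, 1, 2, 0, 1]
```

Each ffmpeg invocation (or encoder context) would then get the returned index passed along, e.g. as `-gpu <index>` on the command line, keeping no GPU above its 2-session limit as long as streams are spread evenly.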
[21:06:13 CET] <kepstin> yeah, a low-end quadro is better for encoding than a high-end consumer card, simply because of the arbitrary stream limit
[21:06:35 CET] <kepstin> since it doesn't actually use the gpu at all, gpu performance is irrelevant (clocks might affect the encoder IP, but that's it)
[21:06:53 CET] <arog> kepstin:  we are actually doing a lot of processing and deep learning
[21:06:55 CET] <furq> as long as it's one of the low-end quadros without the stream limitation
[21:06:57 CET] <arog> so need the titanXs :_
[21:06:58 CET] <arog> :)
[21:07:03 CET] <arog> but trying to utilize this encoder at the same time
[21:07:09 CET] <arog> wait
[21:07:14 CET] <arog> a low end quadro doesn't ahve a limitation?
[21:07:27 CET] <furq> some quadros still have the limit
[21:07:34 CET] <kepstin> have to check specific models to confirm
[21:07:36 CET] <furq> ^
[21:08:06 CET] <furq> if you have a recent intel desktop cpu in there you might want to look into quicksync
[21:08:26 CET] <furq> i don't think the xeons have quicksync though so you're out of luck in that case
[21:08:47 CET] <furq> you could also obviously encode on the cpu but i'm guessing that's out of the question
[21:10:10 CET] <arog> hmm
[21:10:14 CET] <arog> so there are 2 problems
[21:10:17 CET] <arog> well
[21:10:19 CET] <arog> hmm
[21:10:25 CET] <arog> right now we can install a quadro to support that
[21:10:32 CET] <arog> but we will eventually move to a DrivePX2
[21:10:42 CET] <arog> maybe we can just stream the data over network and then save it on that computer
[21:10:44 CET] <arog> hmm
[21:10:48 CET] <arog> do you know which quadro could work?
[21:11:03 CET] <kepstin> looks like some of the quadro cards have double the nvenc resources of others, interesting. The K2200 is probably a good choice
[21:11:28 CET] <kepstin> hmm, M6000 too
[21:11:43 CET] <furq> what resolution is this video
[21:11:58 CET] <furq> sending rawvideo over a network probably won't go well
[21:12:09 CET] <arog> um
[21:12:15 CET] <arog> either 640x480 or 2k haha
[21:12:18 CET] <arog> we will encode it
[21:12:20 CET] <arog> then send it
[21:12:23 CET] <arog> in anycase it is 10gbe
[21:12:23 CET] <furq> oh
[21:12:25 CET] <furq> nvm then
[21:12:41 CET] <arog> http://www.pny.com/NVIDIA-Quadro-K2200 where does it say it has unlimited streaming capability?
[21:12:57 CET] <furq> 10G would be fine for 2k rawvideo anyway, as long as the disks on the other end can keep up
[21:13:15 CET] <furq> 1080p60 is about 1.5gbps
[21:13:15 CET] <arog> https://developer.nvidia.com/video-encode-decode-gpu-support-matrix looking at that
[21:13:31 CET] <arog> it just says 1/1
[21:13:40 CET] <arog> are you sure it doesn't have the nvenc 2-session limitation
[21:14:36 CET] <kepstin> arog: best I can find is a quote from an nvidia moderator saying "Quadro products (Kepler or Maxwell) can encode more than 2 video streams."
[21:15:04 CET] <thebombzen_> arog: Quadros are cards designed for scientific computing
[21:15:06 CET] <mdavis> Yeah, I would want to verify that before I shell out for quadros...
[21:15:16 CET] <thebombzen_> Quadros are industry cards
[21:15:26 CET] <thebombzen_> they have lots of features that GeForce cards don't have
[21:15:27 CET] <furq> yeah nvidia's docs on this are very hard to find
[21:15:31 CET] <thebombzen_> that are necessary to figure this out
[21:15:34 CET] <furq> i've had to go off forum posts in the past
[21:15:43 CET] <thebombzen_> like Quadros have full double precision support
[21:15:44 CET] <arog> hmm
[21:15:51 CET] <thebombzen_> quadros also have ECC (error-correcting) memory
[21:16:02 CET] <arog> Sure that's fine but that still doesn't confirm if they have a limitation on the number of encoding sessions
[21:16:09 CET] <arog> I'd hate to buy one and find out it only supports 3/4
[21:16:09 CET] <thebombzen_> geforces are primarily concerned with consumers who are doing things like playing games
[21:16:39 CET] <thebombzen_> Do you actually need to be encoding these at the same time?
[21:16:41 CET] <thebombzen_> like in realtime?
[21:17:10 CET] <thebombzen_> Because if not, then you might want to consider taking your Titans and putting them in SLI
[21:17:12 CET] <arog> http://video.stackexchange.com/questions/17419/what-graphics-card-features-effect-nvidia-nvenc-hardware-encoding-speed
[21:17:25 CET] <arog> yes I want to encode in realtime
[21:17:31 CET] <arog> or at least buffer it
[21:17:34 CET] <arog> somewhat realtime
[21:17:42 CET] <arog> that's why I wanted to implement a pool or something
[21:17:51 CET] <kepstin> it looks like they released an sdk update a while back which has unlimited encoder sessions on K2000 and higher and all M* cards.
[21:18:06 CET] <arog> awesome
[21:18:14 CET] <arog> the M4000 is supposed to be good according to that stack exchange post
[21:19:07 CET] <thebombzen_> cool
[21:19:10 CET] <thebombzen_> there you go
[21:19:26 CET] <kepstin> no info about any of the P* cards, but I bet you can consider all the current P* cards to be "high end" :)
[21:21:49 CET] <mdavis> Ayyyyy https://www.amazon.com/PNY-VCQM5000-PB-NVIDIA-Quadro-M5000/dp/B013W9NGQK/
[21:21:53 CET] <mdavis> Give me ALL the power
[21:27:48 CET] <arog> heh
[21:27:52 CET] <arog> quick question
[21:27:55 CET] <arog> how do i specify which gpu to use
[21:27:57 CET] <arog> in my code?
[21:28:03 CET] <arog> http://pastebin.com/i8X3P17f
[21:29:00 CET] <c_14> nvenc?
[21:29:06 CET] <arog> yea
[21:29:12 CET] <c_14> av_opt_set the gpu option
[21:29:29 CET] <arog> http://pastebin.com/B4y9z83S
[21:29:31 CET] <arog> thanks
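To spell out c_14's answer: nvenc exposes `gpu` as an encoder private option, so it is set on the codec context's `priv_data` with `av_opt_set()` before `avcodec_open2()`. A minimal sketch, assuming a context `ctx` already allocated for `h264_nvenc`; the helper name `select_nvenc_gpu` is made up here:

```c
#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>

/* Select which GPU an nvenc encoder uses, by index.
 * Must be called after allocating the h264_nvenc/hevc_nvenc context
 * but before avcodec_open2(). Returns 0 on success, <0 on error. */
static int select_nvenc_gpu(AVCodecContext *ctx, int gpu_index)
{
    char buf[16];
    snprintf(buf, sizeof(buf), "%d", gpu_index);
    /* "gpu" is a private option, so it lives on priv_data, not on ctx. */
    return av_opt_set(ctx->priv_data, "gpu", buf, 0);
}
```

The CLI equivalent would be `-c:v h264_nvenc -gpu 1`.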
[22:23:25 CET] <Duality> can i make ffmpeg flush the buffers after every frame ?
[22:23:34 CET] <Duality> like for the stdout :)
[22:24:04 CET] <Duality> i mean for like when you're writing into stdout
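No answer came in the log, but ffmpeg's muxer layer does have a `-flush_packets` format option for exactly this. A hedged sketch (filenames are placeholders, and a streamable container like mpegts is assumed since mp4 can't be piped):

```shell
# Ask the muxer to flush after every packet instead of buffering,
# so each frame reaches stdout as soon as it is written.
ffmpeg -i input.mp4 -c:v libx264 -f mpegts -flush_packets 1 pipe:1 > out.ts
```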
[22:29:07 CET] <Darby_Crash> hi guys. have someone a tutorials/guide/script for build a ffmpeg binary not dynamic?
[22:29:25 CET] <JEEB> the default configuration is static libraries
[22:29:28 CET] <JEEB> next!
[22:30:05 CET] <kepstin> well, it'll link dynamically to stuff like external encoder libraries (libx264) by default
[22:30:08 CET] <Darby_Crash> JEEB, i have tried everything but the result always needs shared libraries
[22:30:14 CET] <JEEB> that's different
[22:30:48 CET] <Darby_Crash> i need a binary with all-in-one
[22:31:16 CET] <JEEB> start with building your own C stdlib static <gets whacked unconscious>
[22:31:16 CET] <kepstin> Darby_Crash: easy way? just download one someone else has made: https://www.johnvansickle.com/ffmpeg/
[22:35:11 CET] <Darby_Crash> kepstin how can i compile a binary like this?
[22:39:12 CET] <Darby_Crash> JEEB have you a guide?
[22:39:22 CET] <JEEB> no
[22:39:59 CET] <Darby_Crash> do i need to compile stdlib myself?
[22:40:58 CET] <Darby_Crash> do i need to compile stdlib myself? JEEB
[22:41:11 CET] <DHE> same story as last time. you must install or otherwise acquire the .a versions of all system libraries you need. you may want to add things like "--disable-indevs --disable-outdevs" when configuring to reduce dependencies
[22:42:18 CET] <Darby_Crash> i have all the .a files DHE but the result still needs .so files
[22:42:40 CET] <DHE> did you build with --extra-ldflags=-static ?
[22:42:46 CET] <Darby_Crash> yes
[22:43:03 CET] <DHE> then something has gone wrong
[22:44:04 CET] <JEEB> &30
[22:44:17 CET] <Darby_Crash> is this the right method?
[22:44:19 CET] <Darby_Crash> https://clbin.com/yp5gz
[22:44:37 CET] <furq> what shared libs does that need
[22:46:50 CET] <Darby_Crash> furq i want a static build like the one @relaxed made
[22:47:07 CET] <Darby_Crash> without shared library
[22:47:31 CET] <Darby_Crash> is my configure method right?
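Pulling DHE's advice together, a minimal fully-static configure sketch looks roughly like this (it assumes Linux with static `.a` versions of libc and every enabled dependency already installed; the exact flag set depends on which external libraries are enabled):

```shell
./configure \
    --pkg-config-flags="--static" \
    --extra-ldflags="-static" \
    --enable-static --disable-shared \
    --disable-indevs --disable-outdevs
make -j"$(nproc)"

# Verify nothing links dynamically:
ldd ./ffmpeg   # should report "not a dynamic executable"
```

`--pkg-config-flags="--static"` matters when external libraries like libx264 are enabled, so pkg-config reports their static link dependencies too.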
[22:54:37 CET] <tyngdekraften> can someone reproduce flac misdetection? https://p.fuwafuwa.moe/qyijql.flac
[22:54:45 CET] <tyngdekraften> before i post a bug report
[22:55:21 CET] <JEEB> cavs?
[22:55:24 CET] <tyngdekraften> yes
[22:55:30 CET] <JEEB> anyways, probe stuff is welp
[23:01:39 CET] <tyngdekraften> upload.ffmpeg.org:21 is closed
[23:02:21 CET] <tyngdekraften> am i still supposed to upload my sample media here?
[23:02:38 CET] <JEEB> I think the sample stuff was changed at some point
[23:02:53 CET] <JEEB> a lot of people just link/upload small samples onto the trac
[23:18:46 CET] <llogan> you can use https://streams.videolan.org/upload/
[23:19:10 CET] <llogan> i guess
[23:52:37 CET] <tyngdekraften> thanks just submitted the ticket
[00:00:00 CET] --- Wed Mar  1 2017


More information about the Ffmpeg-devel-irc mailing list