[Ffmpeg-devel-irc] ffmpeg.log.20170711

burek burek021 at gmail.com
Wed Jul 12 03:05:01 EEST 2017


[00:10:31 CEST] <kerio> furq: tgt is great yo
[00:10:59 CEST] <furq> this is some kind of elaborate prank right
[00:11:06 CEST] <kerio> it's top gear with money
[00:11:07 CEST] <kerio> what's not to dislike
[00:11:18 CEST] <kerio> :^)
[00:11:22 CEST] <furq> 23:11:06 ( kerio) it's top gear with money
[00:11:25 CEST] <furq> this is what's not to dislike
[00:11:36 CEST] <furq> except with only one negative
[00:47:40 CEST] <FurretUber> Hi, yesterday I asked for help with encoding video at 10-bit depth with libx264. I was told I should build libx264 with 10-bit support enabled. I built it and tried to encode with it. Trying to use libx264rgb, it segfaults (I was warned it could have problems)
[00:48:40 CEST] <FurretUber> Using libx264, I can make the videos with 10-bit, but then colors are lost even with -qp 0, as the color space of the resulting video is YUV
[00:49:26 CEST] <FurretUber> Is it possible to make the output color space not lose colors?
[00:49:37 CEST] <FurretUber> If the input is RGB
[01:14:43 CEST] <thebombzen> FurretUber: if you use yuv444p10le, there won't really be any loss
[01:15:28 CEST] <thebombzen> it won't be bit-exact, but it's not like yuv420p(10le), which has chroma subsampling
[01:16:32 CEST] <thebombzen> You shouldn't have any color loss issues with yuv444p10le
[01:17:05 CEST] <thebombzen> Alternatively, ffv1 supports lossless rgb, but it's too slow for realtime
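For reference, a minimal sketch of what thebombzen is describing, with placeholder filenames; the first command assumes an x264 built with 10-bit support, the second trades speed for truly lossless RGB via FFV1:

    ffmpeg -i input.avi -c:v libx264 -qp 0 -pix_fmt yuv444p10le output.mkv
    ffmpeg -i input.avi -c:v ffv1 -level 3 output.mkv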
[01:30:31 CEST] <iive> huffyuv is usually fast enough for real time. not sure if it supports 10bit
[01:35:49 CEST] <FurretUber> From what I have tested, both ffv1 and huffyuv produce significantly bigger files compared to libx264 with -qp 0
[01:40:39 CEST] <The_8472> h.264 has an RGB 4:4:4 profile, but idk if x264 supports it
[01:44:58 CEST] <The_8472> FurretUber, also, there's dirac which supports lossless compression and YCgCo which can be interconverted with RGB losslessly
[01:49:40 CEST] <lmat> I see -c:v mentioned several times in examples (in man ffmpeg-all;), but what does it mean?
[01:52:18 CEST] <lmat> found it!  -c = -codec!
[01:52:31 CEST] <lmat> (in the man page, it's -c[:...  so searching wasn't helpful :-(
[01:52:55 CEST] <kepstin> x264 does rgb 4:4:4 just fine in 8-bit, I guess it's only the 10bit that has issues? :/
[02:04:04 CEST] <FurretUber> With 8-bit, libx264rgb can be used correctly, with 10-bit libx264rgb segfaults
[02:07:26 CEST] <FurretUber> With the command "ffmpeg -ss 00:00 -i lossless.avi -vframes 1 saida.bmp", will the bmp image be lossless? I'm thinking the problem may be somewhere else
[02:07:50 CEST] <thebombzen> FurretUber: You should not care about the filesize of realtime encoding.
[02:07:58 CEST] <thebombzen> If you are not encoding in realtime, just use ffv1
[02:08:34 CEST] <thebombzen> I'm not sure why you're insisting on using 10-bit encoding though
[02:09:07 CEST] <thebombzen> also, yes, bmp images are lossless, but they're also uncompressed and you should avoid them
[02:09:17 CEST] <FurretUber> It is just to test
[02:09:22 CEST] <thebombzen> you also don't need -ss 00:00, which just means "seek to 0"
[02:09:33 CEST] <FurretUber> I will extract the single frame
[02:09:33 CEST] <thebombzen> which is silly, because that's where it starts.
[02:09:48 CEST] <FurretUber> Produce a video with libx264, as I am doing
[02:09:51 CEST] <thebombzen> FurretUber: what are you actually trying to do?
[02:10:08 CEST] <FurretUber> And then I will extract the image from the video
[02:10:20 CEST] <thebombzen> What are you actually trying to do though?
[02:19:54 CEST] <FurretUber> I'm trying so badly to use libx264 because of the size of the resulting videos
[02:20:15 CEST] <FurretUber> One minute with ffv1 is 625 MB
[02:21:39 CEST] <FurretUber> One minute, under the same conditions, with libx264 using -qp 0 and yuv444p10le is 19 MB
[02:22:20 CEST] <FurretUber> And I would like to ensure the libx264 with -qp 0 and yuv444p10le is lossless
[02:27:01 CEST] <The_8472> is your input 10bit?
[02:28:32 CEST] <The_8472> qp0 + yuv444p (i.e. 8bit) should already be lossless for 8bit inputs except for the yuv conversion. "With 8-bit, libx264rgb can be used correctly" ... so you could just use that
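For reference, a sketch of the 8-bit RGB route The_8472 points at, which skips the RGB-to-YUV conversion entirely (filenames and the preset are placeholders):

    ffmpeg -i input.avi -c:v libx264rgb -qp 0 -preset ultrafast output.mkv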
[02:54:29 CEST] <iive> h264 lossless mode uses motion compensation, and this saves a lot of bits
[03:01:48 CEST] <thebombzen> FurretUber: I said this before, upping to 10-bit doesn't give you an advantage for lossless content
[03:01:59 CEST] <FurretUber> From what I have tested with still images, ffmpeg produces the video losslessly, so it is working correctly. However, the players I have installed are having problems. With ffplay, the quality loss is enormous (is there an option I should set?). With vlc, the resolution of the video is being changed, which causes quality loss. With kdenlive, setting the profile matching the video, it shows it losslessly.
[03:02:35 CEST] <FurretUber> The good news is that ffmpeg works. The bad news is that, apparently, the players don't show the video correctly
[03:04:55 CEST] <thebombzen> "with ffplay, the quality loss is enormous" wut
[03:05:25 CEST] <thebombzen> FurretUber: either way, there's no advantage gained by upping 8-bit content to 10-bit before losslessly encoding.
[03:06:05 CEST] <thebombzen> Upping 8 to 10 can be more efficient because it allows more room for low-bit quantization error
[03:06:12 CEST] <thebombzen> but lossless content has no quantization error
[03:23:39 CEST] <FurretUber> About ffplay, I don't understand why this happened. For example: there is a red bar on the image, with the color, in RGB, 255,0,0. Around it is white with RGB 255,255,255. It changes from one pixel to another with the original content, but with ffplay it appears with a transition: 255,0,0 then 233,11,9 then 232,9,8 then 188,28,25 then 255,226,232
[03:24:49 CEST] <FurretUber> Using compare (from ImageMagick), between 80% and 90% of that specific frame was changed
[03:38:31 CEST] <PixelPerfect> c_14 curl didn't redirect me for whatever reason
[04:10:27 CEST] <thebombzen> FurretUber: have you considered uploading a screenshot to an image host
[04:10:36 CEST] <thebombzen> rather than describing RGB values in english text
[04:10:58 CEST] <FurretUber> Yes, I have. However, aren't those services lossy?
[04:11:14 CEST] <thebombzen> I don't even know what to say to that
[04:12:04 CEST] <thebombzen> Image hosting sites support PNGs
[04:15:55 CEST] <hiru> I'm using the -map option to import subs from an input source, but the output loses the sub styling. why is this happening?
[04:16:29 CEST] <FurretUber> Here they are. Source: https://i.imgur.com/qR2giJO.png , encoded with ffmpeg: https://i.imgur.com/X9VrFMv.png , ffplay https://i.imgur.com/35fxOVb.png , original and ffplay side by side https://i.imgur.com/j1FDkug.png
[04:17:42 CEST] <thebombzen> I fail to see how "the quality loss is enormous"
[04:18:02 CEST] <FurretUber> compare, from ImageMagick
[04:19:36 CEST] <FurretUber> Should ffmpeg and ffplay show this difference?
[04:22:05 CEST] <FurretUber> Using a command like " compare saida_original.bmp qR2giJO.png -highlight-color  Green  -lowlight-color Black compare_imgur.bmp " should make the different parts be green and the non-different parts black
[04:22:31 CEST] <FurretUber> Comparing the original to the ffplay I have got this: https://i.imgur.com/X7hRDq5.png
[04:22:52 CEST] <klaxa> hiru: it shouldn't lose styling, does it maybe just lose the font?
[04:23:13 CEST] <hiru> it looks like it loses styling only if I convert the output file to webm
[04:23:25 CEST] <hiru> if I copy the file using mkv it just keeps the same font style
[04:23:26 CEST] <klaxa> webm has no styled subtitles afaik
[04:24:35 CEST] <hiru> if I try to extract the .ass file and burn it via -vf it loses style too
[04:24:56 CEST] <klaxa> huh
[04:25:14 CEST] <klaxa> can you pass the .mkv as the subfile and try that?
[04:25:37 CEST] <hiru> if I try that the -vf just fails saying there is no such file or directory
[04:26:04 CEST] <hiru> in the past I solved the issue by cd'ing into the file's folder and using a filename without a folder structure
[04:26:23 CEST] <hiru> but I can't do that since the source file comes from a samba path
[04:27:11 CEST] <thebombzen> FurretUber: the number of pixels that are not bit-exact identical to the original is not a metric of quality difference
[04:28:43 CEST] <hiru> the source path is something like this "\\PC\path\to\media\[TAG]File_Name\[TAG]File_Name_-_04_-_Title,_Title,_Title,_[TAG_Author_Name]_[207A3C74].mkv"
[04:29:09 CEST] <hiru> maybe I have to escape special characters? dunno
[04:32:05 CEST] <hiru> "All characters enclosed between '' are included literally in the parsed string". what does it mean?
[04:33:35 CEST] <hiru> oh... maybe I found the problem
[04:33:53 CEST] <hiru> ffmpeg removed all the \ from the file path
[04:35:47 CEST] <hiru> it worked :O
[04:36:04 CEST] <hiru> using \ before every other \ solved the issue
[04:36:27 CEST] <hiru> I mean, escaping all the \ in the file path
[04:37:32 CEST] <FurretUber> thebombzen: true. The way I wrote the compare command, even one value of RGB color in a pixel will be flagged as difference (example: 255,0,0 is different than 254,0,0).
[04:39:05 CEST] <FurretUber> But the difference is noticeable, the ffplay "output" is visually lossy
[04:39:45 CEST] <klaxa> hiru: welcome to escaping hell :P
[04:40:07 CEST] <hiru> fortunately I only have to escape when using -vf
[04:42:29 CEST] <hiru> ffmpeg says "not usable fontconfig" but I have a fonts folder in the ffmpeg folder
[04:42:35 CEST] <hiru> (using windows)
[05:19:48 CEST] <FurretUber> I discovered why ffplay video is problematic. ffplay is not using yuv444p10le, it is converting the video to yuv420p.
[06:31:40 CEST] <kepstin> FurretUber: probably a limitation of the sdl video renderer it uses, yeah. If you want a reference, mpv has a good renderer.
[06:32:44 CEST] <Fig1024> just curious, what is the practical application of recording yuv444p10le? who actually needs that?
[06:33:42 CEST] <atomnuker> low resolution recordings off of emulators
[06:34:01 CEST] <atomnuker> you need to preserve chroma, since subsampling at such resolutions with that kind of source material is hell
[06:34:22 CEST] <Fig1024> ok, but why 10 bit
[06:34:31 CEST] <FurretUber> I have used mplayer with video driver gl, and it opened correctly
[06:34:45 CEST] <atomnuker> to preserve gradients
[06:35:24 CEST] <atomnuker> you're also converting from rgb, which has a larger colorspace, so 10 bits helps preserve colors too
[06:35:49 CEST] <Fig1024> why not just record straight rgb 8 bit then
[06:35:58 CEST] <atomnuker> doesn't compress well
[06:36:09 CEST] <atomnuker> few formats support it
[06:36:58 CEST] <Fig1024> I imagine 10 bit video is not popular either, since it wreaks havoc on pixel byte alignment
[06:37:37 CEST] <kepstin> x264 can do rgb reasonably well, but player support is eh; although most players that can handle 4:4:4 10bit can probably do rgb.
[06:38:15 CEST] <kepstin> Fig1024: when decoded it's stored as 16 bits per component, with padding. Alignment isn't really an issue.
[06:38:29 CEST] <atomnuker> Fig1024: not really, its not packed
[06:39:42 CEST] <Fig1024> so FFMPEG handles 10 bit video as 16 bit?
[06:41:48 CEST] <atomnuker> everything does
[06:42:22 CEST] <Fig1024> I'm actually interested in the best way to record high quality video from the computer desktop (like a 3d game) which can be used for editing and conversion to a high compression format. Should I look into 10 bit
[06:43:09 CEST] <Fig1024> I checked the DeckLink 4K Extreme capture/playback device and its 10 bit format used a really weird pixel layout, there's no padding
[06:43:35 CEST] <atomnuker> they use v210
[06:43:49 CEST] <kepstin> Fig1024: depends a lot on what you're trying to do. If you have lots of disk bandwidth, a fast lossless codec that can store the original rgb would be best.
[06:43:57 CEST] <atomnuker> which has 2 bits of padding per uyvy chunk
[06:44:48 CEST] <Fig1024> I'm curious what professional video editors prefer as source format for editing
[06:46:49 CEST] <furq> well professional video editors aren't using rgb
[06:47:56 CEST] <kerio> are 10 bits per channel enough to edit videos
[06:48:05 CEST] <Fig1024> but in case source material comes from computer desktop capture, like 3d game
[06:48:18 CEST] <furq> yeah that's rgb
[06:48:31 CEST] <furq> that's my point, it's not really a good comparison
[06:48:36 CEST] <Fig1024> so convert rgb to 10 bit YUV and record that way?
[06:48:47 CEST] <furq> maybe
[06:49:00 CEST] <kerio> if it's possible, store the source material losslessly
[06:49:01 CEST] <furq> it's probably going to end up as 8-bit yuv anyway
[06:49:08 CEST] <kerio> but it's probably not possible
[06:49:17 CEST] <Fig1024> is there a name for this lossless format
[06:49:30 CEST] <furq> if you want rgb support then ffv1 is a good choice
[06:49:39 CEST] <furq> there are tons of lossless codecs though
[06:49:44 CEST] <furq> it really depends what you need from it
[06:49:46 CEST] <kerio> ffv1 is slow as ballz tho
[06:49:49 CEST] <furq> it's not that slow
[06:50:05 CEST] <furq> it is obviously much slower than ffvhuff, but that's because it's actually compressing the video
[06:50:27 CEST] <Fig1024> is that part of ffmpeg
[06:50:36 CEST] <furq> yes
[06:51:06 CEST] <kerio> ffvhuff is a beautiful huffman encoder who don't need no interframe compression
[06:51:25 CEST] <kerio> Fig1024: i'd save with ffvhuff which is pretty much free
[06:51:32 CEST] <furq> Fig1024: http://vpaste.net/cScvh
[06:51:41 CEST] <kerio> and then if possible compress with ffv1 or lossless libx264rgb
[06:51:44 CEST] <furq> those are all the lossless video codecs supported by my ffmpeg
[06:51:53 CEST] <kepstin> if you're doing something like an external capture card on a fast computer, you could even consider libx264rgb as a temp file format.
[06:52:17 CEST] <kerio> ah yes i was assuming the recording pc was also responsible for generating the content
[06:52:22 CEST] <kepstin> benefit over something like ffv1 is ... well, you'd have to test it
[06:52:24 CEST] <kerio> if it's separate you can libx264rgb or ffv1
[06:52:31 CEST] <furq> ut video, ffvhuff and huffyuv are all very fast
[06:52:42 CEST] <furq> ffv1 and x264rgb are slower but compress better
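A sketch of the two-step workflow kerio and furq describe, assuming an X11 desktop capture with placeholder display, size and filenames: grab cheaply with ffvhuff first, then recompress offline with FFV1.

    ffmpeg -f x11grab -video_size 1920x1080 -framerate 60 -i :0.0 -c:v ffvhuff capture.mkv
    ffmpeg -i capture.mkv -c:v ffv1 -level 3 archive.mkv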
[06:52:48 CEST] <kerio> isn't huffyuv just a slightly older version of ffvhuff
[06:52:56 CEST] <furq> ffvhuff isn't really supported outside of ffmpeg
[06:53:04 CEST] <furq> if you need premiere or whatever then ffvhuff is no good afaik
[06:53:07 CEST] <furq> same with ffv1 sadly
[06:53:25 CEST] <furq> ut is apparently a good choice if you need widespread support
[06:54:19 CEST] <furq> Fig1024: it also depends on what you plan to do with these videos
[06:54:34 CEST] <furq> there's no point using rgb or yuv444p10le for your intermediate format if they're going to end up on youtube
[06:55:18 CEST] <furq> that's just a waste of space unless you're doing fancy video editing stuff that needs dynamic range headroom
[06:55:29 CEST] <Fenrirthviti> furq: but the thousands of youtube guides that say to use RGB!
[06:55:34 CEST] <furq> lol
[06:55:42 CEST] <furq> that's true. no blog post about video has ever been wrong yet
[06:55:44 CEST] <Fenrirthviti> Who will tell them!?
[06:55:53 CEST] <furq> especially when ffmpeg is involved
[06:56:06 CEST] <Fenrirthviti> "I upload utvideo directly to youtube because it's better quality that way."
[06:56:07 CEST] <kerio> just upload your raw rgb video to youtube directly :-D
[06:56:14 CEST] <Fenrirthviti> bam!
[06:56:17 CEST] <furq> there's no reason not to do that other than bandwidth tbh
[06:56:21 CEST] <furq> if i had 1G fibre then i would totally do that
[06:56:23 CEST] <kerio> Fenrirthviti: youtube gon reencode
[06:56:32 CEST] <Fenrirthviti> I'm well aware.
[06:56:50 CEST] <Fenrirthviti> Just quoting some OBS users I've run into in the past :P
[06:57:43 CEST] <Fenrirthviti> But recording to utvideo and uploading straight to youtube with no post is kinda stupid.
[06:57:47 CEST] <kepstin> youtube is gonna re-encode... but they've gone over and re-done encodes of videos in new codecs in the past, and might again in the future.
[06:58:12 CEST] <kepstin> so putting up the best quality you have bw for might eventually help out future generations? :/
[06:59:01 CEST] <furq> i don't know if they keep the source video if it's 400GB v410
[06:59:10 CEST] <furq> i can't say i've tried though
[06:59:21 CEST] <Fenrirthviti> it is google, who knows
[06:59:26 CEST] <furq> but it's one less step of generation loss
[06:59:28 CEST] <Fenrirthviti> They do all sorts of strange shit.
[06:59:33 CEST] <furq> so if you have the bandwidth then you might as well do that anyway
[07:01:34 CEST] <Fig1024> how come all 3d graphics use RGB instead of YUV? isn't YUV better in every way
[07:02:36 CEST] <kepstin> monitors render rgb directly, so using rgb for the full path means fewer conversions needed.
[07:04:20 CEST] <Fig1024> except now recording video is pretty common so there's double conversion. And if there's any video embedded in 3d output it's 4x conversion
[07:04:22 CEST] <kepstin> yuv mostly exists because it can improve compression, since detail tends to be in the Y plane, the U/V can be subsampled, etc.
[07:05:15 CEST] <Fig1024> having luma component actually makes a lot of 3d engine functions easier, since it's all about lighting and color blending
[07:07:31 CEST] <kepstin> most operations will involve all the components anyways, and if you're working in rgb the calculations are nice because they're generally identical on all three components, so easy to compute in gpus. YUV you need to treat the Y separately from the UV typically.
[07:08:54 CEST] <kepstin> (then you get fun semi-planar pixel formats like NV12 which are designed to be nicer for gpu/vector stuff)
[07:11:35 CEST] <Fig1024> anyway, if I need pretty good high quality format for video editing, without going overboard with fancy stuff, I can just use h264 with I-frame only, right
[07:12:10 CEST] <kepstin> unless you really need the ability to cut out individual frames without re-encoding, I frame only is kind of silly.
[07:12:21 CEST] <kepstin> Just use a smallish gop so you get fast seeking.
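A sketch of that kind of edit-friendly intermediate, with the GOP length (-g), CRF and filenames as placeholder choices rather than recommendations:

    ffmpeg -i capture.mkv -c:v libx264 -preset fast -crf 12 -g 30 -c:a copy intermediate.mkv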
[07:15:07 CEST] <Fig1024> ok
[09:21:22 CEST] <Raigin> Hey guys, I'm trying to set up a "stream controller" type box. Basically I have a box that listens to a UDP port, takes whatever is streamed to it, converts it to the appropriate formats and streams it to two more devices upstream. Right now I can only stream one thing to it, so I have one box that runs a script and streams video based on a schedule, and then I stop that script whenever we
[09:21:22 CEST] <Raigin> need to "Go Live" from another box. I'm wondering if this can be done automatically, because the way I'm stopping the script doesn't work all the time and causes all sorts of issues.
[09:22:09 CEST] <Raigin> Anyone got any ideas of how to accomplish this just w/ the listener? Like if it detects packets from IP1 to drop all packets from IP2?
[09:54:28 CEST] <kerio> what's ffmpeg-10bit
[09:57:42 CEST] <kerio> also is it possible to pick a v4l2 device by serial or some other way
[09:57:49 CEST] <kerio> instead of having to pick the correct video* device
[10:20:14 CEST] <kerio> is there a way to add timestamps to rawvideo?
[10:20:17 CEST] <kerio> as in
[10:20:23 CEST] <kerio> i have the output from a program
[10:20:31 CEST] <kerio> in raw video form
[10:21:38 CEST] <kerio> the framerate is not necessarily stable
[10:36:52 CEST] <scriptso> I was wondering if anyone would be able to help me understand these two snippets... By no means do I claim that I personally wrote the entirety of these scripts, but I did write the majority of one of them at least... You can tell by how ugly it is | ffstreaming screen script https://pastebin.com/URLcbBgn, ffstreamit  https://pastebin.com/6mBVYK6u|
[10:37:02 CEST] <scriptso> The idea here is that I want to be able to cast from a boom 2 machine to a Windows machine... I did not use that client script on the Windows machine.... yeah..., for some reason I thought VLC would handle it?? I have gone over the entire ffmpeg docs, but what I'm confused about is, on the client side for Windows, exactly how do I go about setting the right settings?? I am very very new to ffmpeg
[11:37:25 CEST] <jimgray> I would like to combine a set of images.jpg and an audio.mp3 into test.mp4. But the images differ in resolution. The command I use is:
[11:37:40 CEST] <jimgray> ffmpeg -framerate 29/201 -pattern_type glob -i '*.jpg' -i audio.mp3 -c:v libx264 -c:a copy -r 30 -pix_fmt yuv420p -shortest test.mp4
[11:38:15 CEST] <jimgray> But some images look ugly, losing original resolution.
[11:38:23 CEST] <faLUCE> hello, does zerolatency have no B-frames?
[11:40:02 CEST] <furq> faLUCE: http://vpaste.net/txsG2
[11:45:17 CEST] <faLUCE> thanks furq
[12:08:39 CEST] <kerio> yo can mpv play two video tracks at the same time
[12:09:40 CEST] <jer_> Hey guys, does anyone know of a way to remove the audio priming 'noise' when concatenating two mp4 files with aac audio? Between each file there is a short audio glitch.
[12:20:16 CEST] <BtbN> re-encode and use the concat filter
[12:26:04 CEST] <jer_> re-encoding isn't really an option here I'm afraid. We are working with automated processes concatenating thousands of videos in a short amount of time, and re-encoding them all will increase our time/cost tremendously. I found a suggestion that forcing the libfaac encoder would do the trick, but I still need to test that. Any other tips are more than appreciated
[12:26:53 CEST] <squ> re-encode just audio
[12:29:36 CEST] <BtbN> if you're not re-encoding, forcing an encoder is pointless
[12:29:39 CEST] <BtbN> also, faac is bad
[12:30:08 CEST] <BtbN> And I don't think it's possible to avoid this without re-encoding.
[12:30:27 CEST] <BtbN> The segments don't match, so the gap in audio becomes audible
[12:31:17 CEST] <BtbN> You might be lucky if you trim off the last second of each segment, or even less
[12:39:07 CEST] <jer_> hmm, quite right indeed :P Re-encoding just the audio could be the solution here. Time for some thorough testing. Thanks!
[12:40:00 CEST] <BtbN> that'll be hard to do
[12:40:20 CEST] <BtbN> if you use the concat filter, which is necessary to fix this, everything has to be re-encoded
[12:40:44 CEST] <BtbN> could split the files, re-encode the audio, and merge them again
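One possible middle ground, sketched with a placeholder list file (one "file 'segment.mp4'" line per input); whether it actually removes the priming glitch would need testing: use the concat demuxer, copy the video, and re-encode only the audio.

    ffmpeg -f concat -safe 0 -i list.txt -c:v copy -c:a aac -b:a 192k joined.mp4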
[14:08:27 CEST] <Nacht> Anyone know why this isn't working? I get a clip, but I'm not seeing any subs burned in:
[14:08:27 CEST] <Nacht> ffmpeg -i movie.mp4 -i sub.ass -filter_complex "[0:v][1:s]overlay[v]" -map "[v]" -map 0:a -c:a copy out.mp4
[14:14:43 CEST] <Nacht> I'm using a filter_complex because I want to burn in multiple subtitles at different heights/colors
[14:28:46 CEST] <kepstin> Nacht: to burn in subs, you should use the 'subtitles' filter
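For reference, a minimal sketch of that approach using Nacht's filenames; positioning and colour would normally come from the styles inside the .ass file rather than from filter options:

    ffmpeg -i movie.mp4 -vf "subtitles=sub.ass" -c:a copy out.mp4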
[14:30:17 CEST] <Nacht> Ah I see, the overlay only works if it's a track in a VOD ?
[14:31:49 CEST] <Nacht> Hmmm. I can't really set the heights of the subtitles with the subtitles filter
[14:35:00 CEST] <BtbN> heights?
[14:35:04 CEST] <BtbN> Do you mean fontsize? oO
[14:38:28 CEST] <DHE> I'm surprised the overlay filter didn't just barf on startup for specifying a non-video input
[14:57:46 CEST] <Nacht> I tried a few translation services and thought it would be handy having 3 diff language subtitles burned into the video at different heights so you see all languages at the same time.
[14:58:06 CEST] <Nacht> Ill just add them as sub tracks and leave it at that
[15:12:42 CEST] <BtbN> well, you could do that
[15:12:53 CEST] <BtbN> but I'm not sure if 3 different subtitles burned in is a good idea
[15:14:56 CEST] <sgermain> Hi, I'm having a strange issue with ffmpeg and I'm not quite sure how to debug it. I'm using it to transcode audio from a live MPEG2 stream and, sometimes, the process just freezes. No error, no indication of what happened, it just freezes. Now, I have my suspicions that the source of the video might be at fault here but I need a way to ignore those errors and keep encoding
[15:15:45 CEST] <sgermain> I did try to set -err_detect to ignore errors, but it doesn't seem to do anything
[15:16:27 CEST] <sgermain> The source of the video is H264+AC3, sent by a Zenverge zn200
[15:51:05 CEST] <danieeel> anybody able to help me out with how to use ffmpeg to encapsulate a bitstream with metadata? (e.g. to supply raw frames, codec info, colorspace tags... and produce a .mov file)
[16:12:24 CEST] <pgorley> hi, ffmpeg's configure is outputting "GNU assembler not found, install/update gas-preprocessor" even though gas-preprocessor.pl is in my PATH, what switch do i use to tell ffmpeg where the perl script is located?
[16:13:26 CEST] <pgorley> ./configure --as=gas-preprocessor.pl doesn't print errors, but shouldn't that switch be used to specify the actual assembler (and not just the preprocessor)?
[16:15:06 CEST] <iive> pgorley: this looks like the wrong program.
[16:15:26 CEST] <iive> that switch is for the actual assembler
[16:16:28 CEST] <iive> pgorley: are you compiling in msvc?
[16:17:23 CEST] <pgorley> iive: no, on ubuntu, compiling for android
[16:17:31 CEST] <iive> i'm wrong, the configure does check for gas-preprocessor.pl .
[16:19:16 CEST] <pgorley> hence my confusion :/
[16:20:21 CEST] <iive> yeh...
[16:20:33 CEST] <iive> no idea, somebody else around here should know.
[16:21:11 CEST] <iive> you do have working `gas` as in gnu assembler
[16:21:18 CEST] <pgorley> it checks for gas-preprocessor.pl in $as
[16:21:24 CEST] <pgorley> yes, i have gas installed
[16:21:42 CEST] <pfirsich> I am trying to convert the pixel format when piping out raw video, but it doesn't seem to work properly: https://pastebin.com/LdHPypyV
[16:22:09 CEST] <pfirsich> I think I don't understand something fundamental
[16:23:50 CEST] <furq> pfirsich: -pix_fmt, not -pixel_format
[16:24:11 CEST] <pfirsich> damn. It said that was deprecated and I assumed it was just a replacement for clarification
[16:26:53 CEST] <pfirsich> furq, that works. Thank you. But it says: "Option -pix_fmt is deprecated, use -pixel_format."
[16:27:10 CEST] <furq> -pix_fmt is an ffmpeg output option
[16:27:11 CEST] <pfirsich> and on that site: https://ffmpeg.org/ffmpeg.html it says nothing about deprecation. Is that error maybe not intentional? And if it is, it is most definitely confusing
[16:27:16 CEST] <furq> presumably ffplay still takes -pixel_format
[16:27:36 CEST] <furq> -pixel_format is an input option for some ffmpeg formats
[16:27:52 CEST] <furq> it's not confusing at all and also i am lying
[16:28:39 CEST] <pfirsich> you are right. it is a simple replacement for ffplay.
[16:29:14 CEST] <pfirsich> so passing -pix_fmt for ffmpeg and -pixel_format for ffplay works too.
[16:29:48 CEST] <pfirsich> I assume it is not a goal to have a unified interface?
[16:29:59 CEST] <furq> -pixel_format is an input option for both
[16:30:07 CEST] <pfirsich> ah, I see
[16:30:17 CEST] <pfirsich> now I understand, thank you
[16:30:18 CEST] <furq> if it's confusing then you can just use -vf format=rgb24 in ffmpeg
[16:30:24 CEST] <furq> -pix_fmt is just an alias for that
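A sketch of the kind of pipe being discussed, assuming ffplay as the consumer; the frame size and rate are placeholder values that the rawvideo demuxer needs spelled out:

    ffmpeg -i input.mp4 -vf format=rgb24 -f rawvideo - | ffplay -f rawvideo -pixel_format rgb24 -video_size 1280x720 -framerate 30 -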
[16:31:37 CEST] <pfirsich> thanks a lot furq
[16:32:16 CEST] <sgermain> Is there an advanced debugging mode for ffmpeg? Trying to figure out why my transcode freezes periodically
[16:37:50 CEST] <pgorley> sgermain: configure using --enable-debug=[some high number], run ffmpeg with loglevel=debug (or trace, but it's noisy) and use gdb
[16:39:49 CEST] <sgermain> Would that show errors coming from the source? Like I said earlier, I think that Zenverge encoder is sending out garbage that causes ffmpeg to freeze
[16:40:32 CEST] <JEEB> yes, just debug log level should show various input demuxing/decoding errors
[16:40:47 CEST] <sgermain> Cool! Thanks
[16:40:52 CEST] <sgermain> I'll try that
[16:40:52 CEST] <JEEB> (no need for re-building just for the logging part unless you somehow built without the logging enabled)
[16:41:11 CEST] <sgermain> It's the default Ubuntu build
[16:41:19 CEST] <pgorley> loggin's enabled
[16:41:33 CEST] <sgermain> Excellent
[16:41:41 CEST] <sgermain> Thanks guys!
[16:53:23 CEST] <adhawkins> Hi all. Trying to convert a 4k video file using ffmpeg to change the frame rate to 24fps for a particular playout device. The conversion is failing. Log is here:
[16:53:24 CEST] <adhawkins> http://paste.debian.net/975911/
[16:53:30 CEST] <adhawkins> Anyone suggest what I'm doing wrong?
[17:00:57 CEST] <Nacht> Anyone got experience with Chinese subtitles? The characters look good in Notepad++ with UTF-8 encoding, but when I inject them into the MP4 I get nothing but squares
[17:02:36 CEST] <sfan5> does it work in a different container? which media player are you using to verify?
[17:02:44 CEST] <Nacht> VLC
[17:03:06 CEST] <BtbN> Your player is probably lacking a font for them
[17:03:23 CEST] <Nacht> Ah, thought I was going nuts. Hadn't considered the player. I'll check that. Cheers
[17:03:47 CEST] <BtbN> or something messed up the encoding
[17:25:45 CEST] <carstoge> Hey guys, after a lengthy two-pass vp9 encode of a 2-hour h264 encoded video, I've noticed a dramatic loss of quality in the output; it's not watchable anymore. I used stock settings, as described in the wiki article 'VP9/Encode'. Any tips?
[17:28:13 CEST] <carstoge> I've fiddled a bit with the options to get a more speedy encode with better visual quality output, but to no avail.
[17:28:40 CEST] <carstoge> The file size is amazingly small, I can deal with it being bigger.
[17:38:06 CEST] <furq> the stock settings are 200kbps
[17:38:10 CEST] <furq> so it's no wonder it looks shit
[17:38:26 CEST] <pgorley> so i found out the problem with my gas-preprocessor, we were downloading the html page instead of the perl script, whoops
[17:38:40 CEST] <carstoge> furq: What bitrate do you recommend?
[17:38:56 CEST] <furq> https://trac.ffmpeg.org/wiki/Encode/VP9#constantq
[17:39:01 CEST] <furq> that's recommended
[17:39:04 CEST] <carstoge> Thank you.
[17:39:14 CEST] <furq> encode a short sample and pick a crf that suits you
[17:39:29 CEST] <furq> https://trac.ffmpeg.org/wiki/Encode/VP9#speed
[17:39:34 CEST] <furq> and read that if you want it to be faster
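A sketch of the constant-quality two-pass form those wiki pages describe; the CRF, speed value and filenames are placeholder choices, and the /dev/null path assumes a Unix-like shell:

    ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:v 0 -crf 31 -pass 1 -an -f null /dev/null
    ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:v 0 -crf 31 -pass 2 -speed 2 -c:a libopus output.webm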
[17:40:46 CEST] <carstoge> But can I use VBR, just with a higher targeted bitrate?
[17:41:56 CEST] <kepstin> adhawkins: dunno if it's related, but you have to repeat the output options (codec, bitrate, etc.) for each output - your second output on that is using mpeg2video & mp2 audio since nothing's set.
[17:42:08 CEST] <kepstin> but i don't know if that's the issue - does it work if you only use one output?
[17:59:09 CEST] <lmat> ffmpeg -y -ss 0:20 -i dlittle7.avi -itsoffset 0:4.5 -i dlittle7-stalbans_b_mono.wav -map 1:0 -map 0:0 -c:v copy -c:a aac dlittle7-mixed.mp4
[17:59:20 CEST] <lmat> I'm taking audio from one file and putting it with the video of another file.
[17:59:32 CEST] <lmat> My desire here is to make the audio only start 4.5 seconds into the video.
[18:00:02 CEST] <lmat> It's not working, though...The result seems to be that the first 4.5 seconds of the video is being truncated, and the audio is starting at the beginning.
[18:00:29 CEST] <lmat> Basically, I *could* add 4.5 seconds of silence to the audio, but I would rather do this in ffmpeg. What am I doing incorrectly?
[18:18:59 CEST] <Knorrie> I'm trying to concat a few MP4 files. http://paste.debian.net/plainh/b5a52d66  For some reason, it keeps exiting with "files: Operation not permitted", whatever I try. I have no idea what would not be permitted, it's just a text file with a few lines
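That message is also what the concat demuxer prints when it rejects a path as "unsafe" (absolute paths or unusual characters); one thing worth trying is adding -safe 0, e.g. (where "files" is the list file from the paste):

    ffmpeg -f concat -safe 0 -i files -c copy output.mp4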
[18:43:50 CEST] <lmat> Very strange: ffmpeg -y -itsoffset 1:7 -i dlittle7.avi dlittle7-mixed.mp4
[18:44:15 CEST] <lmat> I would expect this command to delay the video and audio by one minute, seven seconds. What actually happens is ... strange.
[18:45:06 CEST] <lmat> The audio starts right away, but the video is frozen on the first frame. About ten seconds in, the video goes very fast and catches up with the audio, then everything plays in sync.
[18:45:15 CEST] <lmat> is -itsoffset experimental or something?
[18:45:36 CEST] <lmat> Also, when I use ffplay to play the video, it starts at 67 seconds.
[19:33:14 CEST] <kepstin> lmat: having the audio and video start at different times in files is ... problematic.
[19:33:38 CEST] <kepstin> most players sync to audio, so they'll start the video at the point where the audio starts
[19:34:15 CEST] <kepstin> so yeah, you'll probably have to add some silence to the start of the audio track (this is something you *can* do with ffmpeg, using a filter; requires re-encoding)
[19:34:59 CEST] <kepstin> https://ffmpeg.org/ffmpeg-filters.html#adelay can do it, for example
[19:38:41 CEST] <hiru> I was having a look at audio filters but how can I use decimal values? the command just fails if I try to do that https://stackoverflow.com/a/29222419
[19:41:52 CEST] <hiru> oh, I had to use a value in seconds, converting minutes to seconds. nevermind
[19:42:57 CEST] <agentsim> got a configure issue I'm stumped on. I need to build a pretty minimal build of avformat/codec/util supporting only audio files and images (for embedded artwork)
[19:43:19 CEST] <agentsim> it seems to work, but I cannot read PNG formatted embedded art
[19:43:49 CEST] <agentsim> I've included png and apng whenever it is an option in muxers, demuxers, encoders, decoders, etc... but still no luck
[19:43:55 CEST] <agentsim> any ideas what I might be missing?
[19:44:13 CEST] <c_14> probably something id3 or mp3 or something?
[19:45:15 CEST] <agentsim> my test file is an ALAC encoded M4A file, if i replace the artwork with a jpeg it can find it just fine
[19:45:28 CEST] <agentsim> so I doubt it is id3/mp3 related
[19:45:56 CEST] <c_14> what do you have enabled? output of ./configure ? (on a pastebin site)
[19:55:04 CEST] <agentsim> https://pastebin.com/YFAuAynF
[20:02:31 CEST] <c_14> you need zlib
[20:03:14 CEST] <c_14> for png
[20:03:24 CEST] <c_14> which you can notice since png isn't in the list of enabled decoders
[20:04:05 CEST] <c_14> (I do wish configure would error out when something you tried to enable couldn't be enabled, for the auto-detected things)
[20:09:49 CEST] <BtbN> c_14, it does?
[20:10:33 CEST] <c_14> not for the auto-detected things
[20:10:36 CEST] <c_14> like sdl, zlib etc
[20:10:41 CEST] <BtbN> yes, even for those
[20:10:47 CEST] <c_14> it does?
[20:10:49 CEST] <c_14> since when?
[20:11:58 CEST] <BtbN> a while
[20:12:06 CEST] <BtbN> No idea if it made it to a release yet
[20:12:23 CEST] <agentsim> c_14: thanks, I'll give that a go
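For context, a hedged sketch of a configure invocation with zlib available so the png decoder actually gets built; every other flag from agentsim's real build is omitted here and would go alongside these:

    ./configure --enable-zlib --enable-decoder=png --enable-parser=png --enable-demuxer=image_png_pipe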
[20:12:44 CEST] <c_14> BtbN: zlib is check_lib and check_lib doesn't use die? only require_ uses die
[20:13:17 CEST] <BtbN> There is special logic to check for things that were user-requested but not enabled at the end of configure
[20:13:31 CEST] <BtbN> And at least for --enable-sdl2, it works for me.
[20:13:41 CEST] <BtbN> ERROR: sdl2 requested but not found
[20:14:14 CEST] <BtbN> line 6531 in configure
[20:15:10 CEST] <lmat> kepstin: Thanks!
[20:18:13 CEST] <c_14> BtbN: then it might just be in the --enable-decoder that doesn't fail anymore
[20:18:25 CEST] <BtbN> no
[20:19:27 CEST] <c_14> Well he had --enable-decoder=png and it wasn't enabled and didn't fail (though I guess I don't know what version he's building)
[20:22:54 CEST] <c_14> --enable-encoder=libx265 neither enables libx265 nor fails on my system
[20:23:11 CEST] <agentsim> I'm building 3.3.2 tag
[20:28:12 CEST] <lmat> kepstin: hmm, I'm a little clueless; how do I use adelay? (I should probably be asking: "how do I use filters?".)
[20:28:23 CEST] <lmat> I have    ffmpeg -y -ss 0:20 -i dlittle7.avi -adelay 14.5 -i dlittle7-stalbans_b_mono.wav -map 1:0 -map 0:0 -c:v copy -c:a... but that doesn't work
[20:30:29 CEST] <kepstin> lmat: start by reading the filters overview at the top of the page I linked. Here you can use "-af adelay=14500" to add 14.5 seconds of delay to a mono audio track. Note that "-af" is an output option, so it goes after all the input files and before the output file.
[20:30:54 CEST] <lmat> kepstin: is -filter adelay... different than -af adelay... ?
[20:31:24 CEST] <kepstin> lmat: "-af" is an alias for "-filter:a"
[20:32:16 CEST] <lmat> kepstin: awesome.
[20:32:47 CEST] <lmat> Ah, it matches up perfectly.
[20:33:41 CEST] <lmat> well...maybe 100 more milliseconds...
[20:35:35 CEST] <agentsim> c_14: thanks, that did the trick :)
[20:43:43 CEST] <c_14> BtbN: ./configure --disable-all --enable-ffmpeg --enable-avformat --enable-avcodec --enable-parser=png --enable-demuxer=apng --enable-decoder=png --enable-filters --enable-protocol=file --enable-decoder=apng --enable-demuxer=image2 --enable-demuxer=image2pipe --enable-demuxer=image_png_pipe --disable-zlib <- minimal example that doesn't fail and also doesn't enable the png decoder. Checked on master as of
[20:43:45 CEST] <c_14> 02d248
[20:51:15 CEST] <lmat> kepstin: My command: ffmpeg -y -ss 0:20 -i dlittle7.avi -i dlittle7-stalbans_b_mono.wav -map 1:0 -map 0:0 -c:v copy -c:a aac -filter:a adelay=4100 -vf fade:in:0:30 -t 223 dlittle7-mixed.mp4
[20:51:32 CEST] <lmat> I would like to fade in (vide -vf toward the end), but I am   -c:v copy. These are not compatible.
[20:51:42 CEST] <lmat> I'm happy to get rid of copy, but I don't know with what to replace it.
[20:52:12 CEST] <kepstin> with the name of the codec that you would like to use to re-encode the video. In this case, probably x264
[20:52:21 CEST] <kepstin> er, 'libx264'
[20:53:02 CEST] <lmat> okay
[20:53:47 CEST] <lmat> kepstin: I think I should have known that...
[20:54:15 CEST] <lmat> Oh, I just realized that all my cores are busy with this operation and my cpu is running at > 4.5 ghz :-o
[20:54:41 CEST] <lmat> yes, worked perfectly!
[20:57:53 CEST] <lindylex> How do I set the overlayed video width and height to 100?  This is what I tried ffmpeg -i m.mov -i o.mov -filter_complex 'overlay=x=440:y=400,overlay_w=100:overlay_h=100' -y my.mov
[20:58:02 CEST] <lindylex> It does not work.
[20:58:56 CEST] <lmat> lindylex: Which video are you wanting to overlay onto which?
[20:59:32 CEST] <lindylex> o.mov will be the overlay video.  I need to resize it to 100 x 100.
[21:00:39 CEST] <lindylex> I think I could specify from reading the docs.  This is what it says "overlay_w, w
[21:00:39 CEST] <lindylex> overlay_h, h
[21:00:39 CEST] <lindylex>     The overlay input width and height.
[21:00:39 CEST] <lindylex> "
[21:00:41 CEST] <lmat> lindylex: ',' is used to add more filters. Perhaps change ',' to ':' ?
[21:01:03 CEST] <lmat> lindylex: (I'm totally newb)
[21:03:25 CEST] <lindylex> never mind, overlay_h and overlay_w store the width and height of the overlaid video. They are not for setting it. I need to resize before I overlay.
[22:26:40 CEST] <OldBrick> Hi All.  I'm having a bit of trouble with ffmpeg.  I'm using a small script to launch it and record a video stream.  It works great from the command line, but fails when used from a timer.  Linux box, and the error always is "could not find codec parameters".  Anyone have some thoughts?
[22:36:10 CEST] <JodaZ> OldBrick, you have to explicitly set the input codec now, whereas before ffmpeg could probably guess it from the file extension
[22:39:08 CEST] <OldBrick> I think I do, the whole command is "ffmpeg -t $RECORDTIME -i udp://@:5000?timeout=1500000 -c:v mpeg2video  -pass 1  /home/av/bobdisk/$FILENAME  &"
[22:41:36 CEST] <JodaZ> so what is the input format?
[22:41:48 CEST] <JodaZ> it has to come before the -i in the command
[22:43:30 CEST] <OldBrick> Hmm. I don't want to transcode, just stream to disk.  So the -c:v mpeg2video should go before -i.  I'll try that.  Wonder why it works from the bash prompt then.
[22:50:49 CEST] <JodaZ> OldBrick, looking at it, it might be the extension on the output file missing too
[22:50:52 CEST] <JodaZ> not sure
[23:29:08 CEST] <kepstin> OldBrick: what exactly are you trying to do with that command? There are some options there that don't really make sense...
[23:30:25 CEST] <kepstin> OldBrick: first, what is $RECORDTIME - what's an example value, what's the intent of its use?
[23:32:19 CEST] <kepstin> OldBrick: right now you're transcoding to mpeg2, you might want to consider using '-c:v copy' if you're just saving to disk. Also the '-pass' option only makes sense if you're going to do a 2-pass encode, which you can't do with a live source. Just remove that option.
[23:33:30 CEST] <kepstin> I'm not sure what you mean by "when used from a timer"
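For reference, kepstin's suggestions applied to OldBrick's command would look roughly like this; the 60-second duration and the .ts output extension are assumptions:

    ffmpeg -t 60 -i "udp://@:5000?timeout=1500000" -c copy /home/av/bobdisk/recording.ts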
[23:35:40 CEST] <puppy431> Hello, under windows, is it possible to encode audio directly from a CD (-i Track0n.cda) ?
[23:45:21 CEST] <OldBrick> JodaZ, kepstin, there was a lot of playing around to try to make it work, so not all are doing anything.  I'm trying to record video from an hdhomerun tv tuner to a file.  $RECORDTIME is set to 1 minute for test.
[23:45:39 CEST] <OldBrick> The timer is just a systemd timer.
[23:46:14 CEST] <kepstin> puppy431: maybe with a custom build that includes libcdio it could work, but e.g. the zeranoe builds don't have that.
[23:47:26 CEST] <puppy431> kepstin, is it possible under linux, then?
[23:48:05 CEST] <kepstin> puppy431: yes
[23:49:09 CEST] <puppy431> and can you 'rip' all tracks at once?
[23:49:21 CEST] Action: kepstin notes that libcdio isn't known as a particularly reliable cd ripper, so it's probably still better to use other tools like cdparanoia to rip cds before encoding.
[23:49:55 CEST] Action: puppy431 thanks kepstin ;)
[23:50:16 CEST] <kepstin> puppy431: you could rip all tracks at once - as a single long audio file, yes.
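For anyone curious about the libcdio route on Linux, a rough sketch assuming an ffmpeg built with --enable-libcdio and a drive at /dev/sr0; as kepstin says, it grabs the whole disc as one long file:

    ffmpeg -f libcdio -i /dev/sr0 disc.flac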
[00:00:00 CEST] --- Wed Jul 12 2017


