[Ffmpeg-devel-irc] ffmpeg.log.20180609

burek burek021 at gmail.com
Sun Jun 10 03:05:02 EEST 2018


[01:40:05 CEST] <wfbarksdale> does av_seek_frame handle making sure that, in the case of video and audio, that both streams can be decoded successfully from the resulting seek point? or do you need to do two seeks and check the resulting file position and use the earliest seek result?...  it looks like avformat_seek_file handles this, but the API is still under construction
[01:55:57 CEST] <wfbarksdale> also, i'm actually not finding any way to check the position of the AVFormatContext when demuxing
[06:39:36 CEST] <scientes> is the DTS decoder multi-threaded
[06:40:34 CEST] <scientes> (ac3)
[06:44:50 CEST] <scientes> i've had problems with skipping on an h.264/5.1 video even with video off in vlc
[06:55:28 CEST] <atomnuker> no, it isn't
[06:56:58 CEST] <scientes> not even for different channels
[06:57:06 CEST] <scientes> 5.1
[06:57:30 CEST] <atomnuker> no
[06:57:35 CEST] <reepca> not sure where else to ask this, but I'm trying to use ffmpeg to take a picture with my webcam. However I have to have it running for over 2 seconds or the resulting image is very dark. I'd like to figure out a way to make that a one-time on-startup delay for an interactive application that uses ffmpeg. That is, I want to interactively specify to a running ffmpeg when frames should be captured and written. Or, alternatively, find a way
[06:57:35 CEST] <reepca> to keep the camera "running", if that makes any sense, between invocations of ffmpeg. Any advice?
[09:21:51 CEST] <cryptopsy> how do i losslessly trim? this was my command  ffmpeg -i "${1}" -ab 320k -ac 2 -ar 48000 -ss "${3}" -t "${4}" -strict -2 "${2}"
[10:05:24 CEST] <Mavrik> cryptopsy, -codec copy instead of ab, ac, ar
[10:05:35 CEST] <Mavrik> That skips encode and just copies part of the stream
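A hedged sketch of what that looks like (filenames and times hypothetical):

```shell
# Copy the selected range without re-encoding. -ss/-t select the range;
# with -codec copy the cut can only land on keyframes, so it is
# keyframe-accurate rather than frame-accurate.
ffmpeg -ss "00:01:30" -i input.mp4 -t "00:00:45" -codec copy output.mp4
```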
[10:09:41 CEST] <cryptopsy> have to reboot, brb
[11:28:42 CEST] <jpeg> I am trying to overlay an input with an alpha channel webm, by using the libvpx-vp9 encoding, it works fine, if I use a command I found through google, but not if I add filters/music to it (it overlays the webm with black background instead of transparency), here's both commands and the link mentioned: https://pastebin.com/ra0hCGGx what am I doing wrong or what flag am I missing?
[11:32:04 CEST] <ChocolateArmpits> jpeg, try converting the jpg to a pixel format that has alpha channel support before directing it to overlay
[11:32:28 CEST] <ChocolateArmpits> it may be that overlay takes the pixel format of the first input and uses that for output
[11:32:57 CEST] <jpeg> I'll try to google that up, don't quite know how I would do that to be honest, thanks :)
[11:34:53 CEST] <BtbN> jpg doesn't have transparency. Converting it to a format with an alpha channel will only give you something fully opaque as well
[11:35:58 CEST] <jpeg> oh wait so I just need to convert it to png, now I get it, sorry makes sense, let's try that
[11:36:12 CEST] <BtbN> that makes no sense, no.
[11:36:26 CEST] <BtbN> Information about which parts are meant to be transparent won't suddenly show up
[11:37:14 CEST] <ChocolateArmpits> BtbN, well if he's scaling the webm and the jpg is smaller than that and the webm is transparent then the resultant image should have some transparency
[11:37:32 CEST] <ChocolateArmpits> provided everything works, though it doesn't right now
[11:37:49 CEST] <jpeg> I mean if I convert the input file to a png first and overlay that with the webm, then it should work as much as I understood, worth a test anyway
[11:38:03 CEST] <ChocolateArmpits> then make sure the png has alpha as well
[11:38:23 CEST] <ChocolateArmpits> doesn't matter if it's fully opaque
[11:50:10 CEST] <jpeg> doesn't seem to work by using a png with alpha channel
[11:50:21 CEST] <jpeg> sorry that it took so long, literally rendering on a potato
[11:51:14 CEST] <jpeg> I feel it has something to do with some missing flag I am not setting or a flag I -am- setting, causing the issue, but I can't figure out what it is from the docs or googling
[12:04:08 CEST] <furq> jpeg: i take it background.webm and background.jpg are fully opaque and you just want the transparent video overlaid on top
[12:04:17 CEST] <jpeg> yes
[12:04:22 CEST] <furq> weird
[12:04:26 CEST] <furq> i don't see anything in there that wouldn't work
[12:04:57 CEST] <furq> other than vp9 in mp4, which probably isn't what you want
[12:05:01 CEST] <furq> that wouldn't cause the problem though
[12:05:42 CEST] <jpeg> just tried setting pix_fmt to yuv420p, which iirc does have a transparency channel, and it still overlays the transparent webm with black background
[12:09:00 CEST] <jpeg> just to clarify, maybe I have stated something wrong before: I have a background image (tried both jpg and png) and I want to overlay a webm that has an alpha channel (a text that just fades in and out)
[12:13:18 CEST] <BtbN> yuv420p does not have transparency
[12:13:20 CEST] <BtbN> yuva does
[12:15:39 CEST] <furq> scale shouldn't change the format anyway
[12:17:21 CEST] <jpeg> BtbN just "yuva"?
[12:17:27 CEST] <furq> yuva420p
[12:17:32 CEST] <furq> don't set that as your output format though
[12:17:59 CEST] <furq> maybe try format=yuva420p,scale=1920:1080
[12:18:02 CEST] <furq> or the other way round
[12:18:07 CEST] <furq> it shouldn't be necessary though
[12:18:40 CEST] <jpeg> ok will try
[12:19:05 CEST] <jpeg> would it help btw if I'd pastebin the console output?
[12:19:28 CEST] <furq> it couldn't hurt
[12:20:09 CEST] <furq> also for debugging you probably want to use something other than vp9
[12:20:29 CEST] <jpeg> ok here it is for the current command: https://pastebin.com/sGsT5SLD
[12:20:30 CEST] <furq> if the output after overlay is supposed to be fully opaque then it's not the encoder that's to blame
[12:20:50 CEST] <furq> oh what
[12:21:06 CEST] <furq> yuv420p with "alpha_mode=1"
[12:22:00 CEST] <furq> maybe you do need to explicitly set yuva420p then
[12:22:00 CEST] <jpeg> is that a clue to the issue?
[12:22:24 CEST] <jpeg> well I did set the format and rendering rn, let's see
[12:22:39 CEST] <furq> like i said, try encoding to something faster than vp9 for debugging
[12:22:49 CEST] <jpeg> what would that be?
[12:22:59 CEST] <furq> -c:v libx264 -preset ultrafast
[12:23:25 CEST] <jpeg> ok let me do that
[12:24:17 CEST] <jpeg> yeah seems to be faster, thanks, let's see once it finishes
[12:26:37 CEST] <jpeg> still black background
[12:28:47 CEST] <andoru> hello everyone
[12:29:02 CEST] <andoru> does anyone know how to efficiently encode a still image video with VP9?
[12:29:35 CEST] <andoru> basically it's an hour-long video file that only has the first frame repeated until the video ends (duration set with avisynth)
[12:30:33 CEST] <andoru> I've tried to encode it with this command line: ffmpeg.exe -i "test.avs" -c:v libvpx-vp9 -lossless 1 output.webm
[12:30:52 CEST] <andoru> it gave me a 60MB video file, and it was a bit slow
[12:31:09 CEST] <andoru> while it's a decent result, I was wondering if it would be possible to do better?
[12:31:22 CEST] <BtbN> set a super low framerate
[12:31:42 CEST] <andoru> I did, the script is set to output 1FPS
[12:31:54 CEST] <jpeg> furq I'll try to encode again with vp9 just in case
[12:32:08 CEST] <BtbN> was more thinking like 1/length
[12:32:33 CEST] <andoru> I don't quite follow
[12:33:44 CEST] <andoru> you mean 0.1FPS?
[12:35:19 CEST] <jpeg> furq nope with vp9 and format=yuva420p it's still adding the black background to the transparent webm
[12:37:01 CEST] <andoru> what I was looking for were some encoder switches that would tell it to encode the first frame as lossless, while the subsequent frames would be empty copies of the first frame
[12:37:43 CEST] <andoru> I was able to achieve this by setting the keyint in h264 to infinite, but I'm not aware of such a setting in VP9
[12:38:16 CEST] <furq> andoru: -g
[12:38:23 CEST] <furq> that will obviously prevent you from seeking though
[12:38:40 CEST] <andoru> oh, that's a no-no :/
[12:38:53 CEST] <furq> i mean infinite keyint in x264 would do the same thing
[12:39:16 CEST] <furq> seeking will still sort of work in some players but it'll be really slow
[12:39:17 CEST] <andoru> I was able to seek the h264 video (although slowly)
[12:39:28 CEST] <andoru> yeah, then I'll try that
[12:39:30 CEST] <furq> but some players will refuse to seek to anything other than an IDR frame
[12:39:34 CEST] <andoru> thanks!
[12:39:38 CEST] <furq> or whatever they call it in vp9
[12:46:28 CEST] <andoru> okay, I've added -g -1, also tried -g 9999 and -g 0, but none of them seem to help :/
[12:49:08 CEST] <jpeg> the googled command seems to work even if I apply it to an image: https://pastebin.com/JdwBeNUS I don't understand what the most recent command https://pastebin.com/wEa0QWEm does wrong, so that the webm has black background instead of transparency
[12:49:31 CEST] <jpeg> is there anything else I could try that maybe would make it work?
[12:50:21 CEST] <andoru> ah, nevermind
[12:50:40 CEST] <andoru> I forgot I've set -crf 1 instead of -lossless 1 on my subsequent trials
[12:51:19 CEST] <andoru> it works now with this command: ffmpeg.exe -i "test.avs" -c:v libvpx-vp9 -g 9999 -lossless 1 output.webm
[12:51:28 CEST] <andoru> thank you for the help furq
[15:15:13 CEST] <RedSoxFan07> Is it possible to keep PGS subtitles and also convert them to SSA and SRT? How?
[15:16:11 CEST] <RedSoxFan07> Like this? -c:s copy -c:s ssa -c:s srt
[15:16:45 CEST] <JEEB> it's not possible
[15:16:51 CEST] <JEEB> PGS is picture based
[15:17:02 CEST] <JEEB> you need to OCR it to get text out of it
[15:17:04 CEST] <furq> you would normally be able to map the stream twice and -c:s:0 copy -c:s:1 srt
[15:17:19 CEST] <furq> but yeah, you can't automatically convert image subs to text with ffmpeg
[15:17:31 CEST] <JEEB> there's a filter for OCR, but unfortunately it doesn't return the text in a way that you can then utilize to make subtitle packets out of it
[15:18:10 CEST] <furq> there are plenty of good OCR tools out there anyway
[15:18:20 CEST] <furq> and once you've done that you can mux the srt back in and have both
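Assuming the PGS track has already been OCRed to subs.srt with an external tool, the remux might look like this (filenames hypothetical; stream layout depends on the input):

```shell
# Keep all original streams (video, audio, PGS) and append the OCRed SRT;
# SRT can be stream-copied into Matroska.
ffmpeg -i input.mkv -i subs.srt -map 0 -map 1:0 -c copy output.mkv
```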
[15:20:13 CEST] <JEEB> yea, for a static piece of video there's plenty of tools
[15:20:15 CEST] <JEEB> like subtitle edit
[15:20:31 CEST] <JEEB> you have some cases where you'd want to take in broadcast picture subtitles
[15:20:34 CEST] <JEEB> and OCR them into text
[15:36:13 CEST] <johnch> Hi! Does ffmpeg run OK on the Raspberry Pi? I notice various plugins in the Pi repo but not ffmpeg itself. Will I have to compile from source?
[15:38:48 CEST] <JEEB> FFmpeg can utilize the hw decoding stuff on the rpi, but if you're planning to do anything on the CPU that's not just churning out audio, it's going to be a road to pain
[15:39:14 CEST] <JEEB> as in, FFmpeg will run on the rpi (Both armv7 and aarch64)
[15:39:30 CEST] <JEEB> but it's more the utter slowness of the rpi platform that generally gets you limited
[15:42:19 CEST] <johnch> JEEB: this is what I was worried about, spending a lot of time and effort only to get slow and laggy output. I have an audio stream set up in Icecast. Was thinking of adding video.
[15:43:14 CEST] <JEEB> I think the rpi might have a hw decoder and encoder for H.264 (not 100% sure of the latter), but you can almost certainly give up on anything software based on that thing
[15:43:59 CEST] <JEEB> and the hw encoder I'm not sure how well it does its bandwidth limiting so I'm not really sure how well it's suited for streaming. also the compression capabilities of a hw encoder are generally aimed towards low latency and speed, not compression ratios
[15:44:37 CEST] <JEEB> so if you are thinking of actually doing the A/V coding on the rpi...
[15:44:58 CEST] <JEEB> that has to be a really minimal thing, not sure if you'd even get scaling in there
[15:45:17 CEST] <JEEB> since you'd have to make sure that you can decode the input with hw decoding chip, then just throw that into the hw encoder
[15:45:34 CEST] <JEEB> at which point you might also think about just copying the source bit stream of video into the output as-is
[15:48:36 CEST] <johnch> JEEB: yes, I think you are right about the H.264, but considering all other factors you mention trying to run video seems pretty marginal at best.
[15:49:20 CEST] <johnch> JEEB: was thinking of just streaming a single camera feed, but that might go to, say, half a dozen clients. Sounds like it would not cope anyway.
[15:49:41 CEST] <johnch> JEEB: will probably just stick to providing the audio streams.
[15:49:55 CEST] <johnch> JEEB: thanks for responding to my question.
[15:50:42 CEST] <JEEB> the camera might either output its own crappy H.264 stream, or give you raw frames. although given how everything on the rpi is on the USB thing, I don't think raw video input is a good idea...
[15:52:34 CEST] <johnch> JEEB: the cam I was going to experiment with (the only one I have) is a Sony Playstation Eyetoy. Most likely RAW video I would think, so I guess that puts paid to that idea.....
[15:52:57 CEST] <JEEB> johnch: oh that thing. the ps2 one? I happen to still probably have one of those somewhere
[15:53:13 CEST] <JEEB> those frames would be minimal I think, you could try plugging it to a linux VM and see what it exports
[15:53:15 CEST] <johnch> JEEB: yes, the ps2 one.
[15:53:34 CEST] <JEEB> I think it worked under linux surprisingly enough since it was some rebranded logitech thing
[15:54:01 CEST] <JEEB> and then if it could output 4:2:0 YCbCr you could in theory output that to the hw encoder
[15:54:12 CEST] <JEEB> (not 4:2:2 since that's not supported by almost any HW)
[15:54:41 CEST] <JEEB> or you could maybe get JPEG from the camera
[15:54:43 CEST] <johnch> JEEB: yes, works on my MINT setup just fine. Not sure how to check what it puts out though.
[15:54:44 CEST] <JEEB> which is crappy
[15:54:51 CEST] <JEEB> but at least wouldn't use CPU?
[15:55:00 CEST] <JEEB> and you could pass that on as-is
[15:59:31 CEST] <johnch> JEEB: I have it on VLC. Just trying to figure out if there is any info about what VLC is receiving from it
[15:59:56 CEST] <JEEB> it's probably exported as a v4l2 device
[16:00:11 CEST] <JEEB> so there should be some tools to check what it can output etc
[16:01:53 CEST] <johnch> JEEB: found it. You are quite right, its a v4l2 device, motion jpeg video, 640x480, 4:2:2 YUV. Not sure what YUV means though.
[16:03:02 CEST] <JEEB> YUV is a misnomer for YCbCr in digital space
[16:03:12 CEST] <JEEB> (although FFmpeg also calls it so :))
[16:03:20 CEST] <JEEB> but yes, it's MJPEG
[16:03:22 CEST] <JEEB> not raw video
[16:03:44 CEST] <JEEB> so I would give up on re-encoding that and rather just pass those images through
[16:03:55 CEST] <JEEB> I think the frame rate depended on the resolution and lighting
[16:07:26 CEST] <johnch> JEEB: makes sense just to pass that through. Keeps overhead to a minimum, which as you say is essential here. I will perhaps experiment a bit.
[16:07:42 CEST] <johnch> JEEB: does ffmpeg come in a deb package?
[16:08:49 CEST] <johnch> JEEB: for ARM that is. I can see there is Debian which I assume is for Intel
[16:09:37 CEST] <JEEB> it's most certainly buildable but no idea how old it is
[16:10:39 CEST] <johnch> JEEB: ok, looks like I will have to build it then. Was hoping not to clutter the pi with dev tools...
[16:11:08 CEST] <JEEB> then just look up how to cross compile with your rpi sysroot
[16:11:29 CEST] <JEEB> it will also go much faster then
[16:12:50 CEST] <johnch> JEEB: no idea what a rpi sysroot is, so just researching that and how to cross compile. never done that before. Thanks for your help.
[16:13:41 CEST] <JEEB> sysroot can just be your / on rpi
[16:13:54 CEST] <JEEB> which you can put somewhere on your system
[16:14:10 CEST] <JEEB> there are some debian tools for getting it from the debian repos too
[16:15:21 CEST] <johnch> JEEB: thnx. Just reading up on it.
[16:28:43 CEST] <furq> does ffmpeg support the mmal mjpeg decoder yet
[16:29:19 CEST] <furq> johnch: on a recent debian-ish os you should just need crossbuild-essential-armhf
[16:30:08 CEST] <furq> and then just copy /opt/vc4 off your pi and pass that to --extra-cflags for mmal and omx
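A rough configure invocation under those assumptions (toolchain prefix and sysroot path hypothetical):

```shell
# Cross-compile for 32-bit ARM with the Debian cross toolchain and a copy
# of the Pi's root filesystem in ./sysroot; mmal enables the hw decoder,
# omx/omx-rpi the hw H.264 encoder.
./configure --enable-cross-compile --arch=arm --target-os=linux \
            --cross-prefix=arm-linux-gnueabihf- --sysroot="$PWD/sysroot" \
            --enable-mmal --enable-omx --enable-omx-rpi
make -j"$(nproc)"
```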
[16:48:55 CEST] <debianuser> Hello. Short question. https://ffmpeg.org/ffmpeg.html mentions `-stdin` to "enable interaction on standard input" and `-nostdin` to disable those interactive commands. But where can I find a list of those interactive commands?
[16:49:57 CEST] <furq> press ? while it's running
[16:53:30 CEST] <debianuser> furq: Thank you!
[17:02:47 CEST] <jpeg> furq would you have any other ideas on what I could try to maybe make the webm overlay to be transparent? I googled since we last tried and none of the solutions seemed to work, so I hoped you'd maybe know something else
[17:06:31 CEST] <furq> i couldn't really suggest anything other than making sure you're on a recent ffmpeg
[17:06:38 CEST] <furq> like i said, the same workflow seems to work fine here
[17:08:38 CEST] <jpeg> I pulled the newest ffmpeg there is, guess it's just something strange happening
[17:09:50 CEST] <furq> it's weird that scale breaks it but overlay doesn't
[17:11:57 CEST] <jpeg> is it scale that breaks it though?
[17:12:00 CEST] <jpeg> let me remove that and see
[17:12:14 CEST] <furq> you're not adding anything else that would break it
[17:16:05 CEST] <furq> oh wtf
[17:16:13 CEST] <jpeg> removing it - it still breaks
[17:16:16 CEST] <furq> apparently you need -c:v libvpx before -i or else the decoder doesn't handle transparency
[17:16:29 CEST] <jpeg> oh seriously? lol
[17:16:29 CEST] <furq> that's...dumb
[17:16:38 CEST] <jpeg> where did you get that from? I went through the whole internet, it feels like
[17:16:41 CEST] <jpeg> lets test
[17:16:44 CEST] <jpeg> sounds promising
[17:16:56 CEST] <furq> i saw it a few minutes ago on SO and assumed it was old news
[17:17:00 CEST] <furq> since the post was two years old
[17:17:05 CEST] <furq> but i just tried it out and it works
[17:18:06 CEST] <jpeg> could you link me that page just so I could bookmark it for later use too?
[17:18:21 CEST] <furq> https://video.stackexchange.com/a/19226
[17:18:47 CEST] <jpeg> thanks! rendering rn hopefully it works
[17:18:58 CEST] <furq> yeah i remembered after seeing it that you had it in your first command
[17:19:59 CEST] <furq> it obviously specifically goes before -i foo.webm
[17:20:08 CEST] <furq> so if your image is first then it's after that
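The fix, sketched as a full command (filenames hypothetical; use libvpx for a VP8 input or libvpx-vp9 for VP9):

```shell
# Force the libvpx decoder for the webm input so its separate alpha plane
# is decoded; the flag must appear *before* the -i it applies to.
ffmpeg -i background.png \
       -c:v libvpx-vp9 -i overlay.webm \
       -filter_complex "[0:v][1:v]overlay=shortest=1" \
       -c:v libx264 -preset ultrafast -pix_fmt yuv420p out.mp4
```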
[17:20:30 CEST] <jpeg> you're truly amazing!
[17:20:32 CEST] <jpeg> it works holy shit
[17:20:35 CEST] <jpeg> thank you so much
[17:20:36 CEST] <furq> nice
[17:21:35 CEST] <JEEB> yea I think the separate alpha plane is the only thing not supported by the lavc decoder for vp8/9?
[17:21:52 CEST] <jpeg> not sure what that means
[17:22:10 CEST] <furq> well apparently it works, you just need to explicitly flag the decoder
[17:22:25 CEST] <furq> oh hang on
[17:22:28 CEST] <JEEB> if you need to force libvpx it uses the libvpx lavc wrapper :P
[17:22:29 CEST] <furq> ffvp8 is the default decoder isn't it
[17:22:32 CEST] <JEEB> yes
[17:22:34 CEST] <furq> yeah i just realised why that's not dumb
[17:23:39 CEST] <jpeg> oh so it defaults to the wrong decoder if not specified?
[17:24:00 CEST] <JEEB> not wrong, but one that doesn't specifically support the completely separate alpha feature
[17:24:16 CEST] <furq> is there any other way of doing alpha
[17:24:49 CEST] <furq> outputting yuva420p with ffmpeg seems to give the same thing
[17:25:04 CEST] <jpeg> so what's better, libvpx or yuva?
[17:25:23 CEST] <furq> they're not the same thing
[17:25:58 CEST] <JEEB> furq: the decoder most certainly doesn't support the alpha-in-there pix_fmt it seems tho? http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/vp8.c;h=62b9f8bc2dae227e28206f2ac41911aa99ff2dac;hb=HEAD#l170
[17:25:59 CEST] <jpeg> only weird thing is that it doesn't fully fade out as in the webm
[17:26:06 CEST] <jpeg> it gets stuck on ~10% visibility and stays on top
[17:26:32 CEST] <furq> JEEB: the decoder doesn't
[17:26:55 CEST] <furq> i mean that encoding yuva420p with libvpx with ffmpeg gives a file that ffprobe reports as yuv420p, alpha_mode 1
[17:27:08 CEST] <JEEB> uh-huh
[17:27:10 CEST] <furq> which i assume means it has a separate alpha plane
[17:27:11 CEST] <JEEB> probably a profile string
[17:27:13 CEST] <JEEB> yes
[17:27:27 CEST] <JEEB> VPx has a history with alpha planes
[17:27:35 CEST] <JEEB> I wonder if it uses the same way as VP5/6 before
[17:27:39 CEST] <furq> is that the only way of doing it with vpx
[17:27:52 CEST] <JEEB> if you actually need an alpha plane, yes
[17:27:55 CEST] <furq> right
[17:28:02 CEST] <furq> ok this all makes more sense then
[17:28:44 CEST] <jpeg> I wonder does the loop apply to the overlay too? maybe that's why it stays "fixed" at that one point, instead of fully playing it
[17:28:48 CEST] <JEEB> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/vp6.c;h=645fc5c690e03d2f30f945433d4d1356236fa446;hb=HEAD#l626
[17:28:58 CEST] <JEEB> I wonder if you could see if it's the same way as in VP6?
[17:29:07 CEST] <JEEB> that seems to have some flag for has_alpha and open another decoder
[17:29:35 CEST] <JEEB> so if someone cares enough it could be implemented (and with libvpx I guess you have reference code)
[17:30:17 CEST] <JEEB> but with vp6 it literally looks like the alpha plane is being fed to a completely separate vp6 decoder
[17:30:20 CEST] <JEEB> lol
[17:30:58 CEST] <furq> oh yeah
[17:31:15 CEST] <furq> JEEB: idk if you saw it but some guy was complaining that channelcount isn't set correctly in mp4s that don't have stereo audio
[17:31:24 CEST] <furq> which doesn't break anything but mediainfo shows the wrong value
[17:31:29 CEST] <furq> on the off chance you think that's worth fixing
[17:31:31 CEST] <jpeg> fixed that issue, thank you so much again guys!
[17:32:49 CEST] <JEEB> furq: can't quickly say off hand if it's what the spec notes or not
[17:33:00 CEST] <furq> yeah i noticed l-smash does the same thing
[17:34:04 CEST] <furq> there is a separate channelconfiguration tag which is set correctly
[19:49:24 CEST] <foop> How can I insert an audio delay at a specific time in a video (not throughout the whole video)? The audio is in sync up until a certain time, where it suddenly becomes several seconds out of sync
[19:56:05 CEST] <BtbN> split it, apply fix to broken part, re-combine
[19:57:35 CEST] <ChocolateArmpits> it should be possible to adjust timestamps via asetpts and an expression that would increment timestamps after a certain timestamp
[19:59:27 CEST] <BtbN> that forces you to re-encode the whole thing though, and is probably more work to figure out
[20:03:03 CEST] <ChocolateArmpits> well using ffmpeg for this rather than, let's say, audacity is already more work
[20:25:20 CEST] <foop> thanks, I went with the splitting approach
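For reference, the splitting approach might be sketched like this (the 10-minute cut point and 3-second offset are hypothetical; with -c copy the cuts land on keyframes):

```shell
# 1) the in-sync part, up to the point where the drift starts
ffmpeg -i in.mp4 -t 00:10:00 -c copy part1.mp4
# 2) the broken part: video from one read of the file, audio from a second
#    read whose timestamps are shifted by -itsoffset to re-align it
ffmpeg -ss 00:10:00 -i in.mp4 -itsoffset 3 -ss 00:10:00 -i in.mp4 \
       -map 0:v -map 1:a -c copy part2.mp4
# 3) re-combine with the concat demuxer
printf "file 'part1.mp4'\nfile 'part2.mp4'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy fixed.mp4
```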
[20:56:42 CEST] <heliumclicks> Hey all, I'm trying to digitize some old tapes using an easycap device. It "works" but I find that the audio becomes more out of sync over time. The videos can be pretty long (2hrs+) so recording everything raw isn't really an option. https://pastebin.com/mTiAdPqB
[21:10:04 CEST] <ChocolateArmpits> heliumclicks, what are the input formats?
[21:12:50 CEST] <ariyasu> you could do the audio raw while encoding the video
[21:13:17 CEST] <heliumclicks> How do I determine the input formats? It's coming from a v4l2 device
[21:13:23 CEST] <ariyasu> depending on your hardware you could probably do -preset slow and a lower crf in realtime also
[21:13:30 CEST] <heliumclicks> ariyasu... hmmm good point. I'll test that maybe
[21:13:36 CEST] <furq> please don't put h264 in avi
[21:13:37 CEST] <heliumclicks> i have pretty good hardware
[21:13:56 CEST] <ChocolateArmpits> heliumclicks, well ffmpeg lists the format of the input, say framerate, audio rate, resolution
[21:14:10 CEST] <furq> you could probably do lossless if this is just vhs
[21:14:18 CEST] <furq> ffv1 and flac should compress pretty well
[21:14:37 CEST] <furq> and also yes please don't use avi or mp3 in general in 2018
[21:14:43 CEST] <heliumclicks> I'm down to try whatever
[21:14:47 CEST] <furq> but especially not with h264
[21:15:04 CEST] <heliumclicks> Essentially what happened is i started with vlc, but it did a pretty terrible job with lots of stuttering and clicking
[21:15:25 CEST] <heliumclicks> i started working on an ffmpeg command line and hours later i just googled until i found someone trying to do something similar using a similar device
[21:15:31 CEST] <heliumclicks> i imagine that person was as ignorant as i am
[21:16:00 CEST] <furq> yeah 99% of ffmpeg command lines on the web are wrong in some way
[21:16:10 CEST] <furq> and 90% are wrong in many ways
[21:16:33 CEST] <heliumclicks> but i have 5600 bogomips per bitflipper so I think it should be able to stream video in 480p.
[21:17:42 CEST] <furq> http://vpaste.net/FurMK
[21:17:43 CEST] <furq> try that
[21:18:40 CEST] <heliumclicks> Sure... let's see what we get.
[21:18:42 CEST] <furq> that's lossless so it'll be a lot bigger, but still much smaller than rawvideo/pcm
[21:18:50 CEST] <ChocolateArmpits> furq, that feels like it'll eat 100gigs for 2 hours lol
[21:18:55 CEST] <furq> not for sd
[21:18:58 CEST] <heliumclicks> Need to let it run for a minute or two to see if the audio goes out of sync
[21:19:39 CEST] <furq> actually maybe it will lol
[21:20:11 CEST] <ChocolateArmpits> furq, depends on the input pixel format
[21:20:27 CEST] <furq> well raw yuv420p would be about 111GB
[21:20:34 CEST] <ChocolateArmpits> ANd uncompressed 2 hours of yuv444 or rgb would be around 180gigs
[21:20:36 CEST] <furq> i'd expect about half that for ffv1
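The ~111GB figure checks out as a quick sanity calculation (assuming 720x480 at 30 fps for 2 hours; the actual capture geometry may differ):

```shell
# yuv420p stores 1.5 bytes per pixel (full-res luma + quarter-res Cb/Cr)
bytes_per_frame=$((720 * 480 * 3 / 2))       # 518400 bytes
total=$((bytes_per_frame * 30 * 7200))       # 30 fps, 7200 seconds
echo "$((total / 1000000000)) GB"            # prints "111 GB"
```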
[21:20:44 CEST] <ChocolateArmpits> heh you're making calculations as well
[21:20:48 CEST] <furq> i sure am
[21:21:16 CEST] <heliumclicks> i have about 60G free right now, plenty to test with
[21:22:01 CEST] <ChocolateArmpits> the drift should be apparent with one hour of recording
[21:23:21 CEST] <csierra_> I'm trying to extract a frame that is 1 pixel wide. In debian and Ubuntu, it comes out fine. In windows, the whole frame has a green tint. If I change the frame to 2 pixels, the green tint goes away. I'm using nightly version N-91254-g2bd26dea66 but it happened with the stable version as well. Any idea what could be causing it?
[21:23:56 CEST] <heliumclicks> I get spurts of "Past duration 0.999... too large" sometimes
[21:24:34 CEST] <ChocolateArmpits> something isn't keeping up
[21:25:18 CEST] <heliumclicks> seems to work though
[21:25:21 CEST] <heliumclicks> at least so far
[21:25:38 CEST] <furq> what bitrate is it reporting
[21:25:53 CEST] <heliumclicks> no this is great so far
[21:26:21 CEST] <heliumclicks> around 36700
[21:26:25 CEST] <csierra_> https://paste.debian.net/hidden/912292c6/ is my output
[21:26:32 CEST] <furq> that should easily fit in 60G then
[21:26:41 CEST] <heliumclicks> climbing though...
[21:26:43 CEST] <heliumclicks> at 38 now
[21:26:53 CEST] <furq> anything under 60000 or so should be fine
[21:28:10 CEST] <heliumclicks> yeah that little bit was 1.4G
[21:28:57 CEST] <furq> i mean you don't have to go lossless, i was just pointing out that there are much smaller ways of doing it than rawvideo
[21:29:12 CEST] <furq> if you're not planning on doing further processing then it's a bit of a waste
[21:30:12 CEST] <heliumclicks> the audio is pretty awful quality as it is
[21:30:30 CEST] <heliumclicks> lossless does not seem very worth it unless that's the only way to keep things in sync
[21:30:49 CEST] <heliumclicks> i can batch process afterwards i guess
[21:31:01 CEST] <furq> it won't make any difference at all to that
[21:31:05 CEST] <ChocolateArmpits> well if lossless doesn't work, then you'll have to stretch the audio afterwards
[21:31:23 CEST] <furq> if you're having audio issues then it makes sense to use flac audio, yeah
[21:31:26 CEST] <furq> so you don't have to encode twice
[21:32:00 CEST] <ChocolateArmpits> it's also possible to use the atempo filter if you know by how much the audio starts drifting after some time and if it's consistent
[21:32:05 CEST] <furq> but yeah if you capture the video lossy then at least use -crf 18 and probably -preset slower
[21:32:27 CEST] <furq> and also bear in mind you might need to deinterlace it if your capture device doesn't take care of that for you
[21:32:32 CEST] <heliumclicks> ChocolateArmpits: it doesn't seem consistent
[21:32:36 CEST] <furq> and crop out head switching noise etc
[21:32:58 CEST] <heliumclicks> i don't know what you're referring to
[21:33:08 CEST] <heliumclicks> BUT, based on the sample i just made, i think this is ok
[21:33:37 CEST] <heliumclicks> ffmpeg can output to more than one stream at a time right? is there a good way to monitor what it's recording?
[21:34:06 CEST] <ChocolateArmpits> heliumclicks, well it may not seem consistent at different times, but the value by which the audio gets out of sync over time may be consistent, say 10 samples every minute or something of that nature
[21:34:11 CEST] <heliumclicks> my plan was just to wait a few seconds and then start playing the file with ffmplay. the delay is fine, i don't need realtime
[21:34:25 CEST] <heliumclicks> ChocolateArmpits: that's possible, but I don't have a good way to test it
[21:35:20 CEST] <ChocolateArmpits> Why not stretch the audio using tempo filter so the end frames match to the video, then look at the rest of the video ?
[21:35:35 CEST] <ChocolateArmpits> just to confirm
[21:37:51 CEST] <ChocolateArmpits> As for audio quality of the tape itself, if your deck supports stereo/hifi audio try turning that on, audio quality is usually significantly better, provided the audio was recorded to those tracks properly
[21:38:58 CEST] <ChocolateArmpits> otherwise it plays standard audio that's recorded on another physical track in a more plain way
[21:49:05 CEST] <heliumclicks> interesting... i tried "ffmpeg -i test.mkv test.webm" on that lossless test file i just made and it's going at about 0.22x. To me that implies that it couldn't have kept up with the stream if i had tried to encode on the fly. Is that a reasonable conclusion or is there some other cause for it to be this slow?
[21:49:30 CEST] <furq> the reason is that the default encoder for webm is libvpx and libvpx is slower than shit
[21:49:41 CEST] <furq> especially for sd
[21:49:45 CEST] <heliumclicks> aha
[21:49:53 CEST] <furq> x264 is orders of magnitude quicker
[21:50:19 CEST] <heliumclicks> shitty test then.. what do you suggest? Filesize matters so I can't leave it lossless. I need to send this out to family members eventually
[21:50:31 CEST] <furq> just use x264 and aac in mkv
[21:50:34 CEST] <heliumclicks> also they will expect to be able to play it without installing ffmpeg :)
[21:50:47 CEST] <furq> like i said, you'll want to filter up front if you're encoding lossy to avoid generation loss
[21:50:51 CEST] <furq> so any deinterlacing or cropping
[21:50:56 CEST] <stockstandard> Hey everyone - I'm pulling ffmpeg into a DockerFile, but am referencing a specific version in the pull so every now and then when I go to refresh the docker container, the script breaks because the version that is referenced is not the latest... Could someone please advise on how I can update this 2 liner?
[21:50:57 CEST] <stockstandard> https://pastebin.com/ExKpNsv5
[21:51:39 CEST] <furq> x264 and aac in mp4 is ideal for compatibility, but the issue with mp4 is that if the capture stops halfway through you'll be left with an unplayable file
[21:51:53 CEST] <furq> so ideally you want to capture to mkv and then remux to mp4 when t's done
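The remux step is cheap since nothing is re-encoded (filenames hypothetical):

```shell
# Rewrap mkv to mp4 with stream copy; +faststart moves the index to the
# front of the file so playback can start before the whole file arrives.
ffmpeg -i capture.mkv -c copy -movflags +faststart capture.mp4
```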
[21:53:08 CEST] <heliumclicks> i see
[21:53:43 CEST] <heliumclicks> any particular options you would suggest for producing a non-shitty x264+aac in mkv?
[21:54:00 CEST] <heliumclicks> also thanks for your help so far
[21:54:11 CEST] <furq> -crf 18 -preset slower
[21:54:40 CEST] <furq> try slow or medium if slower is too slow
[21:55:00 CEST] <furq> aac should be fine at default settings
[22:00:42 CEST] <heliumclicks> furq, ffmpeg -i test.mkv -c:a aac -c:v libx264 -crf 18 -preset slower compressed.mkv
[22:00:50 CEST] <heliumclicks> so that, more or less?
[22:01:02 CEST] <furq> sure
[22:01:12 CEST] <furq> assuming you're happy with the cropping and interlacing
[22:02:42 CEST] <heliumclicks> I'm still not sure what you mean by that. When i look at the output it seems fine? Is there something I am not noticing?
[22:03:11 CEST] <furq> if you have black bars around the sides or switching noise at the bottom then you probably want to crop that out
[22:03:35 CEST] <furq> and if you have sawtooth artifacts then you need to deinterlace
[22:03:44 CEST] <furq> but your capture device might be doing that for you
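[editor's note: a hedged sketch of the crop-plus-deinterlace filtering furq describes; the crop numbers below are placeholders, not values from this conversation.]

```shell
# First find crop values: cropdetect prints suggested crop=w:h:x:y
# lines to the log. Running it on a short sample is enough.
ffmpeg -i test.mkv -t 30 -vf cropdetect -f null -

# Then apply crop and deinterlace in one filter chain, up front,
# before the lossy encode (substitute cropdetect's suggestion
# for the placeholder crop values):
ffmpeg -i test.mkv -vf "crop=704:480:8:0,bwdif=1" \
    -c:v libx264 -crf 18 -preset slower -c:a aac out.mkv
```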
[22:04:02 CEST] <heliumclicks> cropping is fine, deinterlace looks like maybe i do have
[22:04:11 CEST] <heliumclicks> yes definitely
[22:04:17 CEST] <furq> add -vf bwdif=1
[22:05:14 CEST] <heliumclicks> testing
[22:07:19 CEST] <heliumclicks> still hovering around 0.5x, even with -preset slow. I'm surprised
[22:07:31 CEST] <JEEB> yea, x264 nowadays with modern CPUs is fast
[22:07:42 CEST] <heliumclicks> i guess transcoding is slower than i remember
[22:07:44 CEST] <furq> well 0.5x is no good for capturing so i assume that's slow
[22:07:46 CEST] <JEEB> generally if you have a bottleneck it's somewhere else
[22:08:33 CEST] <heliumclicks> I'm on a core i7 6600U @ 2.8GHz, so I suspect that's not it
[22:08:44 CEST] <JEEB> but basically you can see if the results improve by switching to a faster preset
[22:08:52 CEST] <JEEB> if it doesn't then it's something else
[22:08:56 CEST] <heliumclicks> indeed
[22:09:10 CEST] <furq> yeah i'm surprised 480p30 would be less than realtime on that
[22:09:14 CEST] <heliumclicks> slower and slow seem to be both going at around 0.4x
[22:09:25 CEST] <JEEB> if it does then x264 at that specific point (given nothing else changes) was your bottleneck
[22:09:59 CEST] <furq> i guess that is a 15W CPU
[22:10:18 CEST] <heliumclicks> it's a low power cpu yeah, lenovo x1 carbon
[22:10:27 CEST] <heliumclicks> so this deinterlacing thing...
[22:10:33 CEST] <heliumclicks> not a success, i would say
[22:10:49 CEST] <heliumclicks> any lateral motion turns everything into wavy squiggles
[22:11:07 CEST] <furq> maybe try bwdif=1:0 or bwdif=1:1
[22:11:31 CEST] <furq> if it had artifacts before and bwdif is making it worse then it might be misdetecting the field order
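[editor's note: the bwdif positional parameters being discussed are mode and parity; a sketch of forcing the field order, with the input/output filenames as placeholders.]

```shell
# bwdif=mode:parity
#   mode 1   -> send one frame per field (doubles the frame rate)
#   parity 0 -> assume top field first (tff)
#   parity 1 -> assume bottom field first (bff)
# parity is auto-detected by default; forcing it can fix a
# misdetected field order:
ffmpeg -i test.mkv -vf bwdif=1:0 -c:v libx264 -crf 18 -c:a aac out_tff.mkv
```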
[22:12:02 CEST] <furq> also -preset fast should still be ok at crf 18
[22:12:35 CEST] <furq> you can generally compensate for using a faster preset by dropping the crf a bit
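[editor's note: a sketch of the preset/CRF tradeoff furq mentions; the CRF values are illustrative, not prescribed in this conversation.]

```shell
# Slower preset at the usual quality target:
ffmpeg -i test.mkv -c:v libx264 -preset slower -crf 18 -c:a aac out_slow.mkv

# Faster preset with a slightly lower CRF (higher quality target)
# to compensate for the less efficient compression, at the cost
# of a larger file:
ffmpeg -i test.mkv -c:v libx264 -preset fast -crf 16 -c:a aac out_fast.mkv
```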
[22:12:42 CEST] <heliumclicks> https://imgur.com/IOjP8eJ
[22:12:43 CEST] <furq> obviously the filesize will differ
[22:12:57 CEST] <ChocolateArmpits> i'd deinterlace SD stuff with QTGMC only if it's intended for archival and you actually want to deinterlace
[22:13:03 CEST] <furq> oof
[22:13:12 CEST] <furq> that looks like your capture device is upscaling
[22:13:22 CEST] <furq> in which case a regular deinterlacer isn't going to work well
[22:13:28 CEST] <heliumclicks> hmm
[22:13:58 CEST] <furq> maybe just leave that off then
[22:17:47 CEST] <heliumclicks> you know what i think it is
[22:18:08 CEST] <heliumclicks> this easycap thing is probably upscaling to PAL, but the input is ntsc
[22:18:36 CEST] <heliumclicks> video was recorded in bolivia which uses ntsc... but this easycap dongle thing undoubtedly came from china and does who knows what
[22:18:56 CEST] <furq> oh yeah that's 576p
[22:19:01 CEST] <furq> that's probably not ideal then lol
[22:19:01 CEST] <heliumclicks> yes
[22:19:09 CEST] <heliumclicks> No it would seem not :D
[22:19:13 CEST] <heliumclicks> but it's what we have
[22:21:14 CEST] <furq> well yeah i guess just don't deinterlace it
[22:21:19 CEST] <furq> but also don't throw away those tapes
[23:07:30 CEST] <heliumclicks> furq: we'll keep the tapes, but they are 20+ years old at this point and already barely usable. A few of them broke as soon as they started to play :(
[23:22:58 CEST] <ChocolateArmpits> They must've been kept in pretty harsh conditions if they are breaking like that
[00:00:00 CEST] --- Sun Jun 10 2018


More information about the Ffmpeg-devel-irc mailing list