[Ffmpeg-devel-irc] ffmpeg.log.20180618

burek burek021 at gmail.com
Tue Jun 19 03:05:01 EEST 2018

[05:00:26 CEST] <fengshaun> As I found out, it's not quite possible to make an m3u8 v4 playlist with byteranges matching with keyframes and have an hls player be able to request those bytes and actually play the file
[05:00:31 CEST] <fengshaun> what am I missing there?
[05:01:14 CEST] <fengshaun> the solution might be fragmented mp4, but why can I not serve a normal mp4 with a generated m3u8 based on I-frames?
[05:01:36 CEST] <fengshaun> ffmpeg (the commandline) converts the mp4 to .ts, what does the conversion involve?
[05:02:53 CEST] <fengshaun> my end goal is to be able to do on-the-fly transcoding with libav* while retaining the original files in mp4 container and original codecs without doubling space requirements to include .ts segments
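For reference, ffmpeg's hls muxer can remux into a single output file plus an EXT-X-BYTERANGE playlist instead of many small segments; note it still produces one full .ts copy of the media, so this alone does not remove the space concern. A sketch (input.mp4 and out.m3u8 are placeholder names):

```shell
# Remux once to a single .ts with a byterange playlist; -c copy avoids re-encoding.
ffmpeg -i input.mp4 -c copy -f hls \
  -hls_time 6 -hls_list_size 0 -hls_flags single_file out.m3u8
```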
[10:55:11 CEST] <Dorian_> Hello. I have a noisy interlaced source where the noise is mostly present in only one field. The noisy and the ok field swap irregularly every few seconds. Side by side fields sample here: https://expirebox.com/download/65fcc65a8445217633c611f346c96d7f.html
[10:57:18 CEST] <Dorian_> I want to manually select which field to take for every range of frames to create a kinda restored version of the clip. All other turbulence aside, if I only scale up the good field to 2*height, i would introduce Bobbing artifacts on field change. How can I prevent the bobbing?
[10:58:01 CEST] <Dorian_> Algorithmically, i would have to scale to 2*h-1 and add an additional line at the top or bottom, depending on what field the source is from.
[10:58:30 CEST] <Dorian_> Any suggestions on how to scale without introducing Bobbing? :)
[11:29:36 CEST] <Cracki> good luck with that.
[11:30:19 CEST] <Cracki> there will not be a single command line to do that, of course
[11:30:45 CEST] <Cracki> bobbing will be the least of your problems
[11:31:03 CEST] <Cracki> the footage has shit tracking on top of that
[11:32:14 CEST] <Cracki> the best I can imagine is some fancy filtering based on nonlocal means
[11:32:47 CEST] <Cracki> and for the deinterlacing, of course something like yadif
[11:36:08 CEST] <Dorian_> @Cracki, thanks. But I cannot use yadif, since it interpolates the "good" and the garbage fields to a bad blend of both.
[11:36:42 CEST] <Cracki> that's not what I meant.
[11:36:44 CEST] <Dorian_> and yes, i picked an extreme sample. other occurences are more benign, tracking looks ok.
[11:37:59 CEST] <Dorian_> basically, i have 2 video files, one for each field. and i want to manually put them together again, using always the good field.
[11:38:20 CEST] <Cracki> don't do bob-deinterlacing then
[11:38:54 CEST] <Cracki> bob is related to "nearest neighbor".
[11:39:01 CEST] <Cracki> at least do linear interpolation.
[11:39:34 CEST] <Cracki> if you know yadif, you know it uses some kind of motion estimation or optical flow
[11:39:54 CEST] <Cracki> but I guess you aren't looking to mess around with source code for this, eh?
[11:40:05 CEST] <Dorian_> i dont do any deinterlacing besides separating the fields.
[11:40:06 CEST] <Dorian_> so it would help, if there was a one-liner(-ish way) to scale up both single field video files so i just have to select which file to take at which timestamp.
[11:40:38 CEST] <Dorian_> nah, i try staying on the stable trunk :)
[11:40:49 CEST] <Cracki> for the parts where you have one field only, you have to resort to deinterlacing methods that use a single field. those are either dumb methods or methods that output full frames at field-rate
[11:41:05 CEST] <Dorian_> yes, i know. thats what i am doing
[11:41:21 CEST] <Cracki> browse the available deinterlacing methods then
[11:41:31 CEST] <Dorian_> and if i scale the field to a full height frame, it will hop 1/2 line up and down when i switch the source field
[11:41:50 CEST] <Cracki> don't do that then.
[11:42:06 CEST] <Dorian_> that's what i want to fix. everything else is in the source, can't do much about it. but that half line jump irritates me, since it comes from processing :D
[11:42:20 CEST] <Cracki> that processing is wrong.
[11:42:25 CEST] <Cracki> and that's not even "bobbing"
[11:43:12 CEST] <Dorian_> it's not bobbing, it introduces bobbing artifacts. i tried not to overdo with the details in the initial question
[11:43:23 CEST] <Cracki> http://www.100fps.com/why_bobbing.htm
[11:44:26 CEST] <Cracki> browse the video filters. there's a ton of choices for arranging a picture inside the frame.
[11:44:47 CEST] <Cracki> letterboxing, compositing or picture-in-picture things, ...
[11:44:49 CEST] <Dorian_> i know that source. it does describe my exact problem. only in the general case it happens on a frame-by-frame basis; in my case, it happens whenever i change the "good" field
[11:45:12 CEST] <Cracki> if you want to add that extra line, you have several options
[11:46:55 CEST] <Dorian_> although i do not yet know how, i can find that out. just was hoping there would be a fitting tool for that
[11:48:22 CEST] <Cracki> ffmpeg :>
[11:48:38 CEST] <Cracki> and if that's not good enough, its libraries
[11:51:23 CEST] <Dorian_> i was not going to use vegas, trust me ;)
[11:51:58 CEST] <Dorian_> well, thanks anyways.
[11:53:22 CEST] <Dorian_> alas, picture-in-picture is a good key, maybe i'll find a generic enough solution digging that direction.
[11:58:43 CEST] <Cracki> crop, pad, overlay, ...
[11:59:56 CEST] <Cracki> I'd recommend you create 3 videos: a properly deinterlaced one, one using the first field only, one using the second field only. then cut between those as needed.
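Cracki's three-video approach could be sketched with the yadif and field filters (untested; filenames are placeholders, and scaling each field back to full height is just one simple option):

```shell
# A properly deinterlaced reference copy:
ffmpeg -i src.mkv -vf "yadif=mode=0" deint.mkv
# One file per field, each scaled back to full frame height:
ffmpeg -i src.mkv -vf "field=type=top,scale=iw:2*ih" top.mkv
ffmpeg -i src.mkv -vf "field=type=bottom,scale=iw:2*ih" bottom.mkv
```

Cutting between the three as needed then happens in any editor or with concat.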
[12:00:06 CEST] <Dorian_> that sounds so stupidly simple, it probably solves the whole thing. :D
[12:00:45 CEST] <Cracki> also browse the deinterlacing filters on https://ffmpeg.org/ffmpeg-filters.html
[12:01:26 CEST] <Dorian_> yeah, i was going to do that. sorry, i did not mention the "normal" (deinterlaced) track. i'll use the field switch restoration only on the spots that are broken.
[12:02:00 CEST] <Dorian_> that was one of the details i left out because it had nothing to do with the question :)
[12:04:50 CEST] <Cracki> why do people do that, leave out details?
[12:45:12 CEST] <Dorian_> @Cracki i was just asking about how to scale the single-field-fed clips correctly to avoid vertical jumping on field change. the context about the noise in one field was just for clarification of the source material composition. so i saw no need to provide info on how i handled non-problematic scenes. i have also not informed you of the a/v-sync and negative frame duration issues the clip does have elsewhere because it does not matter
[12:46:17 CEST] <Dorian_> just in case your question was not meant rhetorically... which it clearly was. so please ignore my answer :)
[12:53:50 CEST] <dradakovic> Guys, a shoutcast stream i want to capture with FFMPEG is giving me "403 Forbidden" error. This same stream works fine if i listen to it with VLC
[12:54:01 CEST] <dradakovic> Stream is on http://s41.myradiostream.com:29074.
[13:09:00 CEST] <Cracki> Dorian_, actually not rhetorical. the most common problem with people looking for help is that they are too stingy with details and context and hard facts (source code, data, citations...)
[13:09:17 CEST] <Cracki> tho you've been very close to optimal
[13:09:56 CEST] <Cracki> dradakovic, sounds like user agent is refused by the server
[13:10:42 CEST] <Cracki> uh, my vlc can't open that stream url either
[13:18:01 CEST] <dradakovic> hmmm allow me to check if i pasted url correctly
[13:18:53 CEST] <dradakovic> Weird, works fine for me. I just open a network stream and paste that url
[13:19:13 CEST] <dradakovic> I do use windows version of VLC with GUI of course
[14:25:25 CEST] <Mithgol> dradakovic, that stream seems to require free registration beforehand.
[14:37:30 CEST] <Cracki> they probably do some ip-based tracking, so you log in via one browser, and you're unlocked for all requests from your IP
[14:55:39 CEST] <dradakovic> Ahhh
[14:55:58 CEST] <dradakovic> It could be the case
[14:58:52 CEST] <new__> undefined reference to symbol 'sws_getCachedContext@@LIBSWSCALE_FFMPEG_3'
[14:59:04 CEST] <new__> how to solve this error
[14:59:31 CEST] <new__> i am using ubuntu and want to use ffmpeg in opencv
[15:01:29 CEST] <BtbN> link to swscale?
[15:09:52 CEST] <stevEEE> Hi i want to scale down a MP4 4k Video from my DJI drone to full hd by using my Nvidia GTX 1060 3GB. i tried out "ffmpeg.exe -hwaccel cuvid -c:v h264_cuvid -i DJI_0135.MP4 -vf scale_npp=1922:-1 -c:v h264_nvenc out.mp4" but the video compared to the cpu rendered is worse and gets many artifacts. what am i doing wrong?
[15:10:38 CEST] <BtbN> Hardware Encoding will always be worse than CPU encoding.
[15:10:48 CEST] <BtbN> Unless you absolutely cannot use the CPU, do not use hardware encoding.
[15:12:14 CEST] <stevEEE> i tried out to use the same bitrate like 60.000 kb and the mp4 looked nice but it's getting too big (500mb = 4k, 400mb = fullhd)
[15:13:54 CEST] <BtbN> a hardware encoder will always look considerably worse than x264 at the same bitrate. That's just how it is.
[15:14:44 CEST] <stevEEE> ok thanks for that info. so back to eth-mining on my GPU and rendering on the CPU :-)
[15:15:09 CEST] <Cracki> nah, most recent gpus have acceptable hw encoders
[15:15:22 CEST] <BtbN> acceptable, sure. But still WAY worse than x264
[15:15:41 CEST] <Cracki> and if you run x264/x265 faster/veryfast/..., their efficiency goes down too
[15:15:46 CEST] <BtbN> A static silicon hardware encoder will never be able to get even close to x264
[15:15:54 CEST] <Cracki> quantify "way"
[15:16:15 CEST] <BtbN> x264 veryfast still only needs half the bitrate nvenc needs for the same quality
[15:16:26 CEST] <Cracki> nvenc on what hw?
[15:16:30 CEST] <BtbN> If you go into slower preset the ratio gets even bigger
[15:16:48 CEST] <BtbN> Pascal only made nvenc faster. Quality did not change since Maxwell
[15:17:16 CEST] <Cracki> fascinating. are there any published tests/comparisons/...?
[15:17:25 CEST] <BtbN> none that I'm aware of
[15:17:30 CEST] <Cracki> :(
[15:17:46 CEST] <BtbN> But it's just a well known fact that hardware can never be as good as software for video encoding, no matter the codec.
[15:18:18 CEST] <Cracki> stevEEE, at least for the scaling, a gpu could be of help. not sure if ffmpeg has opencl/cuda-accelerated scaling filters
[15:18:40 CEST] <Cracki> BtbN, it's a truism obviously, because anything the hw can do, the sw can do as well.
[15:18:59 CEST] <Cracki> that's why I asked about data.
[15:19:03 CEST] <BtbN> The overhead of shoving the frames on the GPU and then back to the CPU would probably negate any benefits
[15:19:12 CEST] <Cracki> sure.
[15:19:24 CEST] <Cracki> unless they're gonna decode and encode on the gpu too
[15:19:26 CEST] <BtbN> But if you use h264_cuvid, then scale_npp, then download and encode with x264, it would sure help
[15:19:53 CEST] <BtbN> CPU load wise
[15:19:58 CEST] <BtbN> not necessarily speed wise
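BtbN's hybrid pipeline (decode and scale on the GPU, download, encode with x264) might look like this; a sketch assuming an NVIDIA-enabled build with cuvid and scale_npp:

```shell
# GPU decode + GPU scale, then pull frames back to system memory for libx264:
ffmpeg -hwaccel cuvid -c:v h264_cuvid -i DJI_0135.MP4 \
  -vf "scale_npp=1920:-1,hwdownload,format=nv12" \
  -c:v libx264 -crf 20 -preset medium out.mp4
```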
[15:20:18 CEST] <BtbN> modern CPUs are usually magnitudes faster at decoding than GPUs
[15:20:30 CEST] <BtbN> So if you're not on a potato CPU...
[15:20:36 CEST] <Cracki> nvidia only recently caught up...
[15:20:56 CEST] <BtbN> My Laptop Quad Core does several thousand FPS h264 decoding on the CPU
[15:21:02 CEST] <Cracki> my 2012-era hw (xeon e3, gtx 560) is a nice data point: cpu only decode: 300+ fps for my footage, gpu does iirc around 80
[15:21:05 CEST] <BtbN> The NVidia GPU caps out at somewhere between 1k and 2k
[15:21:10 CEST] <Cracki> modern nvidias promise 500+ fps decode
[15:21:40 CEST] <BtbN> Pre-Kepler GPUs were designed for real-time playback, nothing beyond it
[15:21:45 CEST] <BtbN> transcoding only came in later
[15:21:47 CEST] <Cracki> note the footage is full hd from a camcorder, and decode speed varies for me between different camcorder footage
[15:22:06 CEST] <Cracki> >several thousand FPS
[15:22:09 CEST] <Cracki> at what res?
[15:22:15 CEST] <BtbN> 1080p
[15:22:20 CEST] <Cracki> k, had to ask
[15:22:31 CEST] <Cracki> bitrate? :P
[15:22:38 CEST] <furq> 14:16:15 ( BtbN) x264 veryfast still only needs half the bitrate nvenc needs for the same quality
[15:22:41 CEST] <furq> wow really
[15:22:51 CEST] <Cracki> more bits, more stuff to paint
[15:22:53 CEST] <furq> i knew it was a long way off veryslow but i didn't think it was that bad
[15:23:02 CEST] <Cracki> furq, if you take his word for it
[15:23:04 CEST] <BtbN> CBR mode is just that bad
[15:23:14 CEST] <Cracki> cbr, lol
[15:23:16 CEST] <BtbN> If you use ConstQP/TargetQP, they are not that far
[15:23:21 CEST] <Cracki> nobody uses cbr if they can help it
[15:23:23 CEST] <furq> oh ok
[15:23:24 CEST] <Cracki> dude
[15:23:24 CEST] <BtbN> But x264 still easily beats it
[15:23:31 CEST] <Cracki> are you trying to make the numbers worse?
[15:23:36 CEST] <furq> stevEEE: i guess try with -cq 20 -preset hq
[15:23:46 CEST] <BtbN> *qp
[15:23:46 CEST] <Cracki> your bias is showing...
[15:23:55 CEST] <furq> -cq is different
[15:24:10 CEST] <furq> that's the one that's supposed to be like crf but idk whether it actually is
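furq's constant-quality nvenc suggestion, spelled out as a sketch (option names have shifted between ffmpeg/driver versions, so verify -rc and -cq against `ffmpeg -h encoder=h264_nvenc`):

```shell
# Quality-targeted VBR instead of CBR; -b:v 0 lets -cq drive the rate.
ffmpeg -i in.mp4 -c:v h264_nvenc -preset hq -rc vbr_hq -cq 20 -b:v 0 out.mp4
```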
[15:24:13 CEST] <BtbN> Yes, I'm totally biased against nvenc. Which is why I added it to ffmpeg in the first place...
[15:24:39 CEST] <furq> Cracki: it's not really news that it's bad for archival
[15:24:58 CEST] <Cracki> would you put in the work of adding it today, knowing what you know now?
[15:24:59 CEST] <BtbN> There is just no point in using nvenc if you are not time or CPU constrained
[15:25:15 CEST] <Cracki> I agree.
[15:25:17 CEST] <BtbN> Yes, because there are a lot of valid and good uses for a hardware encoder.
[15:25:25 CEST] <furq> well it's fundamentally worth having because it's pretty good for realtime
[15:25:27 CEST] <Cracki> I wouldn't encode anything for archival purposes using nvenc or qsv
[15:25:33 CEST] <BtbN> Archival is not one of them
[15:25:50 CEST] <BtbN> nvenc is not necessarily bad. x264 is just that good
[15:26:00 CEST] <BtbN> So if you can use x264, it's always the better option
[15:26:47 CEST] <Cracki> or x265... it's not quite as fast (a solid factor, not a fraction) but I'm glad for the saved bits
[15:27:21 CEST] <Cracki> on my cruddy 2012 cpu at least... I imagine something with more cache would perform a lot better
[15:27:22 CEST] <BtbN> For a long time for 1080p, x264 was beating x265
[15:27:34 CEST] <BtbN> x265 took their time to catch up
[15:27:50 CEST] <furq> yeah i still wouldn't bother with x265 for anything other than >1080p or super-low bitrates
[15:28:11 CEST] <BtbN> Yeah, it's still hardly worth it. But at least it's not worse anymore.
[15:28:37 CEST] <Cracki> I would expect it to be "worse" because the encoding is more complex
[15:28:42 CEST] <Cracki> worse in time spent
[15:28:58 CEST] <BtbN> No, it was also worse in quality/bitrate
[15:29:02 CEST] <Cracki> ah
[15:29:17 CEST] <BtbN> Not at 4K or other super high res
[15:29:28 CEST] <Cracki> funny, since H.265 is approximately a superset of H.264
[15:29:41 CEST] <BtbN> x265 isn't built on top of x264 though
[15:29:52 CEST] <BtbN> So all of the crazy optimizations x264 has are not in x265
[15:29:55 CEST] <Cracki> how much could they reuse/copy?
[15:30:01 CEST] <Cracki> hm that's unfortunate
[15:30:06 CEST] <BtbN> No idea. I think they started from scratch?
[15:30:15 CEST] <Cracki> o.o
[15:30:17 CEST] <BtbN> For license reasons alone I'd assume that
[15:30:23 CEST] <Cracki> ah of course.
[15:30:31 CEST] <Cracki> so "clean room" approach or something
[15:31:51 CEST] <furq> isn't x265 gpl
[15:32:03 CEST] <furq> or is that for the commercial license
[15:32:21 CEST] <BtbN> It's a similar model to x264
[15:32:24 CEST] <furq> s/for/because of/
[15:32:35 CEST] <BtbN> GPL2 if you are ok with it, otherwise buy a license
[15:33:06 CEST] <furq> i mean why would they start from scratch for license reasons if they're both gplv2
[15:33:23 CEST] <BtbN> Because you cannot just take the x264 GPL2 code and sell it under a commercial license
[15:33:32 CEST] <furq> right that makes sense
[15:33:54 CEST] <BtbN> iirc x264 and x265 are not from the same company?
[15:34:05 CEST] <furq> they're not
[15:34:32 CEST] <furq> which is a shame because x265 would probably be a lot better if they were
[15:37:08 CEST] <Cracki> they should not have used the "x..." maybe
[15:37:17 CEST] <Cracki> it's a household name
[16:00:39 CEST] <Hello71> huh, x264 is by VideoLAN
[16:02:19 CEST] <bencoh> not really
[16:03:14 CEST] <Hello71> eh, hosted by
[16:03:41 CEST] <bencoh> :)
[16:03:50 CEST] <Hello71> and I assume at least some of the developers are paid for
[16:04:56 CEST] <if_gaga1> hello guys, i have a very strange question about how to remotely control an ffmpeg stream. my usecase: ffmpeg grabs data from Xvfb through x11grab and broadcasts that into an rtsp stream. can i remotely send some command to a running ffmpeg instance? like zoom, unzoom for example?
[16:25:40 CEST] <Hello71> that's dumb
[16:45:03 CEST] <kepstin> if_gaga1: ffmpeg has some support (in some filters) to reconfigure filters based on commands. It may or may not be able to do what you want. The ffmpeg cli supports receiving commands on stdin, I believe, or you can use the zmq/azmq filter to allow sending commands to it over the network via zeromq.
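kepstin's zeromq route could be sketched like this (requires a build with --enable-libzmq; the eq filter and the Parsed_eq_1 target name are illustrative, and the x11grab/rtsp endpoints are placeholders):

```shell
# Insert the zmq filter ahead of the filter you want to steer at runtime:
ffmpeg -f x11grab -i :99 -vf "zmq,eq=brightness=0.0" -f rtsp rtsp://example/stream
# From another shell, send a command with the zmqsend tool from the ffmpeg tree:
echo "Parsed_eq_1 brightness 0.1" | tools/zmqsend
```

Only filters that declare command support (see the filters documentation) will react.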
[16:48:36 CEST] <lyncher> hi. how is AV_PKT_DATA_STRINGS_METADATA is being used in ffmpeg?
[16:49:00 CEST] <lyncher> decklink decoder is now (HEAD) filling AV_PKT_DATA_STRINGS_METADATA with timecode information
[16:49:20 CEST] <lyncher> how can that information be used for instance to create an overlay?
[17:11:37 CEST] <ntd> scroogle isn't playing nice, is there any way to query a v4l device input's "native" resolution?
[17:11:49 CEST] <ntd> i tried ffprobe, apparently default is 320x240 and then the input just stays at whatever res i tried the last
[17:12:45 CEST] <furq> ntd: ffmpeg -f v4l2 -list_formats all -i /dev/video0 or v4l2-ctl --list-formats-ext
[17:13:36 CEST] <ntd> Unrecognized option '-list-formats-ext'.
[17:13:36 CEST] <ntd> Error splitting the argument list: Option not found
[17:13:44 CEST] <furq> those are two different commands
[17:13:54 CEST] <ntd> sorry, my bad
[17:16:05 CEST] <ntd> ffmpeg -f v4l2 -list_formats all -i /dev/video0: https://pastebin.mozilla.org/9087876
[17:16:34 CEST] <ntd> what was the other command, exactly?
[17:16:58 CEST] <Cracki> $ v4l2-ctl
[17:17:47 CEST] <ntd> yeah, got it
[17:17:55 CEST] <Cracki> --list-formats-ext
[17:18:59 CEST] <ntd> https://pastebin.mozilla.org/9087877
[17:19:08 CEST] <ntd> no joy in terms of actual res, though
[17:19:39 CEST] <if_gaga1> kepstin: thanks, did you see some how-to's or docs for ffmpeg+zmq interaction?
[17:20:39 CEST] <kepstin> if_gaga1: check the filters documentation page for details. You'll also have to check the docs for other filters you use to see what commands they accept.
[17:21:26 CEST] <if_gaga1> kepstin: okay, thanks again
[17:21:56 CEST] <Cracki> ntd, there must be a way to get resolutions. perhaps some other switch...
[17:23:37 CEST] <ntd> just for context: these are bttv v4l devices. never found a way to query the actual input resolution. i know the source "tv lines" but this apparently doesn't translate into w+h pix :)
[17:24:36 CEST] <kepstin> well, tv capture cards for both ntsc and pal normally give you 720px wide images
[17:25:44 CEST] <ntd> cctv cams
[17:26:08 CEST] <ntd> but can i use the 720 width and the ratios given by v4l-utils to determine the res?
[17:26:09 CEST] <kepstin> on ntsc they'll typically give you 480 lines (some pro systems give you 486 lines instead). Pal stuff will give you 576 lines.
[17:26:28 CEST] <kepstin> my impression is that most cctv systems use regular tv encoding standards
[17:26:43 CEST] <kepstin> (to save money, no custom hardware design needed)
[17:27:12 CEST] <Cracki> 480/576 is content. full scan might be 525/625 lines
[17:27:40 CEST] <Cracki> and the 720 is just how most ADCs sample a line. a line has no nominal resolution afaik
[17:27:58 CEST] <ntd> and lines equals tentative pixels in terms of height?
[17:28:01 CEST] <kepstin> a bttv capture card with v4l should only be returning the active video area, which will be the 480/576 lines
[17:28:03 CEST] <Cracki> yes
[17:28:21 CEST] <ntd> ok, lemme try
[17:29:00 CEST] <Cracki> be aware, 720x480 is anamorphic for either 4:3 or 16:9
[17:29:13 CEST] <Cracki> so don't be surprised if it looks squashed
[17:31:59 CEST] <ntd> ok, each source is on it's own bttv chip (no channel/chip sharing), cctv cam model: hitachi vk-c307e
[17:33:19 CEST] <Cracki> http://www.tovit-vs.com/archivi/oldtelhitachi2E.htm
[17:33:26 CEST] <ntd> looking at it.
[17:33:26 CEST] <Cracki> funny resolution given there...
[17:33:33 CEST] <ntd> yeah...
[17:33:46 CEST] <Cracki> official http://www.hitachidigitalmedia.com/en-gb/brochures/101796
[17:34:01 CEST] <ntd> which might explain why prev captures look funny
[17:34:38 CEST] <ntd> Cracki, pdf blocked by proxy, what does it say?
[17:34:46 CEST] <Cracki> same as the first link afaics
[17:34:56 CEST] <Cracki> 5xx by 5xx pixels
[17:35:18 CEST] <Cracki> Total Number of Pixels 537(H) x 597(V) Effective Pixels 500(H) x 582(V)
[17:35:31 CEST] <kepstin> the "scanning" is the interesting bit there: "PAL 625 lines 50 Hz,  2:1 interlace"
[17:35:38 CEST] <Cracki> I would be surprised if it were to put *that* on the line...
[17:35:44 CEST] <Cracki> ah that looks more like it
[17:36:00 CEST] <kepstin> it doesn't matter what the sensor captures, it looks like it does digital enhancement and zooming/scaling, then outputs a standard PAL tv signal
[17:36:13 CEST] <ntd> so i should try capturing at 720x576?
[17:36:16 CEST] <furq> ntd: apparently you can try v4l2-ctl -d /dev/video0 --list-framesizes=YUYV
[17:36:26 CEST] <furq> that should normally show up in list-formats-ext though
[17:36:40 CEST] <ntd> ioctl: VIDIOC_ENUM_FRAMESIZES
[17:36:45 CEST] <kepstin> ntd: you might also need to use the -standard option (try with ffmpeg -list_standards ... to see what it can do)
[17:36:52 CEST] <Cracki> yes try 576 or 625 lines
[17:36:56 CEST] <Cracki> seems it's pal
[17:37:07 CEST] <Cracki> does your capture thingy support pal?
[17:37:11 CEST] <ntd> yup
[17:37:26 CEST] <Cracki> excellent
[17:37:28 CEST] <ntd> so 720x576 (pal res if mem serves)?
[17:38:06 CEST] <Cracki> try
[17:38:11 CEST] <Cracki> should be
[17:40:15 CEST] <ntd> major scanning lines/artifacts on motion
[17:40:19 CEST] <ntd> major/heavy
[17:40:23 CEST] <Cracki> yay
[17:40:49 CEST] <Cracki> it's interlaced, of course
[17:41:17 CEST] <Cracki> would you describe what you see as "combing"?
[17:41:42 CEST] <ntd> yes
[17:42:31 CEST] <ntd> from that first site: TV resolution	> 330 lines (H)	> 450 lines (V)
[17:42:35 CEST] <ntd> also: 2:1?
[17:42:51 CEST] <kepstin> ntd: the image sensor has nothing to do with the output format :)
[17:43:07 CEST] <kepstin> it does processing on the image before outputting a video signal
[17:43:25 CEST] <ntd> ok
[17:43:58 CEST] <kepstin> ntd: but anyways, if you use a deinterlacing filter (bwdif should be decent, assuming it's realtime on your system) that should make the image viewable on a computer monitor.
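Putting the thread's pieces together, a capture sketch (the device path, PAL standard and 720x576 size are assumptions for this particular bttv setup):

```shell
# Force the PAL standard, capture at full PAL resolution, deinterlace with bwdif:
ffmpeg -f v4l2 -standard PAL -video_size 720x576 -i /dev/video3 \
  -vf bwdif=mode=send_field -c:v libx264 -crf 20 out.mkv
```

mode=send_field doubles the frame rate (50 fps from PAL), which tends to look best for motion-heavy CCTV footage.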
[17:45:01 CEST] <Cracki> side discussion: yadif vs bwdif, how do they differ visually and conceptually?
[17:45:27 CEST] <kepstin> bwdif internally uses yadif in combination with other filters to hopefully reduce some of the artifacts that yadif can give
[17:45:35 CEST] <Cracki> ah!
[17:47:41 CEST] <ntd> trying diff filters, still looking quite bad. got hung up on this though: Scanning	
[17:47:41 CEST] <ntd> PAL 625 lines 50 Hz,  2:1 interlace
[17:48:11 CEST] <kepstin> ntd: translated, that says "I output a standard PAL interlaced video signal"
[17:50:40 CEST] <ntd> heavy combing through all filters... iirc pal is 576 lines (meaning 720x576?), should i be trying 720x625 instead?
[17:50:52 CEST] <InTheWings> how come movenc wrongly produces "ac-3" atom with "AC-3" ? https://forum.videolan.org/viewtopic.php?f=14&t=143528&p=474869
[17:51:09 CEST] <InTheWings> supposedly 3.4.2 but I can't find anything wrong
[17:52:13 CEST] <kepstin> ntd: any chance you could share a screenshot of the image at 720x576 with no filters?
[17:52:23 CEST] <kepstin> so we can see the combing?
[17:55:58 CEST] <ntd> sec cameras, so will be hard without breaking any laws
[17:56:28 CEST] <Cracki> tape some confidential printouts on the lens
[17:57:10 CEST] <ntd> basically, any people walking by look like they're doing whatever the "new" terminator from "genesys" does
[17:57:17 CEST] <ntd> "phasing" :)
[17:57:23 CEST] <kepstin> be nice if you had a test camera you could just take some selfies with :/
[17:57:59 CEST] <kepstin> hmm. actually, i wonder if that statement "2:1 interlace" means they're doing something strange
[17:58:26 CEST] <Cracki> hope not...
[17:58:44 CEST] <kepstin> (if so, that's really annoying, the only reason I can think of for them to do something strange there is to make it so you can't use off-the-shelf video software on the things)
[17:58:54 CEST] <ntd> also, any non-motion stills (at 720x576) look funny compared to other analog sec cameras i've worked with
[17:58:57 CEST] <Cracki> inb4 custom filter dev
[17:59:10 CEST] <Cracki> yes gief data
[17:59:40 CEST] <ntd> looks squashed somehow
[18:00:57 CEST] <Cracki> good enough
[18:02:33 CEST] <kepstin> squashed is expected, due to the anamorphic scanning
[18:08:57 CEST] <ntd> mplayer -tv driver=v4l2:width=720:height=576:outfmt=i420:device=/dev/video3 -vc rawi420 -vo vdpau -display :0.0 tv://
[18:09:07 CEST] <ntd> looks swell
[18:09:27 CEST] <ntd> but capturing through ffmpeg... result isn't pretty
[18:10:46 CEST] <kepstin> presumably the vdpau vo or something in mplayer is deinterlacing
[18:11:17 CEST] <kepstin> if you're recording to a file, you need to either deinterlace before encoding, or use encoder settings that preserves interlacing
[18:11:20 CEST] <Cracki> perhaps jsut displaying it at 50 fps
[18:12:02 CEST] <ntd> it is
[18:25:31 CEST] <ntd> idk how to describe the resulting files. mplayer->display is fine, recordings of people moving by the cam: looks like when john connor/t-something is trapped by that MRI machine in "genisys"
[18:25:52 CEST] <if_gaga1> guys, i have another strange question. i'm broadcasting Xvfb to an rtsp server with ffmpeg. my question: i'll add another Xvfb and switch my broadcasting from one Xvfb to the other, and back, without ffmpeg interruption. how can i solve this? does ffserver help me? Thanks
[18:26:00 CEST] <ntd> by all means: cool. but not very useful :)
[18:42:18 CEST] <Cracki> ntd, pics or it didn't happen ;)
[19:06:19 CEST] <ntd> heh :)
[19:06:44 CEST] <ntd> also: can two processes pull from the same v4l device simultaneously?
[19:09:44 CEST] <ntd> the idle in #v4l is quite strong
[19:11:27 CEST] <saml> hey, how do I get total number of frames (pictures) in a video file?
[19:11:39 CEST] <saml> and extract particular Nth frame
[19:11:49 CEST] <saml> or is frame not something concrete?
[19:12:29 CEST] <furq> saml: ffprobe -count_frames
[19:13:21 CEST] <kepstin> saml: getting a particular frame isn't easy (unless the file is perfectly constant framerate)
[19:13:33 CEST] <furq> there's a bunch of ways to do it but none of them are really ideal
[19:13:47 CEST] <furq> -vf select=n=12345 is the first one that comes to mind
[19:13:54 CEST] <furq> which will work fine but also decode the entire file
[19:13:54 CEST] <saml> if I write a program that uses libav, is it more doable?
[19:14:10 CEST] <furq> er
[19:14:15 CEST] <furq> select=eq(n\,12345)
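The two commands discussed, spelled out as an untested sketch (in.mp4 is a placeholder):

```shell
# Total decoded frame count for the first video stream (decodes the whole file):
ffprobe -v error -count_frames -select_streams v:0 \
  -show_entries stream=nb_read_frames -of csv=p=0 in.mp4
# Extract frame number 12345 (also decodes everything up to it):
ffmpeg -i in.mp4 -vf "select=eq(n\,12345)" -vsync vfr -frames:v 1 frame12345.png
```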
[19:15:31 CEST] <kepstin> saml: if you need a tool that can grab a bunch of frames at random out of a file by frame number, an application that wraps around libav and does a pass to generate a frame number index might be worth it.
[19:15:40 CEST] <kepstin> saml: other than that, not really worth the effort
[19:15:58 CEST] <kepstin> if you need a tool*
[19:16:29 CEST] <saml> what's a frame number index?
[19:17:43 CEST] <saml> like value of frame number 2  is milliseconds or something?
[19:17:50 CEST] <kepstin> an index that correlates frame numbers to pts values
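Such a frame-number-to-pts index can be dumped with ffprobe alone (a sketch; the nl numbering assumes frames are emitted in presentation order):

```shell
# One pts_time per decoded frame, prefixed with a 0-based frame number:
ffprobe -v error -select_streams v:0 -show_entries frame=pts_time \
  -of csv=p=0 in.mp4 | nl -ba -v 0 > frame_index.txt
```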
[19:18:44 CEST] <furq> i would probably prefer to use avidemux or something visual for this anyway
[19:19:06 CEST] <saml> yeah i'm trying to build a tool that plays two videos frame by frame side by side
[19:19:30 CEST] <saml> ffmpeg -i a.webm -i a.mp4 -an -filter_complex "[0]scale=800:-1,pad=1600[bg]; [1]scale=800:-1[fg]; [bg][fg]overlay=w" -movflags +faststart comparison.mp4
[19:19:46 CEST] <saml> this sorta worked. but I wanted more flexibility and fancy GUI
[19:20:05 CEST] <saml> like scrolling bar to navigate through frames
[19:22:19 CEST] <kepstin> for that kind of thing, you'd normally want to sync by timestamps rather than frame numbers (indeed, the overlay filter does sync by timestamps)
[19:22:57 CEST] <furq> you can do that with mpv but you still need to take care of the scaling
[19:23:00 CEST] <saml> yup. i wanted to assume two videos match frames  (like ssim or psnr does)
[19:23:15 CEST] <saml> i have no idea what i'm building
[19:23:19 CEST] <furq> mpv --external-file bar.mp4 foo.mp4 --lavfi-complex "[vid1][vid2]hstack[vo]"
[19:23:48 CEST] <furq> also yeah if you do use ffmpeg for this use hstack, not pad/overlay
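The hstack variant of saml's command might look like this (a sketch; hstack needs equal heights, so this assumes both inputs share an aspect ratio):

```shell
ffmpeg -i a.webm -i a.mp4 -an -filter_complex \
  "[0:v]scale=800:-1[l];[1:v]scale=800:-1[r];[l][r]hstack" \
  -movflags +faststart comparison.mp4
```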
[19:24:48 CEST] <saml> wow mpv looks nice
[19:25:20 CEST] <if_gaga1> sorry guys, possibly i missed an answer after disconnecting
[19:25:24 CEST] <furq> you could probably have it setup so a key binding will toggle between the two inputs
[19:25:29 CEST] <if_gaga1> guys, i have another strange question. i'm broadcasting Xvfb to an rtsp server with ffmpeg. my question: i'll add another Xvfb and switch my broadcasting from one Xvfb to the other, and back, without ffmpeg interruption. how can i solve this? does ffserver help me? Thanks
[19:25:32 CEST] <furq> i'd probably find that more useful for doing a frame-by-frame comparison
[19:26:20 CEST] <kepstin> mpv can also framestep forwards and backwards (although i'm not sure that framestepping backwards with filters will work as expected)
[19:26:26 CEST] <saml> oh so a key press to look at the same frame of different video
[19:26:28 CEST] <furq> skipping back seems to work
[19:26:50 CEST] <saml> this could be a lot better for encoding quality comparison manually
[19:27:13 CEST] <saml> if you had a week, what would you build?
[19:27:15 CEST] <saml> give me an idea
[19:27:31 CEST] <furq> i'd probably just write a script for mpv and then take the rest of the week off
[19:27:45 CEST] <kepstin> you'd want to set it up to do a double-blind test, really
[19:27:57 CEST] <furq> yeah an abx tester would be nice
[19:28:58 CEST] <kepstin> and you probably want to compare full-speed playback, not individual frames
[19:29:52 CEST] <saml> hrm that makes sense
[19:30:27 CEST] <saml> if_gaga1, how are you going to switch?  symlink ?
[19:32:00 CEST] <furq> saml: actually if you just get rid of the lavfi-complex and hit _
[19:32:01 CEST] <saml> ffmpeg -re -i /dev/myxfvb -an -c:v h264 -pix_fmt yuv420p -f flv  rtmp://....
[19:32:04 CEST] <furq> it'll switch between video tracks
[19:32:10 CEST] <furq> although it doesn't seem to work right here
[19:32:16 CEST] <saml> where /dev/myxfvb is a symlink?
[19:32:55 CEST] Action: saml installs mpv
[19:33:10 CEST] <saml> alias mpv='mplayer -fs -af volnorm=1:0.5,scaletempo'   i had mpv aliased :(
[19:33:51 CEST] <furq> lol
[19:34:26 CEST] <if_gaga1> saml: symlink for what?
[19:34:27 CEST] <furq> yeah _ cycles between video tracks, but it will then cycle to "no video"
[19:34:34 CEST] <furq> at which point it stops accepting keybinds
[19:34:37 CEST] <furq> so that's not very useful
[19:34:48 CEST] <furq> you could easily script it by setting --vid though
[19:36:08 CEST] <saml> _ works thanks. it does cycle to audio track as well
[19:36:31 CEST] <saml> if_gaga1, i don't know about xfvb. i'm misguiding you
[19:36:49 CEST] <saml> i was imagining there are two devices and you wanted to quickly switch them
[21:10:56 CEST] <mort> Isn't it a bit strange that libraries get stripped even when compiling with --enable-debug?
[21:11:39 CEST] <mort> I just spent a whole lot of time trying to figure out why valgrind didn't show me a stack trace deeper than av_malloc, then eventually found out that even though I asked for a debug build, my shared libraries were stripped
[21:13:19 CEST] <JEEB> mort: yea "install" always strips unless you do --disable-stripping
[21:13:34 CEST] <mort> yeah, I noticed, but it was really counter-intuitive to me
[21:13:46 CEST] <mort> why would I want to install a debug build of ffmpeg just to have it stripped?
[21:14:26 CEST] <JEEB> i totally don't disagree
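[For the record, a configure sketch for a debuggable build; without --disable-stripping, "make install" strips the binaries and shared libraries even when debug info was compiled in:

    ./configure --enable-debug=3 --disable-stripping --disable-optimizations
    make && make install

--disable-optimizations is optional but makes valgrind/gdb stack traces considerably more readable.]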
[21:23:10 CEST] <fengshaun> As I found out, it's not quite possible to make an m3u8 v4 playlist with byteranges matching with keyframes and have an hls player be able to request those bytes and actually play the file
[21:23:13 CEST] <fengshaun> what am I missing there?
[21:23:16 CEST] <fengshaun> the solution might be fragmented mp4, but why can I not serve a normal mp4 with a generated m3u8 based on I-frames?
[21:23:23 CEST] <fengshaun> ffmpeg (the commandline) converts the mp4 to .ts, what does the conversion involve?
[21:23:30 CEST] <fengshaun> my end goal is to be able to do on-the-fly transcoding with libav* while retaining the original files in mp4 container and original codecs without doubling space requirements to include .ts segments
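[Some context on the question: an arbitrary byte slice of a plain, non-fragmented mp4 is not a standalone decodable file (the moov metadata lives elsewhere in the file), which is why HLS byteranges are served from self-synchronizing MPEG-TS or from fragmented mp4 with an EXT-X-MAP init segment; that remux is essentially what ffmpeg's hls muxer does with -hls_flags single_file. Given keyframe-aligned segment offsets (e.g. extracted with ffprobe), generating the v4 playlist itself is simple. A minimal sketch; the helper name and the segment tuples are made up for illustration:

```python
def make_byterange_playlist(media_uri, segments, target_duration):
    """Build an HLS v4 byterange playlist.

    `segments` is a list of (byte_offset, byte_length, duration_seconds)
    tuples, each of which must start at a keyframe in `media_uri`.
    """
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:4",  # EXT-X-BYTERANGE requires version >= 4
        f"#EXT-X-TARGETDURATION:{target_duration}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for offset, length, duration in segments:
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(f"#EXT-X-BYTERANGE:{length}@{offset}")
        lines.append(media_uri)
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines) + "\n"

# hypothetical keyframe-aligned segments of a single remuxed file
print(make_byterange_playlist(
    "video.ts", [(0, 326744, 6.0), (326744, 301024, 6.0)], 6))
```

The same playlist shape works for a single fragmented mp4 if an EXT-X-MAP tag pointing at the init segment is added.]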
[21:30:53 CEST] <mort> valgrind's claiming that my avcodec_send_packet when decoding video is leaking memory, but I can't see where I should free anything?
[21:42:54 CEST] <kepstin> mort: check to make sure you're not taking any extra references on the AVPacket. But note that ffmpeg internally uses buffer pools to re-use memory allocations in the demuxer and decoder, so it's possible you're getting false positives in valgrind unless you're fully unreffing/freeing/closing everything on exit.
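[To separate real leaks from buffer-pool false positives, the teardown on exit has to be complete. Roughly, in pseudocode (these are the real libav calls, but the ordering sketch is mine):

    per packet:   av_packet_unref(pkt) once it has been sent to the decoder
    at EOF:       avcodec_send_packet(ctx, NULL), then drain with
                  avcodec_receive_frame() until AVERROR_EOF
    on shutdown:  av_frame_free(&frame); av_packet_free(&pkt);
                  avcodec_free_context(&dec_ctx);
                  avformat_close_input(&fmt_ctx);

With all of that done, whatever valgrind still reports as definitely lost is much more likely to be a genuine leak.]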
[21:45:00 CEST] <jbmcg> hey there - having a bit of a weird issue with ffmpeg, seeing some green artifacts when trying to work with certain source videos as input, wondering if anyone might have any ideas - more info here: https://pastebin.com/CAktrUMm
[21:47:52 CEST] <kepstin> jbmcg: hmm. looks like there are errors in the file (it has artifacts, just not necessarily green artifacts, when I play it in a newer mpv build)
[21:48:03 CEST] <kepstin> probably some minor difference in the error concealment
[21:48:33 CEST] <kepstin> note that ubuntu 16.04's ffmpeg is quite old, you *might* see better results with a newer version.
[21:52:18 CEST] <jbmcg> kepstin: thanks for taking a look - yeah the same rendering engine is powering several other projects so the ffmpeg version is not too easily upgraded, I wish I could figure out specifically what's wrong with the video files, not noticing anything too crazy with ffprobe or anything
[21:56:25 CEST] <kepstin> jbmcg: it looks similar to what i'd expect for smallish amounts of packet loss over rtp
[22:02:03 CEST] <if_gaga1> folks, please help me to debug, i'm running ffmpeg with the following args: https://pastebin.com/3f3CT4QW
[22:02:26 CEST] <if_gaga1> how i can test zmq queue?
[22:03:07 CEST] <if_gaga1> i'll try to send "zoom" command like: echo 'scale=2*iw:-1, crop=iw/2:ih/2' | bash -x ./zmqsend.sh
[22:03:10 CEST] <if_gaga1> but no luck :/
[22:04:18 CEST] <if_gaga1> moreover i don't see any message about zmq command in ffmpeg output (but ffmpeg runs with -v debug flag)
[22:05:09 CEST] <if_gaga1> what i'm doing wrong? and how to properly test zmq queue ?
[22:09:18 CEST] <kepstin> if_gaga1: you don't send filters, you send commands. The syntax is described in the documentation for the zmq filter. You have to have a filter chain already running in ffmpeg, and then certain filters allow changing some of their parameters during runtime by receiving commands.
[22:10:34 CEST] <kepstin> filters that have a "Commands" subheader in the table of contents of https://www.ffmpeg.org/ffmpeg-filters.html can receive commands.
[22:19:23 CEST] <if_gaga1> kepstin: okay, got it, thanks. i read the zmq filter documentation and sent the 'overlay@my x 150' command through the zmq api, but i don't see any changes on the rtsp stream :/
[22:21:01 CEST] <if_gaga1> sorry for my dumb question, may be i'm googling with wrong keywords, but looks like a lack of documentation about zmq
[22:21:05 CEST] <kepstin> if_gaga1: I think you misread it - you use the 'filter_name@id' syntax in the filter graph to set a name on a filter. You don't use that syntax in the command
[22:21:19 CEST] <kepstin> the command only takes a target, which is just the filter name
[22:22:01 CEST] <if_gaga1> oh, yep, target, sorry, i'll continue reading about it, sorry i'm a total noob at that
[22:22:16 CEST] <kepstin> oh, wait, apparently that does set the name like that, huh
[22:22:26 CEST] <kepstin> so you'd use the @ syntax in both places
[22:22:30 CEST] <kepstin> according to the examples
[22:23:00 CEST] <kepstin> I haven't personally used this, I'm just reading the same docs that you're reading...
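[Putting the thread together, the flow looks roughly like this (filenames hypothetical; the zmq filter itself must be in the running graph, and zmqsend is built in ffmpeg's tools/ directory):

    # name the filter instance with '@' when building the graph, and
    # include the zmq filter so ffmpeg listens for commands:
    ffmpeg -i main.mp4 -i logo.png \
      -filter_complex "[0:v]zmq[v];[v][1:v]overlay@my=10:10" out.mp4

    # then send '<target> <command> <argument>' while it runs;
    # overlay accepts the 'x' and 'y' commands:
    echo overlay@my x 150 | tools/zmqsend

Only filters with a "Commands" section in ffmpeg-filters(1) react; anything else is silently a no-op.]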
[00:00:00 CEST] --- Tue Jun 19 2018
