[Ffmpeg-devel-irc] ffmpeg.log.20180824

burek burek021 at gmail.com
Sat Aug 25 03:05:02 EEST 2018


[00:14:01 CEST] <feedbackmonitor> furq, So uhh, how big do you think this file is going to be. ffmpeg is still transcoding and the file is currently at 9.7 gb and the original size is 4.3
[00:48:04 CEST] <Cracki> qp 0 is for "lossless"?
[00:48:15 CEST] <Cracki> expect an insane bitrate
[00:48:20 CEST] <Cracki> what bitrate does it tell you atm?
[00:48:57 CEST] <Cracki> why do you even transcode that stuff? it's h264 at 50 Mbit/s already
[00:49:11 CEST] <Cracki> and I see no video filters... whyyyy
[00:50:28 CEST] <Cracki> if your video editing program can't use mixed frame rates, CHUCK it in the trash bin
[00:52:35 CEST] <feedbackmonitor> Cracki, Hey there
[00:52:46 CEST] <feedbackmonitor> Cracki, Apparently it can, am not sure how
[00:53:09 CEST] <feedbackmonitor> Cracki, I was just trying to make it easier, the whole work flow thingy
[00:53:13 CEST] <Cracki> do not edit videos with blender. that's insane.
[00:53:21 CEST] <Cracki> at least look at kdenlive
[00:53:25 CEST] <feedbackmonitor> Cracki, I am insane in the membrane.
[00:53:35 CEST] <feedbackmonitor> Cracki, Actually, it is not a bad editor
[00:53:42 CEST] <Cracki> enough highlighting thx
[00:54:22 CEST] <feedbackmonitor> highlighting? What is that?
[00:55:22 CEST] <feedbackmonitor> Cracki, There are editors who work in TV studios who use Blender as a video editor and they have python scripts (which they share) to make workflow easier
[00:55:36 CEST] <Cracki> "tv studios"
[00:55:39 CEST] <feedbackmonitor> Plus there are some good tutorials
[00:55:47 CEST] <feedbackmonitor> Cracki, hehehe
[00:55:54 CEST] <Cracki> pls stop highlighting
[00:56:09 CEST] <Cracki> and figure out how you can mix frame rates
[00:56:17 CEST] <feedbackmonitor> Cracki, You need to explain, I am new to this channel
[00:56:25 CEST] <Cracki> stop using my nickname
[00:56:35 CEST] <feedbackmonitor> Cracki,  I am not sure what you mean by highlighting
[00:56:47 CEST] <feedbackmonitor> oh, why is that bad?
[00:57:13 CEST] <Cracki> "highlighting" is one of the most common features on irc
[00:57:37 CEST] <feedbackmonitor> Well, I mean no harm.
[00:57:38 CEST] <Cracki> and when you repeatedly use my name, I repeatedly get audio alarm
[00:58:02 CEST] <Cracki> there is no need for that
[00:58:20 CEST] <feedbackmonitor> Cracki, My system has no such thing. I will make a note for your Nick
[00:58:23 CEST] <feedbackmonitor> sorry
[00:58:26 CEST] <feedbackmonitor> unintentional
[00:58:31 CEST] <feedbackmonitor> whew
[00:59:16 CEST] <Cracki> mentioning people's names is like tugging their pants leg
[01:00:08 CEST] <feedbackmonitor> this is the first channel I encountered this in my years of usage of IRC, but that is okay. I aim to comply where applicable.
[01:01:32 CEST] <Cracki> I think I remember you from some channel...
[01:01:49 CEST] <feedbackmonitor> ?
[01:01:56 CEST] <feedbackmonitor> What OS do you use?
[01:02:34 CEST] <feedbackmonitor> I spend time in Blender, Blender VSE and assorted GNU/Linux distributions
[01:03:01 CEST] <feedbackmonitor> Perhaps an opensource program like GIMP? Inkscape? Cinelerra?
[01:03:42 CEST] <feedbackmonitor> Ardour? I am on there a lot
[04:05:03 CEST] <FishPencil> If I have two images, is it possible to create a resulting image from only the identical pixels between them with FFmpeg?
[04:12:45 CEST] <furq> FishPencil: -filter_complex blend=all_mode=xor
[04:13:14 CEST] <furq> actually nvm that does the exact opposite
[04:20:26 CEST] <furq> if you just want to see what the differences are then blend=all_mode=grainextract
[04:20:35 CEST] <furq> but that doesn't pass through the original colours
[04:33:45 CEST] <FishPencil> I want to see what the similarities are
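What FishPencil is asking for can probably be done with the blend filter's expression mode: keep a pixel only where the two inputs agree. A sketch (file names are placeholders, untested here):

```shell
# Keep pixels that are identical in both inputs; everything else becomes black.
# A and B are the blend filter's sample values from the first and second input.
ffmpeg -i a.png -i b.png \
  -filter_complex "blend=all_expr='if(eq(A,B),A,0)'" \
  similarities.png
```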
[09:33:09 CEST] <zyme> dumb re-encoding/transcoding with ffmpeg and target codec x264, what's the CLI syntax needed for lossless video output compared to source?
[09:35:07 CEST] <zyme> I was told x264 has a lossless recording mode (Probably pairs well in a mkv with flac hehe) - so I'm assuming ffmpeg can do that - since it does just about everything anyway..
[09:35:26 CEST] <zyme> ilove11ven: hi
[09:37:10 CEST] <ritsuka> lossless h.264 will take a lot more space than your input, if your input is already compressed
[09:37:36 CEST] <zyme> I only need it temporarily for streaming purposes..
[09:38:00 CEST] <zyme> so as long as 1gbit is enough bandwidth..
[09:38:33 CEST] <ritsuka> http://trac.ffmpeg.org/wiki/Encode/H.264
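From that wiki page, the lossless mode zyme is asking about comes down to one flag; a minimal sketch (file names are placeholders):

```shell
# -qp 0 puts libx264 into lossless mode (output is High 4:4:4 Predictive
# profile, which hardware decoders generally reject). FLAC keeps audio lossless.
ffmpeg -i input.mkv -c:v libx264 -qp 0 -preset ultrafast -c:a flac output.mkv
```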
[09:41:59 CEST] <zyme> I'm hoping that using a ssd for storing, and maybe cuda or one of the other hardware assists will be able to make it in a reasonable amount of time, perhaps even in evenly length chunks and streamed in succession so it starts quick, but I found the command to auto-do-that in the tmp files from videostream lol =)
[10:01:52 CEST] <furq> zyme: streaming to what
[10:09:30 CEST] <zyme> A ChromeCast Ultra, using a gbit via usb 2 wired Ethernet -- source PC on same switch (Airport Extreme [specs in case I need to id the rev: gbit router w/802.11n dual band, 60mhz wide channels for wifi @450mbit]) -- I just need to know what output format(s) it supports so I can see which ones I can use on the chromecast, unless it has the ability to use google cast natively like VLC can [except subs are broke].
[10:11:21 CEST] <furq> the chromecast won't be able to decode lossless h264
[10:11:49 CEST] <furq> lossless is always hi444pp and pretty much only software decoders can handle that
[10:15:09 CEST] <furq> if you have an nvidia card then probably just use nvenc -preset slow -cq 10
[10:15:16 CEST] <furq> or some similarly low number for -cq
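furq's nvenc suggestion spelled out as a full command (a sketch; the -cq value is a knob to tune, not a recommendation):

```shell
# Constant-quality nvenc encode; lower -cq means higher quality and bitrate.
ffmpeg -i input.mkv -c:v h264_nvenc -preset slow -cq 10 -c:a copy output.mkv
```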
[10:25:35 CEST] <zyme> It supports profiles up to level 5.1, High, Main and Main10.
[10:26:52 CEST] <zyme> https://developers.google.com/cast/docs/media
[10:31:07 CEST] <furq> yeah that doesn't include hi444p
[10:31:16 CEST] <furq> also that's for hevc
[10:31:48 CEST] <zyme> I did a general google search about hi444 and chromecast, seems it does support it
[10:31:52 CEST] <furq> with that said, if this supports h264, hevc and vp9, why do you need to transcode
[10:31:56 CEST] <zyme> though my TV probably doesn't
[10:32:56 CEST] <furq> what your tv supports is irrelevant because the chromecast decodes it
[10:33:04 CEST] <furq> but the chromecast definitely doesn't support hi444pp
[10:34:13 CEST] <JEEB> not even nvidia itself decodes lossless (even though you can encode lossless even with 4:4:4 with it) :D
[10:35:20 CEST] <zyme> It supports HDR and 4k res, I won't be using any HDR or above 1080p media..
[10:35:51 CEST] <JEEB> those have nothing to do with lossless coding
[10:35:59 CEST] <furq> also it doesn't support hdr or 4k with h264
[10:36:03 CEST] <furq> that's just for hevc
[10:36:13 CEST] <JEEB> a hw decoder with AVC supports most likely up to high profile, level 4.1
[10:36:15 CEST] <furq> but yeah, like i said, why do you need to transcode
[10:36:29 CEST] <JEEB> a hw decoder with HEVC most likely supports up to main 10 profile, level 5 or something
[10:36:31 CEST] <furq> JEEB: the page he just linked says high:4.1, so yeah
[10:36:33 CEST] <furq> @
[10:36:43 CEST] <zyme> Inserting image based subs primarily
[10:37:20 CEST] <zyme> but also I'm trying to reduce multiple transcoding degradation
[10:37:51 CEST] <furq> just throw a lot of rate at it
[10:39:55 CEST] <furq> JEEB: does lossless hevc require some fancy profile
[10:40:30 CEST] <furq> apparently pascal nvenc does lossless but i'm guessing it requires main444 or something that'll make it not work here
[10:42:03 CEST] <JEEB> I would expect the same they do with AVC
[10:42:06 CEST] <JEEB> you can do lossless encoding
[10:42:08 CEST] <JEEB> but not decoding
[10:45:21 CEST] <zyme> I think google cast requires yuv420p
[10:45:50 CEST] <zyme> if I'm starting to understand what color modes are correctly that is..
[10:47:32 CEST] <JEEB> 4:2:0 yes, but the main thing is to also not use profiles/features not supported
[10:47:39 CEST] <zyme> is there a way to make only the color mode lossy?
[10:47:55 CEST] <JEEB> (lossless AVC is always high 4:4:4 whatever profile, even if 4:2:0)
[10:48:05 CEST] <JEEB> zyme: you cannot do lossless coding with hw decoders. QED
[10:48:14 CEST] <JEEB> be it lossless 4:2:0 or lossless 4:4:4
[10:48:16 CEST] <JEEB> or 4:2:2
[10:48:26 CEST] <JEEB> it's just not supported even if in theory they could support it
[10:48:42 CEST] <JEEB> use a low enough CRF/quantizer
[10:52:21 CEST] <zyme> Then can I set the max bitrate to something like 100MB/s with a variable output bitrate so it uses less if possible, like say on frames that are just 100% black for example (I imagine that ex. compresses well)
[10:52:43 CEST] <JEEB> what I noted should do for you
[10:53:13 CEST] <JEEB> x264 has CRF, and I think those pesky hw encoders have quantizer settings
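Putting JEEB's advice together with the Cast docs zyme linked, a compatible near-lossless encode might look like this (the CRF value and file names are assumptions to tune):

```shell
# Constrain the stream to what the Cast docs advertise for H.264:
# High profile, level 4.1, 4:2:0 chroma. CRF 10 is visually transparent overkill.
ffmpeg -i input.mkv -c:v libx264 -crf 10 \
  -pix_fmt yuv420p -profile:v high -level 4.1 \
  -c:a copy output.mkv
```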
[10:57:29 CEST] <zyme> eh, I guess, I just hate CRF, most of the time I'd be okay with 10x the output size as long as the quality was better - but CRF has to combine the speed and quality with the same cli-argument
[10:57:49 CEST] <JEEB> no
[10:57:54 CEST] <JEEB> CRF is only quality
[10:58:02 CEST] <JEEB> it's a variable quantizer depending on the complexity and some others
[10:58:07 CEST] <JEEB> it has zero % to do with *speed*
[10:58:56 CEST] <zyme> "(CRF) is the default quality (and rate control) setting for the x264 and x265 encoders. You can set the values between 0 and 51, where lower values would result in better quality, at the expense of higher file sizes. Higher values mean more compression, but at some point you will notice the quality degradation."
[10:59:25 CEST] <JEEB> yes, nothing does it say anything about speed
[10:59:31 CEST] <JEEB> it is rate control
[10:59:40 CEST] <JEEB> it's like quantizer, except dynamic
[10:59:50 CEST] <furq> zyme: do you have an nvidia card in the pc you're streaming from
[10:59:51 CEST] <zyme> rate means speed in rl english lol
[11:00:03 CEST] <JEEB> rate of bits
[11:00:09 CEST] <JEEB> so amount/speed of bits per second
[11:00:14 CEST] <JEEB> as opposed to rate of the encoder
[11:00:26 CEST] <JEEB> bit rate | rate of bits
[11:00:33 CEST] <JEEB> so rate control :P
[11:01:02 CEST] <zyme> and an upscaled unfiltered output needs to read faster if the rate is higher..
[11:01:28 CEST] <JEEB> you've lost me
[11:01:43 CEST] <JEEB> yes, *any* rate control if it uses more bits will cause more bits to be written
[11:01:54 CEST] <JEEB> and thus the decoder has to be able to read more bits when decoding
[11:02:10 CEST] <JEEB> that's unrelated 100% to the actual rate control mode used
[11:02:42 CEST] <furq> lower crf values will be marginally slower
[11:02:51 CEST] <JEEB> that's unrelated to *CRF*
[11:02:51 CEST] <furq> but it's very little compared to changing preset
[11:02:55 CEST] <furq> true
[11:03:03 CEST] <JEEB> since you're writing more bits it takes longer
[11:03:05 CEST] <zyme> I wish the amount of quality output wasn't controlled by the same setting as bitrate, i.e. the reverse of higher quality compression, think less compression same quality..
[11:03:14 CEST] <furq> zyme: it isn't
[11:03:33 CEST] <JEEB> ok, so what was meant is
[11:03:41 CEST] <JEEB> CRF higher => encoder tries to compress more
[11:03:46 CEST] <furq> also as i said you shouldn't use x264 for this if you can avoid it
[11:03:47 CEST] <JEEB> CRF lower => encoder tries to compress less
[11:04:05 CEST] <JEEB> and that leads to smaller/bigger file sizes
[11:04:19 CEST] <JEEB> it only controls quality within specific parameters
[11:04:31 CEST] <furq> "compress more" in the sense that it will discard more information
[11:04:44 CEST] <JEEB> yes, it will use higher quantizers
[11:04:50 CEST] <furq> not that it will try harder to keep information, which is the thing that takes time
[11:04:54 CEST] <JEEB> yes
[11:04:59 CEST] <JEEB> that is controlled by presets
[11:05:02 CEST] <furq> right
[11:05:14 CEST] <JEEB> the algorithms in the back are controlled by presets, and then the quantizer range is controlled by rate control
[11:06:54 CEST] <zyme> the point has always been: I want to compress less for my output. what setting can I change to make two transcodes of a video file that should be identical in quality but largely different in size?
[11:07:59 CEST] <JEEB> you just set your preset to be much faster
[11:08:12 CEST] <JEEB> although technically CRF values are not the same across presets
[11:08:22 CEST] <JEEB> because what the encoder "sees" depends on the presets
[11:08:31 CEST] <JEEB> and thus the algorithms of rate control work on different stuff
[11:08:40 CEST] <JEEB> so in theory to hit the same quality you have to tweak both :P
[11:08:53 CEST] <JEEB> but veerry simplified you just tweak the preset until you're fast enough
[11:09:05 CEST] <JEEB> and then you tweak your CRF value for your required quality level
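JEEB's two-step tuning loop, sketched as commands (the preset and CRF values are examples, not recommendations):

```shell
# Step 1: fix a quality level and walk the presets until encoding is fast enough.
ffmpeg -i input.mkv -c:v libx264 -preset veryfast -crf 18 -f null -

# Step 2: keep that preset and adjust CRF until the quality is acceptable.
ffmpeg -i input.mkv -c:v libx264 -preset veryfast -crf 12 output.mkv
```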
[11:09:54 CEST] <zyme> so if CRF is 51 (highest - am I right?) and the preset is changed then the outputs should be visibly identical?
[11:10:13 CEST] <furq> if the crf is 1
[11:10:42 CEST] <JEEB> zyme: technically since what CRF "sees" is different between presets you cannot expect the CRF value's results to be the same, but generally speaking they should be *similar*
[11:10:49 CEST] <zyme> and superfast or veryfast, etc are the different preset speeds?
[11:10:52 CEST] <furq> yes
[11:11:02 CEST] <furq> also yeah they won't be identical but you will not be able to spot the difference
[11:11:11 CEST] <furq> honestly you won't be able to see any difference at crf 15
[11:11:22 CEST] <furq> but since this is streaming over a lan then you might as well go nuts
[11:11:48 CEST] <JEEB> basically I always drive the point home because people tend to send out question marks as they go slower and slower in presets and around slow-placebo you actually get *larger* files from the same CRF
[11:11:54 CEST] <JEEB> because the encoder is seeing more and more stuff
[11:11:57 CEST] <zyme> so we're at about 98 or <99% the same?
[11:12:47 CEST] <JEEB> "depends" is the 100% correct answer
[11:13:06 CEST] <JEEB> but generally speaking you are in the very similar ballpark
[11:13:12 CEST] <JEEB> but it's generally just simpler if you first tweak the speed
[11:13:14 CEST] <JEEB> and then your CRF
[11:13:19 CEST] <JEEB> then you have it all set up :P
[11:13:48 CEST] <zyme> if I'm using the fastest setting should I just use HEVC, or will it still take a very long time even on the fastest setting?
[11:15:50 CEST] <JEEB> I have no idea which encoder you're talking about anyways
[11:16:15 CEST] <zyme> originally I was talking about x264
[11:16:25 CEST] <JEEB> (and I have no idea of the speed or quality of x265's fastest presets)
[11:16:31 CEST] <JEEB> x264 is much more of a known quantity
[11:16:52 CEST] <zyme> but that was before I found out that 265 supported lossless as well, which I'd been told before it didn't
[11:17:04 CEST] <JEEB> oh it did support, but your clients don't
[11:17:17 CEST] <JEEB> so even if you did lossless encode it doesn't help much if your client can't habla espanol
[11:18:29 CEST] <zyme> other than the color mode I thought it would? was the impression I was getting..
[11:19:39 CEST] <zyme> it supports HDR, Level 5.1, Main10, 4k res, hevc decoding, etc..etc.. (I'm a little confused as to what the level #'s are though lol)
[11:19:44 CEST] <kerio> is lossless h264/h265 supported by anything other than ffmpeg?
[11:20:28 CEST] <kerio> cuz sure as shit quicktime doesn't support it
[11:22:23 CEST] <zyme> "Constant Rate Factor. Determines output quality. 0 is lossless, 23 is default, and 51(8-bit x264) or 63(10-bit x264) is worst possible. Increasing the CRF value +6 is roughly half the bitrate while -6 is roughly twice the bitrate."
[11:22:43 CEST] <zyme> from some page called ffmpeg tricks
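The rule of thumb zyme quotes is easy to turn into arithmetic: relative bitrate scales roughly as 2^(-deltaCRF/6). A toy check (the 10 Mb/s baseline is made up):

```python
def approx_bitrate(base_mbps: float, base_crf: int, new_crf: int) -> float:
    """Rule of thumb: +6 CRF roughly halves the bitrate, -6 roughly doubles it."""
    return base_mbps * 2 ** ((base_crf - new_crf) / 6)

# Starting from a hypothetical 10 Mb/s at CRF 23:
print(approx_bitrate(10.0, 23, 29))  # +6 -> 5.0 (about half)
print(approx_bitrate(10.0, 23, 17))  # -6 -> 20.0 (about double)
```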
[11:27:03 CEST] <JEEB> zyme: profiles = features, levels = memory requirement from the decoder
[11:27:27 CEST] <kerio> a hardware decoder will support certain combinations of profiles and levels
[11:27:55 CEST] <JEEB> so levels control resolution and how many references (and bit rate in theory)
[11:28:00 CEST] <kerio> ffmpeg don't care, ffmpeg don't give a fuck
[11:28:13 CEST] <JEEB> and profiles control features
[11:29:20 CEST] <JEEB> for example in AVC the features basically go up: constrained baseline < main < high < { all of those "high xxx" profiles, up to high 4:4:4 whatever it was }
[11:29:50 CEST] <JEEB> with HEVC it's main < main 10 < { all of those "main XXX" profiles }
[11:36:52 CEST] <zyme> how does main 10 compare to levels, as it's always listed in place of a levels number in mediainfo?
[11:38:49 CEST] <JEEB> levels are completely separate, levels are in both AVC and HEVC
[11:39:07 CEST] <JEEB> they're orthogonal
[11:39:09 CEST] <zyme> if I use a lower level then with crf at 1 for example, then that should also speed up the encoding and be similar quality no?
[11:39:35 CEST] <zyme> or at least with 0 for lossless it should be different bitrates same output
[11:41:21 CEST] <zyme> https://usercontent.irccloud-cdn.com/file/2BbjncZr/Color%20doesn't%20matter%20for%20lossless%20output.png
[11:42:37 CEST] <zyme> So I can make a lossless transcode after-all, it doesn't matter that my Chromecast doesn't support hi444
[11:43:08 CEST] <JEEB> I thought I mentioned this already if you read back through what I've noted
[11:43:15 CEST] <JEEB> but lossless coding is limited to that profile as a feature
[11:43:24 CEST] <JEEB> so ALL LOSSLESS SHIT WILL BE High 4:4:4 blah profile
[11:43:40 CEST] <JEEB> and as I've noted UMPTEEN TIMES ALREADY lossless is not supported on hw decoders
[11:43:42 CEST] <JEEB> for eff's sake
[11:44:40 CEST] <durandal_1707> but can I play 8k with PotPlayer ?
[13:58:17 CEST] <Cracki> wat does that mean
[14:08:22 CEST] <durandal_1707> Cracki: what?
[14:14:42 CEST] <occivink> hey, maybe a silly question but how does licensing work when using distribution versions. Let's say I create an executable that I dynamically linked against an LGPL-compliant build of ffmpeg. Then, I distribute a package containing the executable but not the LGPL-ffmpeg, instead it specifies to use the system version (which happened to be built with --enable-gpl). Is my executable now GPL? If yes, do I need
[14:14:44 CEST] <occivink> to package my own LGPL-ffmpeg to stay compliant?
[14:18:02 CEST] <JEEB> IANAL, but if you expect and cannot work without GPL features there might be some reason to think you are basing on a GPL configuration. But if you don't, and you don't distribute FFmpeg itself then you're basing on LGPL FFmpeg
[14:28:06 CEST] <occivink> uh okay, and if it's just some optional feature? e.g. h264 encoding is possible, and it might use libx264 if present
[14:28:52 CEST] <JEEB> as long as the binary does not expect to have libx264 there I'd say it's OK
[14:30:13 CEST] <JEEB> hmm, I wonder if you can give a format name to -encoders or something
[14:32:05 CEST] <occivink> avcodec_find_encoder only takes a codec id at least, not a specific encoder name
[14:32:35 CEST] <occivink> but this seems like a "loophole" to me, at least in the spirit of the gpl
[14:36:11 CEST] <JEEB> there's another API to request specific encoders
[14:36:29 CEST] <JEEB> avcodec_find_encoder_by_name
[14:37:35 CEST] <occivink> right
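The distinction occivink and JEEB are drawing — requiring libx264 versus merely using it if present — shows up directly in the API; a sketch against the FFmpeg 4.x headers (not compiled here, needs libavcodec to build):

```c
#include <stdio.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    /* Ask for the (GPL-only) libx264 wrapper by name; NULL means this
     * build of libavcodec was configured without it. */
    const AVCodec *enc = avcodec_find_encoder_by_name("libx264");

    if (!enc) {
        /* Optional-feature path: fall back to whatever H.264 encoder the
         * build happens to provide, or disable the feature entirely. */
        enc = avcodec_find_encoder(AV_CODEC_ID_H264);
    }
    printf("H.264 encoder: %s\n", enc ? enc->name : "(none available)");
    return 0;
}
```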
[14:37:39 CEST] <zyme> My laptop kept BSOD'ing every time I opened chrome... This has happened a few times the past week, ever since I enabled Intels HD4400 iGPU to work with my nvidia card thinking gpu offloading would be split between the gpus...
[14:38:15 CEST] <kepstin> zyme: that sounds like a windows driver bug completely unrelated to ffmpeg...
[14:38:36 CEST] <zyme> I think Chrome defaults to the intel, possibly because theres more feature support compared to my NVS Fermi
[14:39:21 CEST] <zyme> it is unrelated, except that I had 2 or 3 sentences I was about to hit enter on, now I've forgotten
[14:42:46 CEST] <zyme> something about testing things to see what happens when using something in a way not intended is the best form of discovery, it's just too bad all non-system code can't be executed like a vbs file with "On Error Resume next" -> although for video formats spoofing the details can usually make a non-compatible video play with little or no corruption...
[14:44:25 CEST] <zyme> Every time my chromecast2 hit something the hardware decoder crashed on I had to restart the player, the cc2-ultra just pauses, I resend resume/play and it continues on
[14:48:06 CEST] Action: zyme notices the statement on ffmpeg's page that NVENC combined with CUVID == 100% GPU encoding, I wonder if you have two Nvidia GPU's, and only one has NVENC support - does it still offload work to the sli partner?
[14:51:07 CEST] <kepstin> if you have two different gpus, they're not in sli. Compute stuff doesn't use sli anyways, and the hardware encoder/decoder is completely separate per card.
[14:51:15 CEST] <zyme> And what others can combine? Will Cuda and Quick Sync?
[14:51:39 CEST] <kepstin> so you have to explicitly use a particular card for something, and you want to avoid transferring video between cards because that means a slow copy through memory or pci
[14:52:40 CEST] <kepstin> ffmpeg can do something silly like decode video using nvdec, download to system ram, then encode to quicksync if you manually set up the appropriate downloads/uploads in filters.
[14:52:41 CEST] <zyme> if the software supports it, they do use the PCI-E to distribute the load though, NVENC is a hardware decoder so it would do its thing alone
[14:52:49 CEST] <kepstin> but there's no good reason to do so
[14:54:12 CEST] <kepstin> (well, maybe there's reason to do so if you have an nvidia card like the gt 1030 with no nvenc, but its hardware decoder can do something quicksync can't decode, but that's a bit of an edge case)
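kepstin's "silly" cross-vendor pipeline can be spelled out; a sketch assuming an nvidia decoder plus an Intel iGPU with Quick Sync (untested, flags may need adjusting per setup):

```shell
# Decode on the nvidia card, download frames to system RAM as NV12,
# then hand them to the Intel Quick Sync encoder.
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mkv \
  -vf hwdownload,format=nv12 \
  -c:v h264_qsv output.mkv
```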
[14:54:16 CEST] <zyme> I have an Ivy Bridge on my laptop and my desktop is a 4-core unlocked Sandy Bridge.. Quick sync probably won't help nearly as much as the latest revs
[14:55:11 CEST] <BtbN> kepstin, ffmpeg on Windows can do nvdec decode to quicksync encode without even downloading to ram in theory
[14:56:21 CEST] <BtbN> As nvdec can output d3d surfaces, and qsv can take them. Not sure how well that works in practice though, as they will clearly be on different GPUs
[14:56:36 CEST] <kepstin> how is that even possible? nvdec decodes to gpu ram, and the intel igpu can't access nvidia gpu ram
[14:56:52 CEST] <BtbN> It can
[14:56:56 CEST] <kepstin> if it *does* work, then direct3d is doing the copy behind your back
[14:57:13 CEST] <zyme> That reminds me of the two nice Quadro cards I used to have, which I obtained for free - and retailed for over 2k USD each, it was a few years back and I ended up selling them off on ebay for $500 each after their value dropped.
[14:57:24 CEST] <BtbN> Nvidias unified gpu memory makes it possible, as it blends in the VRAM into the normal address space
[14:57:30 CEST] <zyme> I salvaged them from the trash at work
[14:57:51 CEST] <BtbN> So at least in theory, the Intel GPU could read it straight from the nvidia VRAM
[14:58:32 CEST] <DHE> it's faster and maybe more convenient on a busy system to let the GPU do most of the work onboard, but you can steal the frames out of the GPU no problem
[14:58:35 CEST] <zyme> Integrated gpu's use system ram, DMA..
[14:58:37 CEST] <kepstin> BtbN: either way it's still doing a copy out over the pcie bus
[14:58:40 CEST] <BtbN> If this is some Laptop with Optimus, you have to keep in mind that they very often entirely remove the video de/encoding unit from the nvidia GPU
[14:59:51 CEST] <kepstin> hmm, the optimus stuff is weird, isn't it, where the nvidia card does something like render into the system ram used by the igpu as a framebuffer?
[15:00:08 CEST] <BtbN> Yes, in the optimal setup that's used on Windows, that's how it works.
[15:00:31 CEST] <zyme> as long as you disable protected memory/process isolation you should be able to specify the memory address, create a pointer, and use OpenCL or something to make the processes initiated by the card..
[15:01:05 CEST] <zyme> drivers work at above user level so the API might allow for it cleanly
[15:02:39 CEST] <zyme> btw, I think the outdated PCI-E bus on my laptop would kill most of the potential speed of upgrading its gpu (but it's awesome that my laptop has a removable gpu lol)
[15:05:39 CEST] <zyme> my desktop, the Ivy Bridge is What I'd likely upgrade.. and it runs 4*3.4Ghz x86_64 so.. The new GPU would take 90% of the web browser load anyway so you probably wouldn't notice the impact..
[15:06:59 CEST] <kepstin> web browser load is still mostly cpu outside of certain animation/rendering steps, video decoding (in some configs), and webgl.
[15:08:14 CEST] <BtbN> Specially on Linux, Webbrowsers don't use video decoding hwaccel
[15:09:34 CEST] <kepstin> last i checked, firefox doesn't even fully enable gpu compositing on linux yet
[15:09:47 CEST] <kepstin> (this is partly why webgl is so slow in firefox on linux...)
[15:10:22 CEST] <atomnuker> firefox with gpu compositing is actually even slower on 4k displays
[15:10:35 CEST] <atomnuker> because every time a pixel changes it uploads the entire window to the gpu
[15:10:55 CEST] <atomnuker> software compositing does partial updates so it only uploads what has actually changed
[15:11:31 CEST] <zyme> OpenCL 1.2 is supposedly fully C++ compatible, how well it performs depends on the program..
[15:11:57 CEST] <zyme> and how much code has been modified to use it instead of the cpu
[15:12:11 CEST] <zyme> and balancing the two if needed..
[15:12:43 CEST] <kepstin> stuff like opencl is interesting, yeah. if you get it wrong it'll slow the program down due to extra synchronization/setup required when switching between cpu and gpu :/
[15:31:54 CEST] <kerio> i wonder if opencl and the like would work better with shared memory between cpu and gpu
[15:32:55 CEST] <atomnuker> actually yes, with vulkan there's an improvement if you need the data to remain on the gpu
[15:33:26 CEST] <atomnuker> since with a normal upload operation you copy it on the gpu linearly and then you do a gpu memcpy to tile the image to a format gpus are good with
[15:34:01 CEST] <atomnuker> you can cut the second part with a shared memory interface, but doing operations on linear images is slower, especially if you have quite a lot to do
[15:34:48 CEST] <atomnuker> so it'll only give benefit for small filterchains (e.g. upload, color correct, download) or for display purposes (upload, convert to rgb to a separate image, display)
[15:45:24 CEST] <kerio> hm, why do you need to tile the image?
[15:45:28 CEST] <kerio> can't you do that beforehand?
[15:57:26 CEST] <zyme> [off-topic] other than transcoding the only bottleneck that's bothersome on my desktop is a 250mb proprietary database with encryption and no odbc's or api, in plain text export it's only 120mb but I don't know why it's so slow, it uses very little cpu so I'm thinking a little bit HDD IOP's, maybe a whole lot of memory bandwidth? [ddr3 dual channel 1333 mobo-limited 1800-capable]..
[16:02:35 CEST] <zyme> I have a MS-SQL licence from work that cost about 10k new that I could've used via my old technet licence if it only supported sql servers.. the company that made this uses open source everything obscured within mostly encrypted zip files..even in the temp folder. *sigh*
[16:11:31 CEST] <zyme> [another offtopic comment] I bet that SQL server is still running mostly untouched since I left, the previous one went untouched for several years, 1 or 2 went by without hiring someone to admin - combined with different client ver. used on every users PC.. it was a flight plan app so: it ended up the cause of a mid-air collision.
[16:13:01 CEST] <zyme> (Every IT guy loves it when the bosses come around and say "here's $22,000 - fix our server please" =)
[16:19:50 CEST] <_Mike_C_> Hello again, sorry for the repeat question.  Can anyone point me to an example that programatically creates and uses the h264_nvenc or hevc_nvenc encoders?  I am running into access violations, I assume due to missing some extra steps needed in their instantiation.
[16:21:01 CEST] <_Mike_C_> Access violations in the avcodec_send_frame() call
[16:21:19 CEST] <zyme> _Mike_C_: there were no make errors? and did you compile with static libraries?
[16:21:36 CEST] <zyme> btw is it linux/win/mac/etc?
[16:21:41 CEST] <_Mike_C_> I'm using the distributed DLLs from the ffmpeg site, I'm on windows
[16:21:57 CEST] <_Mike_C_> Both the encoders work fine when used from the command line
[16:22:13 CEST] <zyme> Windows 10? version 1803?
[16:22:46 CEST] <zyme> if its 1803 they added a bunch of new memory protective security settings in the security manager
[16:23:40 CEST] <_Mike_C_> I'm not sure what exact windows it is, it is windows 10 though.
[16:23:57 CEST] <zyme> and the av protects access to many file locations unless it's explicitly added, but that should give you a notice from the sec mgr if that happens
[16:24:05 CEST] <_Mike_C_> The encoders work fine from the command line ffmpeg, so I assume its not something related to that, wouldn't that be correct?
[16:24:15 CEST] <zyme> press windows key and r
[16:24:37 CEST] <zyme> a run box should open, type winver
[16:24:46 CEST] <_Mike_C_> 1709
[16:25:41 CEST] <zyme> that's the previous update rolled out - which means it should be less troublesome
[16:25:42 CEST] <_Mike_C_> It's definitely some setting I'm either not setting or setting incorrectly.
[16:26:23 CEST] <_Mike_C_> I'm looking at the vdpau example and there are a lot of extra steps to use that hardware encoder, is there a similar example for the nvenc hardware encoders?
[16:26:45 CEST] <zyme> I've never used the shared dll version - did/do they need to be registered in windows with regsvr32.exe?
[16:27:32 CEST] <_Mike_C_> No, other software encoders work fine.  This is specifically the hardware encoders.
[16:28:08 CEST] <zyme> I'm not a ffmpeg expert like most of those in here, but I spent 9 years in IT so I have a general knowledge..
[16:28:33 CEST] <zyme> I assume you needed to download a sdk from nvidia and #include it for your compile?
[16:29:07 CEST] <_Mike_C_> No they worked out of the box, tested with the command line version of ffmpeg
[16:30:38 CEST] <zyme> Oh.. you're using a gui/some sort of syntax builder?
[16:31:28 CEST] <_Mike_C_> You can use ffmpeg from the command line and call the .exe or you can make a program that calls the api.  I'm running into issues due to missing steps when calling the api directly
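For _Mike_C_'s crash: an access violation inside avcodec_send_frame() is commonly an AVFrame whose buffers were never allocated with av_frame_get_buffer(), or a context missing mandatory fields before avcodec_open2(). A minimal setup sketch for h264_nvenc (FFmpeg 4.x API, error handling trimmed, not compiled here):

```c
#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>

/* Open h264_nvenc with the mandatory fields set. width/height/pix_fmt on the
 * context MUST match the frames you later send, and each AVFrame must have
 * had av_frame_get_buffer() called on it -- sending a frame with unallocated
 * data pointers is a classic cause of access violations in
 * avcodec_send_frame(). */
static AVCodecContext *open_nvenc(int w, int h)
{
    const AVCodec *codec = avcodec_find_encoder_by_name("h264_nvenc");
    if (!codec)
        return NULL;

    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    ctx->width     = w;
    ctx->height    = h;
    ctx->pix_fmt   = AV_PIX_FMT_YUV420P;   /* NV12 also works with nvenc */
    ctx->time_base = (AVRational){1, 30};

    if (avcodec_open2(ctx, codec, NULL) < 0) {
        avcodec_free_context(&ctx);
        return NULL;
    }
    return ctx;
}
```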
[16:34:49 CEST] <atomnuker> kerio: nope, each GPU has a different way of tiling which is opaque to users, only linear tiling is universally understood
[16:37:41 CEST] <haasn> an extension to make the tiling requirements public would be cool
[16:37:53 CEST] <haasn> useful for people who could then precalculate their data in the right tiling
[16:38:16 CEST] <zyme> I wonder how that would have worked with 3dfx's rampage had it been released. It supposedly used some new method that natively tiled rendering without rendering the part of the image unseen from overdraw.
[16:38:18 CEST] <haasn> and it being an extension means you wouldn't have to implement it if you do exotic or usage-dependent tiling in some way that isn't compatible with the specification
[16:38:31 CEST] <haasn> I mean "tiling" is an overloaded term
[16:38:44 CEST] <haasn> these days GPUs use tiled textures, tiled rasterizers, tiled binning, and none of these are "tiled rendering"
[16:39:22 CEST] <zyme> funny how nvidia infringed on all of their patents - then bought them to gain licencing rights and threw out most of their tech the first chance they got..
[16:41:37 CEST] <zyme> They had pushed back a court trial date for more than 2 years - and were only a month away from it when they went bankrupt.. I'd boycott them except they hired most of 3dfx's employees so I figure its still supporting the same people minus the ceo that mismanaged them into the ground at the end..
[16:43:11 CEST] <zyme> _Mike_C_: they work out of the box when in the same dir as the .exe - what about if it isn't, or did you already test that?
[16:43:38 CEST] <atomnuker> haasn: tru dat, would allow you to map drm textures directly and download them without mapping to something else
[16:43:49 CEST] <zyme> if the .exe is in a path variable contained dir it's likely the same result as being in the same folder
[16:44:24 CEST] <atomnuker> libdrm does document what each tiling is meant to look like in memory but a lot of them are just defined as vendor specific and undocumented
[16:45:49 CEST] <zyme> is this the same memory that uses advanced randomized placement as a barrier against buffer overflow attack attempts? lol.
[16:47:32 CEST] <zyme> https://usercontent.irccloud-cdn.com/file/WLbUkpgw/High%20Entropy%20ASLR.png
[16:47:48 CEST] <_Mike_C_> I'm using the same DLLs, yes.  They're in my system path.
[16:50:04 CEST] <zyme> doesn't an api call access the code without going through the file system? the .exe cli would work wherever it was located, but since it's designed to work when in the same folder..
[16:50:36 CEST] <haasn> zyme: ASLR just randomizes the base offset for the executable
[16:50:41 CEST] <zyme> I would try and run regsvr32 on each of them just to see if that works or gives an error
[16:51:03 CEST] <haasn> and I wonder how that "Force randomization of images not compiled with /DYNAMICBASE" works
[16:51:04 CEST] <zyme> generally if it needed it it will work, if it doesn't it'll error..
[16:51:07 CEST] <haasn> seems like it would immediately segfault
[16:52:12 CEST] <zyme> the last option in that list is "Terminate a process when heap corruption is detected"
[16:52:30 CEST] <haasn> how nice of them to provide the option to *not* do that
[16:52:34 CEST] <zyme> turn that off and maybe it'll run while leaking memory lol
[16:52:49 CEST] <haasn> leaking memory and executing arbitrary code
[16:52:57 CEST] <haasn> but heap corruption != segfault
[16:52:59 CEST] <zyme> good stuff
[16:53:14 CEST] <haasn> heap corruption just means your malloc state is inconsistent
[16:53:34 CEST] <haasn> like a double free
[16:53:38 CEST] <zyme> I've never coded using a memory manager/garbage cleanup mechanism like with .net code
[16:57:23 CEST] <zyme> The most automation I'm familiar with is using string variables instead of char -- but I imagine complex enough software needs to be more dynamic with memory allocation.. how does ASLR work - by adding something akin to pointers at each fragmentation point, with chunk sizes like file system clusters or packet MTUs?
[16:58:43 CEST] <zyme> malloc inconsistent with what?
[16:59:14 CEST] <zyme> unexpected early termination?
[16:59:34 CEST] <haasn> ASLR works with position independent code, by doing indirect address lookups / offsets instead of hard-coding them
[17:00:07 CEST] <haasn> inconsistent with its internal assumptions, like representing anything valid or sane
[17:00:28 CEST] <haasn> e.g. imagine something as simple as a counter tracking the number of allocated objects
[17:00:29 CEST] <zyme> like pointers that are not statically assigned, I follow so far..
[17:00:46 CEST] <haasn> on a double free() that pointer will be erroneously decremented and eventually hit negative numbers
[17:00:54 CEST] <haasn> s/pointer/counter/
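The double-free example haasn describes can be shown with a toy allocator whose only internal state is a live-allocation counter. Nothing crashes at the moment of the second free; the state just quietly stops representing anything valid (this class is purely illustrative, not real malloc internals):

```python
# Toy allocator: its internal state is a counter of live allocations.
# A double free decrements the counter twice for one allocation,
# driving it negative - the state is now inconsistent.
class ToyAllocator:
    def __init__(self):
        self.live = 0          # number of currently allocated objects

    def malloc(self):
        self.live += 1
        return object()

    def free(self, obj):
        self.live -= 1         # no double-free check, like C's free()

heap = ToyAllocator()
p = heap.malloc()
heap.free(p)
heap.free(p)                   # double free: no error *here*...
print(heap.live)               # ...but the counter is now -1
```

Real heap corruption is worse, since the corrupted state (freelists, chunk headers) later gets used for actual bookkeeping.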
[17:01:17 CEST] <haasn> >In 2017, an attack named "ASLR•Cache" was demonstrated which could defeat ASLR in a web browser using JavaScript.[41]
[17:01:20 CEST] <haasn> classic
[18:01:37 CEST] <zyme> if only misuse and security wasn't a concern, imagine how much faster everything would be without logical software layers, isolated exec data with software gateways, hardware based virtual environments on top of software ones and software levels working with user-based levels working with kernel based levels etc. everything being monitored by av scanners and malware scanners etc.. soon AI adaptive security vs AI based malware
[18:01:59 CEST] <haasn> misuse? security???
[18:02:13 CEST] <haasn> my main concern is buggy software tearing everything down.
[18:02:22 CEST] <haasn> imagine getting a BSOD every 3 seconds
[18:02:30 CEST] <haasn> this is what life would be like if we didn't have umpteegn security layers
[18:02:37 CEST] <haasn> s/teegn/teen/
[18:03:07 CEST] <haasn> but yeah AV software is a scam industry
[18:05:14 CEST] <Hello71> you heard it here first, ffmpeg developers admit their software is a scam
[18:05:27 CEST] <haasn> lol
[18:07:35 CEST] <zyme> true - but with frequent validation testing and the right programmed logic it'll only slow down to course correct when some bad code starts breaking things. instead now we have hardware based av+anti-malware + hardware drm trying to use real hardware to isolate itself from software.. btw I'm reminded of a couple months ago reading that the latest malware is ram-stored 100% so no av software scans can find it..
[18:31:19 CEST] <jokke> hi
[18:33:26 CEST] <jokke> i'm trying to figure out what's the easiest way to get the codec (such as "x264") for a stream of data coming over the network without having to use command line tools
[18:34:38 CEST] <jokke> my scenario: someone sends a file over http, i want to check if it's in an allowed format, if not i want to stop reading the request body and respond with a 400 error code
[18:36:55 CEST] <DHE> well ffmpeg does include API options, so you could buffer in RAM and try letting libavformat guess the type. however the HTTP spec officially requires you to read the whole body before you send the error message and some formats are not streaming-friendly or only conditionally so
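DHE's suggestion - buffer the start of the body and let the demuxer guess the type - would in practice go through libavformat's probing (e.g. via a custom AVIOContext, or PyAV from Python). As a self-contained illustration, here is a hypothetical magic-byte sniffer for a few common containers; the function name and the exact set of formats are made up for the example:

```python
# Minimal container sniffer for an upload gate. A real implementation
# would hand the buffered prefix to libavformat's probing to identify
# the container *and* codecs; this only checks well-known magic bytes.
def sniff_container(head: bytes):
    """Guess the container format from the first bytes of a stream."""
    if len(head) >= 12 and head[4:8] == b"ftyp":
        return "mp4/mov"                     # ISO BMFF 'ftyp' box
    if head.startswith(b"\x1a\x45\xdf\xa3"):
        return "matroska/webm"               # EBML header
    if head.startswith(b"RIFF") and head[8:12] == b"AVI ":
        return "avi"
    if head.startswith(b"FLV"):
        return "flv"
    return None                              # unknown: reject the upload
```

Note that knowing the container still isn't knowing the codec ("x264" output is just H.264 in some container), which is why real probing needs the demuxer.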
[18:39:50 CEST] <jokke> DHE: are you sure baout that?
[18:39:52 CEST] <jokke> *about
[18:40:05 CEST] <jokke> i recently read that it's not required to read the whole body
[18:40:52 CEST] <jokke> https://tools.ietf.org/html/rfc7540#section-8.1
[18:42:18 CEST] <DHE> well this is HTTP v2
[18:42:29 CEST] <jokke> ah
[18:42:31 CEST] <jokke> dang
[18:42:35 CEST] <jokke> true
[18:42:46 CEST] <DHE> and I've had issues in the past when using the apache LimitRequestBody directive. might be an implementation detail but it sucked for me
[18:42:56 CEST] <jokke> i see
[18:43:01 CEST] <jokke> damn that sucks
[18:43:15 CEST] <jokke> waiting for a multi gigabyte upload to finish
[18:43:37 CEST] <jokke> just to tell "nah, i won't take that"
[18:44:22 CEST] Action: DHE is pleased with his 10gigabit servers
[18:44:39 CEST] <jokke> heh, but your client's won't be happy
[18:44:44 CEST] <jokke> *clients
[18:45:14 CEST] <jokke> waiting for the upload to finish for an hour only to see an error. That sure makes everybody's day
[18:45:48 CEST] <jokke> DHE: are you sure this also applies for Transfer-Encoding: chunked?
[18:46:09 CEST] <Foaly> uploading from a browser?
[18:46:11 CEST] <jokke> yeah
[18:46:40 CEST] <Foaly> there is definitely a way to partially upload a file, because i have seen websites that do that
[18:47:01 CEST] <Foaly> but i don't know anything about web development
[18:47:06 CEST] <Foaly> probably javascript magic
[18:47:16 CEST] <jokke> hmm
[18:47:39 CEST] <DHE> also resumable uploads, etc
[18:48:44 CEST] <DHE> or something where you upload in 1 megabyte increments or something, giving your application the chance to look at an incomplete file and veto the next 1meg chunk or such
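DHE's incremental-upload idea can be sketched as a loop that reads fixed-size increments and vetoes as soon as the buffered prefix fails validation. `validate` here is a placeholder for whatever probe you run (magic-byte check, libavformat probe, etc.); the function name and shape are an assumption for illustration:

```python
import io

CHUNK = 1 << 20  # 1 MiB increments, as in the suggestion above

def gated_upload(stream, validate):
    """Read chunk by chunk; abort early once validation fails.

    Returns (accepted, buffered_bytes). On veto, the caller stops
    reading the body instead of waiting for a multi-gigabyte upload.
    """
    buffered = b""
    while True:
        chunk = stream.read(CHUNK)
        if not chunk:
            return True, buffered            # stream finished, accepted
        buffered += chunk
        if not validate(buffered):
            return False, buffered           # veto: stop reading here

# Usage sketch: the fake stream fails the probe on its first chunk,
# so the loop stops without consuming any further data.
ok, data = gated_upload(io.BytesIO(b"BAD" + b"x" * 100),
                        lambda b: b.startswith(b"GOOD"))
```

This is the application-layer equivalent of what HTTP/1.1 can't do cleanly mid-request: the veto happens between uploads of chunks, not inside one.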
[18:48:49 CEST] <jokke> yeah
[19:01:47 CEST] <Cracki> definitely javascript magic. js gets a handle to the local file, can seek and read it any way it wants, sends stuff to the server, which can handle the data any way it needs to.
[19:02:08 CEST] <Cracki> you can chunk it, or POST/PUT/... ranges, anything goes
[19:02:32 CEST] <Cracki> "chunked" is inside the http layer, so transparent to whoever uses it
[19:04:13 CEST] <JEEB> yeh
[19:04:50 CEST] <jokke> https://lists.w3.org/Archives/Public/ietf-http-wg/2004JanMar/0044.html
[19:12:53 CEST] <zyme> nice protocol function, I would've gone straight to ethereal/wireshark + a special filter
[21:56:29 CEST] <Mista_D> Any method for FFPLAY to render HDR 4K as "intended" please?
[21:58:07 CEST] <JEEB> use a proper player that has tone mapping
[22:01:21 CEST] <remlap> mpv
[22:01:23 CEST] <Mista_D> VLC?
[22:01:29 CEST] <remlap> nope
[22:02:52 CEST] <Mista_D> remlap: thanks, it worked
[23:02:51 CEST] <haasn> VLC 3
[23:07:09 CEST] <FishPencil> Not sure if this is ontopic here, but I have a computer vision question and IDK where to ask it
[00:00:00 CEST] --- Sat Aug 25 2018