[Ffmpeg-devel-irc] ffmpeg.log.20160805

burek burek021 at gmail.com
Sat Aug 6 03:05:01 EEST 2016


[04:40:13 CEST] <leea> I increased the volume of a video by 40x. Now I have some annoying noise fragments that also got amplified. Is it possible to get rid of those some how?
[04:40:34 CEST] <leea> I'm not sure what the right terms to search are but using some limiter and setting a floor?
[07:43:59 CEST] <Kd_user> Hello! I am planning to modify vf_convolution.c to support strided convolutions. I wanted to know how can I add new options to the convolution filter? (i.e. after -vf, I wish to add option like x_stride, y_stride)
[07:44:39 CEST] <Kd_user> I was confused about how to assign these to the filter context struct
[07:46:01 CEST] <bp0> Kd_user, https://www.ffmpeg.org/doxygen/2.0/group__avoptions.html
[07:46:28 CEST] <bp0> no, wait...  https://www.ffmpeg.org/doxygen/trunk/group__avoptions.html
[07:47:25 CEST] <bp0> see "Implementing AVOptions"
[07:48:06 CEST] <Kd_user> @bp0 Thanks for the reply. This was very helpful :)
[07:48:25 CEST] <bp0> np
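Once the AVOption entries exist, the new options become reachable from the -vf string like any other filter option. A sketch of the intended usage — note that x_stride and y_stride are the asker's proposed names and do not exist in the stock convolution filter:

```shell
# Hypothetical: x_stride/y_stride only work after the corresponding
# AVOption entries are added to vf_convolution.c's options array.
# The first positional argument is the standard 0m kernel matrix.
ffmpeg -i input.mp4 \
       -vf "convolution='0 -1 0 -1 5 -1 0 -1 0':x_stride=2:y_stride=2" \
       output.mp4
```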
[08:23:14 CEST] <Mandevil> Hm, ffmpeg -h type=<codec> fails with "Unknown help option 'type'."
[08:23:23 CEST] <Mandevil> But this exact syntax is in the help...
[08:23:25 CEST] <Mandevil> What gives?
[08:26:35 CEST] <Mandevil> Oh, it's supposed to be encoder,decoder,...
[08:32:28 CEST] <Mandevil> Hm, can I conver colorspaces with ffmpeg?
[08:32:31 CEST] <Mandevil> convert
[08:34:38 CEST] <furq> Mandevil: https://ffmpeg.org/ffmpeg-filters.html#format-1 and/or https://ffmpeg.org/ffmpeg-filters.html#colorspace
[08:34:47 CEST] <furq> or you can do both with zscale if you have it
[08:34:47 CEST] <Mandevil> I found -pix_fmt
[08:34:55 CEST] <furq> -pix_fmt is just an alias for -vf format
[08:35:59 CEST] <Mandevil> Looks fairly involved.
[08:36:19 CEST] <furq> what do you actually want to do
[08:37:02 CEST] <Mandevil> I want to get dnxhd encoded video for testing purposes.
[08:37:09 CEST] <Mandevil> But my source is incompatible.
[08:37:14 CEST] <Mandevil> But I'll just get better source.
[08:37:59 CEST] <furq> -vf format=yuv422p
[08:38:03 CEST] <furq> looks like it should work
[08:38:28 CEST] <Mandevil> Hm, 5DmkII file is not compatible with dnxhd either.
[08:38:30 CEST] <Mandevil> WTF.
[08:39:04 CEST] <Mandevil> BTW, colorspace is yuvj420p... what's that 'j' in the colorspace name?
[08:40:07 CEST] <furq> jpeg
[08:40:27 CEST] <furq> apparently you need to set the bitrate to one of the approved bitrates
[08:40:27 CEST] <Mandevil> Uh?
[08:40:45 CEST] <Mandevil> Why does 5DmkII footage have this colorspace?
[08:41:12 CEST] <furq> mjpeg?
[08:41:43 CEST] <Mandevil> furq: No, it's AVC.
[08:41:54 CEST] <Mandevil> furq: And yes, setting the bitrate makes it work fine.
[08:42:42 CEST] <furq> maybe it's using mjpeg internally and converting that to avc
[08:43:29 CEST] <Mandevil> That would be odd. Doesn't it just denote full range instead of 16-235?
[08:43:50 CEST] <furq> that makes more sense
[08:44:01 CEST] <furq> i guess it just saves a colourspace conversion on the hardware encoder
[08:45:48 CEST] <Mandevil> Yeah, that indeed makes sense. Video on 5d is fairly quirky, it's clearly an afterthought.
[08:46:21 CEST] <Mandevil> VLC uses ffmpeg for en/decoding?
[09:00:33 CEST] <Mandevil> ffms2 reads dnxhd video just fine it seems. Cool.
[09:04:27 CEST] <Kd_user> Hello! I had a question about adding a new filter to libavfilter. I am planning to compile the entire ffmpeg library. Based on http://ffmpeg.gusari.org/viewtopic.php?f=25&t=1437, I created the vf_newfilter.c file, registered the filter by adding a line in the allfilters.c file.
[09:04:53 CEST] <Kd_user> However, ./configure and make do not seem to recognize the new filter... and don't compile it
[09:05:36 CEST] <Kd_user> Is it necessary to explicitly add a line to the Makefile as mentioned in the link? (I had hoped not, as eventually, I am aiming to submit the filter )
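Yes — libavfilter's build system lists every filter object explicitly, so a new filter needs both a Makefile entry and the allfilters.c registration before ./configure will pick it up. A sketch following the vf_newfilter.c naming from the question (the macro form is the ffmpeg 3.x-era convention):

```
# libavfilter/Makefile
OBJS-$(CONFIG_NEWFILTER_FILTER) += vf_newfilter.o

# libavfilter/allfilters.c
REGISTER_FILTER(NEWFILTER, newfilter, vf);
```

After adding both lines, re-run ./configure so the CONFIG_NEWFILTER_FILTER symbol is generated.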
[09:21:45 CEST] <Mandevil> "No such filter: 'in_range' Error opening filters!"
[09:21:47 CEST] <Mandevil> Uh what?
[09:22:08 CEST] <Mandevil> I use this ... -vf "in_range=full:out_range=mpeg"
[09:27:34 CEST] <Mandevil> OK, it must be -vf "scale=in_range=full:out_range=mpeg"
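Spelled out as a full command line for the dnxhd case discussed above (filenames and the bitrate are placeholders — dnxhd only accepts specific bitrates per resolution/framerate, and ffmpeg prints the valid list when the value is wrong):

```shell
# Convert full-range (yuvj420p) input to limited/mpeg range 4:2:2,
# then encode dnxhd at one of its approved bitrates.
ffmpeg -i input.mov \
       -vf "scale=in_range=full:out_range=mpeg,format=yuv422p" \
       -c:v dnxhd -b:v 115M \
       output.mov
```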
[10:28:58 CEST] <Guest24440> Hello guys! One quick question, I'm not quite sure does ffmpeg 3.1 have support for H265 (Intel HW acceleration)?
[10:41:52 CEST] <jkqxz> Guest24440:  Yes.  It's usable via qsv/libmfx on Windows and via vaapi on Linux.
[10:48:31 CEST] <Mandevil> Wonder how I find out what -profile accepts?
[10:49:45 CEST] <vadim27> Hi! I'm trying to use the ffmpeg 3.1 API for encoding/decoding webcam video with the h264 codec. Where can I find some examples of encoding/decoding code? I have some problems with creating an AVCodecContext. For some reason I must fill the fields of AVCodecContext manually. I need examples. Thanks!
[10:58:07 CEST] <Soelen> hello everyone, want to create a video based on a sequence of png images
[10:58:59 CEST] <Soelen> they start at 0257.png and end at 2000.png, when I use the command with the parameters "ffmpeg -f image2 -i %04d.png -start_number 0257 video.webm"
[10:59:40 CEST] <Soelen> I get "Could find no file with path '%04d.png' and index in the range 0-4, %04d.png: No such file or directory", though I am in the right directory. Any ideas what I am doing wrong? : P
[11:02:01 CEST] <jkqxz> Soelen:  Put "-start_number 257" before the "-i".
[11:04:45 CEST] <Soelen> jkqxz: oh yeah that was the trick, thank you!
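The corrected command in full — the key point being that -start_number is an input option for the image2 demuxer, so it must come before -i:

```shell
# Read 0257.png .. 2000.png as an image sequence and encode to webm.
ffmpeg -f image2 -start_number 257 -i %04d.png video.webm
```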
[12:50:11 CEST] <mrjoker> what is the difference between choosing "main" as profile  and "none" as profile
[13:01:06 CEST] <furq> mrjoker: the profile is normally automatically selected based on the other encoder parameters
[13:01:39 CEST] <mrjoker> i don't understand
[13:01:53 CEST] <mrjoker> i see "main" and "none" for x265
[13:02:02 CEST] <furq> oh, x265
[13:03:31 CEST] <furq> apparently none allows you to generate noncompliant streams
[13:03:46 CEST] <furq> http://x265.readthedocs.io/en/default/cli.html#cmdoption--allow-non-conformance
[13:04:50 CEST] <mrjoker> is that a good thing or bad thing
[13:05:00 CEST] <furq> "Compliant HEVC decoders may refuse to decode such streams."
[13:05:01 CEST] <furq> you tell me
[13:05:50 CEST] <mrjoker> no idea
[13:59:28 CEST] <soulshock> does ffmpeg on windows detect hyper-threading cores as real cores or hyper-threading cores?
[14:05:06 CEST] <BtbN> There is no distinction between HT and real cores for applications.
[14:27:36 CEST] <soulshock> ok
[14:28:28 CEST] <soulshock> I'm thinking the automatic calculation of number of threads (cores * 1.5) would be wrong with hyper threading
[14:29:34 CEST] <BtbN> no, why would it?
[14:30:04 CEST] <BtbN> The point of HT is to use all of the virtual cores, so the CPU can fill its pipelines more efficiently.
[14:30:05 CEST] <DHE> why not give each thread something to do? an idle thread is wasted capacity
[14:30:24 CEST] <DHE> and modern intel CPUs are actually quite good at hyperthreading
[14:31:26 CEST] <soulshock> my thought was that the virtual cores have very little power compared to real cores. so calculating each virtual core as a real core would create too big a thread count
[14:31:36 CEST] <soulshock> since each virtual core cannot give 100% same calculation power as real cores
[14:32:14 CEST] <DHE> no. more like a core has a large amount of hardware available (cache, FPU, integer unit) but doesn't really use all of them simultaneously. so hyperthreading lets 2 threads use the same hardware and take advantage as they go
[14:32:36 CEST] <DHE> 1 thread might stall for a few cycles because of complex FPU work, but the other thread can use the integer unit and work with pre-cached data
[14:33:00 CEST] <DHE> sure, they butt heads from time to time, but +50% performance overall is a win
[14:33:20 CEST] <soulshock> yeah I'm going to do some benchmarking with HT on and off
[14:33:33 CEST] <soulshock> but I wanted to understand a bit better first, hence my question. cheers
[14:34:21 CEST] <DHE> if you're not running a lot of threads anyway, HT off can be of benefit since it prevents accidental sharing of a core while leaving other cores completely idle. if you're actually running the CPU hard, hyperthreading will win
[14:35:06 CEST] <DHE> there is a bad situation that can arise with cache sharing, but ffmpeg shouldn
[14:35:21 CEST] <DHE> shouldn't have a problem if it's 1 instance
[14:36:06 CEST] <soulshock> it's 1 instance with 3-9 result files simultaneously. different resolutions and bitrates
[14:46:51 CEST] <furq> SouLShocK: i take it you mean actually disabling HT rather than running -threads 4
[14:47:11 CEST] <furq> since the latter probably won't use four hardware cores
[14:50:31 CEST] <SouLShocK> furq yes
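The heuristic soulshock refers to — roughly logical cores times 1.5 — can be sketched in shell arithmetic. The 1.5 factor is the commonly cited approximation from this thread, not a value read out of ffmpeg's source:

```shell
# Approximate auto thread count: logical (HT) cores * 1.5,
# done in integer arithmetic.
cores=8                      # logical cores reported by the OS
threads=$(( cores * 3 / 2 ))
echo "$threads"              # 12 for an 8-thread CPU
```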
[15:38:00 CEST] <Kadigan_KSB> Hi. I'm in a bit of a bind, and Google is not helping me much. I need to convert 1080p 4:2:0 material to XDCAM HD 50Mbps 4:2:2 50i TFF in a MOV container. Tips?
[15:41:41 CEST] <kepstin> huh, according to the table on wikipedia, XDCAM MPEG HD422 in mp4/mov isn't a valid combination
[15:42:13 CEST] <SouLShocK> hm that's interesting
[15:42:21 CEST] <SouLShocK> we use XDCAM MPEG HD422 in MOV all the time
[15:42:22 CEST] <kepstin> ffmpeg should be able to let you do that anyways if you want
[15:42:24 CEST] <SouLShocK> have been for years
[15:43:38 CEST] <SouLShocK> Kadigan_KSB: take a look at https://github.com/bcoudurier/FFmbc perhaps that can do what you want
[15:43:48 CEST] <SouLShocK> under features: Create XDCAM HD422 files in .mov or .mxf
[15:43:59 CEST] <kepstin> but yeah, it's just mpeg2 video in an mp4/mov container, nothing really exotic
[15:45:13 CEST] <kepstin> hardest part is probably gonna be deciding how to convert the progressive input video to 50i
[15:45:25 CEST] <kepstin> (what framerate is the input?)
[15:45:28 CEST] <Kadigan_KSB> 25p
[15:46:05 CEST] <kepstin> ok, then that's simple, you don't do anything to the video, just set some flags when encoding so the decoder thinks it's interlaced :/
[15:46:25 CEST] <kepstin> equivalent to a so-called "2:2" pulldown
[15:46:34 CEST] <Kadigan_KSB> Will that pass by-hand scrutiny at delivery?
[15:46:56 CEST] <Kadigan_KSB> (where in the specs they specifically say it's illegal to encode progressive)
[15:47:21 CEST] <furq> you're not encoding progressive
[15:47:24 CEST] <kepstin> well, basically what you do is encode a progressive video using the interlaced coding modes
[15:47:26 CEST] <furq> you're just not encoding 50 fields per second
[15:47:34 CEST] <kepstin> it's a bit less efficient, but should work fine
[15:47:39 CEST] <furq> much like a "progressive" DVD
[15:48:06 CEST] <Kadigan_KSB> I'm really in a bind here, our client delivered the specific outlets today, and it has a Monday delivery deadline :/
[15:48:31 CEST] <furq> there isn't any other solution if it's a 25p source
[15:48:33 CEST] <Kadigan_KSB> (and I have absolutely NO experience with interlaced material delivery whatsoever, much less XDCAM since I do HD in ProRes)
[15:50:32 CEST] <SouLShocK> wouldn't the tinterlace filter work?
[15:52:16 CEST] <furq> i don't think that will produce a watchable video
[15:53:48 CEST] <Kadigan_KSB> So how do I tell it to output an interlaced video file?
[15:54:00 CEST] <pointer> are there any good hardware guidelines on how much hardware I _should_ need to transcode a 41Mbps 4k HEVC stream down to 15-25Mbps in realtime?  we were seeing 5-6fps on a VM with 12 cores from a 2cpu box w/E5-2680 v3 @ 2.50GHz
[15:55:19 CEST] <Kadigan_KSB> Something like -flags +ildct? -top 1?
[15:55:55 CEST] <furq> -flags +ilme+ildct
[15:56:15 CEST] <furq> that's for x264, i'm not totally sure if mpeg2 is the same
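mpeg2video does accept the same interlaced-coding flags. A hedged sketch of the whole XDCAM-style command discussed above — every value here (bitrate, buffer size, filenames) is a placeholder to be checked against the actual delivery spec:

```shell
# Encode a 25p source with interlaced coding tools, top field first,
# 4:2:2 MPEG-2 at 50 Mbps in a .mov container.
ffmpeg -i input_1080p25.mov \
       -vf "format=yuv422p" \
       -c:v mpeg2video -b:v 50M -minrate 50M -maxrate 50M -bufsize 4M \
       -flags +ilme+ildct -top 1 \
       output.mov
```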
[15:56:22 CEST] <BtbN> pointer, i don't think encoding 4K in real time is possible at all right now.
[15:56:37 CEST] <BtbN> unless you use some hardware encoder.
[15:56:48 CEST] <Kadigan_KSB> furq: I've seen it in two examples on the web already
[15:56:51 CEST] <pointer> BtbN: that makes me sad :-\
[15:56:56 CEST] <Kadigan_KSB> but I'm getting an error opening the encoder
[15:57:06 CEST] <DHE> BtbN: while I haven't tried it, benchmarks of libx264 on my E5-2698 at 1080p suggest it's doable.
[15:57:10 CEST] <DHE> my wallet, however... :/
[15:57:18 CEST] <BtbN> DHE, yes, with x264 it's no problem.
[15:57:26 CEST] <BtbN> With x265 though..
[15:57:32 CEST] <pointer> and 4k
[15:57:53 CEST] <DHE> oh, I'm mixing up the two conversations...
[16:00:47 CEST] <pointer> BtbN: yeah, we're trying a hardware encoder too...
[16:04:00 CEST] <jkqxz> A desktop Skylake can do it, but the output quality is not good.
[16:10:26 CEST] <pointer> jkqxz: it looks like what I have is 2x 12 core haswell-EPs
[16:16:26 CEST] <pointer> is there any other software encoder that may be able to handle it if I throw enough hardware at it or is it really only possible at this point with an asic/fgpa solution?
[16:21:07 CEST] <jkqxz> Why not a graphics card or non-server Intel processor?  (A cheap Skylake Core i3 can do 60fps 4K transcode in Quick Sync, I believe recent NVidia cards can too.)
[16:24:06 CEST] <pointer> jkqxz: I'm willing to try stuff... I need to take a 41Mbps 4k hevc multicast hevc stream and output a 25Mbps multicast stream...
[16:24:25 CEST] <jkqxz> The tradeoffs to make software faster bring it down to the quality of the hardware encoders anyway, so using a huge Xeon like that is probably just wasting money and power.
[16:28:20 CEST] <Venti^_> pointer: I'm able to encode a pretty difficult 60fps sequence with an E5-2797 at 47fps using superfast at 20Mb/s, it hardly uses any cpu
[16:29:21 CEST] <Venti^_> with libx264
[16:34:00 CEST] <rmoorelxr> is using ffmpeg to create rtmp stream that is then piped to red5 a thing that ppl do?
[16:41:44 CEST] <Venti^_> realtime 4k hevc encoding is also possible, with a xeon, but you have to turn off so many features you are probably better off using h.264
[16:46:26 CEST] <rmoorelxr> is it possible to stream a file as it is being written
[16:46:53 CEST] <c_14> rmoorelxr: depends on the file format
[16:47:40 CEST] <rmoorelxr> h264
[16:47:57 CEST] <c_14> Raw h264 bitstream?
[16:48:44 CEST] <rmoorelxr> i get a raw "h264" (in air quotes) byte stream from third party
[16:48:56 CEST] <c_14> that should work
[16:49:53 CEST] <rmoorelxr> do i have to reconstruct the stream every so often if i want the new content to stream?
[16:50:11 CEST] <rmoorelxr> or can i somehow pipe the bytes directly
[16:50:26 CEST] <c_14> as long as you're getting new data at least as fast as you're streaming it you should be fine
[16:50:39 CEST] <pointer> Venti^_: I can get another CPU if that'll make the difference
[16:52:34 CEST] <rmoorelxr> so if i build a stream from file being written, and then more file is written, the new content will be part of same stream?
[16:52:55 CEST] <c_14> as long as you never hit EOF, yes
[16:53:46 CEST] <rmoorelxr> what dark magic is this
[16:54:00 CEST] <Venti^_> pointer: I think your hardware should easily be fast enough
[16:54:41 CEST] <Venti^_> you just have a bottleneck somewhere... maybe hevc decoding, or you are encoding with too high of a preset
[16:57:40 CEST] <pointer> Venti^_: I'm open to suggestions on what switches/options to use :)
[16:58:50 CEST] <pointer> Venti^_: I can't change the source from 265
[16:58:52 CEST] <Venti^_> first I would check if you can decode the HEVC stream at the speed you need
[16:59:15 CEST] <Venti^_> if you can't, maybe use a GPU to decode it
[17:00:00 CEST] <Venti^_> and maybe to also encode... it's probably the easiest way if you have money to spend
[17:01:42 CEST] <furq> pointer: -preset ultrafast
[17:08:57 CEST] <pointer> Venti^_: what GPU would you recommend?
[17:09:20 CEST] <furq> depends how many streams you need to transcode
[17:09:46 CEST] <furq> iirc the quadros nvenc engines aren't any faster, they just don't have the artificial restriction on number of streams
[17:09:48 CEST] <pointer> IIRC, it's 1 32-25Mbps 4k HEVC video stream and 2 audio PIDs...
[17:09:52 CEST] <furq> quadros'
[17:10:16 CEST] <furq> you'll need some recent nvidia card to do 4k hevc though
[17:13:48 CEST] <Venti^_> something like 960 should be more than enough... just get the most expensive one you can afford
[17:16:41 CEST] <ozette> if a video is 24fps and its duration is 10 seconds, does that mean there's 240 frames in the video?
[17:17:01 CEST] <furq> one would hope so
[17:17:15 CEST] <ozette> hope?
[17:17:29 CEST] <Venti^_> looks like gtx1080 promises 10bit 4K 60fps, which is pretty nice
[17:17:42 CEST] <ozette> are there exceptions?
[17:19:06 CEST] <kadigan_> Say, can I use .mov to wrap MXF?
[17:19:14 CEST] <furq> well for starters you probably actually have a 23.97fps video
[17:19:22 CEST] <kadigan_> ie. ffmpeg -i file.mxf -???? output.mov ?
[17:19:41 CEST] <furq> kadigan_: -c copy
[17:19:47 CEST] <furq> i don't know if that'll work but otherwise you're not wrapping anything
[17:21:57 CEST] <Venti^_> ozette: for interlaced video you might have double the number of pictures, but whether those are counted as frames depends on your definition of frame
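The nominal arithmetic behind ozette's question is just fps times duration; the caveats above (23.97 fps sources, interlaced field counting) are why the real number can differ:

```shell
# Nominal frame count = fps * duration.
# Real files are often 24000/1001 fps, so the true count may be off by a frame or two.
fps=24
duration=10
frames=$(( fps * duration ))
echo "$frames"   # 240
```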
[17:22:09 CEST] <kadigan_> Well, the process did produce a .mov file, and it does seem to play in QTP
[17:22:25 CEST] <kadigan_> -and- mediainfo claims it's XDCAM HD422, so...
[17:22:41 CEST] <ozette> i am wondering about the definition of frame in ffmpeg so that's why i'm bothering at all
[17:23:41 CEST] <ozette> i'd like to specify the amount of segments i want when segmenting a video file
[17:24:16 CEST] <ozette> i have a 19 seconds long video, and tried several things, but the output stays the same: always 3 segments
[17:24:41 CEST] <ozette> i was told that maybe it's because of the 'frames'
[17:26:36 CEST] <Kadigan|Work> Nope, file rejected for technical reasons.
[17:26:42 CEST] <furq> ozette: if you're cutting without reencoding then it's because of the keyframes
[17:26:56 CEST] <furq> there's no way to predict where those are from the duration alone
[17:27:32 CEST] <ozette> are keyframes and frames two different things?
[17:27:36 CEST] <furq> there's also not much you can do about it even if you do know
[17:27:40 CEST] <furq> and yes
[17:27:58 CEST] <furq> a segment has to start on a keyframe
[17:28:50 CEST] <ozette> i see
[17:29:00 CEST] <ozette> because i have a 10 second long video which segments into 5 parts
[17:29:17 CEST] <ozette> (m3u8)
[17:29:36 CEST] <furq> if you want to have exact control then you need to reencode the video
[17:29:57 CEST] <ozette> would it help if i read about keyframes in mpeg 4?
[17:30:02 CEST] <furq> maybe
[17:30:36 CEST] <ozette> i was hoping i could just say something like ffmpeg -i file.mp4 -segment 10 out.m3u8
[17:30:50 CEST] <furq> well you can do that but it'll reencode
[17:31:26 CEST] <furq> https://en.wikipedia.org/wiki/Video_compression_picture_types
[17:31:27 CEST] <ozette> oh
[17:33:09 CEST] <ozette> not sure how ffmpeg can encode for me
[17:33:27 CEST] <ozette> is it a flag?
[17:34:46 CEST] <furq> the command you just pasted will reencode to whatever it thinks is the best codec for .m3u8
[17:34:49 CEST] <furq> which is presumably h264
[17:35:04 CEST] <furq> you need to specify -c copy to copy the streams from the input file
[17:37:22 CEST] <ozette> ah.. i've seen people use -c copy
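Putting the two points together — stream copy plus a target segment length. With -c copy the cut points still snap to keyframes, which is why the segments only approximate the requested duration:

```shell
# HLS output, copying streams without re-encoding;
# cuts land on the nearest keyframe, so segment lengths are approximate.
ffmpeg -i file.mp4 -c copy -f hls -hls_time 10 -hls_list_size 0 out.m3u8
```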
[17:48:06 CEST] <pointer> furq and Venti^_: it looks like this is main10... so 7th gen should work
[18:10:32 CEST] <pointer> furq and Venti^_: I'm working on getting a skylake cpu to try first...and also getting a 1070
[18:10:49 CEST] <pointer> furq and Venti^_: so the thought is that with GPU offload of the decode... it _may_ work?
[18:20:35 CEST] <jkqxz> Skylake only has 8-bit encode/decode.  Does the output have to be 10-bit?  (Changing the format there might be a way of making the software encode significantly faster, too.)
[18:27:03 CEST] <pointer> jkqxz: the input is 10bit... I'm not sure if the output has to be, I'll check
[18:29:15 CEST] <pointer> jkqxz: the documentation says the the output can be Main or Main10
[18:36:14 CEST] <jkqxz> In that case you could try adding "-pix_fmt yuv420p" to your libx265 command-line and see if that makes it significantly faster (so that it is decoding as 10-bit but encoding as 8-bit).
[18:36:47 CEST] <pointer> jkqxz: yuv420p?
[18:37:30 CEST] <jiiimstr> hi. is there a list of supported video cards by ffmpeg? i think my card is left out and not supported - having confirmation would be nice (it's hauppauge colossus 1 / pci-e card)
[18:38:30 CEST] <pointer> jkqxz: googling
[18:41:30 CEST] <pointer> jkqxz: will that change it to 420p? that's what the documentation seems to indicate...
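jkqxz's suggestion spelled out as a command (filenames and bitrate are placeholders): -pix_fmt yuv420p forces the encode side down to 8-bit 4:2:0 even though the source decodes as 10-bit Main10, which is where the speedup would come from.

```shell
# Decode 10-bit HEVC, downconvert to 8-bit 4:2:0, re-encode fast.
ffmpeg -i input_main10.ts \
       -pix_fmt yuv420p \
       -c:v libx265 -preset superfast -b:v 25M \
       output.ts
```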
[18:43:44 CEST] <c_14> jiiimstr: support for what?
[18:48:16 CEST] <jiiimstr> c_14: i'm trying to figure out how to take that RTMP feed from my setup (xsplit pro+colossus) and pass it thru something (ffmpeg) to transmux into HLS.. this is over my field of knowledge by a large margin.. sorry for using wrong terms
[18:48:40 CEST] <c_14> jiiimstr: that shouldn't use your gpu at all?
[18:48:59 CEST] <c_14> ffmpeg doesn't use your gpu unless you try using something like nvenc and support for that depends on the card and your drivers
[18:51:32 CEST] <jiiimstr> i never mentioned a gpu.. is a gpu a cause? capture cards have onboard encoding, i was under the impression only the CPU matters.. well, more stuff to confuse me. i think i just need a new capture card that has software that directly outputs HLS so I don't have to mess with that (sorry) incomprehensible stuff
[18:53:06 CEST] <DHE> depends on the card. it might not output in a codec that's usable. some will output h264 directly, some don't
[18:53:19 CEST] <DHE> h264 direct is good, hls splitting is easy
[18:54:59 CEST] <c_14> jiiimstr: I misunderstood. I read video card and thought you meant gpu instead of video capture card.
[18:57:47 CEST] <jiiimstr> my card is old and has proprietary drivers, it only works in xsplit pro :'( is someone in a position to recommend a 1080p60fps video capture card that will be best friends with ffmpeg please?
[18:58:43 CEST] <jiiimstr> drivers haven't been updated in 3 years.. just for the record, it's an hauppauge colossus 1 pci-e
[18:58:53 CEST] <BtbN> If you need Linux Support, there is pretty much only blackmagic.
[18:59:11 CEST] <jiiimstr> i would rather stay on win7 if it is possible
[19:00:50 CEST] <pointer> so I'm going to try to do the decode with the gtx 1070... how should I go about the encode...? the VA-api on linux or quicksync on windows or linux? or does it matter?
[19:04:02 CEST] <furq> those are for intel
[19:04:04 CEST] <furq> you want nvenc
[19:06:27 CEST] <pointer> furq: can I do both the encode and decode on the card simultaneously?
[19:07:24 CEST] <furq> nvenc is an encoder
[19:07:45 CEST] <furq> i think you can use dxva2 for decoding on nvidia on windows
[19:07:54 CEST] <furq> vaapi doesn't work according to the wiki
[19:10:50 CEST] <furq> oh nvm apparently there's a cuvid decoder in ffmpeg now
[19:12:01 CEST] <furq> you'll need a build with --enable-cuvid and --enable-nvenc
[19:12:34 CEST] <BtbN> Enabling CUDA makes ffmpeg non-redistributable, so you'll have to build it yourself if you want that.
[19:13:03 CEST] <furq> i can't imagine there'd be any builds with that anyway
[19:13:06 CEST] <furq> it looks pretty new
[19:26:51 CEST] <pointer> is there a way to make ffmpeg leave the PMT alone?  it looks like it's munging it
[19:35:22 CEST] <pointer> of slight interest... we were able to play back some video with VLC and get it through the downstream system successfully... we think it's because VLC wasn't munging the PMT/PIDs
[19:38:05 CEST] <pointer> furq: am I going to have better luck going this on windows then?
[19:38:34 CEST] <furq> cuvid should work on linux so it'll make no difference
[19:38:42 CEST] <furq> i assume you'll get better results with that than with dxva2
[19:38:57 CEST] <pointer> kk, I'll get linux on this box then
[19:39:42 CEST] <pointer> I have a skylake cpu now w/16GB memory...and someone will be back in around an hour with a GTX 1070 for me
[19:40:05 CEST] <furq> you'll want vaapi for encoding on intel as well
[19:41:16 CEST] <pointer> furq: I'm a little confused... am I going to use nvenc, or vaapi or both to encode? vaapi is the intel "open" api, right?
[19:41:50 CEST] <furq> vaapi is the easiest way to encode with intel quicksync
[19:41:58 CEST] <furq> nvenc is the nvidia hardware encoder
[19:42:12 CEST] <furq> and cuvid is the nvidia hardware-accelerated decoder
[19:42:56 CEST] <furq> the skylake will only make a difference if you're planning on using quicksync (unless you managed to find a 12-core skylake)
[19:43:49 CEST] <furq> if you just want to use the 1070 then don't bother with vaapi
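With a self-built ffmpeg (configured with --enable-cuvid and --enable-nvenc, as noted above), the decode-on-GPU, encode-on-GPU pipeline looks roughly like this. hevc_cuvid/hevc_nvenc were the 2016-era component names; exact options depend on the build and driver:

```shell
# Decode HEVC on the GPU (cuvid), re-encode on the same GPU (nvenc),
# pass the audio through untouched.
ffmpeg -c:v hevc_cuvid -i input_41mbps.ts \
       -c:v hevc_nvenc -b:v 25M \
       -c:a copy \
       output.ts
```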
[19:45:45 CEST] <pointer> furq: ubuntu 16.04 sound good?
[19:45:53 CEST] <furq> i guess
[19:46:01 CEST] <furq> you'll be compiling ffmpeg anyway so it doesn't make much difference
[19:48:47 CEST] <pointer> furq: yeah, I just figured it would be easier to grab anciallary libraries I needed on ubuntu
[19:49:07 CEST] <pointer> furq: s/anciallary/ancillary/
[19:49:07 CEST] <furq> i prefer debian but ubuntu is probably fine
[19:54:00 CEST] <jkqxz> The libva in Ubuntu 16.04 is slow on Skylake - large performance bugs have been fixed since that release.  (Ignore that if you only want to try nvidia.)
[20:37:07 CEST] <pointer> well... crud... the zeranoe ffmpeg builds don't have cuvid in them
[20:38:16 CEST] <pointer> or is that part of libavcodec?
[20:39:04 CEST] <furq> you'll need to compile it yourself
[20:39:10 CEST] <furq> the cuda stuff isn't redistributable
[20:50:41 CEST] <danimal> I'm trying to transmux an mpeg-4 to an mpegts, but I keep getting dts<pcr errors unless I set the mux rate to 80% overhead or above?
[21:50:28 CEST] <Dresk|Dev> So I'm compiling ffmpeg 2.8.6 through VS2015, I made my own solution file and configured options via config.h, etc.  For the time being I'm playing around with static linking, since we use a very limited feature set of ffmpeg and it really doesn't add much to our binary size.  However, we can't enable Whole Program Optimization on ffmpeg due to that delaying optimizations until the linker stage, which means ffmpeg won't
[21:50:28 CEST] <Dresk|Dev> compile properly due to its dependency on DCE.  Any ideas?
[21:52:03 CEST] <shincodex> WRITE YOUR CRAP IN C SHARP FFMPEG
[21:52:07 CEST] <shincodex> C# is taking over the world
[21:52:35 CEST] <shincodex> Whole program optimization is suck
[21:52:39 CEST] <shincodex> use yasm.exe
[21:52:51 CEST] <shincodex> and allow ffmpeg to build their 12-year-old assembly
[21:53:15 CEST] <shincodex> cause -flto doesn't seem to make a difference
[21:53:24 CEST] <shincodex> though i have seen visual studios whole program optimization win wonders
[21:53:30 CEST] <shincodex> cause windows defeats shit linux os
[21:53:31 CEST] <corrideat> Hello. Do you know how can I convert DSF (DSD stream file) to DSDIFF? ffmpeg supports DSD to PCM conversion, apparently, but not between DSD formats
[21:53:47 CEST] <Dresk|Dev> That's some, difficult to follow advice
[21:54:08 CEST] <furq> Dresk|Dev: it's best not to listen to anything shincodex says
[21:54:23 CEST] <Dresk|Dev> furq: Well I'll take it in stride
[21:55:07 CEST] <shincodex> ^
[21:55:17 CEST] <shincodex> im going mad
[21:55:18 CEST] <shincodex> right now
[21:55:44 CEST] <shincodex> literally cause of .net taking over the world
[21:56:56 CEST] <Dresk|Dev> Has anyone done static linking of ffmpeg and had success if you only need a few codecs and demuxers?  I'm not sure if I even wanna pursue getting Whole Program Optimization to work with DCE if it's advocated against for good reason
[21:57:23 CEST] <shincodex> I have
[21:57:40 CEST] <shincodex> I have a library where i use only mjpeg/rstp/http/h264
[21:57:46 CEST] <shincodex> turned off everything else
[21:58:14 CEST] <shincodex> avcodec, avformat and the other 2 are statically compiled in, and i make a C++ wrapper to use the junk, and it's one .dll (or .so for linux)
[21:58:28 CEST] <shincodex> but then i'm using mingw on the windows side
[22:01:09 CEST] <rmoorelxr> who doesnt love .net
[22:25:39 CEST] <Dresk|Dev> How bad is ffmpeg 3.x compared to 2.x in terms of API?  I mean are we looking at hours or days of work due to changes, or just some fundamentals?
[22:54:22 CEST] <rmoorelxr> how to stream a file thats being written?
[22:55:33 CEST] <rmoorelxr> i can pipe it wowza but it stops at EOF
[23:03:47 CEST] <BtbN> ffmpeg -re -i thatfile.mkv rtmp://whatever
[23:03:59 CEST] <BtbN> + -c copy
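BtbN's two lines combined into one command (URL and filename are placeholders from the chat): -re throttles reading to realtime so the stream doesn't outrun the file being written, and -c copy avoids re-encoding.

```shell
ffmpeg -re -i thatfile.mkv -c copy -f flv rtmp://whatever
```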
[23:40:01 CEST] <PlanC> is it possible to have FFmpeg append contents to a file instead of overwriting it?
[23:40:27 CEST] <PlanC> so right now I have "contains_content.avi"
[23:40:37 CEST] <BtbN> Well, write to stdout, and use >>
[23:40:51 CEST] <BtbN> But only very few containers actually "support" that.
[23:41:01 CEST] <PlanC> BtbN: will that really work?
[23:41:10 CEST] <BtbN> With mpegts, it should.
[23:41:11 CEST] <PlanC> won't I get all the FFmpeg info with it as well?
[23:41:32 CEST] <BtbN> But with any more complex container, it will fail horribly.
[23:41:43 CEST] <BtbN> And even for mpegts I wouldn't recommend it.
[23:41:46 CEST] <PlanC> I thought about it before but it just seems like a risky way of doing it as data could easily become corrupt
[23:42:13 CEST] <PlanC> I'm not asking for much either
[23:42:22 CEST] <PlanC> I just don't want FFmpeg to clean/overwrite the file
[23:42:49 CEST] <PlanC> http://stackoverflow.com/questions/33109284/how-to-get-ffmpeg-to-append-to-existing-ouput-file-and-not-overwrite-it
[23:43:02 CEST] <PlanC> someone asked this in october of last year with the same reply
[23:43:35 CEST] <BtbN> containers and video-streams are simply not made to work like that.
[23:47:08 CEST] <PlanC> I see
[23:47:30 CEST] <PlanC> but in a case (like mine) where I just want to have raw data appended to a file a feature like that would be super useful
[23:49:48 CEST] <BtbN> It's not a feature that could be added. It just doesn't work that way. Save to a lot of individual files, and concat them later.
[23:51:11 CEST] <PlanC> that's the only option there is I guess
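The save-individual-files-then-concat approach BtbN describes can be sketched with the segment muxer and the concat demuxer (the list-file syntax is the concat demuxer's real format; all filenames are placeholders):

```shell
# 1) Record in chunks instead of appending to one file:
#    ffmpeg -i input -c copy -f segment -segment_time 60 part%03d.ts
# 2) Later, list the chunks and concatenate without re-encoding:
printf "file 'part000.ts'\nfile 'part001.ts'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy joined.ts
```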
[00:00:00 CEST] --- Sat Aug  6 2016

