[Ffmpeg-devel-irc] ffmpeg.log.20170529

burek burek021 at gmail.com
Tue May 30 03:05:01 EEST 2017


[00:56:54 CEST] <hiihiii> hello
[00:57:09 CEST] <hiihiii> can I add chapters to an mp4?
[00:59:22 CEST] <hiihiii> I've muxed a couple videos into one and I'd like to see chapter marks indicating the end and start of what was a single video
[01:00:08 CEST] <furq> hiihiii: https://www.ffmpeg.org/ffmpeg-formats.html#Metadata-1
[01:03:57 CEST] <tezogmix> hey furq, remember the other day I was asking about that batch file with all those examples? Where do we put the "pause" line?
[01:04:31 CEST] <hiihiii> furq: ahh, that's helpful
[01:05:18 CEST] <hiihiii> but can I do it with the concat demuxer
[01:05:36 CEST] <hiihiii> or do I have to manually edit the metadata to fit my needs
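A sketch of the metadata route furq points to (file names here are placeholders, and the chapter times would have to come from the durations of the individual source videos; as far as I know the concat demuxer does not write chapter marks by itself). First a chapters file in FFMETADATA format:

    ;FFMETADATA1
    [CHAPTER]
    TIMEBASE=1/1000
    START=0
    END=300000
    title=first source video
    [CHAPTER]
    TIMEBASE=1/1000
    START=300000
    END=540000
    title=second source video

then apply it to the already-concatenated file without re-encoding:

    ffmpeg -i joined.mp4 -i chapters.txt -map_metadata 1 -codec copy joined_chapters.mp4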
[03:53:51 CEST] <GenN> According to FFmpeg help `ffmpeg -help encoder=h264_nvenc`, h264_nvenc supports RGB pixel formats (bgr0 and rgb0). I'm trying to encode to RGB, but the output is yuv420p. How do I fix this? See: https://thepasteb.in/p/wjhzcKwxzvVEAU6
[03:54:41 CEST] <GenN> NVIDIA GeForce GTX 1070, driver 382.33, Windows 10 1703
[03:55:26 CEST] <c3-Win> GenN: Did you look at the NVENC docs and confirm that NVidia supports encoding RGB h264?
[03:57:32 CEST] <c3-Win> I can tell you that NVENC supports receiving an RGB source, but it runs a shader on the GPU to convert it to YUV, and then encodes it... so I seriously doubt that NVENC supports outputting an RGB h264 file.
[04:00:02 CEST] <GenN> Nv says a lot about RGB input but not output: https://www.developer.nvidia.com/nvenc-programming-guide
[04:00:18 CEST] <GenN> So I guess there is no fix, for now
[04:01:11 CEST] <c3-Win> More like there's nothing to fix. FFMPEG can't make NVidia grow new hardware parts.
[04:05:51 CEST] <GenN> Thanks for your input. Bye
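For what it's worth, the bgr0/rgb0 entries in `ffmpeg -help encoder=h264_nvenc` describe what the encoder accepts as input, not what ends up in the bitstream; as c3-Win says, NVENC converts those to YUV internally. If the concern is colour fidelity rather than literally getting RGB out, one option worth trying (assuming the card and driver expose it) is 4:4:4 output:

    ffmpeg -i input.mov -c:v h264_nvenc -pix_fmt yuv444p output.mp4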
[05:42:36 CEST] <Exairnous> durandal_1707: thanks for your help the other day
[05:43:07 CEST] <Exairnous> furq: thanks for your help the other day
[10:08:41 CEST] <nadermx> so no one? I posted the question on super user, but it seems I'm being told it's not possible to add a picture to an mp3 with ffmpeg when sending the output to stdout
[10:08:48 CEST] <nadermx> https://superuser.com/q/1213776/535078
[10:21:25 CEST] <c_14> nadermx: the answer is probably right, someone would have to patch ffmpeg to write the id3 headers before writing the mp3
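For comparison, attaching a picture does work when the output is a regular (seekable) file, along these lines (the stream metadata values are just the usual ID3 conventions):

    ffmpeg -i audio.mp3 -i cover.jpg -map 0:a -map 1:v -c copy -id3v2_version 3 -metadata:s:v title="Album cover" -metadata:s:v comment="Cover (front)" with_cover.mp3

Pointing the same command at pipe:1 runs into the limitation c_14 describes: the ID3 data carrying the picture has to be written before the mp3 audio, which the muxer apparently cannot currently do on a non-seekable output.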
[12:00:43 CEST] <Sanqui> hi, I'm trying to convert a series of pngs into a video with the following command: `ffmpeg -framerate 1 -i %d.png output.mp4`
[12:01:18 CEST] <Mavrik> yay.
[12:01:26 CEST] <Sanqui> each png is at most 5MB but I run out of 8gb of memory quickly.  does ffmpeg try to load them into memory all at once or something?
[12:02:28 CEST] <JEEB> Sanqui: you're not rescaling, so multiple full-size frame buffers are kept in x264
[12:02:40 CEST] <JEEB> since it optimizes the encoding for compression
[12:02:58 CEST] <Mavrik> Also, "at most 5MB" doesn't actually tell anything about how big those PNGs are when decompressed :)
[12:03:48 CEST] <JEEB> yea
[12:03:53 CEST] <JEEB> but probably pretty big :P
[12:03:54 CEST] <Sanqui> sure, they're pretty huge (about 5k pixels).  maybe I should rescale then
[12:04:11 CEST] <JEEB> -vf zscale=w=blah:h=blah
[12:04:13 CEST] <Mavrik> Yeah, 5K videos probably won't play well :)
[12:04:16 CEST] <JEEB> after -i
[12:05:10 CEST] <Sanqui> JEEB: no such filter zscale, but I did -vf scale=1280:720 and that seems to run much better haha
[12:05:26 CEST] <JEEB> right, you have to build FFmpeg with zimg
[12:05:29 CEST] <JEEB> for zscale
[12:05:44 CEST] <JEEB> zimg being a high-quality scaling library
[12:05:52 CEST] <JEEB> https://github.com/sekrit-twc/zimg
[12:06:51 CEST] <Sanqui> neat, thanks!
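To put rough numbers on the memory issue: a ~5000x5000 PNG decodes to on the order of 75 MB of raw pixels (5000 x 5000 x 3 bytes), and x264 keeps a number of frames buffered for lookahead and references, so several GB go quickly. Downscaling before the encoder avoids that; a sketch of both variants (zscale only works if the build includes zimg, and -pix_fmt yuv420p is added because many players won't handle the formats ffmpeg would otherwise derive from RGB PNG input):

    ffmpeg -framerate 1 -i %d.png -vf scale=1280:720 -pix_fmt yuv420p output.mp4
    ffmpeg -framerate 1 -i %d.png -vf zscale=w=1280:h=720 -pix_fmt yuv420p output.mp4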
[12:08:56 CEST] <atomnuker> JEEB: asking someone to recompile and use zscale for casual encoding and colorspace changes is overkill
[12:09:23 CEST] <JEEB> sure
[12:09:32 CEST] <atomnuker> it's more overkill than asking them to use the colorspace filter to convert (and get correct results)
[12:09:34 CEST] <JEEB> he found the normal scale filter himself rather well :)
[12:09:58 CEST] <JEEB> well, in this case it wasn't just colorspace that was needed :3
[12:10:26 CEST] <Sanqui> I was just a bit dumb thinking I wouldn't bother with resizing ("youtube will do it for me" XD)
[12:11:17 CEST] <JEEB> well, a 64-bit binary would have happily used that RAM and finished. it would have taken quite a bit more time though, since there's a lot more image to encode
[12:11:27 CEST] <JEEB> (as long as you had enough RAM)
[13:07:14 CEST] <formruga> Hi. Is there a way to "attach" the PCR to audio packets instead of video packets? My video has a low framerate (1/5), so PCRs only come every 5 seconds.
[15:11:02 CEST] <DHE> if I understand this right, the purpose of av_interleaved_write_frame() is so that audio and video streams will have their packets reordered so that they are written in sync to each other.  Like merge-sorting each stream together before submitting to av_write_frame(). Am I right?
[15:11:11 CEST] <DHE> I ask because it doesn't seem to be doing that
[17:38:40 CEST] <jokkker87> hi everyone, i'm trying to cut an mpeg-ps into pieces for dvd authoring
[17:39:17 CEST] <jokkker87> i'm using dvdauthor, which actually has a problem if the pts of the video and audio are too far apart
[17:41:01 CEST] <jokkker87> i know that i get the delay because of the different fps of the video (which is 25FPS) and audio (which is 31.250FPS)
[17:42:05 CEST] <jokkker87> so i'm able to produce a working dvd with dvdauthor by trying a range of 4 seconds for the split (4 * 6.25)
[17:43:07 CEST] <jokkker87> this is my command to ffmpeg:
[17:43:16 CEST] <jokkker87> ffmpeg.exe -ss 02:42:27 -i .\video.mpg -ss 00:01:00 -t 00:17:23 -target pal-dvd -codec copy result.mpg
[17:45:25 CEST] <jokkker87> is there a parameter to tell ffmpeg that the delay between audio and video should not exceed a given limit?
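For reference, the two -ss options in the command above do different things: the one before -i seeks the input to 02:42:27, and the one after -i then drops roughly another minute of packets past that point before the copy starts. Nothing in this discussion confirms a direct fix for the A/V offset, but one muxer knob that may be worth trying (an assumption on the editor's part, not advice from the channel) is -avoid_negative_ts, which controls how timestamps are shifted when stream-copying:

    ffmpeg.exe -ss 02:42:27 -i .\video.mpg -ss 00:01:00 -t 00:17:23 -target pal-dvd -codec copy -avoid_negative_ts make_zero result.mpg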
[19:53:14 CEST] <Mista-D> what's an mpeg-ts/udp codec with alpha channel support, please?
[20:17:10 CEST] <ChocolateArmpits> Mista-D, what about jpeg2000 ?
[20:18:24 CEST] <Mista-D> probably would work, definitely PNG, but I was hoping for a slightly higher-efficiency video codec.
[20:19:48 CEST] <ChocolateArmpits> You'd have to supply the alpha channel as a separate video track and then combine that with the color video at the end
[20:21:18 CEST] <furq> Mista-D: vp8
[20:21:44 CEST] <ChocolateArmpits> furq,  not supported in mpegts
[20:21:56 CEST] <furq> i just tried it and ffmpeg will mux it
[20:22:01 CEST] <ChocolateArmpits> hummm
[20:22:10 CEST] <furq> what a player will make of it i don't know
[20:22:18 CEST] <Mista-D> A unique mpegts stream (:
[20:22:23 CEST] <furq> i can't think of anything else though
[20:22:42 CEST] <furq> also you're stuck with yuva420p if you do that, no 444 or rgba
[20:22:43 CEST] <Mista-D> Thanks
[20:23:36 CEST] <ChocolateArmpits> actually if you want to do pure udp streaming you just use the rawvideo format, -f rawvideo udp://
[20:23:46 CEST] <ChocolateArmpits> But the bandwidth expense will be enormous
[20:23:54 CEST] <furq> yeah that's not going to work over wan or wifi
[20:23:59 CEST] <furq> or probably 100mbit, for that matter
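A sketch of the rawvideo route (address and file names are placeholders):

    ffmpeg -i input_with_alpha.mov -pix_fmt rgba -f rawvideo udp://127.0.0.1:1234

The bandwidth warning is easy to quantify: 1920x1080 RGBA at 30 fps is 1920*1080*4 bytes * 30, about 250 MB/s or roughly 2 Gbit/s, so this only makes sense on a fast local link. The VP8 experiment furq mentions would look something like -c:v libvpx -pix_fmt yuva420p -f mpegts udp://..., with the caveat that ffmpeg being willing to mux it says nothing about players accepting it.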
[21:14:47 CEST] <nadermx> @c_14 could you maybe point me to where in the source I should look, so I can try to make a patch myself or find someone who can?
[22:04:34 CEST] <DHE> Is there a way to force audio and video sync inside a container? av_interleaved_write_frame() isn't writing in order of global dts
[22:16:21 CEST] <Mavrik> It should O.o
[22:18:21 CEST] <DHE> if I set "-c copy" it does, but if I set "-c:a aac -c:v libx264 [codecoptions]" it doesn't.
[22:18:45 CEST] <DHE> to clarify I'm using the ffmpeg CLI tool, but looking at what's wrong from an API standpoint because I had the same issue with the API as well
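One mechanism that can produce exactly this symptom (an assumption on the editor's part, not something established in the discussion): av_interleaved_write_frame() only buffers packets for interleaving up to the muxer's max_interleave_delta (10 seconds by default), so if one stream's encoder runs far ahead of the other, as can happen when libx264's delay differs a lot from the audio encoder's, packets get flushed before their counterparts arrive and end up out of global dts order. Setting the option to 0 tells the muxer to keep buffering until it has a packet for every stream. From the CLI (file names are placeholders):

    ffmpeg -i input.ts -c:a aac -c:v libx264 -max_interleave_delta 0 output.mkv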
[00:00:00 CEST] --- Tue May 30 2017

