[Ffmpeg-devel-irc] ffmpeg.log.20180829

burek burek021 at gmail.com
Thu Aug 30 03:05:02 EEST 2018


[00:01:13 CEST] <jkqxz> At which point x11grab may well just be easier, since the whole point of kmsgrab is to keep stuff on the GPU side and only use resources there because you want to use the CPU for something else.
[00:02:47 CEST] <hojuruku> Ah so I need to get prime up and running I guess. Reverse prime on the intel monitor because xrandr is making me pull my hair out and xinerama is hell too these days.
[00:03:06 CEST] <hojuruku> POLARIS10 here. RX560
[00:03:34 CEST] <jkqxz> To avoid the high-bandwidth transfer maybe you could encode on the AMD, then decode that stream on the Intel and reencode with your working setup?  (Quite possibly that's worse overall, would need testing.)
[00:03:36 CEST] <hojuruku> but x11grab chews up 15% of the CPU - KMSGRAB only does 2%
[00:04:35 CEST] <hojuruku> interesting concept with the encode -> decode -> encode option. i wonder if the intel could cope though it's only a haswell refresh.
[00:05:15 CEST] <hojuruku> i'd rather amd just fix their vaapi driver. I think it worked a little better with their gstreamer omx driver.
[00:12:47 CEST] <kainengran> Hi! I'm a bit new to compiling stuff, so sorry if it may seem too lame. I'm trying to compile ffmpeg with openh264. I get an error that it isn't found by pkg-config (I've checked that it's the latest version). I've compiled libopenh264 from the source code at openh264.org and the files are sitting in /usr/local/lib. Made sure the $PKG_CONFIG_PATH has this directory. Here's the ffbuild/config.log:
[00:12:53 CEST] <kainengran> http://termbin.com/70ac What am I doing wrong here?
[00:20:07 CEST] <jkqxz> Does "PKG_CONFIG_PATH=/usr/local/lib/pkgconfig/ pkg-config --exists --print-errors openh264" find it?
[00:26:39 CEST] <kainengran> yes
[00:28:02 CEST] <jkqxz> Is PKG_CONFIG_PATH definitely being passed to configure?
[00:30:22 CEST] <kainengran> How to be sure?
[00:30:49 CEST] <jkqxz> Put it on the command line: "PKG_CONFIG_PATH=/usr/local/lib/pkgconfig/ .../configure ..."
[00:34:41 CEST] <kainengran> That was it. Thanks from Mr. Noob :)
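(For reference, the pattern jkqxz describes is to prefix the configure invocation itself so the variable reaches the pkg-config calls configure makes; a minimal sketch, with the pkgconfig path and the openh264 enable flag assumed to match the setup above:

    PKG_CONFIG_PATH=/usr/local/lib/pkgconfig ./configure --enable-libopenh264
    make -j"$(nproc)"
)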
[01:22:40 CEST] <hojuruku> baah i get 70 mins or so from youtube before it panics and dumps me.
[01:23:28 CEST] <hojuruku> It's its new resolution-changing based on the stream feature; it thinks my screen resolution is 65536x65536 and wants me to have 27MBIT streaming capacity or it gets pissed off and hangs up on me.
[01:25:25 CEST] <hojuruku> https://i.imgur.com/M9Coy8H.png
[01:26:13 CEST] <hojuruku> I need to do battle with xrandr to get the second monitor up, or run off the intel framebuffer and use PRIME for gaming which would be interesting (not that I game much)
[01:27:02 CEST] <hojuruku> I used to have some success with obs vaapi patch but that's probably before this youtube upgrade. I'll go check that out.
[01:28:51 CEST] <hojuruku> av_interleaved_write_frame(): End of file
[01:28:51 CEST] <hojuruku> [flv @ 0x55cff0488f40] Failed to update header with correct duration.
[01:28:51 CEST] <hojuruku> [flv @ 0x55cff0488f40] Failed to update header with correct filesize.
[01:28:51 CEST] <hojuruku> Error writing trailer of rtmp://a.rtmp.youtube.com/live2/....
[02:37:41 CEST] <XorSwap> is there a way to make a video with a looping gif and an audio file so that the gif repeats for the length of the audio?
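(This one goes unanswered in the log; one common approach, sketched here with hypothetical file names, is to loop the gif input indefinitely and cut the output at the audio length with -shortest:

    ffmpeg -stream_loop -1 -i anim.gif -i audio.mp3 -shortest \
        -c:v libx264 -pix_fmt yuv420p -c:a aac out.mp4
)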
[04:00:31 CEST] <pi--> I'm trying to freeze the last frame of a video.
[04:00:41 CEST] <pi--> For some reason it's taking an incredible amount of time.
[04:00:44 CEST] <pi--> https://paste.pound-python.org/show/c5JfErb2vnTSm84XLSiG/
[04:01:06 CEST] <pi--> It takes 1min+ to process a 20 MB video
[04:02:00 CEST] <pi--> Why does it need to take so long?
[06:50:59 CEST] <kepstin> pi--: that is (by necessity since you're modifying the video content) re-encoding the video. Video encoding takes time. There are options you can use to change the speed/compression/quality tradeoffs.
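(As a rough illustration of the tradeoff kepstin mentions, assuming libx264 and hypothetical file names: the preset is the main speed/compression knob.

    # fastest encode, larger file
    ffmpeg -i in.mp4 -c:v libx264 -preset ultrafast -crf 23 out_fast.mp4
    # slower encode, better compression at the same quality level
    ffmpeg -i in.mp4 -c:v libx264 -preset slow -crf 23 out_small.mp4
)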
[13:47:02 CEST] <Nacht> Is there a difference between '-itsoffset 3' and 'setpts=PTS-STARTPTS+3/TB' ?
[14:26:37 CEST] <fling> Is there something new about mixing multiple live input sources together?
[14:27:10 CEST] <fling> It was not working a few years back when I tried to mux live feeds from multiple uvc and alsa devices into a single file.
[14:29:37 CEST] <fling> BtbN: genkernel respects properties and mounts everything properly.
[14:29:40 CEST] <fling> But not /usr :D
[15:04:47 CEST] <kepstin> Nacht: -itsoffset applies to all streams in a file, the filter only applies to selected streams (you need to separately do video, audio, and you can't do subtitles at all)
[15:54:18 CEST] <Nacht> kepstin: Ah, I thought it only applied to the video stream. cheers
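(A hedged illustration of the distinction kepstin draws, with hypothetical file names; the filter route needs setpts and asetpts applied separately and implies re-encoding:

    # container-level: shifts the timestamps of every stream in that input
    ffmpeg -itsoffset 3 -i in.mp4 -c copy out.mkv
    # filter-level: applied per selected stream, video and audio handled separately
    ffmpeg -i in.mp4 -vf "setpts=PTS-STARTPTS+3/TB" -af "asetpts=PTS-STARTPTS+3/TB" out.mkv
)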
[16:52:40 CEST] <learningc> Can ffmpeg do real time encoding on small arm processor, say a quad core A7?
[16:53:47 CEST] <JEEB> depends on its power, you can try benchmarking libx264 and openh264 with very fast settings
[16:54:02 CEST] <Foaly> just try it, i guess
[16:54:19 CEST] <JEEB> I think there's also some HW encoding support as well, but not sure how well it works and if it works with ffmpeg.c at all
[16:54:32 CEST] <JEEB> also for decoding you probably want to do hwdec if you're planning on decoding anything
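(A possible shape for the benchmarking JEEB suggests, assuming a representative sample file; -benchmark prints time and CPU usage, and "-f null -" discards the output:

    ffmpeg -benchmark -i sample.mp4 -c:v libx264 -preset ultrafast -f null -
    ffmpeg -benchmark -i sample.mp4 -c:v libopenh264 -f null -
)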
[16:55:57 CEST] <learningc> Does it take more processing power to do encoding or decoding?
[16:56:58 CEST] <Foaly> encoding
[16:58:16 CEST] <JEEB> to be honest, you can encode something in a really simple manner and if you have AVC or HEVC decoding that's most likely going to be more processing power intensive
[16:58:25 CEST] <JEEB> so the real response is "it depends"
[16:59:09 CEST] <learningc> I see.
[17:01:02 CEST] <learningc> So what are my options for least power intensive encoding I can use?
[17:01:19 CEST] <DHE> if there is hardware encoding available, use it
[17:01:24 CEST] <JEEB> what is your use case?
[17:04:22 CEST] <fling> learningc: I always use '-c copy' on slower systems but it depends.
[17:06:54 CEST] <ChocolateArmpits> you can also pick lighter encoding standards, for instance mjpeg
[17:07:05 CEST] <ChocolateArmpits> of course at the expense of bandwidth
[17:10:18 CEST] <learningc> There is only an mjpeg encoder on the SoC, will that work?
[17:10:54 CEST] <learningc> But I'm not sure there is a driver available
[17:11:06 CEST] <learningc> So I might resort to software encoding
[17:11:12 CEST] <fling> learningc: what is the input video?
[17:11:23 CEST] <learningc> Webcam
[17:11:32 CEST] <learningc> or usb camera
[17:11:35 CEST] <fling> learningc: just capture mjpeg from the cam and use -c copy then
[17:12:04 CEST] <learningc> I didn't get that, what is -c copy for?
[17:12:17 CEST] <fling> learningc: with mjpeg you will get the best resolution from your cam (unless it is one of two cams capable of h264 output via uvc)
[17:13:19 CEST] <learningc> fling, do you mean ffmpeg will do the mjpeg encoding or?
[17:13:23 CEST] <fling> learningc: ffmpeg -f v4l2 -input_format mjpeg -framerate 15 -video_size 1280x1024 -i /dev/video0 /path/to/file.nut
[17:13:48 CEST] <fling> learningc: no, it will just copy the stream without any encoding. It will mux input to the output without consuming your cpu
[17:14:04 CEST] <fling> whoops forgot to add -c copy :D
[17:14:33 CEST] <learningc> ffmpeg -f v4l2 -input_format mjpeg -framerate 15 -video_size 1280x1024 -i /dev/video0 /path/to/file.nut -c copy    ?
[17:14:59 CEST] <fling> ffmpeg -f v4l2 -input_format mjpeg -framerate 15 -video_size 1280x1024 -i /dev/video0 -c copy /path/to/file.nut
[17:15:06 CEST] <fling> learningc: any audio input? pulse?
[17:15:31 CEST] <learningc> no audio needed for my application
[17:16:10 CEST] <fling> learningc: what is the camera model?
[17:17:21 CEST] <learningc> It's a custom camera made in korea. Not sure which brand. I just got it as sample
[17:18:30 CEST] <fling> learningc: use this `v4l2-ctl --list-formats-ext` to list the supported formats.
[17:18:40 CEST] <learningc> For the command line you gave me, is the camera itself doing mjpeg encoding?
[17:19:20 CEST] <fling> learningc: run `v4l2-ctl --list-formats-ext` and see
[17:19:50 CEST] <fling> choose MJPG pixel format over YUYV
[17:19:58 CEST] <learningc> Ok, in case I don't see mjpeg listed, will the above command work? (I don't have the camera with me right now)
[17:20:03 CEST] <fling> to get higher resolution and fps!
[17:20:48 CEST] <learningc> YUYV is an encoding format?
[17:20:49 CEST] <fling> learningc: you could just drop `-input_format mjpeg` then or specify whichever is supported
[17:21:02 CEST] <fling> is a raw pixel format
[17:21:08 CEST] <fling> MJPG is the compressed one
[17:21:20 CEST] <fling> then you specify proper values for fps and resolution
[17:21:26 CEST] <learningc> I can record in raw pixel format?
[17:22:04 CEST] <fling> note that if you use `-c copy` with the YUYV pixel format you will get raw video with a very high bitrate
[17:22:15 CEST] <fling> Sure you can but prepare your storage :D
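(For a sense of scale: YUYV is 2 bytes per pixel, so 1280x1024 at 15 fps is roughly 1280 * 1024 * 2 * 15 ≈ 39 MB/s, i.e. about 2.3 GB per minute, before any container overhead.)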
[17:22:43 CEST] <learningc> I see. And if I don't use -c copy? What will the command do?
[17:23:02 CEST] <fling> learningc: it will re-encode to some codec depending on the file extension you choose for the output
[17:24:29 CEST] <learningc> I see
[17:24:51 CEST] <fling> learningc: I used nut in the example because the file does not get corrupted when muxing is interrupted. With the mkv format the file will bork when there is no storage space left
[17:24:52 CEST] <learningc> If I specify .nut, what will the format be?
[17:24:59 CEST] <learningc> I see
[17:25:16 CEST] <fling> if you specify .nut then the format will be .nut obviously
[17:25:51 CEST] <fling> not sure if .nut works for raw video but you could always switch to .mp4 or whatever. test it and see!
[17:26:06 CEST] <fling> anyway .nut works best for mjpeg
[17:26:41 CEST] <fling> learningc: don't you have any camera? You could test now with another one, I'm leaving soon.
[17:26:44 CEST] <learningc> Ok. Thanks.  I will try when I get my hand on the camera. But I can test with my integrated webcam for now
[17:27:21 CEST] <learningc> fling, ok, no problem. Thanks again. very helpful info to me as a beginner in ffmpeg
[17:27:24 CEST] <fling> show me the output of `v4l2-ctl --list-formats-ext`
[17:27:34 CEST] <learningc> Will do.
[17:29:46 CEST] <fling> learningc: and `ffmpeg -f v4l2 -list_formats all -i /dev/video0`
[17:30:30 CEST] <learningc> Ok. Apparently I don't have v4l2-ctl installed. I will install first
[17:31:17 CEST] <fling> learningc: what about the second command?
[17:32:21 CEST] <learningc> Trying to install ffmpeg now
[17:32:28 CEST] <fling> haha
[17:32:41 CEST] <fling> Which distro?
[17:32:50 CEST] <learningc> Ubuntu 16.04
[17:33:41 CEST] <learningc> I don't want to hold you up. I will let you know the v4l2-ctl output when you are back
[17:34:24 CEST] <fling> learningc: `apt install v4l-utils ffmpeg`
[17:35:04 CEST] <learningc> Ok. Thanks.
[17:37:06 CEST] <fling> I have 15 minutes :D
[17:38:14 CEST] <Ke> now that we are on this topic, is there some library for using v4l mem2mem
[17:39:19 CEST] <fling> Ke: not sure but v4l2loopback is here and you could read from one device with ffmpeg copy input to output and write to another
[17:39:50 CEST] <fling> Ke: both of them could be loopbacks or any of them could be real v4l devices
[17:40:22 CEST] <Ke> fling: can I do rescale with it
[17:40:52 CEST] <fling> Ke: sure, you could recode to raw video changing the resolution
[17:41:24 CEST] <Ke> mem2mem does not have decode, at least this one
[17:41:30 CEST] <Ke> just scale and copy
[17:41:43 CEST] <Ke> but thanks, I'll have a look
[17:41:59 CEST] <learningc> About ffmpeg itself, is it also a C api for codec, or just a command line application for Linux?
[17:42:19 CEST] <Ke> learningc: there are C apis
[17:42:27 CEST] <Ke> libavcodec for example
[17:42:37 CEST] <Ke> but quite a few other libs as well
[17:43:06 CEST] <Ke> like the livswscale that is causing me problems =o)
[17:43:14 CEST] <Ke> libswscale
[17:44:28 CEST] <fling> Ke: ffmpeg -input_format mjpeg -framerate 15 -video_size 1280x1024 -i /dev/video0 -s 800x600 -r 10 -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video1
[17:45:09 CEST] <fling> Ke: this will capture mjpeg from video0, scale it to 800x600, change the fps to 10, re-encode to raw and send it to video1, which is an imaginary loopback
[17:45:36 CEST] <fling> Ke: then you could pretend video1 is a real v4l device :D
[17:45:58 CEST] <fling> Ke: you could apply any filters or do whatever you do with the video not only scaling it is possible
[17:45:59 CEST] <Ke> but I have a real v4l device, and I need its computing power
[17:46:11 CEST] <fling> Ke: umm what?
[17:46:11 CEST] <learningc> How is ffmpeg different from gstreamer?  It seems like gstreamer can do what ffmpeg does
[17:46:17 CEST] <fling> Ke: what are you trying to do?
[17:46:31 CEST] <Ke> fling: rescale at a reasonable speed
[17:46:40 CEST] <fling> learningc: it is failing to build on my box and the plugins are broken
[17:46:51 CEST] <fling> Ke: define rescale
[17:47:11 CEST] <learningc> fling, which kind of box do you have?
[17:47:23 CEST] <fling> learningc: libreboot x200
[17:47:29 CEST] <Ke> convert from yuv420p 1080p something to 2400x1600 rgb888
[17:47:38 CEST] <Ke> eg.
[17:48:12 CEST] <Ke> I have v4l device that can do this
[17:48:18 CEST] <fling> Ke: input is v4l device with no 2400x1600 rgb888 capability right?
[17:48:30 CEST] <Ke> input is a file
[17:48:31 CEST] <fling> What is the output?
[17:48:39 CEST] <Ke> output is a buffer in memory
[17:49:03 CEST] <Ke> mostly KMS buffer I imagine
[17:49:39 CEST] <fling> Ke: either input or output should be a v4l device or you have another question
[17:49:45 CEST] <Ke> then libavcodec translates the file into yuv420p buffers
[17:50:24 CEST] <Ke> as I understand, mem2mem is a device type that operates between buffers
[17:50:35 CEST] <Ke> input on one side output on other
[17:50:45 CEST] <fling> so no v4l?
[17:50:51 CEST] <Ke> I actually care about both sides
[17:50:58 CEST] <Ke> it's v4l
[17:51:10 CEST] <Ke> just like hw decoders
[17:51:19 CEST] <Ke> or some hw decoders
[17:51:56 CEST] <Ke> it's a device, but you don't send the data rather you receive it in another buffer
[17:51:59 CEST] <Ke> or something
[17:52:06 CEST] <fling> Ke: Will this send the video to the device? -> ffmpeg -i /some/file -vcodec rawvideo -pix_fmt yuv420p /dev/video0
[17:52:30 CEST] <Ke> I guess, you probably know that thing better
[17:52:39 CEST] <Ke> I have no idea, how those work
[17:53:22 CEST] <Ke> but I have no idea how it could work with just input defined, it also needs an output somehow
[17:53:23 CEST] <fling> Ke: then you do this -> ffmpeg -input_format rgb888 -video_size 2400x1600 /dev/video0 -c whatever /your/actual/output.file
[17:53:40 CEST] <fling> Ke: am I right? you let the v4l device do the rescaling ^
[17:54:04 CEST] <Ke> fling: perhaps, I'll study your suggestion a bit
[17:54:04 CEST] <fling> Ke: 1. you use file input and video0 as the output
[17:54:17 CEST] <fling> Ke: magic happens in v4l
[17:54:33 CEST] <fling> Ke: 2. you use video0 as input to get the rescaled video and write it to your actual output
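(Spelled out, fling's two steps might look roughly like this; device paths, pixel formats and the output codec are placeholders, and whether ffmpeg's v4l2 output device can actually drive a mem2mem scaler this way is exactly what would need testing:

    # step 1: decode the file and feed raw frames to the device doing the scaling
    ffmpeg -i /some/file -pix_fmt yuv420p -f v4l2 /dev/video0
    # step 2: read the rescaled frames back from the device and write the real output
    ffmpeg -f v4l2 -input_format rgb24 -video_size 2400x1600 -i /dev/video0 -c:v libx264 output.mkv
)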
[17:55:10 CEST] <fling> Ke: gtg
[17:55:19 CEST] <Ke> sure, thanks
[17:55:34 CEST] <fling> cya later
[17:55:42 CEST] <fling> learningc bye
[17:55:54 CEST] <fling> whoops they are not here
[17:56:06 CEST] <fling> Ke: tell learningc when he comes that I'm away now for a few hours
[17:56:16 CEST] <Ke> yup
[18:14:59 CEST] <learningc> fling, http://termbin.com/bgq5
[18:16:45 CEST] <Ke> (18:56:06) <     fling> Ke: tell learningc when he comes I'm away now for few hours
[18:17:24 CEST] <learningc> Ok thanks
[18:22:05 CEST] <learningc> How can I play back the file with ffmpeg?
[18:30:07 CEST] <learningc> ok got it, with ffplay
[18:34:32 CEST] <_Mike_C_> Hello, does anyone have any experience with using the nvenc hardware encoders, programatically on windows?
[18:36:01 CEST] <_Mike_C_> I'm having an issue attempting to use the h264_nvenc encoder. I followed the example in code that created the VAAPI encoder (but with h264_nvenc) and everything "seems" to work fine, but when I finally get packets out of the encoder, there is no buffer assigned (it's NULL)
[18:36:37 CEST] <BtbN> You can treat nvenc like any software encoder if you don't have hwframe inputs.
[18:40:50 CEST] <_Mike_C_> I had to use hwframe inputs, it would throw errors if I didn't set up that whole thing
[18:41:21 CEST] <_Mike_C_> I am able to send HW frames to it successfully, but for some reason when it sends back a pkt, there's nothing in it
[18:41:51 CEST] <BtbN> You definitely do not have to do that for nvenc
[18:41:59 CEST] <BtbN> it happily takes software frames and does all hw setup internally.
[18:43:00 CEST] <_Mike_C_> That's interesting, do you know of another reason that it would throw an access violation when I try to send a sw frame to it?
[18:43:20 CEST] <BtbN> "throw an access violation"? You mean, it crashes?
[18:45:04 CEST] <_Mike_C_> I'm writing the program in C++, debugging in visual studio.  When I try to send a software frame to a h264_nvenc context
[18:45:09 CEST] <_Mike_C_> It throws an exception
[18:45:18 CEST] <_Mike_C_> "Access violation reading location xxxxx"
[18:45:30 CEST] <BtbN> You did something wrong then. That's just a crash.
[18:46:24 CEST] <_Mike_C_> Could you please help me figure out what I did wrong?  The program works just fine when using a software encoder.
[18:46:51 CEST] <_Mike_C_> But when I swap it to the nvenc, it throws the exception, are there other steps I have to do with the nvenc encoder?
[18:52:35 CEST] <BtbN> No, it behaves virtually the same as libx264 for that
[18:52:51 CEST] <BtbN> Just look at the backtrace and see what's crashing
[18:55:04 CEST] <_Mike_C_> I'm not sure what you mean by that
[18:56:28 CEST] <_Mike_C_> it's an exception thrown from inside avcodec-58.dll, what does "backtrace" mean?
[19:10:16 CEST] <ChocolateArmpits> stacktrace
[19:13:58 CEST] <_Mike_C_> is there somewhere I can get debug ffmpeg dlls?  Or would I have to compile them myself
[19:14:55 CEST] <JEEB> if the stuff you got doesn't have debug symbols, then you need to build it yourself, yes
[19:15:29 CEST] <_Mike_C_> Well, I'm using the distro dll's so they do not have debug symbols in them
[19:15:46 CEST] <JEEB> > distro > windows
[19:15:48 CEST] <JEEB> ?
[19:15:54 CEST] <_Mike_C_> Yes
[19:15:58 CEST] <JEEB> as far as I know MS doesn't distro FFmpeg
[19:16:24 CEST] <JEEB> or you mean you're running something under WSL?
[19:16:31 CEST] <JEEB> and then what you mean as dll is actual so files?
[19:16:36 CEST] <JEEB> *actually
[19:16:46 CEST] <JEEB> although I'd be surprised if nvenc would work in that case
[19:17:24 CEST] <_Mike_C_> I'm confused with what you're saying.  Maybe I used the wrong terminology
[19:17:48 CEST] <_Mike_C_> I'm using the Windows build of ffmpeg (from the ffmpeg site) for windows, shared linking
[19:17:50 CEST] <JEEB> distro = distribution , "using the distribution's DLLs"
[19:18:00 CEST] <JEEB> usually used for stuff packaged by the OS vendor
[19:18:08 CEST] <JEEB> _Mike_C_: the windows binaries are not from FFmpeg itself
[19:18:17 CEST] <JEEB> someone just linked a 3rd party's windows builds there
[19:18:26 CEST] <JEEB> FFmpeg itself only distributes source code
[19:18:46 CEST] <JEEB> so you might want to ask whomever you got the binaries from for debug symbols
[19:18:52 CEST] <JEEB> if those are not available, build yourself
[19:20:21 CEST] <_Mike_C_> I'm literally following the links from the ffmpeg website, https://ffmpeg.zeranoe.com/builds/#
[19:20:42 CEST] <_Mike_C_> is there no better way to debug this than to compile debug dll
[19:20:48 CEST] <_Mike_C_> myself and use those instead?
[19:28:58 CEST] <raytiley> Having trouble wrapping my head around tee and map and was hoping someone could give me a nudge in the right direction.
[19:30:05 CEST] <raytiley> I have  ffmpeg.exe -i captions.vtt -i video.mpg ... out.m3u8 working now to create an HLS stream. Can I use tee / map to dump the same encoded data into an mp4 as well?
[19:36:49 CEST] <ChocolateArmpits> raytiley, absolutely
[19:37:31 CEST] <ChocolateArmpits> raytiley, there's an article about it https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs#Teepseudo-muxer
[19:40:35 CEST] <raytiley> yup
[19:41:09 CEST] <raytiley> so I tried this: -f tee -map 0:s -map 1:v 1:a "test.m3u8|test.mp4"
[19:41:26 CEST] <raytiley> but I get an error about not selecting an encoder
[19:41:40 CEST] <raytiley> https://www.irccloud.com/pastebin/KYNUuQ0I/output.txt
[19:41:49 CEST] <ChocolateArmpits> >" -map 1:v 1:a"
[19:41:55 CEST] <ChocolateArmpits> is that syntax legal
[19:42:20 CEST] <_Mike_C_> Is there somewhere I can find documentation on specific encoders inside ffmpeg, like the h264_nvenc encoder?
[19:43:18 CEST] <ChocolateArmpits> _Mike_C_, did you try ffmpeg -help encoder=h264_nvenc ?
[19:43:30 CEST] <raytiley> copy paste error
[19:43:37 CEST] <raytiley> -map 0:s -map 1:v -map 1:a
[19:44:29 CEST] <_Mike_C_> I meant something more along the lines of, how to use them
[19:45:01 CEST] <ChocolateArmpits> raytiley, did you specify video encoder and an audio encoder?
[19:45:14 CEST] <_Mike_C_> Because BtbN says that it can use software frames, but how would I know that?  The only example that covers hardware acceleration shows VAAPI and using hardware frames
[19:45:46 CEST] <BtbN> Because it accepts software pix_fmts as input
[19:47:05 CEST] <_Mike_C_> Ok, well clearly there are other parameters that are specific to the encoder that need to be taken care of because if I just plug and play with it, I get access violations.  So I'm asking if there is somewhere I can go to read more about specific steps that need to be performed.
[19:48:18 CEST] <raytiley> @ChocolateArmpits I believe
[19:48:24 CEST] <raytiley> full command is: ffmpeg.exe -y -i D:\test\captions.vtt -i D:\\content\\13127-1-vod-test-with-captions.mpg -vf scale=w=640:h=360:force_original_aspect_ratio=decrease -c:a aac -ar 48000 -c:v h264 -profile:v main -crf 20 -sc_threshold 0 -g 48 -keyint_min 48 -hls_time 4 -hls_playlist_type vod  -b:v 800k -maxrate 856k -bufsize 1200k -b:a 96k -hls_segment_filename D:\\vod\\13127-vod-test-with-captions-v15/360p_%03d.ts -f tee -map
[19:48:24 CEST] <raytiley> 0:s -map 1:v -map 1:a "test.m3u8|test.mp4"
[19:51:32 CEST] <ChocolateArmpits> raytiley, is "h264" really an encoder?
[19:52:00 CEST] <ChocolateArmpits> ok it's an alias for libx264
[19:52:02 CEST] <raytiley> yeah
[19:52:24 CEST] <raytiley> so without the tee / map stuff and a single m3u8 output the command works fine
[19:53:58 CEST] <ChocolateArmpits> I think this one "-hls_segment_filename" needs to go inside the output descriptor for that particular output
[19:54:36 CEST] <ChocolateArmpits> but I think the issue is with the caption file
[19:54:42 CEST] <ChocolateArmpits> the error specifically pertains to it
[19:54:56 CEST] <ChocolateArmpits> you haven't specified an encoder for it
[19:55:11 CEST] <ChocolateArmpits> and aren't -c copying it
[19:59:42 CEST] <raytiley> cool, I'll keep digging
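(For reference, the general tee shape from that wiki page, with per-output options in brackets; names are hypothetical, this is an untested sketch, and the subtitle stream would still need its own -c:s handling as noted above:

    ffmpeg -i input.mpg -c:v libx264 -c:a aac -f tee -map 0:v -map 0:a \
        "[f=hls:hls_time=4:hls_segment_filename=360p_%03d.ts]test.m3u8|[f=mp4]test.mp4"
)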
[20:31:51 CEST] <_Mike_C_> BtbN, can you tell me if there a special steps I have to take when setting up the encoder context for h264_nvenc
[20:32:06 CEST] <BtbN> If you treat it as a software encoder, no
[20:32:27 CEST] <BtbN> Just don't set any CUDA stuff
[20:38:40 CEST] <qxt> I am making a manifest for some DASH adaptive video. I am getting this warning "[webm_dash_manifest @ 0x56523e975460] Could not find codec parameters for stream 0 (Video: vp9, none(progressive), 1280x720): unspecified pixel format
[20:38:40 CEST] <qxt> Consider increasing the value for the 'analyzeduration' and 'probesize' options"
[20:39:11 CEST] <qxt> ok so I have an unspecified pixel format. What can I do about it?
[21:01:31 CEST] <_Mike_C_> BtbN, do you know how to diagnose a "Resource temporarily unavailable" error?  I got it to not throw exceptions on receive packet... but it never gives me a packet back now.
[21:01:56 CEST] <BtbN> Look at the backtrace and see when it happens
[21:02:12 CEST] <BtbN> Does nvenc work when you use the ffmpeg cli?
[21:02:20 CEST] <_Mike_C_> yes
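(A minimal CLI sanity check of that sort might look like this, with hypothetical file names and bitrate:

    ffmpeg -i test.mp4 -c:v h264_nvenc -b:v 5M out.mp4
)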
[21:05:27 CEST] <pi--> ffmpeg  -loop 1  -framerate 1  -i LAST_FRAME.png  -r 1  -t "$FREEZE_S"  -pix_fmt yuv420p  LAST_FRAME.mp4
[21:06:14 CEST] <pi--> ^ if I set FREEZE_S to 10, VLC correctly reports a 10 second video https://www.dropbox.com/s/yfu9g86iofdyt7n/Screenshot%202018-08-29%2020.06.09.png?dl=0
[21:06:36 CEST] <pi--> But if I play it, VLC only plays it for six seconds.
[21:07:38 CEST] <pi--> Can anyone tell me what's going wrong?
[21:10:03 CEST] <poutine> pi--, https://trac.videolan.org/vlc/ticket/214 Any reason you wouldn't want to make output a more reasonable frame rate?
[21:10:32 CEST] <poutine> I guess that says it's fixed, but I knew it used to have trouble with very low fps
[21:10:37 CEST] <poutine> not sure which version you're using
[21:11:21 CEST] <pi--> poutine: I'm trying to freeze the last frame of a video for 30s.
[21:11:42 CEST] <pi--> I have one technique but it takes 60s+ to encode.
[21:11:49 CEST] <pi--> So I'm trying out this technique instead.
[21:12:12 CEST] <pi--> First I capture the last Frame. Then I create a video of the appropriate length.
[21:12:27 CEST] <pi--> Then I concatenate the 2 videos
[21:14:44 CEST] <pi--> It doesn't matter what I set the framerate to
[21:14:52 CEST] <pi--> I still only get six seconds.
[21:17:43 CEST] <pi--> Setting FREEZE_S=30, I get 26s.
[21:17:53 CEST] <pi--> So it appears to be 4s short(!)
[21:18:46 CEST] <pi--> 60 -> 56
[21:18:49 CEST] <pi--> so yes, 4s!
[21:19:34 CEST] <pi--> So I can fix it by applying a fudge factor. but MEH.
[21:20:24 CEST] <poutine> When you do a frame rate -r that matches the other video, (what frame rate did you put there) and how many frames are in the output video? What is ffprobe reporting duration as?
[21:25:11 CEST] <pi--> ffprobe reports the correct duration
[21:26:13 CEST] <pi--> I've been doing `ffmpeg  -loop 1  -framerate 30  -i LAST_FRAME.png  -r 1  -t 60  -pix_fmt yuv420p  LAST_FRAME.mp4` -- but looking at the documentation I see that `-r` and `-framerate` appear to be the same.
[21:26:30 CEST] <poutine> -r 1
[21:26:35 CEST] <pi--> Maybe this is a malformed instruction to ffmpeg.
[21:27:42 CEST] <pi--> ah `-r 30` fixes. Thanks!
[21:29:29 CEST] <pi--> `-r 5` still seems to shave a couple of seconds off
[21:29:39 CEST] <poutine> how are you concatenating these videos?
[21:29:53 CEST] <pi--> I haven't got that far yet.
[21:30:18 CEST] <pi--> `ffmpeg  -f concat  -i "$SRC"  -i LAST_FRAME.mp4  -c copy  "$DST"`
[21:30:26 CEST] <poutine> if you're using ffmpeg -f concat it should be the same frame rate; unless you have a reason to be messing with the frame rate there, it's best to work with normalized values here in my experience
[21:30:29 CEST] <pi--> ^ that's my plan.
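(Note that the concat demuxer takes a single list file rather than two -i inputs, so that plan would more usually be written as below; file names are hypothetical, and both parts must share codec parameters for -c copy to work:

    printf "file 'src.mp4'\nfile 'LAST_FRAME.mp4'\n" > list.txt
    ffmpeg -f concat -safe 0 -i list.txt -c copy out.mp4
)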
[21:30:39 CEST] <poutine> I don't see why you need 5 fps or 1 fps
[21:31:31 CEST] <poutine> If it's the last frame you're freezing I think there might be a simpler way as well
[21:31:44 CEST] <pi--> True. My previous approach was taking a huge amount of processing time, so I was thinking maybe lowering the frame rate of the frozen portion might work.
[21:32:07 CEST] <pi--> `ffmpeg  -y  -i "$SRC"  -vf trim=0:$BLACK_S,geq=0:128:128  -af atrim=0:1,volume=0  -video_track_timescale 600  black.mp4`
[21:32:26 CEST] <pi--> ^ this was my first approach, but as mentioned the processing time was crazy.
[21:34:35 CEST] <poutine> pi--, what are you doing with the video in plain terms, are you freezing the last frame for 10 seconds?
[21:34:50 CEST] <pi--> yes!
[21:34:53 CEST] <pi--> for k seconds
[21:38:06 CEST] <poutine> https://superuser.com/questions/1250900/freeze-last-frame-of-a-video-with-ffmpeg
[21:38:14 CEST] <poutine> check that super hacky way, I think there has to be a different way
[21:46:24 CEST] <pi--> https://paste.pound-python.org/show/64YYm7zQ9u7Yxzy3a0Vc/
[21:46:33 CEST] <pi--> ^ I don't understand the error here
[22:26:47 CEST] <pi--> Okay, I'm really close to having something working
[22:26:57 CEST] <pi--> It even worked once!
[22:27:04 CEST] <superlinux> hello. I want to merge two videos side by side. but I  want the second video to start showing from a point in time of the first. how can I achieve that?
[22:27:07 CEST] <pi--> But I cannot replicate.
[22:27:10 CEST] <pi--> https://paste.pound-python.org/show/91GrAYFsmsYZarUP8D27/
[22:27:19 CEST] <pi--> It is the final concatenation that is giving trouble.
[22:27:26 CEST] <pi--> The screen goes duck-blue.
[22:27:45 CEST] <pi--> Rather than freezing on the correct final frame.
[22:40:00 CEST] <poutine> superlinux, Can you explain what you mean "showing from a point in the time of the first", do you really mean "the videos should start at different times"?
[22:40:27 CEST] <DHE> I assume he wants to synchronize them at some point in time which is not frame 0
[22:42:01 CEST] <pi--> I am falling at the final hurdle. I just can't get my videos to concatenate.
[22:42:02 CEST] <superlinux> I want them next to each other. However, one of them will start showing after, let's say, 15 min from the start of the 1st one. It's just that I have done two Facebook lives
[22:43:35 CEST] <superlinux> two live broadcasts from two different phones.
[22:44:03 CEST] <superlinux> there is a camera I started it like 15 minutes later.
[22:45:48 CEST] <poutine> just seeking the input should accomplish that, would it not?
[22:48:27 CEST] <poutine> ffmpeg -ss 00:15:00 -i <video you want to start 15 minutes in on> -i <some other one> w/ hstack filter for side by side
[22:48:44 CEST] <superlinux> ok thanks
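(A fleshed-out version of poutine's suggestion might look like this untested sketch; file names are placeholders, the 15-minute seek goes on whichever video started earlier, and both inputs need the same height for hstack, so scale one first if they differ:

    ffmpeg -ss 00:15:00 -i earlier.mp4 -i later.mp4 \
        -filter_complex "[0:v][1:v]hstack=inputs=2[v]" \
        -map "[v]" -map 0:a -c:v libx264 -c:a aac side_by_side.mp4
)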
[00:00:00 CEST] --- Thu Aug 30 2018

