[Ffmpeg-devel-irc] ffmpeg.log.20170401

burek burek021 at gmail.com
Sun Apr 2 03:05:02 EEST 2017


[00:00:28 CEST] <Justin_> llogan: i updated the user user with a simplified version that still shows the issue. also: To test the code, input any 640x480 images.
[00:04:41 CEST] <Justin_> *super user
[00:06:07 CEST] <Justin_> https://superuser.com/questions/1194405/ffmpeg-why-is-the-drawbox-filter-breaking-the-crossfade-and-how-can-i-get-the-de
[00:07:11 CEST] <Justin_> I have to go now.
[00:07:25 CEST] <Justin_> If you can help, please post on super user :) thanks a lot!
[00:11:22 CEST] <petecouture> If I'm concating two video files together and then amerging an mp3 music track over both videos, I have to match the length of the mp3 to the composited video first, right?
[00:11:54 CEST] <Justin_> I think you can use -shortest to cut one off to the length of the other
[00:12:08 CEST] <petecouture> Awesome thank you Justin_ that was what I was hoping for
[00:12:12 CEST] <Justin_> i'm pretty new though, you might want to search docs for it
[00:12:20 CEST] <Justin_> n/p
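A minimal sketch of the -shortest idea Justin_ mentions, here via amix's equivalent duration=shortest so the music is mixed over the concatenated video and trimmed to its length (file names hypothetical):

    ffmpeg -i concatenated.mp4 -i music.mp3 \
        -filter_complex "[0:a][1:a]amix=inputs=2:duration=shortest[a]" \
        -map 0:v -map "[a]" -c:v copy out.mp4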
[00:12:28 CEST] <petecouture> Ya I'm just starting to get the hang of filters.
[00:12:47 CEST] <Justin_> https://superuser.com/questions/1194405/ffmpeg-why-is-the-drawbox-filter-breaking-the-crossfade-and-how-can-i-get-the-de
[00:12:59 CEST] <Justin_> this is my super user question for now. It could use an upvote ;)
[00:15:04 CEST] <petecouture> Not enough rep to change votes
[00:15:54 CEST] <Justin_> no worries, thanks anyway
[00:16:15 CEST] <Justin_> i'm going now - for real this time :)
[00:16:17 CEST] <Justin_> bye all
[00:16:32 CEST] <petecouture> Later
[00:53:00 CEST] <dbro> Hello! Wondering if there's a way to detect a missing keyframe in H.264. I'm working on a realtime video product for Android and it seems some vendors' hardware decoders have awful "timewarp" artifacts when a keyframe is lost. Would love to detect this and suppress decoder input until the next keyframe. My stream only contains P and SP slices within the non-IDR NALs that I'd need to detect the missing keyframe in
[00:53:19 CEST] <dbro> I also can't use simple timing because loss events can trigger additional keyframe generation by the video source
[00:55:20 CEST] <dbro> the frame_num value in the slice header looked promising from reading the spec, but it doesn't seem to actually increment as I'd expect...
[00:56:30 CEST] <dbro> either way, thanks in advance for your braincycles!
[01:15:11 CEST] <petecouture> Got a question. I'm working to composite an image between two video clips. The image needs to be converted to a 5 second movie. I haven't found any clear tutorials or documentation on using an image as a video. Is it recommended to bring the image in as an -i input, or do I use the movie filter and bring it in that way?
[01:16:03 CEST] <petecouture> The goal being the image->movie generation and compositing are all done via the same filter
[01:17:40 CEST] <petecouture> Also my understanding is you bring the image in looped, trim it to the desired length and then composite it into the movie.
[01:17:52 CEST] <furq> there are a lot of different ways you could do it
[01:18:08 CEST] <furq> depending on what "composite between" means
[01:18:16 CEST] <petecouture> I'm concating
[01:19:20 CEST] <petecouture> Basically it would concat the first video with the 5 seconds of image then concat the 2nd video on the end.
[01:19:47 CEST] <petecouture> There's a lot of stack exchange articles asking the question but none have a solution afaik
[01:20:43 CEST] <furq> -i clip1 -i clip2 -i image -f lavfi -i nullsrc=s=1280x720:d=5 -filter_complex "[2:v][3:v]overlay[inter];[0:v][inter][1:v]concat"
[01:20:52 CEST] <furq> is the first thing that comes to mind
[01:21:01 CEST] <furq> there are a ton of ways to do it though
[01:21:48 CEST] <petecouture> Oh so you create 5 seconds of null video and overlay the image on top to generate the movie
[01:22:09 CEST] <furq> yeah that's the easiest way i can think of
[01:23:38 CEST] <furq> you might need to concat the audio and video separately
[01:23:52 CEST] <furq> assuming you have audio
[01:24:20 CEST] <petecouture> There's audio on both of the video clips
[01:24:36 CEST] <petecouture> and the very last filter mixes an mp3 over the whole movie
[01:25:12 CEST] <petecouture> I know you can run into issues with concating media that don't have audio channels with those that do
[01:25:14 CEST] <petecouture> Is that what you mean?
[01:25:21 CEST] <furq> -i clip1 -i clip2 -i image -f lavfi -i nullsrc=s=1280x720:d=5 -f lavfi -i anullsrc=d=5 -filter_complex "[2:v][3:v]overlay[inter];[0:v][inter][1:v]concat=3[vout];[0:a][4:a][1:a]concat=3:0:1[aout]"
[01:25:24 CEST] <petecouture> Could I generate null audio?
[01:25:26 CEST] <furq> ^
[01:25:38 CEST] <petecouture> awesome thanks mate
[01:26:13 CEST] <furq> actually i'm dumb
[01:26:19 CEST] <petecouture> -f lavfi?
[01:26:39 CEST] <furq> -i clip1 -i clip2 -i image -f lavfi -i nullsrc=s=1280x720:d=5 -f lavfi -i anullsrc=d=5 -filter_complex "[2:v][3:v]overlay[inter];[0][inter][4:a][1]concat=3:1:1[out]"
[01:28:20 CEST] <furq> -f lavfi is just a way to use sources as an input
[01:28:22 CEST] <furq> !source list
[01:28:39 CEST] <furq> er
[01:28:50 CEST] <petecouture> gotcha
[01:29:05 CEST] <furq> !source anullsrc
[01:29:05 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-filters.html#anullsrc
[01:29:07 CEST] <furq> what
[01:29:13 CEST] <petecouture> Thanks furq mate, I think this solution will work
[01:29:29 CEST] <furq> !source list
[01:29:29 CEST] <nfobot> furq: abuffer, aevalsrc, allrgb, allyuv, amovie, anoisesrc, anullsrc, buffer, cellauto, color, coreimagesrc, flite, frei0r_src, haldclutsrc, life, mandelbrot, movie, mptestsrc, nullsrc, rgbtestsrc, sine, smptebars, smptehdbars, testsrc, testsrc2, yuvtestsrc
[01:29:32 CEST] <furq> wtf
[01:30:01 CEST] <furq> well yeah you get the idea
[01:30:11 CEST] <petecouture> ;-)
[01:30:20 CEST] <furq> stupid robots
[01:31:50 CEST] <petecouture> sshhh they are listening. http://alturl.com/o6cpk
[01:35:41 CEST] <thebombzen> how does one pronounce "smpte"
[01:35:49 CEST] <thebombzen> is it pronounced "Sempty"
[01:36:51 CEST] <atomnuker> thebombzen: sim-pty
[01:37:06 CEST] <atomnuker> but I always just spell it out
[01:39:53 CEST] <riataman> Hey guys
[01:40:13 CEST] <riataman> I have an ffmpeg line that outputs an mp4
[01:40:19 CEST] <petecouture> furq does lavfi need to be configured on install? I'm getting this issue: ERROR Output format lavfi is not available
[01:40:32 CEST] <llogan> petecouture: it can be simplified: ffmpeg -i vid1.mp4 -loop 1 -t 5 -framerate 25 -i img.png -f lavfi -t 5 -i anullsrc=channel_layout=stereo:r=44100 -i vid2.mp4 -filter_complex "[0:v][0:a][1:v][2:a][3:v][3:a]concat=n=3:v=1:a=1[v][a]" -map "[v]" -map "[a]" output.mp4
[01:41:28 CEST] <llogan> make sure image width, height, sar is same as videos. if you need to change it you can add another filterchain with scale/scale2ref, pad, crop, setsar, etc.
[01:41:42 CEST] <riataman> I have an ffmpeg line that outputs an mp4. I want to copy that mp4 as it's being encoded. I managed to do that using tail -f. But there are always 3 bytes that are different. In the replicated file bytes 42/43/44 are set to 0, but in the normal file output by ffmpeg they are set to different values. Seems like ffmpeg writes zeros there and then at some point goes back and writes different values. Any way to avoid that?
[01:41:55 CEST] <petecouture> llogan I'm getting this error though ERROR Output format lavfi is not available
[01:42:44 CEST] <riataman> I already played with a few "movflags" but I can't keep ffmpeg from going back and writing there
[01:42:57 CEST] <petecouture> It errors out before the script is shown in the terminal
[01:44:10 CEST] <llogan> at least show your command. you probably have an input option as an output option
[01:45:52 CEST] <thebombzen> the option order matters
[01:46:07 CEST] <thebombzen> you need to put -f lavfi before -i input_filter_options
[01:47:11 CEST] <petecouture> llogan and thebombzen Sorry it had to do with my node wrapper. It requires you to format the -f lavfi AFTER the input
[01:49:27 CEST] <llogan> ...and match the anullsrc parameters (channel_layout & r) to the parameters of the other files.
[01:51:01 CEST] <llogan> and for concat filter, "to work correctly, all segments must start at timestamp 0." so you can run each segment through: (a)setpts=PTS-STARTPTS
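Put together, a two-segment sketch of that advice (segment names hypothetical):

    ffmpeg -i seg1.mp4 -i seg2.mp4 \
        -filter_complex "[0:v]setpts=PTS-STARTPTS[v0];[0:a]asetpts=PTS-STARTPTS[a0];[1:v]setpts=PTS-STARTPTS[v1];[1:a]asetpts=PTS-STARTPTS[a1];[v0][a0][v1][a1]concat=n=2:v=1:a=1[v][a]" \
        -map "[v]" -map "[a]" out.mp4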
[01:51:01 CEST] <riataman> so is there any way to make ffmpeg write mp4s strictly in sequential order?
[01:51:25 CEST] <riataman> without going back and fixing the headers?
[01:51:48 CEST] <petecouture> llogan I'm getting an error but here is the script
[01:51:49 CEST] <petecouture> https://pastebin.com/2yCZLPnK
[01:51:54 CEST] <llogan> ...and if you're outputting MP4 add "-movflags +faststart" if your vids are being viewed via progressive download
[01:52:11 CEST] <petecouture> getting  anullsrc=d=5: Option not found
[01:52:53 CEST] <petecouture> Gotcha thanks
[01:53:13 CEST] <llogan> show the complete console output
[01:53:29 CEST] <llogan> also, it's "anullsrc" not "nullsrc"
[01:54:02 CEST] <petecouture> I have both listed
[01:54:44 CEST] <petecouture> Looks like it's the d parameter I'm sending anullsrc
[01:54:45 CEST] <llogan> oh, i assumed you dumped the superfluous nullsrc
[01:55:03 CEST] <petecouture> When I run the script via command line i get  Option 'd' not found
[01:55:09 CEST] <petecouture> will look at it thanks for your help
[01:56:13 CEST] <llogan> remove that "-re" and "-strict experimental"
[02:00:31 CEST] <riataman> I wonder if this is a bug
[02:00:40 CEST] <riataman> I'm outputing a segmented mp4
[02:00:58 CEST] <riataman> for the first segment the file is written sequentially without going back and fixing the header
[02:01:15 CEST] <riataman> for subsequent segments ffmpeg goes back at the end and fixes the headers
[02:02:06 CEST] <llogan> riataman: if you want a copy use tee muxer
[02:02:24 CEST] <riataman> yeah, I know I can use that
[02:02:39 CEST] <thebombzen> "Sorry it had to do with my node wrapper. It requires you to format the -f lavfi AFTER the input"
[02:02:41 CEST] <riataman> but then I have to kill ffmpeg to remove/add outputs
[02:02:46 CEST] <thebombzen> well then your node wrapper is incorrect
[02:03:03 CEST] <thebombzen> if it requires you to do something wrong then it's wrong
[02:03:15 CEST] <llogan> riataman: ok. i guess i don't quite understand what you're doing/want to do
[02:03:33 CEST] <riataman> llogan: and it will still be a problem, because I want segments to be playable even if ffmpeg is killed (power goes off)
[02:03:45 CEST] <thebombzen> llogan: tee muxer?
[02:03:49 CEST] <riataman> lomancer: seems like only the very first segment is playable even if interrupted
[02:03:52 CEST] <llogan> riataman: don't use mp4
[02:04:01 CEST] <riataman> I need to play this on the web
[02:04:11 CEST] <riataman> html5 video doesn't seem to support a lot of things other than mp4 :(
[02:04:19 CEST] <petecouture> thebombzen: Ya fluent_ffmpeg has some weird conventions but just so you understand how I'm writing my script with it https://pastebin.com/kazMh0dt
[02:04:25 CEST] <llogan> thebombzen: http://ffmpeg.org/ffmpeg-formats.html#tee
[02:04:57 CEST] <llogan> petecouture: get ffmpeg working first manually, unscripted in a cli. then try to script it in whatever is the popular language of the month.
[02:05:04 CEST] <thebombzen> llogan: ohey I didn't know about that
[02:05:05 CEST] <thebombzen> it's useful
[02:05:25 CEST] <thebombzen> I had before used -f nut - | ffmpeg -f nut -i - -c copy out1 -c copy out2
[02:05:31 CEST] <petecouture> llogan what I do is use the wrapper to get the cli command line generated then I debug via cli
[02:05:37 CEST] <petecouture> It helps to build it that way for me
[02:05:45 CEST] <thebombzen> have you considered not using a broken wrapper
[02:05:46 CEST] <riataman> this really seems like a bug, because the first segment looks like it works very well
[02:05:46 CEST] <llogan> thebombzen: most basic example: ffmpeg -i input -f tee "output.mp4|output.mkv"
[02:05:51 CEST] <thebombzen> yea that's useful
[02:05:57 CEST] <llogan> ...probably needs some maps in there too
[02:06:02 CEST] <thebombzen> once I read the docs it makes sense
[02:06:03 CEST] <petecouture> thebombzen it's working for me
[02:06:10 CEST] <riataman> seems like I'm using an ancient ffmpeg version ...
[02:06:16 CEST] <thebombzen> petecouture: if it requires you to put -f lavfi after -i then it's not
[02:06:23 CEST] <thebombzen> llogan: yea I can read the docs I just didn't know it existed
[02:06:25 CEST] <thebombzen> that's good to know thx
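With the maps llogan mentions filled in, the basic tee example looks something like this (stream copy to both outputs; the mp4 side still needs mp4-compatible codecs):

    ffmpeg -i input.mkv -map 0 -c copy -f tee "output.mp4|output.mkv"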
[02:26:52 CEST] <riataman> llogan: that was really a bug. upgraded from ffmpeg 2.6 to 3.2 and it works fine now :D
[02:31:09 CEST] <llogan> riataman: ah, good. first step when encountering an FFissue is to update (preferably to a build from git master branch instead of a release)
[02:46:25 CEST] <petecouture> furq llogan thebombzen I got the script working! Thanks again for your help with turning the image into a video!
[03:44:48 CEST] <nicolas17> hi
[03:45:02 CEST] <nicolas17> I used youtube-dl to get a video from youtube, which uses DASH
[03:45:24 CEST] <nicolas17> it got me the video and audio files and *tried* to merge them but I ended up with only the last few seconds of the video
[03:45:41 CEST] <nicolas17> and indeed, I can't merge them manually either
[03:46:04 CEST] <nicolas17> playing the video part alone with "ffplay video.f135.mp4" gives this:
[03:46:06 CEST] <nicolas17> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f96c8000920] Found duplicated MOOV Atom. Skipped it
[03:46:07 CEST] <nicolas17>     Last message repeated 982 times
[03:46:12 CEST] <nicolas17> and then plays the last few seconds of the video
[03:46:25 CEST] <nicolas17> vlc plays the entire thing just fine (with no audio since that's in a separate file)
[04:08:54 CEST] <unomystEz> I have a 20 min video, and I want to replace seconds 5-10 with an image, I tried splitting the video into 3 files, replacing the image on the middle file, then concatting them with ffmpeg but the resulting video is only 5s long
[04:09:38 CEST] <unomystEz> last night someone helped me use the filter_complex overlay to insert the image and while it works great, the only problem is it encodes at like 4x speed - and I have a lot of videos to process
[04:10:00 CEST] <unomystEz> just running -loop 1 -i img.jpg though for the whole video I get like 30x speedup
[04:12:06 CEST] <dystopia_> encode your image to 5-10 seconds of video
[04:12:18 CEST] <dystopia_> to the same spec as the video you want to insert it in
[04:12:28 CEST] <dystopia_> cut source video in two
[04:12:43 CEST] <dystopia_> concat it with your new image video
[04:12:57 CEST] <dystopia_> fastest/easiest way imo
[04:14:04 CEST] <unomystEz> I'd like to keep the audiostream intact though
[04:14:12 CEST] <furq> you won't get exact cuts at 5-10 seconds without reencoding with ffmpeg
[04:14:18 CEST] <furq> unless your keyframes happen to be exactly at those points
[04:14:20 CEST] <unomystEz> furq: hey!
[04:14:30 CEST] <dystopia_> oh
[04:14:46 CEST] <dystopia_> if you want to keep source audio during your image then overlay will be the only way
[04:14:51 CEST] <furq> there are apparently tools which will do it by just reencoding what's needed but i've never used any of them
[04:14:57 CEST] <unomystEz> hmm
[04:15:31 CEST] <unomystEz> so what if i split the video in 3 parts (0-5, 5-10, 10+)
[04:15:35 CEST] <unomystEz> replace the image in the middle one
[04:15:50 CEST] <unomystEz> then concat+re-encode?
[04:15:52 CEST] <dystopia_> it will work but you would have to demux audio from middle
[04:16:01 CEST] <dystopia_> and mux audio with your new image one
[04:16:03 CEST] <dystopia_> then concat
[04:16:11 CEST] <dystopia_> no need to re encode if it's all the same spec
[04:16:26 CEST] <unomystEz> sorry, I'm not really familiar with mux/demux
[04:16:39 CEST] <furq> unomystEz: like i said, it won't necessarily cut at those points
[04:16:52 CEST] <furq> you can only cut at keyframes when stream copying
[04:17:03 CEST] <dystopia_> audio and video when together are multiplexed or muxed; when separate they are demultiplexed or demuxed
[04:17:07 CEST] <unomystEz> so re-encoding is an option then?
[04:17:09 CEST] <furq> if you have a keyframe at :05 and one at :10 then great, otherwise it'll just cut at the closest one
[04:17:19 CEST] <dystopia_> yeah ^ this
[04:17:26 CEST] <unomystEz> so actually, maybe I'm making this more complicated than it seems
[04:17:38 CEST] <unomystEz> I have an mp3 file and 2 images
[04:17:46 CEST] <dystopia_> you could re-encode it and change keyframe interval though but it's more work
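A sketch of that re-encode, forcing keyframes exactly at the intended cut points so later stream-copy cuts at 5 and 10 seconds are frame-accurate:

    ffmpeg -i in.mp4 -c:v libx264 -force_key_frames 00:00:05,00:00:10 -c:a copy keyed.mp4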
[04:17:57 CEST] <unomystEz> i want to make a video that is mp3 the whole time, then image1 0-5, image2 5-10, image1 10+
[04:18:01 CEST] <unomystEz> perhaps that makes it easier?
[04:18:13 CEST] <dystopia_> yeah it does heh
[04:18:30 CEST] <dystopia_> just make your video with the images and mux in the audio at the end
[04:18:48 CEST] <unomystEz> furq: your method you gave me last night works well, but I find it's a bit too slow (4x speedup)
[04:18:55 CEST] <unomystEz> oh
[04:19:16 CEST] <unomystEz> also, for some reason, youtube didn't accept some of the videos too
[04:19:22 CEST] <unomystEz> about 20% failed for some reason
[04:19:29 CEST] <dystopia_> ffmpeg -loop 1 -f image2 -i img1.png -c:v libx264 -t 10 out1.mp4
[04:19:31 CEST] <unomystEz> so I'm now transcoding the audio to aac
[04:19:36 CEST] <dystopia_> repeat for second and third image
[04:19:40 CEST] <dystopia_> cat them
[04:19:45 CEST] <dystopia_> mux in audio
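Spelled out, dystopia_'s recipe with the concat demuxer might look like this (list file and names hypothetical; assumes all parts share the same spec):

    # parts.txt:
    #   file 'out1.mp4'
    #   file 'out2.mp4'
    #   file 'out3.mp4'
    ffmpeg -f concat -i parts.txt -i audio.mp3 \
        -map 0:v -map 1:a -c:v copy -c:a aac -shortest final.mp4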
[04:20:00 CEST] <furq> unomystEz: there's not really an easier solution than the one i gave you
[04:20:08 CEST] <furq> you can add -preset veryfast if you want to speed it up
[04:20:16 CEST] <unomystEz> furq: I used ultrafast even
[04:20:27 CEST] <unomystEz> maybe my e3-1231v3 isn't up to snuff! ;)
[04:20:31 CEST] <furq> what resolution
[04:20:38 CEST] <unomystEz> 1280x720
[04:20:49 CEST] <furq> weird
[04:20:53 CEST] <furq> that seems too slow
[04:20:56 CEST] <unomystEz> yeah
[04:21:01 CEST] <furq> is it using 100% cpu
[04:21:12 CEST] <unomystEz> yup but only 1 core
[04:21:25 CEST] <unomystEz> I have 8
[04:22:15 CEST] <furq> oh
[04:22:31 CEST] <furq> that's odd
[04:22:31 CEST] <unomystEz> https://pastebin.com/uX6WaqfG
[04:22:55 CEST] <furq> is that using libx264
[04:23:16 CEST] <furq> if it is then i guess it's either the audio or enable that's bottlenecking it
[04:23:22 CEST] <dystopia_> you could speed it up a tiny bit by not encoding to aac
[04:23:31 CEST] <dystopia_> and just -c:a the audio
[04:23:47 CEST] <furq> yeah if your source is an mp3 then there's no need to encode it
[04:24:17 CEST] <unomystEz> still around the same
[04:24:44 CEST] <furq> add -c:v libx264
[04:24:45 CEST] <furq> just to make sure
[04:25:09 CEST] <unomystEz> same
[04:25:27 CEST] <unomystEz> and still only 1 core pegged
[04:26:02 CEST] <unomystEz> dystopia_: ffmpeg -loop 1 -f image2 -i img1.png -c:v libx264 -t 10 out1.mp4
[04:26:13 CEST] <unomystEz> that will produce 10 seconds of img1?
[04:26:49 CEST] <unomystEz> I'm curious if it would be faster to generate the 0,5,10+ video only of the images
[04:26:54 CEST] <unomystEz> then combine that with the mp3
[04:27:00 CEST] <furq> http://vpaste.net/Vdng6
[04:27:01 CEST] <furq> try that
[04:27:54 CEST] <unomystEz> nice, seems to have stabilized at 9.5x speedup
[04:28:05 CEST] <furq> fun
[04:28:08 CEST] <furq> i guess enable is slow then
[04:28:48 CEST] <furq> oh also you probably want to add -tune stillimage
[04:28:59 CEST] <unomystEz> 1 core is pegged still, but there are now 16 threads active
[04:29:08 CEST] <furq> oh
[04:29:10 CEST] <furq> weird.
[04:29:15 CEST] <unomystEz> loadavg is 1.6 now
[04:29:21 CEST] <furq> something's bottlenecking libx264 then
[04:29:56 CEST] <unomystEz> any other codec that might be better?  this is just being uploaded to youtube
[04:30:06 CEST] <furq> no, and also that's not the issue
[04:30:18 CEST] <furq> something earlier in the chain is preventing x264 from processing at full speed
[04:31:45 CEST] <unomystEz> https://pastebin.com/tN7ZBH7Q
[04:31:50 CEST] <unomystEz> that's the full output from the run
[04:33:07 CEST] <furq> well it's not the issue but i spy a fuckup i made
[04:33:56 CEST] <furq> http://vpaste.net/2BupB
[04:34:18 CEST] <unomystEz> wow
[04:34:20 CEST] <unomystEz> you did it
[04:34:22 CEST] <unomystEz> 24x
[04:34:31 CEST] <unomystEz> about as fast as just doing a single image
[04:34:34 CEST] <furq> is it using multiple cores yet
[04:35:11 CEST] <furq> if it is then i guess x264 just doesn't like yuvj444p
[04:35:25 CEST] <unomystEz> looks about the same
[04:35:36 CEST] <unomystEz> although, the other threads are a bit more busy
[04:35:45 CEST] <unomystEz> similar loadavg
[04:35:51 CEST] <unomystEz> still a single core is pegged
[04:36:07 CEST] <furq> shrug
[04:36:09 CEST] <furq> as long as it's fast enough
[04:36:11 CEST] <unomystEz> perhaps I can convert the image into something it prefers?
[04:36:21 CEST] <furq> yeah that'd probably help
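One guess at such a conversion (the pasted commands aren't preserved here): force the 4:4:4 JPEG down to 4:2:0 before it reaches x264.

    ffmpeg -loop 1 -i cover.jpg -t 5 -vf format=yuv420p -c:v libx264 -tune stillimage out.mp4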
[04:36:42 CEST] <furq> also fwiw if cover_caption.jpg is just cover.jpg with a text overlay, you can probably do that with drawtext
[04:36:48 CEST] <furq> but i guess you already made a bunch of them now
[04:37:09 CEST] <unomystEz> it is an overlay with a translucent background
[04:37:18 CEST] <unomystEz> I made it using imagemagick
[04:37:55 CEST] <furq> well yeah you can do that with drawbox, drawtext, overlay and enable
[04:37:59 CEST] <unomystEz> I'd be darned, looks like ffmpeg can do it too!
[04:38:07 CEST] <furq> !filter drawtext
[04:38:07 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-filters.html#drawtext-1
[04:38:17 CEST] <furq> drawtext is a bit finicky though
[04:38:29 CEST] <furq> you could also potentially do it with .ass subtitles and -vf subtitles
[04:38:40 CEST] <unomystEz> aha
[04:38:49 CEST] <unomystEz> I think the subtitle approach might even be better
[04:39:03 CEST] <unomystEz> what an amazing tool
[04:39:07 CEST] <furq> yeah if you have a lot of these to do then that's probably a good choice
[04:40:30 CEST] <unomystEz> .ass is your recommended subtitle approach?
[04:40:37 CEST] <unomystEz> I suppose it will hardcode them in the video?
[04:40:45 CEST] <furq> -vf subtitles does that yeah
[04:40:58 CEST] <furq> !filter subtitles
[04:40:58 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-filters.html#subtitles-1
[04:41:32 CEST] <nicolas17> that's hardsub yeah
[04:42:49 CEST] <furq> libass handles .srt as well which is easier to hand-edit
[04:43:07 CEST] <furq> and if it's just one bit of text then the filter's styling options might be good enough
[04:43:13 CEST] <unomystEz> nice, this is actually a lot easier and more flexible than how I originally set this up
[04:43:14 CEST] <furq> obviously you can do much fancier stuff with .ass
[04:43:27 CEST] <unomystEz> I'm reading up on the spec
[04:44:22 CEST] <voxadam> Is anyone here familiar with Intel Quicksync (QSV)? If I'm logged in and using the GPU, is it possible for another user, a Plex/Emby/Kodi/etc. daemon, to use the same GPU simultaneously for hardware accelerated encode/decode?
[04:44:24 CEST] <unomystEz> yeah srt is easier, but I wonder if it supports fg and bg color
[04:44:33 CEST] <unomystEz> seems you can encode it in ass
[04:44:57 CEST] <furq> you can use force_style with the subtitles filter
[04:45:04 CEST] <unomystEz> dang, there's even image subtitles I probably could have leveraged if I wanted to too
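A sketch of hardsubbing with force_style (style values illustrative; colours use the ASS &HAABBGGRR form):

    ffmpeg -i in.mp4 -vf "subtitles=subs.srt:force_style='FontSize=28,PrimaryColour=&H00FFFFFF,BorderStyle=3'" -c:a copy out.mp4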
[04:51:39 CEST] <nicolas17> [23:43] <furq> obviously you can do much fancier stuff with .ass
[04:51:45 CEST] <nicolas17> so that it doesn't look like ass?
[04:52:06 CEST] <furq> do you not have a fancy .ass
[04:52:11 CEST] <furq> i do. it's great
[04:52:21 CEST] <nicolas17> I don't :(
[04:52:52 CEST] <furq> mine gradually lights up red as the song progresses
[04:52:55 CEST] <furq> it's a sight to behold
[04:53:03 CEST] <nicolas17> o.O
[04:55:26 CEST] <unomystEz> think nvenc would help?
[04:55:48 CEST] <furq> sure
[04:55:57 CEST] <unomystEz> I'm recompiling ffmpeg with it
[04:56:25 CEST] <unomystEz> well, that and about 10 other deps
[04:56:26 CEST] <furq> the quality sucks, but it'll do fine compared to x264 ultrafast
[04:56:50 CEST] <unomystEz> why does the quality suck?
[04:57:03 CEST] <furq> all consumer hardware encoders kind of suck
[04:57:22 CEST] <furq> it should do better than x264 ultrafast but it can't compare to x264 veryslow
[04:57:45 CEST] <furq> it's not really built with that use case in mind
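For comparison, the NVENC variant of the still-image command is essentially a codec swap (assumes an ffmpeg build with nvenc enabled):

    ffmpeg -loop 1 -i cover.jpg -t 10 -c:v h264_nvenc -b:v 2M out.mp4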
[06:16:23 CEST] <unomystEz> furq: subtitles work pretty well
[06:17:27 CEST] <unomystEz> thanks for all your help with this
[06:17:35 CEST] <unomystEz> I should have just gone with subs in the beginning
[07:06:49 CEST] <hanetzer> jello. So, in an investigation into how things are done with regard to a certain dvr and its mobile app, I discovered it ships a binary libffmpeg.so in the apk... I believe that may be a violation of your guys' gpl
[07:08:08 CEST] <furq> ffmpeg is lgpl, so that's ok as long as they distribute the ffmpeg sources
[07:08:26 CEST] <hanetzer> afaik they don't, or at least I've not seen it.
[07:09:45 CEST] <furq> this is assuming that they're not including any gpl'd components
[07:09:59 CEST] <furq> you can probably pull the configuration out with strings
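FFmpeg embeds its configure line in the binary, so something like this recovers it:

    strings libffmpeg.so | grep -- --enable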
[07:12:16 CEST] <hanetzer> --target-os=linux --prefix=../android/armv7-a-vfp --enable-cross-compile --extra-libs=-lgcc --arch=arm --cpu=armv7-a --cc=/home/tom/tool/android-ndk-r8e/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86_64/bin/arm-linux-androideabi-gcc --cross-prefix=/home/tom/tool/android-ndk-r8e/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86_64/bin/arm-linux-androideabi-
[07:12:18 CEST] <hanetzer> --nm=/home/tom/tool/android-ndk-r8e/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86_64/bin/arm-linux-androideabi-nm --sysroot=/home/tom/tool/android-ndk-r8e/platforms/android-9/arch-arm/ --extra-cflags=' -O2 -fPIC -DANDROID -mfpu=vfp -mfloat-abi=softfp' --disable-shared --enable-static
[07:12:20 CEST] <hanetzer> --extra-ldflags='-Wl,-T,/home/tom/tool/android-ndk-r8e/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86_64/arm-linux-androideabi/lib/ldscripts/armelf_linux_eabi.x -Wl,-rpath-link=/home/tom/tool/android-ndk-r8e/platforms/android-9/arch-arm//usr/lib -L/home/tom/tool/android-ndk-r8e/platforms/android-9/arch-arm//usr/lib -nostdlib
[07:12:22 CEST] <hanetzer> /home/tom/tool/android-ndk-r8e/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86_64/lib/gcc/arm-linux-androideabi/4.4.3/crtbegin.o /home/tom/tool/android-ndk-r8e/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86_64/lib/gcc/arm-linux-androideabi/4.4.3/crtend.o -lc -lm -dl' --disable-symver --disable-doc --disable-debug --disable-ffplay --disable-ffmpeg --disable-ffprobe --disable-ffserver
[07:12:24 CEST] <hanetzer> --disable-network --disable-bsfs --disable-filters --disable-devices --disable-encoders --disable-decoders --disable-muxers --disable-demuxers --disable-swscale --disable-protocols --disable-everything --enable-decoder=h264 --enable-decoder=hevc --enable-asm
[07:12:26 CEST] <hanetzer> ohshit. sorry.
[07:13:11 CEST] <furq> well that looks like lgpl anyway
[07:13:44 CEST] <hanetzer> no source, or even any notice that  it uses lgpl stuff on either the app page, in the app, or the webpage for the devs.
[07:16:16 CEST] <furq> https://www.ffmpeg.org/legal.html
[07:16:19 CEST] <furq> you might want to send them this
[07:16:31 CEST] <nicolas17> I miss the hall of shame
[07:17:57 CEST] <hanetzer> furq: send the company that, or send the company to ffmpeg?
[07:18:08 CEST] <furq> the former
[07:18:22 CEST] <furq> i'm not in any way represenative of the project though
[07:18:26 CEST] <furq> +t
[07:18:47 CEST] <hanetzer> ja. will do.
[10:53:03 CEST] <hanetzer> christ... ffmpeg has hella stuff.
[11:28:07 CEST] <utack> Did you massively improve HEVC decoding between 3.2.4 and now, or did I mess up my benchmark? It was almost 40% faster
[11:42:38 CEST] <iive> utack: i think there were merges with hevc assembly
[11:43:05 CEST] <utack> interesting..for a moment i suspected it uses my GPU now. but holy crap, that is a massive improvement
[15:55:13 CEST] <kepstin> huh, looks like there's been a bunch of improvements to multithreaded encoding and encoding speed in libvpx git (for vp9)
[15:55:23 CEST] Action: kepstin looks forward to testing that out
[16:21:53 CEST] <DelphiWorld> hey guys
[16:22:08 CEST] <DelphiWorld> why am i getting va display not found for /dev/dri/renderD128?
[16:25:14 CEST] <Tom_B> did this patch: http://ffmpeg.org/pipermail/ffmpeg-devel/2017-January/205510.html get applied? I'm using ffmpeg-git but whenever I try using deinterlace_vaapi I get "Assertion !link->hw_frames_ctx && "should not be set by non-hwframe-aware filter" failed at libavfilter/avfilter.c:360"
[16:25:35 CEST] <DelphiWorld> ffmpeg -vaapi_device /dev/dri/renderD128 -hwaccel vaapi -hwaccel_output_format vaapi -i fs.webm -an -vf 'format=nv12|vaapi,hwupload' -c:v h264_vaapi out.mkv
[16:25:42 CEST] <DelphiWorld> failing at va display not found
[16:26:27 CEST] <Tom_B> if you type ls /dev/dri do you see the renderer device? it may have a different name
[16:26:41 CEST] <DelphiWorld> Tom_B: yes, exactly same name
[16:39:10 CEST] <jkqxz> Tom_B:  Try now.
[16:39:13 CEST] <jkqxz> DelphiWorld:  Log?
[16:39:35 CEST] <DelphiWorld> jkqxz: lol, how to output it to a file please ;)
[16:41:11 CEST] <DelphiWorld> this is the cmd
[16:41:15 CEST] <DelphiWorld> ffmpeg -loglevel debug -fflags +genpts -fpsprobesize 200 -hwaccel vaapi -hwaccel_output_format vaapi -vaapi_device /dev/dri/renderD128 -i "fs.webm" -c:v h264_vaapi -force_key_frames "expr:gte(t,n_forced*5)" -vf "format=nv12|vaapi,hwupload,scale_vaapi=w=320:h=240" -b:v 500k -maxrate 500k -level 31 -threads 0 -qp 19 -bf 4 -c:a libmp3lame out.mkv
[16:41:18 CEST] <jkqxz> -report or just use shell redirection.
[16:41:41 CEST] <Tom_B> jkqxz: did you just apply it to git? I'll recompile it. Also, the patch you supplied a few days ago for fixing the problem with aspect ratio changes using h264_vaapi works great, will that be added to git repo?
[16:41:42 CEST] <DelphiWorld> used -report
[16:42:09 CEST] <DelphiWorld> and where to get the output from?
[16:42:30 CEST] <jkqxz> It ends up in a log file in the same directory.
[16:42:38 CEST] <DelphiWorld> ah ok
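Since libva writes to stderr, plain shell redirection captures it along with ffmpeg's own log, e.g.:

    ffmpeg -v verbose -vaapi_device /dev/dri/renderD128 -i fs.webm -f null - 2> ffmpeg.log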
[16:43:30 CEST] <jkqxz> Tom_B:  I just applied a fix, the actual patch was applied a while ago (that assert was hit after a recent merge).
[16:44:31 CEST] <DelphiWorld> jkqxz: http://paste.debian.net/925421/
[16:45:57 CEST] <jkqxz> Tom_B:  Wrt the aspect ratio thing, that was a total hack and has other problems in that form.  Nice that it worked for you, but it's not going to be applied like that.
[16:46:02 CEST] <jkqxz> It needs something more sophisticated to actually check the parameters rather than just rewriting them on every GOP; I might get round to it at some point.
[16:47:33 CEST] <Tom_B> ah ok, thanks. I'll keep manually applying the patch for now then.
[16:49:41 CEST] <jkqxz> DelphiWorld:  Hmm, actually -report doesn't show the libva output.  libva should say some stuff on stderr?  Like "libva info: Trying to open /usr/local/lib/dri/i965_drv_video.so"?
[16:50:39 CEST] <DelphiWorld> no
[16:50:44 CEST] <DelphiWorld> only the error of libva
[16:53:54 CEST] <jkqxz> Oh right, I think you've somehow managed to build without DRM support. Can you check HAVE_VAAPI_DRM in your config.h?
[16:54:17 CEST] <DelphiWorld> strange, to be honest i just downloaded a static build
[16:54:26 CEST] <DelphiWorld> do you have a viable source for a static build?
[16:55:09 CEST] <jkqxz> Um, libva isn't going to work properly in a static build.  It relies on dynamic loading of the driver.
[16:55:26 CEST] <DelphiWorld> i see..........
[16:56:32 CEST] <DelphiWorld> i'll install from a centos repo
[16:57:12 CEST] <DelphiWorld> no avconv, no libav, no ffmpeg in the repo...
[17:04:26 CEST] <jkqxz> relaxed:  ^ I think you may have accidentally pulled some VAAPI code into your static builds.
[17:05:16 CEST] <DelphiWorld> ;)
[17:07:34 CEST] <DelphiWorld> i started hating this qsv/vaapi things...
[17:10:31 CEST] <SnakesAndStuff> anyone have a link to their favorite command line arguments for backing up their DVD's/BluRays into a mkv?
[17:14:39 CEST] <DelphiWorld> someone pass the ffmpeg git repo here please?
[17:19:09 CEST] <Tom_B> thanks jkqxz, your deinterlace_vaapi fix works great. Down from ~20% cpu usage using yadif on a 1080i source to ~5% using deinterlace_vaapi :)
[17:19:31 CEST] <DelphiWorld> ThoNohT: you might help me with vaapi usage! :P
[18:03:18 CEST] <unomystEz> I'm splitting my videos into segments using -segment_time -f segment and I suppose that since it's choosing the closest keyframe at which to split, sometimes my splits are longer than I want, is there a way to guarantee that it chooses the previous keyframe?
[18:06:56 CEST] <unomystEz> sorry, I didn't see the ffmpeg-all manpage, I think I can figure it out
[18:21:09 CEST] <DHE> unomystEz: you would have to transcode the video and force the keyframe interval of your choosing.
[18:23:00 CEST] <unomystEz> DHE: yeah I read about that
[18:23:20 CEST] <unomystEz> DHE: I'll play around with it, but for now I just picked a small segment time to not go past the boundary
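Combining the two, a sketch that re-encodes while forcing a keyframe at each segment boundary, so every segment is exactly -segment_time long:

    ffmpeg -i in.mp4 -c:v libx264 -force_key_frames "expr:gte(t,n_forced*10)" \
        -c:a copy -f segment -segment_time 10 -reset_timestamps 1 out%03d.mp4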
[20:33:05 CEST] <TikityTik> how can i make libopus make something sound average at 6 kbps?
[20:39:21 CEST] <TikityTik> for an audio file that is 4 minutes long?
[20:41:37 CEST] <BtbN> what?
[20:42:02 CEST] <BtbN> 6kbps is going to sound like it sounds. There is no magic to make it better at a given bitrate
[00:00:00 CEST] --- Sun Apr  2 2017

