[Ffmpeg-devel-irc] ffmpeg.log.20180831

burek burek021 at gmail.com
Sat Sep 1 03:05:01 EEST 2018


[00:00:43 CEST] <kepstin> if you have a lot of effects/filters being applied, it might be that you're not really limited by the video encoder speed so much
[00:05:33 CEST] <Pistos> kepstin: Quite possible, yes.
[00:06:45 CEST] <foo> Does anyone in here use ffmpeg for a podcast of sorts? I'm wondering how much I can do with ffmpeg... e.g. equalizer + reverb, or such. Trying to automate a workflow
[00:06:46 CEST] <Pistos> This is an Intel i7, and kdenlive doesn't seem to be using much of the horsepower at all.  Anyway, I suppose that's off topic for this channel.
[00:07:14 CEST] <Abbott> I tried trimming a video with ffmpeg -i file.mp4 -ss 00:00:14 -to 00:00:40 cut.mp4 and ffmpeg seems to not make any progress. It prints out fps 0.0 and repeats frame 0 etc.
[00:07:19 CEST] <kepstin> Pistos: sounds like you're probably using an application that's limited to a single thread then
[00:07:48 CEST] <Pistos> kepstin: It claims to support multithreading and multi-coring, but I've read online that it could be limited to other factors as to why it won't use the other cores.
[00:07:51 CEST] <kepstin> Pistos: since the x264 encoder is multithreaded, you can probably use a slower preset for "free" (it'll just use the other idle cpu cores)
[00:07:54 CEST] <Abbott> it works when i copy the streams, but I need to re-encode the video
[00:12:31 CEST] <kepstin> Abbott: you might just be impatient? how long did you wait? the x264 encoder buffers a lot of frames internally while it's encoding, so there's a delay before it starts outputting.
[00:13:13 CEST] <Abbott> oh! I just checked it now after leaving it in the background for a couple minutes and it seems to be encoding now
[00:13:29 CEST] <Abbott> I didn't realize that could happen, i've just always seen fps right away
[00:14:02 CEST] <kepstin> also when you use -ss after -i, ffmpeg has to decode then throw away all video prior to the seek point, this adds a startup delay
[00:14:17 CEST] <kepstin> with only 14 seconds in this shouldn't really be noticeable tho :)
[00:16:44 CEST] <Abbott> oh... my actual file starts at like 12 minutes or something, didn't think the start time was relevant so i just put something random in
[00:17:04 CEST] <Abbott> so should i do ffmpeg -ss 00:10:10 -i file.mp4 -to 00:11:00 out.mp4?
[00:17:08 CEST] <Abbott> (next time)
[00:17:32 CEST] <foo> Does look like there are some equalizer settings in ffmpeg, hmm
[00:17:39 CEST] <kepstin> Abbott: that won't work as expected, because when you use -ss before -i, it resets the timestamps to start at 0 (so the value for -to will be wrong)
[00:18:23 CEST] <kepstin> Abbott: something like ffmpeg -ss 00:10:10 -f file.mp4 -t 00:00:50 would give the expected result
[00:19:00 CEST] <Abbott> so -to only works for stuff near the beginning of a file then
[00:19:08 CEST] <Abbott> err, is most useful, rather
[00:19:51 CEST] <Abbott> also, any reason to use -f over -i?
[00:21:35 CEST] <kepstin> -f is a typo
[00:21:37 CEST] <kepstin> use -i :)
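Putting kepstin's corrections together, the fast-seek form of the trim would look like this (a sketch; file names are placeholders):

```shell
# -ss before -i seeks in the input before decoding, so startup is fast;
# timestamps are reset to 0 at the seek point, which is why a duration
# (-t) is used here instead of an absolute end time (-to).
ffmpeg -ss 00:10:10 -i file.mp4 -t 00:00:50 out.mp4
```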
[00:49:55 CEST] <Mia> Hello there ---- I'm trying to convert a gif file to mp4 to upload to social media (programmatically) but the media channel I use only accepts mp4 files with audio channels, I guess
[00:50:07 CEST] <Mia> I couldn't find the option to "force audio channel" for the export
[00:50:17 CEST] <Mia> since the main file is a gif there is no audio channel in my output
[00:50:23 CEST] <Mia> Any ideas are appreciated
[00:50:51 CEST] <c_14> add an anullsrc
[00:50:54 CEST] <c_14> -f lavfi -i anullsrc
[00:51:08 CEST] <Mia> what is -f and -i
[00:52:10 CEST] <foo> Mia: what social media site? fun
[00:52:20 CEST] <Mia> foo - instagram
[00:52:41 CEST] <Mia> trying to programmatically upload a video file to instagram using the unofficial api
[00:52:56 CEST] <Mia> but looks like the api only accepts mp4 files with certain type
[00:53:21 CEST] <jerichowasahoax> might also need to be a square video
[00:53:28 CEST] <Mia> nope
[00:53:31 CEST] <Mia> tested with non square
[00:53:33 CEST] <Mia>  it works
[00:53:46 CEST] <Mia> all of the tests I did - the ones that work have audio channels
[00:53:52 CEST] <Mia> and the ones that fail have no audio channels
[00:54:01 CEST] <Mia> so I'll try my chances with audio channels
[00:54:29 CEST] <Mia> but I'm converting videos from gif source, so I'm trying to figure out how to add an audio channel to a gif-based-video
[00:54:47 CEST] <Hfuy> Hello.
[00:55:31 CEST] <Hfuy> I have some ProRes quicktimes that have disordered PTS values in the video. This is causing software such as Premiere to occasionally duplicate frames when, for instance, I place the video files on a timeline.
[00:55:46 CEST] <Hfuy> Is there a way to have ffmpeg rewrite the PTS values for each frame so that they're more consistent?
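One possible approach to Hfuy's question (a sketch, not confirmed against these files): regenerate monotonically increasing PTS from the frame rate with the setpts filter while re-encoding. File names are placeholders; prores_ks is one of ffmpeg's ProRes encoders.

```shell
# Rewrite each frame's PTS as frame_number / frame_rate, in the stream
# time base, producing strictly increasing timestamps.
ffmpeg -i in.mov -vf "setpts=N/FRAME_RATE/TB" -c:v prores_ks -c:a copy out.mov
```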
[00:58:57 CEST] <Mia> c_14, foo, jerichowasahoax I'm using this api https://www.npmjs.com/package/fluent-ffmpeg
[01:00:45 CEST] <c_14> I've never used that API before, and it's not supported here. You'd have to go to the developers github or wherever for support. All I can tell you is that adding an anullsrc input would take care of it for you
[01:01:03 CEST] <furq> you'll need to add -shortest as well
[01:01:05 CEST] <johnnny22> howdy guys.
[01:01:21 CEST] <Mia> -shortest? hm
[01:02:16 CEST] <foo> Mia: which unofficial api?
[01:02:30 CEST] <Mia> nodejs one
[01:02:33 CEST] <Mia> there is only one
[01:02:35 CEST] <foo> Mia: I've done a fair amount with twitter, and linkedin, haven't looked at instagram yet. Seems somewhat closed off
[01:02:48 CEST] <Mia> foo, there is an unofficial api
[01:03:04 CEST] <Mia> otherwise, since facebook bought it, they've been limiting the previous instagram api
[01:03:25 CEST] <Mia> so now the api is focused on business accounts, with the official api you're not even allowed to get public info via api
[01:03:38 CEST] <Mia> but unofficial api is created by some people who converted the requests to an api it seems
[01:03:39 CEST] <Mia> it works
[01:03:50 CEST] <Mia> (as long as you feed in the right data )
[01:03:54 CEST] <foo> Mia: yeah, I've looked at the python equivalent before
[01:04:17 CEST] <jerichowasahoax> Mia: to answer your previous question, -f specifies a format, and -i specifies an input file
[01:04:20 CEST] <foo> Mia: ... I deleted instagram 2 years ago and am currently syndicating content through twitter/linkedin. I wish LinkedIn API had video support, was thinking about automating it
[01:04:22 CEST] <Mia> So in my case I need to figure out how to convert a gif to an mp4, "sendable" kind of mp4
[01:04:23 CEST] <johnnny22> I'm experiencing the same issue as described here: https://trac.ffmpeg.org/ticket/7126    The difference is that I'm grabbing using x11grab & grabbing the audio from ALSA. When setting loglevel to debug, i see that the number of frames buffered on the decklink sides slowly goes down over long periods of time. Eventually, after maybe 12h or so, it gets to the point where I get the messages
[01:04:23 CEST] <johnnny22> displayed in this ticket.
[01:04:38 CEST] <jerichowasahoax> Mia: so you would add "anullsrc" as an input file with "lavfi" as the format
[01:04:55 CEST] <jerichowasahoax> Mia: apart from that i'm allergic to nodejs so i got no idea
[01:05:07 CEST] <Mia> yeah when I do it with this api, I get an "input not found" error
[01:05:28 CEST] <jerichowasahoax> Mia: can you call subprocesses
[01:05:29 CEST] <furq> you're lucky they gave you such a helpful and intuitive api
[01:05:42 CEST] <Mia> furq, you're right :)
[01:05:58 CEST] <Mia> yes I can jerichowasahoax I preferred to use an api though
[01:06:11 CEST] <furq> i'm guessing that thing just shells out to ffmpeg anyway
[01:06:17 CEST] <Mia> subprocess becomes a headache in an async world
[01:06:39 CEST] <Mia> yes furq but I believe they handle pretty much everything
[01:06:47 CEST] <johnnny22> Mia, what package ?
[01:06:58 CEST] <Mia> https://www.npmjs.com/package/fluent-ffmpeg this is the one I'm using
[01:07:16 CEST] <johnnny22> yeah, they just create a subprocess
[01:08:08 CEST] <johnnny22> the trick is to call the methods in the right order ;)
[01:09:55 CEST] <CrystalMath> hi everyone
[01:10:14 CEST] <CrystalMath> i wish to stream wave audio from a unix named pipe
[01:10:35 CEST] <CrystalMath> with the lowest possible latency
[01:10:53 CEST] <CrystalMath> for that i'm using ffserver, but i can't seem to get rtsp to work
[01:11:11 CEST] <Mia> johnnny22, what's your suggestion to me in my case :]
[01:11:32 CEST] <Mia> how should I add an audio channel to a gif input (will be converted to mp4 in the end)
[01:11:34 CEST] <johnnny22> Mia, i missed the beginning of your issue
[01:11:54 CEST] <Mia> johnnny22, I'm trying to add an audio channel to a gif based video output
[01:12:12 CEST] <Mia> it should be silent (or no data? not sure which one it is, technically)
[01:12:35 CEST] <Mia> When I convert a gif file to an mp4 file in adobe encoder, they have audio channels, it seems
[01:12:40 CEST] <Mia> and they work fine
[01:12:47 CEST] <Mia> (with my instagram uploader)
[01:13:10 CEST] <Mia> so I assume when the audio channel is missing (ffmpeg default conversion) it makes some things (?) wrong
[01:13:14 CEST] <Mia> upload gets refused
[01:15:13 CEST] <johnnny22> Mia: ffmpeg().addInput('path/to.gif').addInput('path/to.mp3').output('path/tooutput.mp4').on('end', function() { //end handler code here; }).run(); ?
[01:15:34 CEST] <johnnny22> Mia: that might be the 'simple' test to see if that works.
[01:15:44 CEST] <Mia> johnnny22, mhm
[01:16:18 CEST] <johnnny22> gif's don't have audio, you have to add your own audio.
[01:16:34 CEST] <johnnny22> except if they now do :P lol
[01:18:04 CEST] <Mia> I know johnnny22 that's why I was asking if there is a way to add an empty audio channel
[01:18:31 CEST] <johnnny22> anullsrc ?
[01:18:53 CEST] <johnnny22> and, that's what you are looking to know how to do ? :)
[01:19:57 CEST] <Mia> mhm
[01:20:08 CEST] <Mia> couldn't find how to do that in the api
[01:21:52 CEST] <johnnny22> Mia: maybe->   ffmpeg().addInput('path/to.gif').addInput('anullsrc').inputFormat('lavfi').output('path/tooutput.mp4').on('end', function() { //end handler code here; }).run();
[01:23:19 CEST] <CrystalMath> so, what i have works with http streaming
[01:23:25 CEST] <CrystalMath> but when i try to use rtsp
[01:23:32 CEST] <CrystalMath> the player doesn't get anything
[01:23:40 CEST] <johnnny22> Mia: you can always add more custom command line argument to that anullsrc input using: ......addInput('anullsrc').inputFormat('lavfi').inputOptions('-option1', '-option2', 'param2').output.....
[01:24:16 CEST] <Mia> addInput("anullsrc") seems to stall everything
[01:24:21 CEST] <Mia> just neverending waits
[01:26:03 CEST] <johnnny22> because it never ends maybe :P
[01:26:06 CEST] <johnnny22> idk
[01:26:20 CEST] <johnnny22> not sure sure at this particular point.
[01:26:31 CEST] <furq> like i said, you need -shortest
[01:26:55 CEST] <johnnny22> that would make sense
[01:27:08 CEST] <johnnny22> or set a duration :)
[01:27:54 CEST] <johnnny22> you can set a duration with .duration(10) somewhere toward the end.. probably after the .output method call.
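For reference, the approach pieced together above (anullsrc input, -shortest, and furq's later -pix_fmt suggestion) as a single plain ffmpeg command — a sketch, assuming libx264/aac output; file names are placeholders:

```shell
# anullsrc generates silent audio; -shortest stops encoding when the
# finite gif input ends instead of waiting on the infinite lavfi source;
# yuv420p is the pixel format most players and sites expect.
ffmpeg -i in.gif \
  -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
  -c:v libx264 -pix_fmt yuv420p \
  -c:a aac -b:a 128k \
  -shortest out.mp4
```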
[01:32:56 CEST] <hojuruku> https://i.imgur.com/M9Coy8H.png Youtube hates VAAPI streams from AMD POLARIS GPUs after their update. I wonder if it's broken in OBS too.
[01:33:17 CEST] <hojuruku> LIBVA_DRIVER=radeonsi ffmpeg -threads 8 -framerate 25 -device /dev/dri/card0 -thread_queue_size 1024 -f kmsgrab -i - -f lavfi -i anullsrc=channel_layout=stereo:r=44100 -ar 44100 -init_hw_device vaapi=amd:/dev/dri/renderD128 -filter_hw_device amd -filter:v hwmap,scale_vaapi=format=nv12 -video_size 1920x1080 -r 25 -c:v h264_vaapi -max_delay 500000 -g 50 -keyint_min 50 -force_key_frames "expr:gte(t,n_forced*3)" -bf 0 -profile:v constrained_baseline
[01:33:18 CEST] <hojuruku> -level:v 4.1 -coder:v cavlc -codec:a aac -b:a 128k -bufsize 512k -b:v 2600k -f flv rtmp://a.rtmp.youtube.com/live2/<rtmp-key-here>
[01:33:47 CEST] <hojuruku> this relates to what JEEB and I talked about a few months ago.
[01:33:56 CEST] <hojuruku> https://bugs.freedesktop.org/show_bug.cgi?id=105277
[01:34:02 CEST] <pi-> poutine: Thanks, that did it!
[01:35:51 CEST] <Mia> okay, I was able to add the audio channel (thanks to johnnny22 furq jerichowasahoax c_14) - but the upload still does not work
[01:36:08 CEST] <Mia> I have two conversions of the same gif file - one in adobe media encoder, the other one in ffmpeg
[01:36:28 CEST] <Mia> I can't spot the difference between these two, if I upload both of them, maybe someone can help me?
[01:36:52 CEST] <Mia> if I can figure out the difference, I'll make my conversion in a way so that it'll work too T__T
[01:40:42 CEST] <johnnny22> Mia: where are you trying to upload this file ?
[01:40:51 CEST] <Mia> johnnny22, instagram
[01:41:06 CEST] <Mia> I can upload these files to some file upload site and maybe one of you can have a look
[01:41:20 CEST] <Mia> they seem very similar to me but maybe I'm missing an important technical detail somewhere
[01:41:29 CEST] <johnnny22> Mia: do you get an error ?
[01:41:38 CEST] <Mia> yes, api just rejects my file
[01:41:44 CEST] <Mia> (that's converted with ffmpeg)
[01:41:46 CEST] <furq> Mia: pastebin the output of ffprobe -show_streams
[01:41:52 CEST] <Mia> but it accepts my file that's converted with adobe media encoder
[01:42:05 CEST] <Mia> ah nope I don't get an error anywhere in ffmpeg
[01:42:15 CEST] <Mia> I just feel like I couldn't make the right type of conversion
[01:42:43 CEST] <Mia> but my knowledge isn't enough to determine the difference between two of those mp4 files, generated from the same gif (one using media encoder and other using ffmpeg)
[01:42:46 CEST] <johnnny22> Mia: do as furq proposes:  ffprobe -show_streams ffmpeg_result_file.mp4       and then do the same on the adobe_media_encoded.mp4  .. And Pastebin the results of both.
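A quick way to do that comparison from the shell (a sketch; file names are placeholders):

```shell
# Dump the stream metadata of both encodes and diff them, so the
# fields that differ (profile, pix_fmt, color_*, audio rate, ...)
# stand out immediately. -v error suppresses the banner noise.
ffprobe -v error -show_streams broken.mp4  > broken.txt
ffprobe -v error -show_streams working.mp4 > working.txt
diff broken.txt working.txt
```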
[01:42:58 CEST] <Mia> okay
[01:43:07 CEST] <Mia> sec
[01:43:47 CEST] <furq> you probably need to add -pix_fmt yuv420p
[01:44:03 CEST] <johnnny22> we
[01:44:07 CEST] <johnnny22> we'll soon see ;)
[01:46:52 CEST] <Mia> https://hastebin.com/raw/riqadefeyo johnnny22 furq
[01:47:01 CEST] <Mia> both results one after another
[01:47:06 CEST] <Mia> first one is broken
[01:47:08 CEST] <Mia> second one works
[01:48:20 CEST] <johnnny22> high vs main , maybe ?
[01:49:01 CEST] <johnnny22> furq might know better what might be the best next step to try.
[01:51:27 CEST] <CrystalMath> i'm now trying to just simply stream a named pipe and it's not working
[01:51:29 CEST] <CrystalMath> i get an error
[01:51:46 CEST] <CrystalMath> what i did was:
[01:51:59 CEST] <CrystalMath> ffmpeg -re -i pipe.wav -f rtp "rtp://localhost:5550"
[01:52:09 CEST] <CrystalMath> and then ffplay rtp://localhost:5550
[01:52:19 CEST] <CrystalMath> and then i cat a .wav file into pipe.wav
[01:52:39 CEST] <CrystalMath> but i got:  Unable to receive RTP payload type 97 without an SDP file describing it
[01:53:09 CEST] <CrystalMath> why didn't ffmpeg send that SDP data? it printed some on the terminal
[01:53:21 CEST] <CrystalMath> i have no idea what ffplay is expecting here
[01:58:35 CEST] <CrystalMath> does anyone know how i can make this work? turns out it's not just with named pipes
[01:58:52 CEST] <CrystalMath> if i even try ffmpeg -re -i some_wave_file.wav -f rtp rtp://localhost:5550
[01:59:00 CEST] <CrystalMath> i will get that error when i try to ffplay
[01:59:08 CEST] <CrystalMath> "Unable to receive RTP payload type 97 without an SDP file describing it"
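Raw RTP carries no codec description, which is exactly what that error is complaining about: the SDP ffmpeg prints to the terminal has to reach the player somehow. One possible fix (a sketch, not confirmed in this log) is to save the generated SDP to a file and have ffplay read that instead of the bare RTP URL:

```shell
# Write the session description to stream.sdp alongside the RTP output.
ffmpeg -re -i some_wave_file.wav -f rtp rtp://127.0.0.1:5550 -sdp_file stream.sdp
# In another terminal: play from the SDP file. The protocol whitelist is
# needed because ffplay refuses to open rtp/udp from a local file by default.
ffplay -protocol_whitelist file,udp,rtp stream.sdp
```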
[01:59:41 CEST] <johnnny22> Mia: Can you try adding .outputOptions('-profile:v main')  after the .output(...) option ?
[01:59:54 CEST] <Mia> doing it now
[02:00:32 CEST] <CrystalMath> is ffmpeg's rtp streaming completely broken?
[02:01:36 CEST] <Mia> didn't help johnnny22
[02:01:46 CEST] <Mia> maybe it's a color range issue
[02:01:53 CEST] <Mia> color range color space etc
[02:01:58 CEST] <Mia> those four values are "unknown"
[02:02:03 CEST] <Mia> that's another difference I can see
[02:02:11 CEST] <Mia> I've also tried adding -bf 1
[02:02:23 CEST] <Mia> (to have the same bframes value with the working example)
[02:02:40 CEST] <Mia> I'm now trying to figure out how to set those four other unknown values
[02:03:15 CEST] <johnnny22> Mia, in regards to how to pass more options to the output, check https://github.com/fluent-ffmpeg/node-fluent-ffmpeg#outputoptionsoption-add-custom-output-options
[02:03:33 CEST] <johnnny22> But I don't know what to change. You might have a little field day on this one :)
[02:03:40 CEST] <Mia> Yes, I figured the options part johnnny22
[02:03:44 CEST] <Mia> I can pass more options
[02:03:53 CEST] <johnnny22> glad you got that covered.
[02:04:07 CEST] <Mia> mhm
[02:06:00 CEST] <johnnny22> Mia, maybe some of those arguments on the output  (quite randomly taken from a page): -vcodec mpeg4 -vb 8000k -strict experimental -qscale 0
[02:07:08 CEST] <Mia> testing them all now
[02:07:42 CEST] <johnnny22> Mia: you can also try reducing the audio rate to -ar 44100  .. random (though your file that works also has 48kHz)
[02:10:08 CEST] <johnnny22> maybe it didn't like the fact that it's silent audio
[02:10:27 CEST] <Mia> hmmm
[02:10:48 CEST] <Mia> but that's what adobe media encoder does as well, I guess?
[02:10:54 CEST] <furq> yeah don't use any of those options
[02:11:01 CEST] <furq> if i had to guess i'd say it doesn't like how low the audio bitrate is
[02:11:06 CEST] <furq> but it should default to 128k cbr anyway
[02:11:53 CEST] <Mia> how can I set it to something "it may like"
[02:13:21 CEST] <johnnny22> I assume it would be "-b:a 128k"
[02:14:35 CEST] <Mia> the API has a parameter to it
[02:14:39 CEST] <Mia> but it didn't help
[02:14:47 CEST] <Mia> I'm still comparing those [STREAM] outputs
[02:15:11 CEST] <Mia> color_transfer color_space color_primaries --- these values are unknown in the broken file
[02:15:16 CEST] <Mia> but they're set to bt709 in the working one
[02:15:21 CEST] <Mia> not sure if this is important or not
[02:15:33 CEST] <furq> i doubt it
[02:16:09 CEST] <furq> 128k is the default for the builtin aac encoder but it doesn't look like it does any padding to try to meet it
[02:16:19 CEST] <furq> just for debugging you could use anoisesrc instead of anullsrc
[02:16:37 CEST] <Mia> doing it now
[02:17:04 CEST] <johnnny22> thats a good idea
[02:17:15 CEST] <Mia> sadly didn't work
[02:17:16 CEST] <Mia> :/
[02:17:39 CEST] <furq> oh hang on
[02:17:43 CEST] <furq> maybe it needs -movflags faststart
[02:18:19 CEST] <johnnny22> *giggles*
[02:18:39 CEST] <Mia> Argh...
[02:18:41 CEST] <Mia> didn't work
[02:18:48 CEST] <Mia> every time I hold my breath, and I get an error
[02:18:51 CEST] <Mia> at the end of upload
[02:19:12 CEST] <furq> i take it the error is nothing useful
[02:19:29 CEST] <Mia> no it just says "something went wrong we will fix it asap"
[02:19:42 CEST] <Mia> WHAT WENT WRONG TELL ME DANGIT
[02:19:50 CEST] <Mia> but since the api isn't even official
[02:19:54 CEST] <Mia> I get it
[02:20:01 CEST] <Mia> I mean I get it that I don't.
[02:20:07 CEST] <johnnny22> maybe it doesn't like the time_base=1/15360 ?
[02:20:19 CEST] <Mia> what does it even mean
[02:20:40 CEST] <johnnny22> some math thing with framerate i think..
[02:20:53 CEST] <johnnny22> idk tbh ;)
[02:21:28 CEST] <johnnny22> or maybe it doesn't like the 'und' language :P
[02:22:10 CEST] <johnnny22> i'm out of ideas tbh at this point.
[02:22:49 CEST] <johnnny22> and i have to try to fix my own issue that is identical to : https://trac.ffmpeg.org/ticket/7126 :(
[02:23:05 CEST] <Mia> still, thank you for your time johnnny22
[02:23:11 CEST] <johnnny22> Oh and get food :D
[02:23:29 CEST] <johnnny22> my pleasure, glad you got the fluent part working at least.
[02:24:27 CEST] <johnnny22> Mia: are you uploading it by hand to instagram or through your code ?
[02:24:37 CEST] <Mia> code
[02:24:50 CEST] <Mia> but the circle_works.mp4 and everything I export through adobe media encoder works
[02:25:02 CEST] <johnnny22> thinking, if you can try uploading it by hand, maybe they'll give more info on what went wrong ?!
[02:25:03 CEST] <Mia> but I need a programmatic way to convert gif files to mp4 (that I can upload)
[02:25:15 CEST] <Mia> uploading things by hand works anyway
[02:25:28 CEST] <johnnny22> even that non-working-file ?
[02:25:31 CEST] <Mia> since they do the gif to mp4 conversion (or whatever to mp4 conversion) on their mobile app
[02:25:40 CEST] <Mia> so they have their conversion
[02:25:50 CEST] <Mia> this is the unofficial api so I'm tricking instagram, in a way
[02:25:58 CEST] <Mia> that I send them the buffer (sort of)
[02:26:21 CEST] <johnnny22> what about if you try uploading that failing .mp4 , does it work too ?
[02:26:30 CEST] <Mia> instagram has no programmatical way to automate video uploads yet
[02:26:32 CEST] <Mia> mhm it works too
[02:26:38 CEST] <Mia> I can even directly upload a gif file
[02:26:40 CEST] <johnnny22> k
[02:26:48 CEST] <Mia> they convert it to mp4 before anything, anyway
[02:26:55 CEST] <Mia> so their conversion always works
[02:27:09 CEST] <johnnny22> but you can also upload the .mp4 result file right ?
[02:27:27 CEST] <Mia> what's a mystery to me is that there has to be something different between these adobe media encoder outputs and the one I'm trying to convert through ffmpeg
[02:27:37 CEST] <Mia> yes I can upload it as well johnnny22
[02:27:55 CEST] <johnnny22> Mia: maybe the metadata :P lol
[02:28:09 CEST] <johnnny22> maybe the language is the issue (not sure how you'd change that)
[02:28:59 CEST] <johnnny22> I was working on something where it totally broke things cuz the audio track wasn't named after a language, as an example.
[02:29:30 CEST] <johnnny22> but hey, i doubt its that.
[02:29:41 CEST] <Mia> -metadata:s:a:0 language=eng
[02:29:42 CEST] <Mia> it seems
[02:39:05 CEST] <ballefun> I can transcode video and stream to localhost with "ffmpeg -i test.mp4 -c:a aac -ar 44100 -ac 2 -b:a 96k -c:v libx264 -crf 23 -maxrate 1M -bufsize 2M -f rtsp -rtsp_transport tcp rtsp://localhost:8888/live.sdp" and use ffplay to watch it with "ffplay -rtsp_flags listen rtsp://localhost:8888/live.sdp"
[02:39:49 CEST] <Mia> how can I set the time_base option
[02:39:51 CEST] <ballefun> but when I change to another computer on my network ie 192.168.1.200 this does not work
[02:40:58 CEST] <ballefun> I have no firewalls etc. Am I doing something wrong?
[02:41:24 CEST] <poutine> ballefun, that's because you're using localhost
[02:41:44 CEST] <poutine> try using the LAN IP or maybe 0.0.0.0 instead
[02:42:07 CEST] <poutine> Oh wait I just read your command
[02:42:20 CEST] <poutine> So you're trying to do the ffmpeg portion on one computer and the ffplay on another?
[02:42:27 CEST] <ballefun> yes
[02:42:35 CEST] <ballefun> but first on my network
[02:42:54 CEST] <ballefun> all computers are Debian 9. No firewalls etc
[02:42:57 CEST] <poutine> Ok so say on 192.168.0.53, you would do the ffplay command as such: ffplay -rtsp_flags listen rtsp://192.168.0.53:8888/live.sdp
[02:43:15 CEST] <poutine> and then the ffmpeg like: ffmpeg -i test.mp4 -c:a aac -ar 44100 -ac 2 -b:a 96k -c:v libx264 -crf 23 -maxrate 1M -bufsize 2M -f rtsp -rtsp_transport tcp rtsp://192.168.0.53:8888/live.sdp
[02:43:42 CEST] <poutine> I believe with the ffplay portion, you could also do: ffplay -rtsp_flags listen rtsp://0.0.0.0:8888/live.sdp
[02:43:53 CEST] <poutine> and the ffmpeg would be the same in that case as well
[02:44:09 CEST] <ballefun> poutine, hmm Ill try with 0.0.0.0 and listen there
[02:44:33 CEST] <poutine> 0.0.0.0 is all IPs on that machine, could be LAN, public, whatever
[02:45:14 CEST] <ballefun> poutine, good test at least. Later I will be using this over the net though to html5 browsers.
[02:46:06 CEST] <ballefun> poutine, btw thx for the feedback =)
[02:52:37 CEST] <ballefun> poutine, that worked with 0.0.0.0. Thx tons. Is this the way I should be transcoding over the net to html5?
[03:40:41 CEST] <ballefun> ok looks like  I need to use ffserver if I am going to transcode video and stream http to html5 clients. Is that right?
[03:41:53 CEST] <ballefun> Would like to  be able to watch the videos on my NAS without having to download them
[03:46:40 CEST] <furq> ballefun: if they're already mp4 then you can just watch them with any httpd
[03:46:48 CEST] <furq> otherwise use hls and hls.js
[03:48:38 CEST] <ballefun> furq, I am transcoding them like ffmpeg -i test.mp4 -c:a aac -ar 44100 -ac 2 -b:a 96k  -c:v libx264 -crf 23 -maxrate 1M -bufsize 2M -f rtsp -rtsp_transport tcp rtsp://ip_to_target:8888/live.mp4
[03:48:40 CEST] <ballefun> as of now
[03:49:37 CEST] <ballefun> furq, but I would like to use a html5 browser to watch them. ffplay works with the above example
[03:51:21 CEST] <furq> i take it this is for streaming over the internet and you don't have much upload bandwidth
[03:51:31 CEST] <ballefun> yes that too
[03:51:52 CEST] <ballefun> but I have a 24 core xeon server =P
[03:52:04 CEST] <furq> well yeah
[03:52:06 CEST] <furq> https://www.ffmpeg.org/ffmpeg-formats.html#hls-2
[03:52:08 CEST] <furq> just use that
[03:52:19 CEST] <furq> any browser can play it back with hls.js
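A minimal sketch of such an HLS encode, reusing ballefun's earlier encoder settings (paths are placeholders):

```shell
# Segment the input into ~4-second MPEG-TS chunks plus an m3u8 index
# playlist; any httpd (apache included) can then serve the directory,
# and hls.js plays it back in the browser.
ffmpeg -i test.mp4 \
  -c:v libx264 -crf 23 -maxrate 1M -bufsize 2M \
  -c:a aac -b:a 96k \
  -f hls -hls_time 4 -hls_playlist_type vod stream.m3u8
```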
[03:52:24 CEST] <ballefun> is that like DASH?
[03:52:26 CEST] <furq> or natively on mobile
[03:52:28 CEST] <furq> something like that yeah
[03:53:03 CEST] <ballefun> with 24TB of media already on the NAS ... idk
[03:53:15 CEST] <furq> you can do it on demand if you use something like nginx-rtmp
[03:53:33 CEST] <ballefun> nothing like that with apache2?
[03:53:36 CEST] <furq> i mean you can do it on demand anyway if you ssh into the box
[03:53:42 CEST] <furq> but nginx-rtmp is probably easier
[03:54:03 CEST] <furq> you can just put nginx-rtmp behind apache if you want
[03:54:12 CEST] <ballefun> that was an idea.
[03:54:25 CEST] <furq> it doesn't strictly need to serve any http
[03:54:30 CEST] <furq> apache can serve the hls chunks
[03:54:45 CEST] <ballefun> thought hls only worked with apple stuff?
[03:54:58 CEST] <furq> it works with everything if you use hls.js
[03:55:09 CEST] <furq> youtube uses it for livestreaming on account of iOS refusing to support dash
[03:57:05 CEST] <ballefun> I have been using DASH and it is awesome. The problem with it is that I have tons and tons of video. Most will never be watched as this is only shared with my wife's mother etc.
[03:57:46 CEST] <furq> nginx-rtmp does dash as well if you already have a player setup
[03:57:47 CEST] <foo> Anyone in here into podcasting?
[03:58:00 CEST] <CrystalMath> does anyone know what's wrong with the way i'm using ffmpeg?
[03:58:11 CEST] <furq> but yeah there's a bunch of ways you could go about generating the chunks on demand
[03:58:19 CEST] <CrystalMath> or are my messages too far up?
[03:58:35 CEST] <furq> and hls/dash are pretty much your only options as far as streaming to a browser
[03:58:38 CEST] <furq> other than just serving the mp4s
[04:01:49 CEST] <CrystalMath> are my messages even getting through?
[04:01:56 CEST] <CrystalMath> well that was a useless question
[04:02:09 CEST] <ballefun> furq, how does plex get this done? I wrote this for NextCloud and was hoping I can incorporate something with transcoding https://help.nextcloud.com/t/updated-aug-15-2018-script-100-auto-install-on-debian-apache-mpm-event-php-fpm-socket-redis-socket-on-both-local-and-locking-cache-lru-data-eviction-letsencrypt-ssl-a/22054
[04:02:20 CEST] <ballefun> oh wow what a url...sry
[04:07:19 CEST] <CrystalMath> is everyone intentionally ignoring me for some reason?
[04:08:20 CEST] <ballefun> CrystalMath, what was your question?
[04:08:43 CEST] <CrystalMath> i get an error when trying to use rtp
[04:09:11 CEST] <CrystalMath> "Unable to receive RTP payload type 97 without an SDP file describing it"
[04:10:18 CEST] <CrystalMath> basically this happens whenever i try to use rtp
[04:10:47 CEST] <ballefun> what are you using to listen with?
[04:10:51 CEST] <CrystalMath> ffplay
[04:11:56 CEST] <ballefun> maybe if you gave some more details people would try to help.
[04:12:12 CEST] <CrystalMath> i posted the command
[04:12:18 CEST] <CrystalMath> hold on a sec...
[04:12:33 CEST] <CrystalMath> ffmpeg -re -i some_wave_file.wav -f rtp rtp://localhost:5550
[04:12:51 CEST] <CrystalMath> and ffplay rtp://localhost:5550
[04:13:08 CEST] <CrystalMath> the wave file is 44.1KHz 2 channels 16-bit unsigned
[04:13:11 CEST] <ballefun> try 0.0.0.0:5550 and listen
[04:13:24 CEST] <CrystalMath> it's on the same machine
[04:13:35 CEST] <CrystalMath> but ok
[04:14:12 CEST] <CrystalMath> same error
[04:16:22 CEST] <ballefun> Try ffmpeg -re -f lavfi -i aevalsrc="sin(400*2*PI*t)" -ar 8000 -f mulaw -f rtp rtp://127.0.0.1:1234
[04:16:32 CEST] <ballefun> and then listen with
[04:16:34 CEST] <ballefun> ffplay rtp://127.0.0.1:1234
[04:16:38 CEST] <ballefun> does that work?
[04:16:52 CEST] <ballefun> should hear a tone
[04:19:39 CEST] <CrystalMath> i get a very different error
[04:19:46 CEST] <ballefun> ok what?
[04:19:56 CEST] <CrystalMath> getaddrinfo(localhost): Name or service not known
[04:20:03 CEST] <CrystalMath> but i put localhost
[04:20:06 CEST] <CrystalMath> i'll try 127.0.0.1
[04:20:25 CEST] <CrystalMath> it works
[04:20:27 CEST] <CrystalMath> with 127.0.0.1
[04:20:56 CEST] <CrystalMath> with any port
[04:21:18 CEST] <ballefun> Think I have seen this before a long time ago. In your conf there should be a line for localhost with ACL
[04:21:41 CEST] <CrystalMath> i hear the tone with 127.0.0.1
[04:21:50 CEST] <poutine> ballefun, I don't know what you're trying to do, what kind of clients do you have? any reason not to use hls/mpeg dash?
[04:22:42 CEST] <ballefun> poutine, I was using DASH before. Loved it but my NAS only has so much space. I have many TB of video on there.
[04:22:48 CEST] <CrystalMath> ballefun: but when i try using -i some_wave_file.wav it still gives me that error
[04:25:30 CEST] <ballefun> CrystalMath, try using another file. Might be something iffy with file.wav. Are you on a windows machine?
[04:26:23 CEST] <CrystalMath> no, GNU/linux
[04:26:57 CEST] <CrystalMath> all the files i tried cause the same error so far
[04:29:34 CEST] <CrystalMath> i tried lots of different files, WAV, OPUS, MP3, FLAC...
[04:29:39 CEST] <CrystalMath> the result is always the same
[04:31:28 CEST] <CrystalMath> can anyone reproduce the problem?
[04:33:07 CEST] <poutine> CrystalMath, most people have an /etc/hosts entry that looks like:
[04:33:13 CEST] <poutine> 127.0.0.1 localhost
[04:33:24 CEST] <poutine> doesn't sound like you have that or have some weird configuration
[04:34:34 CEST] <CrystalMath> 127.0.0.1 localhost
[04:34:39 CEST] <CrystalMath> is the first line of /etc/hosts
[04:34:51 CEST] <CrystalMath> i can resolve localhost in pretty much every program
[04:35:18 CEST] <CrystalMath> even my DNS server resolves "localhost" to 127.0.0.1
[04:39:35 CEST] <CrystalMath> poutine: sometimes localhost works, when i try it... it was probably some UDP problem or something...
[04:39:51 CEST] <CrystalMath> the only error that is consistent is "Unable to receive RTP payload type 97 without an SDP file describing it"
[04:49:49 CEST] <ballefun> furq poutine why was ffserver removed? I see it is still part of ffmpeg that is in Debian stable.
[04:50:10 CEST] <furq> it's gone as of 3.4 iirc
[04:50:15 CEST] <ballefun> yeah
[04:50:18 CEST] <ballefun> but why?
[04:50:34 CEST] <furq> it never worked well and nobody wanted to keep it updated for api changes
[04:50:53 CEST] <ballefun> api changes are a real pain...
[04:50:56 CEST] <furq> it's been more or less unmaintained other than the bare minimum to use new apis for years
[04:51:24 CEST] <ballefun> makes sense
[04:51:56 CEST] <ballefun> CrystalMath, I don't really know what to tell you.
[04:52:12 CEST] <furq> there's a gsoc project to develop a new one but it's not close to being merged yet afaik
[04:52:42 CEST] <ballefun> is there some other open source project that can do that trick for now?
[04:55:29 CEST] <ballefun> furq, ahh stupid me. Says right here https://trac.ffmpeg.org/wiki/ffserver "mkvserver_mk2"
[04:56:07 CEST] <CrystalMath> maybe i should try ffserver
[06:08:57 CEST] <mijofa> When doing a transcode from video file into a HLS stream, is it possible to somehow set the total duration before actually completing the transcoding?
[06:09:16 CEST] <mijofa> Looks to me like HLS doesn't even have real support for that, so I'm guessing not
[07:26:22 CEST] <johnnny22> anyone experienced this https://trac.ffmpeg.org/ticket/7126  and resolved it ?
[10:42:16 CEST] <computer2000> how can i resize a video to height 1? i get error "not divisible by 2"
[10:43:11 CEST] <BtbN> I'm not a fan of the gsoc project calling it ffserver again. That's going to cause a mess, it should have a different name.
[10:45:23 CEST] <JEEB> I would recommend voicing yer opinion on it then, if it's such an easily change-able thing
[10:49:36 CEST] <computer2000> how can i resize a video to height 1? i get error "not divisible by 2"
[10:50:00 CEST] <BtbN> You can't. It has to be divisible by at least 2, depending on the codec even 8 or 16
[10:56:49 CEST] <furq> computer2000: you use a pixel format that supports it, presumably yuv444p
[10:56:54 CEST] <furq> bearing in mind that you almost certainly don't want to do this
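[Editor's note: a sketch of furq's suggestion. The "not divisible by 2" error comes from 4:2:0 chroma subsampling; a non-subsampled pixel format such as yuv444p lifts the even-dimension requirement, though encoder and player support varies and a height of 1 is rarely what you actually want.]

```shell
# Scale to height 1 with a non-subsampled pixel format (sketch, untested
# against any particular player; -1 keeps the aspect-ratio-derived width):
ffmpeg -i input.mp4 -vf "scale=-1:1,format=yuv444p" -c:v libx264 output.mp4
```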
[11:06:45 CEST] <Ke> I think the example is so extreme, I am pretty sure there is a special use case
[11:06:56 CEST] <Ke> I won't judge
[11:33:34 CEST] <olspookishmagus> hello, was this created using dot? https://trac.ffmpeg.org/raw-attachment/wiki/AudioChannelManipulation/6mono_5point1.png
[11:34:04 CEST] <olspookishmagus> aka graphviz
[11:45:39 CEST] <BtbN> looks like Dia to me
[11:51:53 CEST] <olspookishmagus> I need to contact that llogan
[12:02:32 CEST] <olspookishmagus> I emailed Mr. Logan - now I wait.
[16:48:07 CEST] <lays147> Hello everyone! I have this command line for a ffmpeg job: https://paste.kde.org/psmoytbf6 However the output videos are with a bitrate of 200Kbps instead of the value that was set on the line. Anyone can point me what could be wrong?
[16:49:55 CEST] <DHE> where is your output file?
[16:51:02 CEST] <lays147> Its appended to that list later on the code
[16:51:10 CEST] <lays147> with audio mapping
[16:53:26 CEST] <lays147> Did more digging here, we have a concat step that follows that command to concat a list of videos, could be the problem be there since no -b:v is set?
[16:53:54 CEST] <DHE> if you append multiple output videos, then yes you will need to repeat your output options for each including codec and bitrates
[16:58:12 CEST] <lays147> DHE: thanks
[16:58:27 CEST] <lays147> I think that's the problem, on our concat we just add the videos and the map for audio
[17:00:59 CEST] <kepstin> depending on the codec and file formats used, you can probably concatenate the videos without re-encoding
[17:03:09 CEST] <DHE> if you want the same parameters in multiple output formats, I suggest looking at the 'tee' muxer
[17:03:27 CEST] <DHE> because otherwise ffmpeg will do the transcoding work for each output which is going to be wasted effort
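[Editor's note: a sketch of DHE's tee-muxer suggestion. The tee muxer runs the encode once and fans the result out to several slaves; the filenames and rates here are placeholders, not from the paste.]

```shell
# Encode once, write two outputs; [f=mpegts] overrides the format of the
# second slave (sketch with placeholder names):
ffmpeg -i input.mp4 -c:v libx264 -b:v 50000k -c:a aac \
       -f tee -map 0:v -map 0:a "local.mp4|[f=mpegts]stream.ts"
```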
[17:07:49 CEST] <furq> are you sure you want -r 29
[17:59:29 CEST] <lays147> furq: ?
[18:02:20 CEST] <BtbN> you're using dev/null as video input
[18:02:33 CEST] <BtbN> That will be trivial to encode with minimal bitrate
[18:02:54 CEST] <lays147> BtbN: the '/dev/zero' part?
[18:02:58 CEST] <BtbN> yes
[18:03:36 CEST] <lays147> so I should remove it right?
[18:03:54 CEST] <BtbN> then you won't have any video at all anymore. What do you actually want to do?
[18:04:58 CEST] <lays147> Ok, I have a list of videos that will pass for the command line that I posted before. After this 'normalization', I will concat them to create a flat file in a step after.
[18:05:15 CEST] <BtbN> Keep in mind that you only ever set a max bitrate
[18:05:19 CEST] <BtbN> If it can do with less, it will
[18:05:58 CEST] <lays147> I think that /dev/zero part was to add silence to audio tracks, because our videos need to be exported with 8 audio tracks, so if we don't have 8 tracks on the original video, the missing tracks are padded with silence
[18:06:10 CEST] <lays147> BtbN: I see
[18:06:41 CEST] <lays147> But we have the max and min set:
[18:06:42 CEST] <lays147>             '-b:v',
[18:06:44 CEST] <lays147>             '50000k',
[18:06:45 CEST] <lays147>             '-maxrate',
[18:06:47 CEST] <lays147>             '50000000',
[18:06:48 CEST] <lays147>             '-bufsize',
[18:06:50 CEST] <lays147>             '3835k',
[18:06:51 CEST] <lays147>             '-minrate',
[18:06:53 CEST] <lays147>             '50000000',
[18:07:22 CEST] <lays147> i think that the problem may be on the concat step, that we only append the list of videos to be concatenated, and we are missing params as DHE told earlier
[18:08:02 CEST] <lays147> concat function: https://paste.kde.org/pa2p4like
[18:12:51 CEST] <DHE> but this still only looks like one output file
[18:14:20 CEST] <lays147> DHE: the first command line is run inside a for loop, once for each file that I have in a json that my application receives. Each run gives me a file that is stored in a workdir, and they are used later on the concat step.
[18:47:12 CEST] <kepstin> lays147: it sounds like you probably want to write your temp files as mpeg-ts or something so you can just use 'cat' to concatenate them.
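[Editor's note: a sketch of kepstin's suggestion, with placeholder filenames. MPEG-TS segments can be joined with a plain cat; the h264_mp4toannexb bitstream filter is needed when the source is H.264 in MP4.]

```shell
# Normalize each input to MPEG-TS, byte-concatenate, then remux (sketch):
ffmpeg -i part1.mp4 -c copy -bsf:v h264_mp4toannexb part1.ts
ffmpeg -i part2.mp4 -c copy -bsf:v h264_mp4toannexb part2.ts
cat part1.ts part2.ts > joined.ts
ffmpeg -i joined.ts -c copy joined.mp4
```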
[18:47:23 CEST] <_Mike_C_> Hey BtbN, I put in print statements to print the values of nb_ready, nb_pending and ctx->async_depth... they are all 0
[18:47:48 CEST] <BtbN> It never took a single frame you gave it then somehow
[18:48:20 CEST] <_Mike_C_> Is it possible its not being created/initialized correctly?  The send frame always returns EAGAIN
[18:48:27 CEST] <_Mike_C_> (after the first few sends)
[18:50:10 CEST] <BtbN> Must be something like that, yes.
[18:50:34 CEST] <BtbN> Does it return 0 any other time?
[18:50:47 CEST] <_Mike_C_> return 0 from which function
[18:50:52 CEST] <BtbN> send_frame
[18:51:33 CEST] <_Mike_C_> it returns 0 the first two times, then returns EAGAIN
[18:57:02 CEST] <lays147> kepstin: something like that, but we are doing on two separate steps because we don't know ffmpeg that much to create one command line to normalize the videos and concat them in one turn.
[19:00:25 CEST] <BtbN> weird
[19:00:40 CEST] <_Mike_C_> BtbN, ok something is definitely going wrong in the encoder creation.  The pointer to the nvencoder object is negative garbage value, and the pointer to the nvenc_dload_funcs is negative garbage as well
[19:02:25 CEST] <BtbN> It's an opaque value from the driver
[19:02:29 CEST] <BtbN> It can be really anything
[19:02:35 CEST] <_Mike_C_> It's so confusing to me because it works in the command line, what more can I do but to request the encoder from avcodec_find_encoder_by_name, avcodec_alloc_context3, and avcodec_open2
[19:02:50 CEST] <_Mike_C_> but its negative, shouldn't it be a pointer to the actual dll functions?
[19:02:55 CEST] <BtbN> no
[19:03:00 CEST] <BtbN> it's an arbitrary handle
[19:04:29 CEST] <_Mike_C_> I suppose I'm a little confused with how the encoder operates then.  I have a little experience with using the actual SDK, and you have to store a function pointer to the functions (similar to what it seems like the variable represents) and then call the encode functions off of that pointer
[19:04:41 CEST] <BtbN> That's done elsewhere.
[19:06:31 CEST] <_Mike_C_> ok, where do you suggest I look next?
[19:09:25 CEST] <lays147> Which is the best method to concat? using with our without filter_complex?
[19:09:59 CEST] <kepstin> best method to concat is the one that performs the minimal amount of processing to get the desired result
[19:10:39 CEST] <kepstin> e.g. if the videos are all homogeneous in codec and settings, you should use a concat method that doesn't require re-encoding
[19:14:34 CEST] <_Mike_C_> ok wait, sorry I was reporting the nb_ready numbers wrong.  It turns out the async_depth is 3, and the nb_ready is 2 (hence the 2 frames it takes before it returns "full") and thats why the function always returns false
[19:15:14 CEST] <_Mike_C_> BtbN 2 ready, 0 pending, 3 depth.  what does the async_depth represent?
[19:16:48 CEST] <BtbN> The delay until it will allow outputing frames, even though the encoder is technically ready. It's a performance optimization
[19:17:21 CEST] <_Mike_C_> So basically my problem is that the encoder is not letting me input enough frames to trigger that optimization
[19:18:25 CEST] <_Mike_C_> So I guess I should search the send frame and see why its not letting me send more than 2 frames
[19:18:58 CEST] <_Mike_C_> since it will only output once its received 3
[19:19:29 CEST] <BtbN> that's highly unlikely, there is logic to prevent that: http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/nvenc.c;h=e180d7b99380fa79f85cad7e5ccd9b9f74f8f7f4;hb=HEAD#l807
[19:21:03 CEST] <_Mike_C_> Well that appears to be what I'm getting.  I can submit two frames to the encoder before it returns EAGAIN, the receive pkt says nb_ready is 2... but async_depth is set to 3
[19:21:22 CEST] <BtbN> What is nb_surfaces set to?
[19:21:29 CEST] <_Mike_C_> nb_pending is 0
[19:22:01 CEST] <_Mike_C_> r
[19:22:03 CEST] <_Mike_C_> 4**
[19:22:12 CEST] <_Mike_C_> nb_surfaces is 4
[19:22:32 CEST] <BtbN> That's how many surfaces you should have available. Weird, so there is no reason it should give you EAGAIN after just two frames
[19:23:14 CEST] <_Mike_C_> Ok, guess I'll dive into the send frames function and see whats going on
[19:23:20 CEST] <_Mike_C_> Thanks so much for all this help
[19:23:56 CEST] <BtbN> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/nvenc.c;h=e180d7b99380fa79f85cad7e5ccd9b9f74f8f7f4;hb=HEAD#l1526 this has to return NULL for send_frame to return EAGAIN
[19:24:06 CEST] <BtbN> that's the only path that leads to it returning EAGAIN
[19:24:47 CEST] <BtbN> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/nvenc.c;h=e180d7b99380fa79f85cad7e5ccd9b9f74f8f7f4;hb=HEAD#l1362 and it puts nb_surfaces amount of surfaces in there
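[Editor's note: the debugging above revolves around the libavcodec send/receive API. A minimal sketch of the standard loop follows, with error handling trimmed; it assumes an already-opened AVCodecContext and is not _Mike_C_'s actual code. The key point is that EAGAIN from avcodec_send_frame() means "drain packets first, then resubmit the same frame".]

```c
#include <libavcodec/avcodec.h>

/* Sketch of the standard encode loop (hypothetical helper, not from the
 * log): on EAGAIN from avcodec_send_frame(), the caller must drain
 * packets with avcodec_receive_packet() and resubmit the same frame. */
static int encode_one(AVCodecContext *enc, AVFrame *frame, AVPacket *pkt)
{
    int ret = avcodec_send_frame(enc, frame);   /* frame == NULL flushes */
    if (ret < 0 && ret != AVERROR(EAGAIN))
        return ret;                             /* real error */

    for (;;) {
        ret = avcodec_receive_packet(enc, pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            break;                              /* needs input / drained */
        if (ret < 0)
            return ret;
        /* ... write pkt to the output here ... */
        av_packet_unref(pkt);
    }
    /* if send_frame returned EAGAIN, loop again with the same frame */
    return 0;
}
```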
[19:38:12 CEST] <Li> I'm searching how to merge subtitle within a video and found this ffmpeg -i infile.mp4 -i infile.srt -c copy -c:s mov_text outfile.mp4
[19:38:28 CEST] <Li> No clue what mov_text is!?
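[Editor's note: mov_text is the MP4-native timed-text subtitle codec (3GPP Timed Text). MP4 cannot carry SRT subtitles as-is, which is why the command Li found transcodes only the subtitle stream while stream-copying audio and video:]

```shell
# -c copy stream-copies audio/video; -c:s mov_text converts the SRT
# subtitle stream to the MP4-compatible timed-text codec:
ffmpeg -i infile.mp4 -i infile.srt -c copy -c:s mov_text outfile.mp4
```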
[19:40:10 CEST] <_Mike_C_> BtbN, so the av_fifo_size(ctx->unused_surface_queue) starts at 32
[19:40:23 CEST] <_Mike_C_> a frame is sent, it goes down to 16
[19:40:30 CEST] <_Mike_C_> then 0 and returns null
[19:41:00 CEST] <_Mike_C_> Oops, sorry I missed some print outs
[19:42:39 CEST] <_Mike_C_> Ok I see some weird stuff happening, I'm gonna look at it for a bit
[19:45:07 CEST] <Li> no fucking help
[21:00:45 CEST] <juny> hi
[21:02:45 CEST] <juny> I am trying to learn how to use ffmpeg to combine mp3 files in AWS Lambda. coming from pydub repo. Pydub does provide a good API but if I just want to combine mp3, should I just use ffmpeg directly?
[21:02:46 CEST] <ChocolateArmpits> hello
[21:03:19 CEST] <ChocolateArmpits> juny, for concatenation read this https://trac.ffmpeg.org/wiki/Concatenate
[21:17:17 CEST] <juny> thanks. after skimming through the doc, I think using the pydub library would make things a lot easier for me.
[21:19:26 CEST] <juny> in that case, to use ffmpeg binary in aws lambda, I would need to package ffmpeg together with my code, how can I build binary code of ffmpeg? I haven't done it before. Is it retrievable from my local machine.  I ran ls -l /usr/local/bin | grep ffmpeg and could find its local path.
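[Editor's note: for the simple case juny describes, a sketch using the concat demuxer from the wiki page linked above. Assuming the mp3 files share the same codec parameters, this joins them without re-encoding, which keeps a Lambda run cheap; filenames are placeholders.]

```shell
# List the inputs in a text file, then stream-copy them into one mp3:
cat > list.txt <<'EOF'
file 'part1.mp3'
file 'part2.mp3'
EOF
ffmpeg -f concat -safe 0 -i list.txt -c copy combined.mp3
```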
[21:24:13 CEST] <_Mike_C_> BtbN, I'm back again.  I smoothed out the error that was causing the encoder to never give me a packet back.  Now my issue is that the pkt it gives me has nothing assigned to it.  I've gone into the function and I can only assume that the memcpy function at 1844 in nvenc.c isn't copying data.
[21:25:09 CEST] <_Mike_C_> I printed out values, both the bitstreamBufferPtr is valid and the pkt->data pointer is valid.  bitstreamSizeInBytes reads 31529 bytes long, but it doesn't look like anything resides inside pkt->data before and after the call to memcpy
[22:26:20 CEST] <xn0r> Hey, can someone point me at the filters needed to convert sbs into colorcode 3d? For amber/blue anaglyph 3d glasses.
[22:30:29 CEST] <c_14> stereo3d filter probably
[22:31:40 CEST] <c_14> stereo3d=in=sbsl:out=arbg probably but you'd have to check the various formats for the ones you want
[22:35:26 CEST] <durandal_1707> stereo3d=sbsl:aybc
[22:39:04 CEST] <durandal_1707> or stereo3d=sbsl:aybd
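[Editor's note: durandal_1707's filter strings assembled into a full command, with placeholder filenames. sbsl = side-by-side, left eye first; aybd/aybc are the amber/blue (yellow/blue) anaglyph outputs, Dubois and color variants.]

```shell
# Side-by-side 3D to amber/blue anaglyph (sketch; try aybc if aybd's
# Dubois matrix looks wrong through your glasses):
ffmpeg -i sbs_input.mp4 -vf stereo3d=sbsl:aybd output.mp4
```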
[23:24:39 CEST] <xn0r> thanks, will try
[23:33:54 CEST] <_Mike_C_> Have a good weekend
[00:00:00 CEST] --- Sat Sep  1 2018


