[Ffmpeg-devel-irc] ffmpeg.log.20180301

burek burek021 at gmail.com
Fri Mar 2 03:05:01 EET 2018


[00:12:20 CET] <GamleGaz> does anyone have a guess what is up with the audio in this?
[00:12:21 CET] <GamleGaz> https://drive.google.com/file/d/1cLYke_4CELRUkU3JO58iS4eWyxJ_rxq0/view
[00:25:06 CET] <alexpigment> GamleGaz: the track inside the file is like that
[00:25:32 CET] <alexpigment> GamleGaz: it got sped up during some process before it got in the file
[00:41:39 CET] <GamleGaz> yep! a bad malloc, thanks
[00:44:13 CET] <lyncher> I'm encoding AAC audio to mux into a mpegts
[00:44:32 CET] <lyncher> but I'm getting the mux error: AAC bitstream not in ADTS format and extradata missing
[00:45:12 CET] <lyncher> how can I insert adtsenc.c features in my libavcodec workflow?
[01:30:24 CET] <cluelessperson> I'm trying to screenshot my desktop 1 time every second in ffmpeg
[01:30:34 CET] <cluelessperson> does anyone know the command?
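    (One way to do this, assuming a Linux desktop running X11 - display name and output pattern are placeholders; Windows would use gdigrab, macOS avfoundation:

        ffmpeg -f x11grab -framerate 1 -i :0.0 screenshot_%04d.png

    This grabs one frame per second and writes numbered PNGs until interrupted.)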
[01:41:44 CET] <lyncher> in libavcodec how can I change aac audio to adts?
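    (A minimal sketch of the usual fix for that muxer error, assuming the AAC comes from your own encoder context - oc, enc_ctx, codec and st are illustrative names. The mpegts muxer can wrap raw AAC in ADTS itself, but only if the stream parameters carry the extradata the encoder produces when asked for global headers:

        AVStream *st = avformat_new_stream(oc, NULL);
        if (oc->oformat->flags & AVFMT_GLOBALHEADER)
            enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER; /* make the encoder emit extradata */
        /* ... avcodec_open2(enc_ctx, codec, NULL) ... */
        avcodec_parameters_from_context(st->codecpar, enc_ctx); /* copies extradata to the stream */)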
[02:49:21 CET] <kota1> c_14: Thanks man, that's perfect
[04:13:02 CET] <sim590> I want to start multiple ffmpeg instances at once. How can I do that?
[04:16:02 CET] <sim590> I think that my problem is that I didn't tell </dev/null to ffmpeg in my script and tried to pass </dev/null from outside.
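    (A shell sketch of the idea - file names are placeholders. Either redirect </dev/null per instance as described, or pass -nostdin so backgrounded instances don't fight over the terminal:

        ffmpeg -nostdin -i in1.mp4 out1.mp4 &
        ffmpeg -nostdin -i in2.mp4 out2.mp4 &
        wait)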
[08:15:11 CET] <hendry> is there hardware acceleration support for Intel kaby lake in ffmpeg? i.e. speed up the h264 process or is it not worth bothering with?
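    (Kaby Lake's encoder is reachable through VAAPI or QSV; a minimal VAAPI transcode sketch, assuming the usual render node path:

        ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
               -vf 'format=nv12,hwupload' -c:v h264_vaapi output.mp4)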
[10:14:58 CET] <pagios> any recommended open-source digital signage software project supporting Chromecast as a client for streaming content to TVs?
[10:16:20 CET] <dragmore88> hi! Anyone know if ffmpeg can parse the AC3 track in a TS file and check that it's OK with regard to CRC and framesync?
[10:22:04 CET] <rav> Hi, I am muxing a video-only WebM file (VP8-encoded). When I play it, the first few seconds play well, but after that it slowly pixelates and finally turns into a completely noisy and blurry video. What can be the reason for this?
[10:27:29 CET] <furq> dragmore88: -err_detect crccheck -i foo.ts -map 0:a -f null -
[10:27:47 CET] <furq> no idea about framesync though
[10:31:34 CET] <Chuck_> Hello, I have a question regarding bilinear scaling that I posted to Stack Overflow. Can you please check it out, since it is a lot to write here? Thanks.   https://stackoverflow.com/questions/49045788/does-ffmpeg-apply-a-blur-filter-after-scaling
[10:31:48 CET] <dragmore88> furq: I'm a bit clueless here, but our Harmonic packager has added some crap to the EAC3 track that some of our 12000 assets contain... trying to parse them to find which ones..
[10:33:34 CET] <furq> Chuck_: are you setting -sws_flags bicubic
[10:33:38 CET] <furq> or bilinear, rather
[10:33:40 CET] <furq> the default is bicubic
[10:34:17 CET] <Chuck_> Yes. I am not using the ffmpeg tool but the libraries, though. I am using the SWS_BILINEAR flag
[10:34:36 CET] <Chuck_> What is the difference between SWS_BILINEAR and SWS_FAST_BILINEAR?
[10:39:40 CET] <Chuck_> furq:
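    (For reference, both are flags passed when the scaler context is created; SWS_FAST_BILINEAR is a faster, lower-precision variant of SWS_BILINEAR. A minimal sketch - dimensions and pixel formats are placeholders:

        struct SwsContext *sws = sws_getContext(src_w, src_h, AV_PIX_FMT_YUV420P,
                                                dst_w, dst_h, AV_PIX_FMT_YUV420P,
                                                SWS_BILINEAR, NULL, NULL, NULL);)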
[11:07:49 CET] <Chuck_> Hello, I have a question regarding bilinear scaling that I posted to Stack Overflow. Can you please check it out, since it is a lot to write here? Thanks.   https://stackoverflow.com/questions/49045788/does-ffmpeg-apply-a-blur-filter-after-scaling
[11:14:48 CET] <manishv> I am getting an error while compiling the source: after I run ./configure in the source directory I get "nasm/yasm not found or too old. Use --disable-x86asm for a crippled build."
[11:18:06 CET] <Chuck_> manishv: Do you want to compile it with vs?
[11:18:37 CET] <manishv> vs?
[11:19:28 CET] <Chuck_> Visual Studio
[11:20:06 CET] <manishv> I want to compile it using the Ubuntu terminal.
[11:22:44 CET] <Chuck_> I have never tried it in Ubuntu (I'm here just to try to get some answers for a bug I have...)
[11:22:49 CET] <Chuck_> Did you install yasm?
[11:22:51 CET] <Chuck_> apt-get install yasm
[11:27:25 CET] <manishv> thanks that worked
[14:09:01 CET] <nneff> when using avcodec_send_frame(...) to produce a MP4/H.264 video, do I need to (1) feed the encoder a frame for every tick of the framerate, or (2) is it acceptable to skip some ticks if there was no change in the frame?
[14:09:35 CET] <nneff> I get mixed results with (2): sometimes it works (video ok), sometimes it doesn't (static video).
[14:11:23 CET] <DHE> nneff: you can send any PTS values you want. I would actually encourage only sending discrete frames as long as the container supports variable FPS
[14:12:02 CET] <DHE> motion compensation codecs like h264 tend to be negatively impacted if you send duplicate frames
[14:16:17 CET] <intrac> I have a set of still images that I want to turn into a simple slideshow
[14:16:33 CET] <intrac> I'd like ffmpeg to repeat each input frame 250 times (@25p = 10 seconds per image)
[14:17:04 CET] <intrac> what is the best way to do this?
[14:18:00 CET] <intrac> I tried forcing the input fps to 0.004 and output to 25, but ffmpeg fails with: Too many packets buffered for output stream 0:0.
[14:19:24 CET] <nneff> DHE: "you can send any PTS values you want": that's what I understood. Yet with framerate=25 gop_size=12 but only one new frame each second, the resulting video is static.
[14:24:17 CET] <DHE> if the stream's time_base is 1/25 then you should be incrementing the pts by 25 on every frame
[14:24:35 CET] <DHE> the codec itself is largely framerate-agnostic. especially if you're not running in bitrate mode
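    (A sketch of what DHE describes, assuming an encoder time_base of 1/25, one genuinely new frame per second, and a VFR-capable container such as MP4 - enc_ctx, st, oc and pkt are illustrative names:

        frame->pts = next_pts;      /* expressed in 1/25 units */
        next_pts += 25;             /* 25 ticks = one second until the next frame */
        avcodec_send_frame(enc_ctx, frame);
        while (avcodec_receive_packet(enc_ctx, pkt) == 0) {
            av_packet_rescale_ts(pkt, enc_ctx->time_base, st->time_base);
            pkt->stream_index = st->index;
            av_interleaved_write_frame(oc, pkt);
        })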
[14:35:34 CET] <furq> intrac: the input framerate should be 0.1
[14:35:50 CET] <furq> input and output framerate are independent of each other
[14:41:05 CET] <intrac> furq: ah, right. that makes sense. but unfortunately I still get a "Conversion failed!" with the same error
[14:41:10 CET] <intrac> ffmpeg -r 0.1 -f image2 -pattern_type glob -i '2160p*.png' -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -pix_fmt yuv420p -crf 18 -r 25 -c:v libx264 -c:a libmp3lame output.mp4
[14:41:37 CET] <intrac> but if I remove the second "-f lavfi" input, it encodes ok
[14:52:15 CET] <relaxed> intrac: pretty sure you want -framerate 0.1
[14:52:40 CET] <relaxed> ffmpeg -h demuxer=image2
[14:57:24 CET] <Chuck_> Hello, can someone please explain to me how bilinear scaling works? The results seem somewhat blurred instead of being the result of only the interpolation operation. (See my question on Stack Overflow with pictures:   https://stackoverflow.com/questions/49045788/does-ffmpeg-apply-a-blur-filter-after-scaling)
[15:01:01 CET] <jkqxz> Chuck_:  Looks like imagemagick has scaled to an 11x11 image and then added a duplicate row and column at the bottom/left, while ffmpeg has made a 12x12 image directly.  What's unexpected there?
[15:06:42 CET] <intrac> relaxed: still the same error. still caused by the generated anullsrc audio
[15:06:50 CET] <intrac> works ok without the nullsrc
[15:10:34 CET] <relaxed> intrac: pastebin the command and output
[15:11:42 CET] <relaxed> also try with just "-f lavfi -i anullsrc"
[15:13:23 CET] <furq> intrac: try increasing -max_muxing_queue_size
[15:18:53 CET] <intrac> setting -max_muxing_queue_size to 200 does the job :)
[15:19:07 CET] <intrac> thanks
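    (Putting the thread together, the working form is roughly the following; -shortest is added here so the infinite anullsrc input stops with the video, other options as in the original command:

        ffmpeg -framerate 0.1 -f image2 -pattern_type glob -i '2160p*.png' \
               -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
               -max_muxing_queue_size 200 -shortest \
               -c:v libx264 -pix_fmt yuv420p -crf 18 -r 25 -c:a libmp3lame output.mp4)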
[16:42:09 CET] <King_DuckZ> hey, I've proposed to my company that we release my current project under the GPL, and they're currently deciding between that and the Apache license - now I'm writing a C++ wrapper around the ffmpeg libraries and I'm wondering if that could give me any leverage in pushing for GPLv3 for our project
[16:43:40 CET] <King_DuckZ> the library is lgpl, so there's no legal obligation on us, is that correct?
[16:43:47 CET] <King_DuckZ> even for a wrapper?
[16:43:59 CET] <atomnuker> well, ffmpeg is LGPL, which is more restrictive than Apache, but it can be configured as GPL (which is required if you want to link to x264)
[16:44:42 CET] <atomnuker> the legal obligation is to provide source code for any modifications and make it user replaceable
[16:47:21 CET] <King_DuckZ> atomnuker: I was also confused, but then it turns out gpl is more liberal, not more restrictive https://mondiaspora.net/posts/61dd3c40f3a50135593b5ffd45e91fad as in, it maximises freedom across users
[16:48:15 CET] <atomnuker> yeah, its more free for users but less so for companies
[16:48:25 CET] <atomnuker> *and
[16:49:15 CET] <King_DuckZ> either way, the wrapper won't modify any of the ffmpeg code, but I guess the "we can enable x264" could give me some advantage there
[16:51:13 CET] <DHE> sounds like releasing as GPL to allow mixing ffmpeg and x264 with your app is the way to go
[16:52:08 CET] <King_DuckZ> hmm x265 too, from what I see
[16:54:52 CET] <King_DuckZ> hmmmmmm very interesting, we already build ffmpeg with --enable-gpl and --enable-libx264, it's just that this tool has never been distributed outside of the company
[16:56:35 CET] <King_DuckZ> but that's an excellent thing, it means the moment we do, it's a non-choice :) although it can be compiled without ffmpeg, so maybe.... uhhh this is why I never became a lawyer :s
[18:05:24 CET] <gh0st3d> Hey everyone... I'm using these commands (https://pastebin.com/xv1VMuJu) to merge a provided video with a generated video... Is there a way to add an audio track to the full merged file without having to re-transcode the file?
[18:06:26 CET] <gh0st3d> The current merge takes about 6s and I'd love to be able to accomplish having the audio from the first file continue into the second video without increasing the time of the merge by much. My gut tells me it's not possible
[18:23:54 CET] <alexpigment> gh0st3d: yeah, you probably want to do an additional ffmpeg command with two inputs (the newly merged video) and the source audio (or video that contains it)
[18:24:22 CET] <alexpigment> then use the -map command to map the source streams to the final output
[18:25:04 CET] <gh0st3d> So to make sure I understand, essentially merge the videos as silent videos, then do a merge with the audio file?    And that secondary merge would be relatively quick?
[18:25:43 CET] <alexpigment> the use of merge is a bit unclear to me
[18:25:51 CET] <alexpigment> the term merge, i mean
[18:26:09 CET] <alexpigment> where is the audio coming from that you want to use?
[18:26:16 CET] <alexpigment> a video file or a standalone audio track?
[18:27:00 CET] <gh0st3d> Ah, sorry. So combine the two silent videos into one (which takes 4-6 seconds and results in a 1-minute video) and then combine that output video with the audio track... How long would you expect the combining with the audio track to take?
[18:27:14 CET] <alexpigment> almost no time
[18:27:25 CET] <gh0st3d> Perfect, I'll give that a try. Thank you!
[18:28:06 CET] <alexpigment> ffmpeg -i [video input] [audio input] -map 0:0 -map 1:0 -c copy output.final
[18:28:08 CET] <alexpigment> something like that
[18:32:52 CET] <alexpigment> forgot my second -i before the audio input above, but hopefully you get the idea ;)
[18:33:02 CET] <alexpigment> i suppose you could use a pipe in between instead
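    (With the missing -i restored, the remux looks like this - file names are placeholders; since both streams are stream-copied, it completes almost instantly:

        ffmpeg -i merged_video.mp4 -i audio_source.mp4 \
               -map 0:v:0 -map 1:a:0 -c copy -shortest output.mp4)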
[19:41:53 CET] <King_DuckZ> I'm looking at the muxing.c example and at the line with add_stream(&video_st, oc, &video_codec, fmt->video_codec); I suppose video_st and video_codec are some sort of return value, right?
[19:42:42 CET] <King_DuckZ> if so, is there a reason why video_codec is not a member of OutputStream?
[20:07:20 CET] <King_DuckZ> I don't understand that code - why does open_video get an AVFormatContext if it doesn't need one?
[20:12:49 CET] <Chuck_> Does ffmpeg bilinear scale use some kind of approach that is edge adaptive?
[20:21:29 CET] <jkqxz> Chuck_:  It's just bilinear.  How would anything to do with edges make sense?
[20:25:21 CET] <King_DuckZ> why does close_stream() also take the AVFormatContext and not use it? and why does close_stream() close everything but the stream itself? doesn't st need to be cleaned up too?
[20:54:43 CET] <Johnjay> ffmpeg is able to tell me the peak dB level in LPCM format
[20:54:53 CET] <Johnjay> but I don't see the control to do that in audacity
[20:55:07 CET] <Johnjay> I'm trying to measure the peak audio at one part of the file
[21:01:18 CET] <Johnjay> weird, there's an option in audacity to analyze the spectrum and it's about -42 dB, but in ffmpeg it's saying mean vol is -43, max is -14.8
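    (Those ffmpeg numbers come from the volumedetect filter, which prints mean_volume and max_volume to stderr; to measure just one part of the file, trim the input first - times are placeholders:

        ffmpeg -ss 00:01:00 -t 10 -i input.wav -af volumedetect -f null -)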
[21:25:50 CET] <Nik_gro> Cheers everyone. Is there anyone here with semi-pro knowledge of ffmpeg who could possibly help me figure out why I can't get it to run in Home Assistant?
[21:33:13 CET] <kiroma> How do I compile ffmpeg with Cuda9?
[21:33:59 CET] <kiroma> I've got the entire SDK installed but when I do --enable-cuda configure says `cuda requested but not found`
[21:36:16 CET] <c_14> check config.log
[21:38:36 CET] <kiroma> `/tmp/ffconf.grmrF8wy/test.c:1:10: fatal error: windows.h: No such file or directory`
[21:39:33 CET] <c_14> that the last error?
[21:40:09 CET] <kiroma> Yes, there are only two errors, both of them are missing windows.h
[21:40:24 CET] <kiroma> And I'm on Linux.
[21:41:22 CET] <kiroma> Oh no wait mistake, there are more errors.
[21:43:23 CET] <kiroma> Most of them are from lib detection though I presume.
[21:43:43 CET] <c_14> there should be one at the (almost) very end that's preceded by a check_cuda string or so
[21:45:32 CET] <kiroma> Can't find one.
[21:50:53 CET] <zerodefect> I have use cases where I'd like to do things like combining/merging two mono tracks from a clip into stereo. Has anyone here used the ff_audio_mix_* family of C functions? I can't seem to find any working examples. The 'AVAudioResampleContext' struct is quite hairy :S Am I even looking at the best/correct functions? I'm reluctant to use libavfilter
[21:51:27 CET] <JEEB> at least libavfilter utilizes libswresample behind the scenes
[21:51:35 CET] <JEEB> so  you don't have to wonder about that
[21:51:43 CET] <JEEB> of course lavfi is its own special cupcake
[21:51:53 CET] <JEEB> but it has worked for me in my limited use cases
[21:52:28 CET] <JEEB> (and it lets you use AVFrames)
[21:52:41 CET] <JEEB> which are used in lavc
[21:53:04 CET] <durandal_1707> wow
[21:53:42 CET] <zerodefect> Thanks @JEEB. Admittedly, the problem I find with most (not all) filters is that the inputs/outputs are not dynamic.
[21:54:01 CET] <JEEB> you can just flush and re-create a filter chain then
[21:54:17 CET] <JEEB> although I'm pretty sure lavfi is OK with the input changing
[21:54:28 CET] <JEEB> like, I don't get a failure if I need to do input->stereo
[21:54:34 CET] <JEEB> and I first get stereo
[21:54:40 CET] <JEEB> and then later audio track switches to 5.1
[21:54:45 CET] <JEEB> that gets handled a'OK
[21:55:23 CET] <JEEB> and I would be pretty sure that you'd have to recreate your stuff with swresample as well
[21:55:32 CET] <JEEB> or well, re-init
[21:56:36 CET] <zerodefect> Do you have a particular filter that you tend to use?
[21:56:58 CET] <zerodefect> amerge?
[21:57:46 CET] <JEEB> I used to utilize that with MXF inputs I think :P
[21:58:01 CET] <JEEB> where you get umpteen PCM tracks
[21:58:05 CET] <JEEB> which are all mono
[21:58:12 CET] <zerodefect> Yes, that is the use case I have :)
[21:58:29 CET] <JEEB> my condolences
[21:58:34 CET] <zerodefect> Would like to combine mono into pairs
[21:58:44 CET] <zerodefect> :) Cheers
[21:59:07 CET] <durandal_1707> amerge or join
[22:00:01 CET] <zerodefect> Ah, hadn't seen join.
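    (For the MXF case above, a sketch of the CLI equivalent with join - file names and the choice of output container are placeholders:

        ffmpeg -i input.mxf \
               -filter_complex '[0:a:0][0:a:1]join=inputs=2:channel_layout=stereo[a]' \
               -map 0:v -map '[a]' -c:v copy output.mkv)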
[22:03:05 CET] <zerodefect> My gripe with the avfilters stems from trying to use the overlay filter. I couldn't seem to dynamically change the x,y position. Not a bug or fault per se, but more a limitation. I should probably be more open to incorporating them into my code.
[22:09:38 CET] <JEEB> I thought there was some way of changing those parameters, but you might need to implement something in the filter if it doesn't support everything you need dynamically
[22:09:49 CET] <JEEB> or you just flush and re-create :D
[22:38:27 CET] <ChocolateArmpits> So the rtsp timeout doesn't really work if there's an additional input that's not faulting. If the rtsp input gets disrupted, ffmpeg continues anyway, with any further reconnection impossible
[22:39:30 CET] <ChocolateArmpits> A strange workaround was to place -stream_loop with any value as an input option for the rtsp input. The input then somehow faults, if filters are used, for some reason, when no connection can be established
[23:14:10 CET] <shtomik> Hi to all, guys, how to use this functions "int avdevice_list_devices(struct AVFormatContext *s, AVDeviceInfoList **device_list);" ? https://pastebin.com/Fsd9xGLq What am I doing wrong? Thanks!
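    (A minimal sketch of the related convenience helper, assuming a Linux v4l2 device demuxer; note that many device demuxers don't implement enumeration at all and return AVERROR(ENOSYS), which is a common reason these calls appear to fail:

        #include <stdio.h>
        #include <libavdevice/avdevice.h>

        avdevice_register_all();
        AVInputFormat *fmt = av_find_input_format("v4l2"); /* assumed device demuxer */
        AVDeviceInfoList *list = NULL;
        if (avdevice_list_input_sources(fmt, NULL, NULL, &list) >= 0) {
            for (int i = 0; i < list->nb_devices; i++)
                printf("%s: %s\n", list->devices[i]->device_name,
                                   list->devices[i]->device_description);
            avdevice_free_list_devices(&list);
        })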
[23:20:58 CET] <utack> is there any place I can put the "-ss" seek to avoid ffmpeg applying the entire filter chain to the unused parts? right now I use -i whatever -ss time -vf something, and it seems to apply the entire filter chain to the discarded input, slowing it down
[23:21:38 CET] <sfan5> ffmpeg no longer does that since several versions
[23:21:42 CET] <sfan5> IIRC
[23:21:54 CET] <ChocolateArmpits> utack, place it before the input
[23:21:57 CET] <sfan5> you can try -ss <seek point> -i ... -ss 0
[23:22:01 CET] <utack> ah ok, before
[23:22:06 CET] <utack> makes sense, thanks
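    (That is, as an input option, where it seeks before decoding instead of decoding and discarding - times and filter are placeholders:

        ffmpeg -ss 00:02:00 -i input.mp4 -vf something output.mp4)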
[23:39:22 CET] <kiroma> Okay I've downloaded CUDA 8 dev files and --enable-cuda keeps failing.
[23:50:17 CET] <BtbN> kiroma, the stuff that needs the full SDK is behind enable-cuda-sdk
[23:54:18 CET] <kiroma> Oh so what do I need to just compile nvenc?
[23:54:50 CET] <BtbN> http://git.videolan.org/?p=ffmpeg/nv-codec-headers.git
[23:58:36 CET] <kiroma> Why was it split?
[23:58:48 CET] <BtbN> To not have a bunch of nvidia headers in ffmpeg.
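    (A sketch of the steps; the clone URL is an assumption based on the gitweb link above, and on trees of this vintage nvenc needs --enable-nvenc, while newer ones pick the headers up automatically:

        git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
        cd nv-codec-headers && sudo make install && cd ..
        ./configure --enable-nvenc && make)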
[00:00:00 CET] --- Fri Mar  2 2018


More information about the Ffmpeg-devel-irc mailing list