[Ffmpeg-devel-irc] ffmpeg.log.20170902

burek burek021 at gmail.com
Sun Sep 3 03:05:01 EEST 2017

[00:40:45 CEST] <diverdude> hello. Can i somehow use ffmpeg to simulate a video stream with a certain resolution and a certain framerate, streamed via ethernet, so that i can use it to develop a player that plays data from this stream?
[00:41:44 CEST] <c_14> "via ethernet"
[00:42:03 CEST] <c_14> like what, raw video slammed into an ethernet frame?
[00:46:57 CEST] <c_14> the answer is probably yes btw as long as your answer to my question isn't yes
[00:47:41 CEST] <diverdude> sorry didnt see your response
[00:48:12 CEST] <diverdude> so yes, as a raw stream
[00:48:46 CEST] <diverdude> c_14: can you maybe show me how i can simulate this?
[00:49:23 CEST] <c_14> Do you mind if there's tcp/udp in the way?
[00:49:57 CEST] <c_14> Even udplite would work
[00:50:13 CEST] <c_14> But if you want raw tcp you'll need something to generate the TCP frame and segment the video data into that yourself
[00:50:23 CEST] <c_14> s/raw tcp/raw ethernet/
[00:50:31 CEST] <c_14> s/TCP/Ethernet/
[00:50:35 CEST] <diverdude> hmmm ok good point
[00:50:37 CEST] <c_14> I'm too tired for this
[00:51:02 CEST] <c_14> What problem are you trying to solve with this?
[00:51:16 CEST] <diverdude> i really wanted to mimic a camera. moment, will look at specs
[00:51:18 CEST] <klaxa> test-source for a network player?
[00:51:41 CEST] <c_14> most likely the camera is outputting via tcp/udp/rtp
[00:51:45 CEST] <c_14> probably even http
[00:52:02 CEST] <c_14> If you're _reeeally_ lucky, it's flash
[00:52:04 CEST] <c_14> <_<
[00:53:05 CEST] <klaxa> i would guess something like this should simulate it well enough? ffmpeg -f lavfi -r 30 -i testsrc -s 1280x720 -listen 1 -f matroska tcp://0:8080
[00:53:21 CEST] <diverdude> ahh sorry, actually its using USB3... ok
[00:53:26 CEST] <klaxa> maybe don't use matroska but some other container that the camera uses
[00:53:40 CEST] <klaxa> sounds like a case for v4l2 :x
[00:53:44 CEST] <c_14> yeah
[00:54:01 CEST] <diverdude> i wanted to try and mimic this camera: https://www.ptgrey.com/grasshopper3-123-mp-mono-usb3-vision-sony-pregius-imx253
[00:54:08 CEST] <c_14> If the camera is detected by v4l2 you can fake a v4l2 device using ffmpeg by using v4l2loop or whatever it was called
[00:54:24 CEST] <klaxa> >Seite nicht gefunden ("page not found")
[00:54:25 CEST] <klaxa> wot
[00:54:50 CEST] <diverdude> ahh...its the geo stuff. Its called grasshopper3 from ptgrey.com
[00:55:27 CEST] <klaxa> changing the country to the us of a worked
[00:55:31 CEST] <klaxa> wew what a crappy website
[00:55:42 CEST] <diverdude> yeah i agree..totally crappy website
[00:56:00 CEST] <diverdude> but the cameras are at least quite good :)
[00:58:12 CEST] <c_14> Oh god
[00:58:27 CEST] <c_14> FlyCapture
[00:58:34 CEST] <diverdude> ok, so i have to use a v4l2loop? Is this just a command i run?
[00:58:47 CEST] <c_14> I have memories of this ****
[00:58:50 CEST] <diverdude> c_14: you dont like FlyCapture?
[00:58:54 CEST] <c_14> afaik it won't be detected by v4l2
[00:59:07 CEST] <c_14> I don't like it because it isn't compatible with system software
[00:59:13 CEST] <c_14> It does everything on its own
[00:59:49 CEST] <c_14> I mean, you can try checking if your pc detects it with v4l2 but it probably won't
[01:00:17 CEST] <c_14> If it doesn't there's no real way to emulate the camera using FFmpeg
[01:00:22 CEST] <c_14> not without hacking your way through FlyCap anyway
[01:00:56 CEST] <diverdude> c_14: ah right....thats a good point
[01:01:34 CEST] <klaxa> is v4l2 available on windows even? did we ever clear up what OS you are on even?
[01:01:46 CEST] <c_14> Well
[01:01:49 CEST] <c_14> if you're on windows
[01:01:50 CEST] <diverdude> ok...but i think i am not really interested in emulating it that accurately....i just want to emulate the stream so i can see if i can play back and process the incoming data fast enough
[01:01:51 CEST] <c_14> Give up now?
[01:02:19 CEST] <klaxa> save up to 2 years of time loss \o/
[01:02:37 CEST] <c_14> I mean, if that's what you care about
[01:02:43 CEST] <c_14> Just create a dump of the stream into a file
[01:02:48 CEST] <c_14> and use that to test the decoding speed
[01:03:09 CEST] <c_14> That way you don't have to worry about camera access until you know you're fast enough
[01:05:06 CEST] <diverdude> i can see how what i am saying is making less and less sense....probably because i have not formulated myself very well.... i am trying to build a program that can handle a specific framerate and a specific resolution. The program grabs the "generic" videostream which for example could come from that ptgrey camera - it takes the stream and displays it on a canvas where user can press start, stop and draws on top of the canvas while
[01:05:33 CEST] <diverdude> c_14: true...i dont have any camera yet though...thats why i want to simulate :)
[01:06:19 CEST] <c_14> If you know the codec and resolution, ffmpeg can generate test video for you to work with
[01:07:34 CEST] <diverdude> c_14: right ok....not sure about the codec actually.  i think the video is just raw frames
[01:07:49 CEST] <diverdude> and then i need to apply some codec to these raw frames
[01:07:50 CEST] <c_14> ffmpeg can do rawvideo too
[01:08:32 CEST] <diverdude> i want to do 4096 x 3000 @ 30FPS raw grayscale video
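(Editor's note: a back-of-envelope data-rate check for that format, as a shell sketch; the 2-bytes-per-pixel figure assumes a 16-bit gray format, use 1 for 8-bit gray.)

```shell
# Back-of-envelope data rate for 4096x3000 raw grayscale at 30 fps.
w=4096; h=3000; fps=30
bpp=2                                   # bytes per pixel: 1 for 8-bit gray, 2 for gray16
bytes_per_sec=$(( w * h * bpp * fps ))  # 737280000 B/s at 16 bits
mib_per_sec=$(( bytes_per_sec / 1048576 ))
echo "${bytes_per_sec} B/s (~${mib_per_sec} MiB/s)"
```

Even at 8 bits per pixel this is roughly 351 MiB/s, which is why a raw stream at this size is demanding for both the bus and the disk.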
[01:09:50 CEST] <klaxa> ffmpeg -f lavfi -r 30 -i testsrc -pix_fmt gray -c rawvideo -s 4096x3000 test.mkv
[01:09:58 CEST] <c_14> ffmpeg -f lavfi -i testsrc:s=4096x3000:r=30:d=60 -pix_fmt gray10le out.mkv
[01:10:07 CEST] <klaxa> that might be better
[01:10:16 CEST] <c_14> mostly ninjad so I didn't finish the end part
[01:10:34 CEST] <klaxa> d=60 is duration 60 seconds?
[01:10:39 CEST] <c_14> I picked 10bit gray, there's also 12bit and 9bit
[01:10:50 CEST] <klaxa> 8 as well
[01:10:51 CEST] <c_14> wtf is 9bit video
[01:11:09 CEST] <c_14> Oh, there is also an 8bpp version
[01:11:10 CEST] <klaxa> 3 bit per color :P
[01:11:22 CEST] <klaxa> in grayscale... wait a second...
[01:11:24 CEST] <c_14> Nah, it's 9bpp
[01:11:48 CEST] <klaxa> because you're such a good customer we throw in a bit for free?
[01:11:53 CEST] <diverdude> c_14: good point...i see ADC of this camera is 10 and 12 bit...is it normal to have a 10bit resolution of a monochrome video?
[01:12:06 CEST] <c_14> Well, it's not monochrome
[01:12:07 CEST] <c_14> It's grayscale
[01:12:14 CEST] <klaxa> monochrome would be 1 bit
[01:12:15 CEST] <diverdude> sorry, true
[01:12:18 CEST] <diverdude> yes
[01:12:27 CEST] <c_14> I mean
[01:12:33 CEST] <c_14> There's a lot of Shades of Gray
[01:12:35 CEST] <c_14> Maybe 50
[01:12:37 CEST] <c_14> Maybe more
[01:12:48 CEST] <c_14> (forgive the pun)
[01:12:50 CEST] <klaxa> 8 bit grayscale is really not that much
[01:12:57 CEST] <diverdude> c_14: hehehehe
[01:12:58 CEST] <diverdude> :D
[01:13:35 CEST] <diverdude> hmm ffmpeg -f lavfi -i testsrc:s=4096x3000:r=30:d=60 -pix_fmt gray10le out.mkv   --->[lavfi @ 0x7f8793000000] No such filter: 'testsrc:s'
[01:13:51 CEST] <c_14> eh, testsrc=
[01:13:57 CEST] <c_14> the first one is always an equals
[01:14:01 CEST] <c_14> I keep messing that up
[01:14:45 CEST] <diverdude> Unknown pixel format requested: gray10le.
[01:15:13 CEST] <klaxa> huh, that's weird
[01:15:15 CEST] <durandal_1707> only nut supports all rawvideo formats
[01:17:46 CEST] <diverdude> i have ffmpeg version 3.2.4 Copyright (c) 2000-2017 the FFmpeg developers
[01:18:38 CEST] <c_14> try out.nut instead of out.mkv
[01:19:15 CEST] <diverdude> im still getting Unknown pixel format requested: gray10le.
[01:19:30 CEST] <durandal_1707> what player he wants to use?
[01:19:36 CEST] <klaxa> his own
[01:20:06 CEST] <c_14> diverdude: Is it listed in `ffmpeg -pix_fmts' as IO?
[01:22:30 CEST] <diverdude> hmm
[01:22:47 CEST] <diverdude> the following gray formats are listed: "IO... gray 1 8", "IO... gray16be 1 16", "IO... gray16le 1 16"
[01:23:10 CEST] <c_14> And none of the other ones?
[01:24:07 CEST] <diverdude> no
[01:24:14 CEST] <diverdude> i did a grep on gray
[01:24:20 CEST] <diverdude> on that command you gave me
[01:24:26 CEST] <diverdude> and only those 3 came out
[01:25:14 CEST] <diverdude> c_14: this is the total output http://paste.ubuntu.com/25447834/
[01:26:19 CEST] <c_14> 3.2 should be old enough for gray10
[01:27:03 CEST] <c_14> I mean, you can try updating to recent git master or 3.3 and checking
[01:28:13 CEST] <diverdude> i will try
[01:30:04 CEST] <diverdude> hmm ok upgraded to 3.3.3 and still i get the same
[01:30:19 CEST] <diverdude> Unknown pixel format requested: gray10le.
[01:30:39 CEST] <diverdude> should i somehow install that filter?
[01:30:46 CEST] <diverdude> i am on a mac btw
[01:32:56 CEST] <c_14> There's nothing to install after the fact, it's either there or it isn't.
[01:33:01 CEST] <c_14> not sure why it isn't though
[01:35:00 CEST] <diverdude> could i use gray16be instead?
[01:35:12 CEST] <diverdude> would that create a 16bit signal?
[01:35:43 CEST] <c_14> I mean, you can but then the video would be "completely" different than with 10bit content
[01:36:09 CEST] <c_14> Though I guess you can abstract that away so that your program should be able to handle either
[01:41:44 CEST] <diverdude> c_14: true...i guess if it can handle a 16bit video it can also handle a 10bit
[01:43:21 CEST] <diverdude> what is the difference between gray16be and gray16le ?
[01:43:31 CEST] <c_14> endianess
[01:43:39 CEST] <c_14> be is big-endian, le is little-endian
[01:43:57 CEST] <diverdude> ah yeah
[01:43:57 CEST] <c_14> just pick your system-native one
[01:43:59 CEST] <diverdude> true
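(Editor's note: the byte-order difference can be shown in a couple of lines of shell; 0x1234 is an arbitrary sample value chosen for illustration.)

```shell
# gray16be and gray16le store the same 16-bit sample in opposite byte order.
val=4660                                    # sample value 0x1234
hi=$(( (val >> 8) & 0xff ))                 # high byte, 0x12
lo=$(( val & 0xff ))                        # low byte, 0x34
printf 'gray16be: %02x %02x\n' "$hi" "$lo"  # big-endian: high byte first
printf 'gray16le: %02x %02x\n' "$lo" "$hi"  # little-endian: low byte first
```

x86 and most ARM systems are little-endian, hence the advice to pick gray16le there.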
[01:47:14 CEST] <diverdude> c_14: so lets say i get this camera and stream video via usb, i then grab all the frames in my c++ program and do some analysis on them and forward each frame to an ffmpeg encoder which then forwards encoded frames to some videoplayer as well as to a file that will contain the entire video for later playback.... would that be a sane way to setup such a pipeline?
[01:47:20 CEST] <acos> Wow this chat is finally busy.
[01:47:56 CEST] <c_14> diverdude: yeah, seems sane enough
[01:57:31 CEST] <diverdude> hmm when i run : ffmpeg -f lavfi -i testsrc=4096x3000:r=30:d=60 -pix_fmt gray16le out.nut   i get a warning issued: Incompatible pixel format 'gray16le' for codec 'mpeg4', auto-selecting format 'yuv420p'    does this mean that a color video is actually streamed?
[01:58:23 CEST] <diverdude> c_14: when i run : ffmpeg -f lavfi -i testsrc=4096x3000:r=30:d=60 -pix_fmt gray16le out.nut   i get a warning issued: Incompatible pixel format 'gray16le' for codec 'mpeg4', auto-selecting format 'yuv420p'    does this mean that a color video is actually streamed?
[01:58:29 CEST] <klaxa> diverdude: add: -c rawvideo
[01:59:15 CEST] <c_14> what klaxa said
[01:59:58 CEST] <diverdude> hmmm still the same: http://paste.ubuntu.com/25447957/
[02:00:15 CEST] <c_14> after the -i
[02:01:23 CEST] <c_14> (and the filename, but before the output filename)
[02:01:30 CEST] <c_14> options go before the "file" they modify
[02:01:42 CEST] <c_14> In this case you want to set the output video codec so it goes before the output file and after the input file
[02:02:03 CEST] <diverdude> ahh yeah that worked
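(Editor's note: the placement rule c_14 describes, applied to the command the conversation arrived at; the argument list is held in the shell's positional parameters only to make the ordering explicit.)

```shell
# ffmpeg applies each option to the next file named on the command line:
# -f lavfi is an input option (it precedes -i), while -c rawvideo and
# -pix_fmt gray16le are output options (after the input, before out.nut).
set -- ffmpeg -f lavfi -i 'testsrc=s=4096x3000:r=30:d=60' \
       -c rawvideo -pix_fmt gray16le out.nut
echo "$*"
```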
[02:04:12 CEST] <diverdude> c_14: so you mentioned earlier that flycapture was horrible because it has its own api etc etc. So i am thinking that if i have to capture frames using some proprietary api and i want to encode using e.g. ffmpeg, will i then not have to copy each frame into an ffmpeg buffer? memcpy is pretty expensive as far as i know, maybe that will create problems when dealing with such large amounts of data as 12MP at 30FPS ?
[02:05:42 CEST] <klaxa> if you want to use the libraries, your frames will already be in memory anyway, you should be able to just pass them to ffmpeg
[02:06:18 CEST] <klaxa> if you want to use the command line interface of ffmpeg, just pipe it and your OS should do the "optimizations"
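(Editor's note: what klaxa describes might look like the sketch below; "grabber" is a hypothetical placeholder for whatever the vendor SDK's capture tool is called, not a real program.)

```shell
# A raw gray16le frame at 4096x3000 has a fixed size, so ffmpeg can cut
# the piped byte stream into frames without any framing protocol; the OS
# pipe buffer does the hand-off, with no extra memcpy in your own code.
w=4096; h=3000
frame_bytes=$(( w * h * 2 ))   # 24576000 bytes per gray16le frame
echo "$frame_bytes"
# Hypothetical pipeline ("grabber" is a placeholder for the SDK tool):
# ./grabber | ffmpeg -f rawvideo -pix_fmt gray16le -video_size ${w}x${h} \
#       -framerate 30 -i - -c:v libx264 out.mkv
```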
[02:06:35 CEST] <damata> Hi people, if ffserver is no longer update, how can i stream on http with ffmpeg ?
[02:07:14 CEST] <c_14> hls or dash
[02:07:35 CEST] <c_14> or you write your own program that leverages the libav* libraries
[02:14:04 CEST] <damata> first i tried to set up a rtp stream, but i think my router doesn't support it because i can't play the stream on my smartphone...
[02:17:05 CEST] <klaxa> *shamelessly self-advertising* https://github.com/klaxa/mkvserver_mk2
[02:17:19 CEST] <damata> its strange because i can listen to the stream on my laptop but not on my phone...
[02:17:26 CEST] <damata> can someone help me ?
[02:17:34 CEST] <klaxa> takes a file and turns it into a matroska-stream
[02:17:43 CEST] <klaxa> oh, audio?
[02:17:47 CEST] <klaxa> maybe just use icecast?
[02:18:36 CEST] <damata> last time i tried icecast i ended up with like 5 seconds delay, my objective is "zero" delay
[02:18:52 CEST] <klaxa> ah
[02:19:30 CEST] <klaxa> if you are on linux, maybe look into pulseaudio network setup?
[02:19:41 CEST] <klaxa> although it's usually not really viable over wifi
[02:19:59 CEST] <klaxa> i think there is a pulse client for android
[02:20:36 CEST] <damata> i'm working on a project that streams the audio from a tv to smartphones! i'm using a Raspberry to stream the audio
[02:21:27 CEST] <diverdude> any idea if there is some player i can use to watch the nut file i just created using ffmpeg on mac?
[02:21:43 CEST] <klaxa> mpv
[02:21:49 CEST] <damata> with ffserver i can make a http unicast stream with 2 seconds latency for phones and 0 latency for the laptop... i have no idea how to make it work better on the phones
[02:22:13 CEST] <klaxa> https://github.com/dront78/PulseDroid
[02:22:25 CEST] <klaxa> afaik pi's *usually* run pulse anyway
[02:22:35 CEST] <acos> Phones are slower than laptops ?
[02:22:56 CEST] <klaxa> huh, this is 7 years old, not sure if it still works :x
[02:23:57 CEST] <damata> @acos i think so, they are both connected by wifi, i don't know why i have more delay on the phone...
[02:24:26 CEST] <klaxa> are you using a "normal" mediaplayer?
[02:24:33 CEST] <damata> vlc
[02:24:33 CEST] <klaxa> if so, it's probably buffering
[02:24:53 CEST] <damata> i set the internet buffers to 0, in my laptop and phone too
[02:25:37 CEST] <klaxa> i wrote some android stuff that used udp to just push raw pcm frames, that was pretty much real-time but it suffered stuttering
[02:25:43 CEST] <klaxa> but you can't fix that because it's wifi
[02:37:52 CEST] <diverdude> does ffmpeg libraries support video overlay? Basically drawing on top of a video being played back
[03:33:42 CEST] <damata> ffmpeg -f alsa -ac 1 -ar 44100 -i hw:1,0 -acodec mp2 -ab 32k -ac 1 -f rtp rtp://    , there is something wrong with this comand ?
[03:56:20 CEST] <atomnuker> yes, 32k mpeg2 audio will sound like crap
[04:01:12 CEST] <acos> LOL what a bit rate
[04:20:18 CEST] <damata> ok but the rtp is correct ? i can't play this on my phone, it opens the stream but doesn't play it
[04:23:57 CEST] <furq> does your phone play mpeg4
[04:24:02 CEST] <furq> you probably want -c:v libx264
[04:24:26 CEST] <damata> i'm using vlc on my phone
[04:27:05 CEST] <damata> i tried ffmpeg -f alsa -ac 1 -ar 44100 -i hw:1,0 -c:v libx264 -ab 32k -ac 1 -f rtp rtp://  but still no sound on my phone
[05:08:39 CEST] <furq> damata: -c:a aac
[05:13:59 CEST] <damata> i can open the stream on my pc using the sdp file but not on my phone
[08:48:19 CEST] <diverdude> Should I expect to be able to encode 12.3MP at 30FPS in realtime?
[08:50:22 CEST] <c3-Win> What codec? What 'quality' setting for said codec? How much encoding power do you have relative to the codec?
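(Editor's note: for scale, the pixel throughput involved, as arithmetic rather than a benchmark; the comparison with 1080p60 is illustrative only.)

```shell
# 12.3 MP @ 30 fps in pixels per second, with 1080p60 as a reference.
px_cam=$(( 4096 * 3000 * 30 ))      # 368640000 px/s
px_1080p60=$(( 1920 * 1080 * 60 ))  # 124416000 px/s; the camera is ~3x this
echo "$px_cam $px_1080p60"
```

At roughly three times the pixel rate of a 1080p60 encode, realtime is plausible with a very fast software preset (e.g. x264 ultrafast) or a hardware encoder, and unlikely with slow presets on typical hardware.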
[09:01:47 CEST] <alexpigment> BtbN: are you around yet?
[09:01:59 CEST] <alexpigment> I got caught up with a call at work yesterday
[09:02:31 CEST] <alexpigment> Anyway, hopefully you can see the problem when remuxing NVENC into MP4 and ffprobe showing the frame rate as 60000/1001
[09:03:55 CEST] <alexpigment> I'm going to do a comparison in staxrip tomorrow and make sure it's correct (which it was last time I checked). I know that this seems like a player-specific issue, but I think it's an issue with FFmpeg's NVENC interlace implementation that causes some players to view it as 59.94 instead of 29.97
[09:04:34 CEST] <alexpigment> so far WMP and VLC on my end are problematic and MediaInfo shows it as 59.94
[09:04:44 CEST] <alexpigment> Kodi plays it correctly
[09:05:07 CEST] <alexpigment> x264 is fine in all players as well as MediaInfo
[09:09:22 CEST] <acos> Y'all seen this ? https://youtu.be/vbydG78Al8s
[09:12:38 CEST] <diverdude> c3-Win: hmm, does encoding power mean how much it's being compressed?
[09:18:21 CEST] <diverdude> acos: i wonder how long it took to create DarkLabel ?
[09:18:36 CEST] <acos> It seems so cool
[09:18:46 CEST] <acos> It's not my software but still. It's Korean
[09:19:25 CEST] <diverdude> acos: yeah, i want to create something like it
[09:19:54 CEST] <acos> Good luck diverdude
[09:20:26 CEST] <diverdude> acos: you wanna help me? :) We can make it as open source project
[09:21:13 CEST] <acos> Ya would be fun
[09:23:58 CEST] <diverdude> acos:  i think so too. I am thinking we could utilize qt5 as GUI
[09:24:44 CEST] <diverdude> acos: how much experience do you have with ffmpeg?
[09:31:50 CEST] <acos> Not much at all sadly.
[09:31:59 CEST] <acos> Whenever I try to use it I get errors.
[09:44:49 CEST] <mozzarella> I chuckled
[09:44:59 CEST] <mozzarella> you made me chuckle
[09:46:15 CEST] <diverdude> mozzarella: me?
[09:46:26 CEST] <mozzarella> no, acos
[09:46:36 CEST] <acos> Yayyyyy ty mozzarella
[10:03:52 CEST] <mozzarella> acos: which OS are you using? is it archlinux?
[10:04:52 CEST] <acos> Windows 7x64 and 8.1x64 or centos or Ubuntu  if needed
[11:33:41 CEST] <BtbN> alexpigment, like I said, I can put complete garbage in the framerate field, and nothing changes.
[11:33:47 CEST] <BtbN> So it's _not_ the framerate.
[11:49:26 CEST] <BtbN> alexpigment, also, if you're muxing to anything but .ts, you are likely hit by some bug in ffmpeg.c
[12:10:42 CEST] <alexpigment> Btbn: yeah the ts files from ffmpeg also are reported as 59.94 when encoded via NVENC
[12:11:02 CEST] <alexpigment> anyway, there's a bug here. when I get some time to look into it tomorrow, I'll log it up in trac
[12:12:08 CEST] <BtbN> "There is some bug here somewhere" won't be a very useful report
[12:12:25 CEST] <BtbN> There's a good chance it's just broken players
[12:12:26 CEST] <alexpigment> BtbN: a bug that affects certain players is still a bug
[12:12:57 CEST] <alexpigment> if you don't realize that, then you haven't been a developer for very long
[12:12:58 CEST] <BtbN> Your best bet is to report it to nvidia
[12:13:08 CEST] <alexpigment> It doesn't affect NVENC from Staxrip
[12:13:17 CEST] <BtbN> They might be fixing it in post-processing
[12:13:22 CEST] <alexpigment> They might be
[12:13:31 CEST] <BtbN> Did you ask them?
[12:13:36 CEST] <alexpigment> So they probably *fixed a bug*
[12:13:38 CEST] <alexpigment> No
[12:13:51 CEST] <BtbN> I will _not_ introduce bitstream postprocessing to fix driver bugs
[12:14:00 CEST] <BtbN> If it turns out to be the case, it's nvidias bug
[12:14:19 CEST] <alexpigment> You may be right. I just know that ffmpeg is the only place I'm seeing this problem
[12:15:14 CEST] <alexpigment> Anyway, I'm not doing work-work tomorrow, so I'll probably have more time to look into it and log my findings on trac
[12:28:08 CEST] <BtbN> The only other idea I had was forcing it to write the VFR flag into the stream.
[12:28:11 CEST] <BtbN> Did nothing as well
[12:28:31 CEST] <BtbN> This is most definitely not something affecting nvenc, but something with the ffmpeg muxers.
[13:32:07 CEST] <BtbN> alexpigment, can I have a file converted with staxrip nvenc?
[13:32:18 CEST] <BtbN> It is using some _very_ weird chinese software to do the conversion.
[13:33:06 CEST] <BtbN> Namely, this thing: https://github.com/rigaya/NVEnc
[13:36:11 CEST] <BtbN> My current observation is that nvenc mangles the framerate I pass to it. The framerate passed to its API is 2997/100, which is correct. The correct framerate to write to the VUI would be 5994/100. But it writes 60000/1001 instead. Which is close, but not quite it.
[13:50:43 CEST] <kepstin> huh, i wonder if there's some hardcoded fixup. because 60000/1001 is the correct field rate for interlaced ntsc
[13:50:46 CEST] <grublet> BtbN: it's accurate to 1 millionth
[13:51:01 CEST] <grublet> and that error is most likely just accounting for rounding
[13:51:26 CEST] <BtbN> The original and a x264 encoded file have the SPS VUI timing info set to 5994/100
[13:51:33 CEST] <BtbN> The nvenc one has 60000/1001.
[13:51:45 CEST] <BtbN> The original and x264 one play smooth in WMP, and nvenc one stutters
[13:51:53 CEST] <grublet> those are effectively identical ratios
[13:51:53 CEST] <BtbN> And that framerate is the only difference I can possibly find
[13:51:58 CEST] <grublet> no clue why it stutters
[13:52:27 CEST] <grublet> im not familiar with nvenc so someone else will have to help on this
[13:55:41 CEST] <hendry> this: ffmpeg -y -i canon.mkv -movflags +faststart -pix_fmt yuv420p -c:v libx264 -acodec aac 2017-09-02/canon.mp4 # seems to take quite long on my machine... can it be made faster? i.e. less compressing work ?
[13:56:26 CEST] <c_14> use a faster preset
[13:56:30 CEST] <BtbN> do you even need to re-encode it?
[13:56:36 CEST] <BtbN> Or do you just want to move the moov atom?
[13:56:51 CEST] <BtbN> Or remux it, rather
[13:59:30 CEST] <hendry> BtbN: FCPX 10.3 doesn't seem to grok .mkv
[13:59:45 CEST] <hendry> BtbN: just want to get this .mkv as quickly as possible into FCPX
[14:00:57 CEST] <BtbN> so remux it to mp4, without re-encoding
[14:09:07 CEST] <kepstin> hendry: just give this a try: ffmpeg -i canon.mkv -c copy -movflags +faststart canon.mp4
[14:09:26 CEST] <kepstin> if it works, great, if not, there's some options to speed up the encoder
[14:17:08 CEST] <damata_> can someone tell me why my ffserver only accepts 3 or fewer clients. when the 4th client connects, the server outputs error 503. this is my ffserver config: https://pastebin.com/LTNLYNEB
[14:21:57 CEST] <kepstin> damata_: ffserver isn't really supported, and so there probably isn't anyone who can tell you what's gone wrong.
[14:35:36 CEST] <hendry> kepstin: Could not find tag for codec pcm_s16le in stream #1, codec not currently supported in container
[14:35:39 CEST] <hendry> Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
[14:36:04 CEST] <hendry> BtbN: thought my initial line was effecting remuxing
[14:46:04 CEST] <kepstin> hendry: looks like you'll want to encode the audio, yeah, but you might not need to re-encode the video
[14:46:22 CEST] <kepstin> could try -c:v copy -c:a aac
[14:52:37 CEST] <damata_> is there something easy to install to replace ffserver ?
[14:53:43 CEST] <klaxa> people usually recommend using hls or rtmp with nginx-rtmp
[14:58:05 CEST] <c_14> there's also dash these days
[15:45:26 CEST] <BtbN> alexpigment, another difference I found is that nvenc does not by default output a picture timing SEI. Which contains information about the picture struct (TOP_FIELD, BOTTOM_FIELD, TOP_BOTTOM, BOTTOM_TOP, and so on)
[15:45:49 CEST] <BtbN> It seems bad/wrong to not output that for interlaced encodes, I'll push a patch soon to enable it
[16:29:06 CEST] <BtbN> alexpigment, I found the problem.
[16:29:11 CEST] <BtbN> And it seems impossible to fix.
[16:29:32 CEST] <BtbN> You can workaround it though.
[16:29:50 CEST] <BtbN> Output to a raw .h264 file with ffmpeg nvenc, and then remux that to your final container.
[16:29:59 CEST] <BtbN> That way it generates proper timestamps
[16:30:42 CEST] <BtbN> nvenc emits two output "frames" per input frame in interlaced encoding mode. But outputs them in a single packet. And timestamps in ffmpeg are per packet.
[16:30:52 CEST] <BtbN> So the two fields share a timestamp, which explains the 30 FPS effect
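(Editor's note: the arithmetic behind that "30 FPS effect", in pure shell integer math, rates scaled by 100 and truncated; when both fields of a frame share one packet timestamp, a player pacing on timestamps effectively sees half the flagged rate.)

```shell
# Field rate vs frame rate for NTSC-style interlaced content:
# 60000/1001 fields/s (~59.94) pairs down to 30000/1001 frames/s (~29.97).
field_x100=$(( 6000000 / 1001 ))   # 5994, i.e. ~59.94 fields/s
frame_x100=$(( 3000000 / 1001 ))   # 2997, i.e. ~29.97 frames/s
echo "$field_x100 $frame_x100"
```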
[16:37:28 CEST] <BtbN> StaxRip avoids the problem because it calls an external binary for the nvenc, which outputs raw avc. And then splits that avc itself.
[16:38:00 CEST] <BtbN> I have absolutely no idea how to fix this in ffmpeg. Might be plain not possible.
[16:38:15 CEST] <acos> Ooooo
[16:39:38 CEST] <kepstin> well, one way to "fix" it would be to have the ffmpeg nvenc wrapper parse the packets it gets back and split/repacketize them. But that would be pretty messy :/
[16:39:56 CEST] <BtbN> it can't
[16:40:11 CEST] <BtbN> That would mean outputting two packets at once
[16:40:20 CEST] <BtbN> Which isn't possible
[16:40:35 CEST] <JEEB> so you get a single buffer with N>1 video packets?
[16:40:43 CEST] <JEEB> as in, NAL units
[16:40:48 CEST] <BtbN> in interlaced encoding mode, yes
[16:40:51 CEST] <JEEB> yikes
[16:40:58 CEST] <BtbN> nvenc does field mode encoding
[16:41:01 CEST] <BtbN> one "frame" per field
[16:41:07 CEST] <JEEB> picture, but yes
[16:41:15 CEST] <BtbN> so each field needs its own timestamp
[16:41:15 CEST] <kepstin> hmm, I thought that the newer encoder api in ffmpeg supported different input vs output frames
[16:41:17 CEST] <JEEB> that's why all the specs use the word "picture"
[16:41:27 CEST] <BtbN> nvenc is using the old API
[16:41:29 CEST] <JEEB> kepstin: yea but in this case you'd have to have a parser
[16:41:35 CEST] <BtbN> And migrating it would be quite a task
[16:41:47 CEST] <JEEB> because you've got N>1 NAL units in a single buffer
[16:41:53 CEST] <BtbN> And would then still leave you with a mess of h264 and hevc parsing to do the splitting
[16:41:55 CEST] <JEEB> and yes, migration to new APIs is "fun"
[16:42:04 CEST] <JEEB> I wonder if you could stick the h264 parser
[16:42:05 CEST] <JEEB> there
[16:42:08 CEST] <kepstin> JEEB: well, ffmpeg already has a parser, the api migration is the hardest part :/
[16:42:08 CEST] <JEEB> or something
[16:42:19 CEST] <JEEB> kepstin: yes, or sticking the parser there *after* the decoder
[16:42:32 CEST] <JEEB> uhh
[16:42:33 CEST] <BtbN> The parser would then have to make up timestamps
[16:42:33 CEST] <JEEB> encoder I mean
[16:42:53 CEST] <BtbN> I don't think ffmpeg supports putting a bsf after the encoder
[16:43:02 CEST] <kepstin> would be in the encoder, not after the encoder, i think
[16:43:12 CEST] <BtbN> Then you still have the same problem
[16:43:13 CEST] <JEEB> yes, that's the other alternative
[16:43:22 CEST] <JEEB> but as noted, same kind of issue more or less :3
[16:52:31 CEST] <kepstin> hmm, on a only tangentially related note, I kinda want to rewrite the fps filter using the request_frame() api so it doesn't buffer millions of frames to fill in large timestamp gaps.
[16:55:26 CEST] <kepstin> hmm, well, I really don't know how to make that work :/
[16:56:25 CEST] <JEEB> probably durandal_1707 can help you?
[16:58:01 CEST] <durandal_1707> kepstin: activate instead
[16:59:48 CEST] <kepstin> ah, that's what I was looking for. No wonder I got confused.
[17:02:29 CEST] <kepstin> hmm, my checkout was out of date, there are actually a few filters using the activate callback now. I guess vf_blend might work as a reference
[17:04:36 CEST] <kepstin> oh, hmm, that uses framesync and has multiple inputs, a bit more complex than fps
[17:04:54 CEST] <durandal_1707> kepstin: no, vf zoompan is best example
[17:07:17 CEST] <BtbN> I think I'll migrate nvenc to the new API
[17:07:32 CEST] <BtbN> And when that is done, try to do some crazy bitstream processing
[17:07:47 CEST] <BtbN> Shouldn't really be too hard. Just look for 0x000001
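(Editor's note: a toy version of that "look for 0x000001" scan, sketched over a hex string in shell. A real splitter in the encoder wrapper would scan the raw packet bytes and also handle four-byte 00 00 00 01 start codes and emulation-prevention bytes; this naive version ignores those and can even false-match across byte boundaries in the hex text.)

```shell
# Count three-byte Annex B start codes (00 00 01) in a hex-dump string.
count_start_codes() {
    # grep -o counts non-overlapping occurrences of the start-code pattern
    printf '%s' "$1" | grep -o '000001' | wc -l | tr -d ' '
}
# two NAL units: an SEI (000001 06 ...) and a slice (000001 41 ...)
count_start_codes '0000010605ffab00000141e0'
```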
[17:16:29 CEST] <kepstin> durandal_1707: thanks. This doesn't actually look like it'll be that hard. Who knows if I actually produce anything useful this weekend, tho :)
[17:16:34 CEST] <ZexaronS> Hello
[17:17:36 CEST] <ZexaronS> I'm trying to do a batch script to remove some parts of a file, output extracts and then reconcat the extracted part, if something like that is out there already ?
[17:20:28 CEST] <JEEB> BtbN: if nvenc always outputs like that then sure, it should work
[17:55:39 CEST] <BtbN> JEEB, it's always {SEI, SEI, SEI, ...} n x SLICE
[18:43:55 CEST] <alexpigment> BtbN: Man, I just woke up - great news to hear :)
[18:44:10 CEST] <BtbN> Well, this is not going to be fixed soon. If at all
[18:44:30 CEST] <alexpigment> I kinda thought you were of the mindset that this was an Nvidia issue and you weren't going to look into it further
[18:44:47 CEST] <BtbN> It kind of is an nvidia issue. In that their API is weird
[18:44:54 CEST] <BtbN> It gives no way to properly handle this
[18:45:02 CEST] <BtbN> As it just throws two output pictures in one packet at you
[18:45:03 CEST] <alexpigment> Well, at any rate, there's hope that it might get fixed at some point I suppose
[18:45:25 CEST] <BtbN> The current FFmpeg API nvenc uses makes it impossible to fix it
[18:46:18 CEST] <alexpigment> You said that migrating to the new API "shouldn't be that hard". Are there any other roadblocks?
[18:46:26 CEST] <alexpigment> (Aside from your time, of course)
[18:46:31 CEST] <BtbN> Someone needs to do that
[18:46:34 CEST] <BtbN> it's still quite a task
[18:46:37 CEST] <alexpigment> Gotcha
[18:46:52 CEST] <BtbN> And even if that's done, you need to parse the h264 and hevc bitstream
[18:46:54 CEST] <BtbN> and split it apart
[18:47:00 CEST] <BtbN> and make up intermediate timestamps
[18:47:13 CEST] <alexpigment> right
[18:47:40 CEST] <alexpigment> Is there anything that can be learned from the players that play the file correctly?
[18:48:04 CEST] <BtbN> They just ignore timestamps
[18:48:07 CEST] <alexpigment> I.E. is it possible that they're doing something interesting to process the file that could be also done in ffmpeg?
[18:48:12 CEST] <alexpigment> Ah
[18:48:14 CEST] <BtbN> and strictly play according to the FPS
[18:49:53 CEST] <alexpigment> Well at the very least, I'll get something logged up in trac today as I said I would
[18:50:34 CEST] <alexpigment> And it can sit there until someone decides 1080i is not dead yet and wants to fix it ;)
[18:50:59 CEST] <alexpigment> (or it can sit there until 1080i is actually dead, I suppose)
[18:59:34 CEST] <BtbN> You can work it around easily
[18:59:38 CEST] <BtbN> if you're not live-streaming
[19:03:52 CEST] <alexpigment> How so?
[19:03:58 CEST] <alexpigment> I must have missed that above
[19:04:13 CEST] <alexpigment> Live streaming 1080i is going to be very rare I suspect
[19:05:31 CEST] <alexpigment> Oh, demuxing and remuxing?
[19:05:53 CEST] <alexpigment> I tried that yesterday. It still gave me a file that was read as 59.94fps
[19:06:07 CEST] <alexpigment> Maybe I need to output to a particular container and demux in a particular way?
[19:07:13 CEST] <alexpigment> Oh, output to raw first
[19:07:21 CEST] <alexpigment> Ok, I'll try this on my end
[19:13:30 CEST] <alexpigment> I'm still seeing the same problem on my end. I'm going out to raw h264 first, then I'm muxing back to mp4
[19:14:26 CEST] <alexpigment> I tried ts, but the tsmuxer in ffmpeg is a bit janky (or I've never figured out how to use it correctly, perhaps), so I have to then remux it with Tsmuxer to get something that will play in WMP
[19:14:43 CEST] <alexpigment> Either way, it's still being read as ~60fps in MediaInfo
[19:36:52 CEST] <alexpigment> BtbN: https://trac.ffmpeg.org/ticket/6633
[19:37:13 CEST] <BtbN> No idea what to do about that.
[19:37:19 CEST] <alexpigment> I didn't put any of the information you listed above because that didn't seem appropriate to do
[19:37:29 CEST] <alexpigment> But if you'd like to add any information to the comments, feel free to do so
[19:50:09 CEST] <ZexaronS> alexpigment i hope interlaced dies already
[19:50:56 CEST] <ZexaronS> as well as non-integer frame rates
[19:52:21 CEST] <BtbN> Yeah. Just bumping up the feature check to 2, so it will bail out if nvenc does not support frame interleaved encoding, is very tempting
[19:52:29 CEST] <WebWalker3D> I'm attempting to setup drupal to use ffmpeg.  From what I can tell, I have it installed correctly *ffmpeg - using the atrpms repo
[19:53:32 CEST] <WebWalker3D> I keep getting a strange error, Error applying options to the filter. Error opening filters!
[19:53:45 CEST] <WebWalker3D> and  Unable to parse option value "(null)" as sample format
[19:54:10 CEST] <WebWalker3D> that's after running: /usr/bin/ffmpeg -i /var/www/html/sites/default/files/mediaRecorder_59a9abb367ee3.ogg /var/www/html/sites/default/files/mediaRecorder_59a9abb367ee3.ogg.mp3
[19:54:35 CEST] <WebWalker3D> Google and forums after 2 days have failed me >.<
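A hedged workaround sketch for the "(null)" sample-format error: on old builds, spelling out the audio codec and parameters instead of relying on extension-based defaults can avoid the broken auto-negotiation (filenames, sample rate, and bitrate here are assumptions for illustration):

```shell
# Explicit codec, sample rate and bitrate instead of letting the
# old 2.2.1 build negotiate a sample format on its own
ffmpeg -i input.ogg -vn -c:a libmp3lame -ar 44100 -b:a 128k output.mp3
```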
[19:55:17 CEST] <relaxed> WebWalker3D: pastebin the console output
[19:56:03 CEST] <WebWalker3D> relaxed:  https://pastebin.com/PcW7kZap
[19:56:34 CEST] <WebWalker3D> I get an empty file created though
[19:58:02 CEST] <relaxed> WebWalker3D: version 2.2.1 is pretty old, try https://www.johnvansickle.com/ffmpeg/
[19:58:37 CEST] <alexpigment> ZexaronS: unfortunately, the Blu-ray people decided to make it a standard rather than wait a few months until 1080p60 was feasible. So 1080i is sadly the best option for non-24fps Blu-ray
[19:58:44 CEST] <WebWalker3D> relaxed:  I spent hours trying to compile ffmpeg before saying screw it and finding a repo
[19:59:17 CEST] <alexpigment> Of course, players after 2010 will play 1080p60 per the AVCHD 2.0 standard, but you can't put out a Blu-ray to the masses and say "this might not work on your player"
[19:59:38 CEST] <relaxed> WebWalker3D: they're static binaries, download and move the ffmpeg binary to your bin dir
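The suggested static-binary install, sketched out (the exact archive name is an assumption; check the download page for the current one):

```shell
# Fetch a static build, unpack it, and put ffmpeg somewhere in PATH
curl -LO https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz
tar xf ffmpeg-release-amd64-static.tar.xz
sudo cp ffmpeg-*-amd64-static/ffmpeg ffmpeg-*-amd64-static/ffprobe /usr/local/bin/
ffmpeg -version   # should now report a recent version
```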
[19:59:57 CEST] <ZexaronS> alexpigment, oh, I don't bother with bluray, unless the drive costs 20$
[20:00:20 CEST] <alexpigment> yeah, I focus mostly on Blu-ray and broadcast
[20:00:55 CEST] <alexpigment> 1080i is very much entrenched in both markets
[20:01:07 CEST] <ZexaronS> that's business I guess, oh I'm just archiving stuff personally, on HDDs, moved my own collections to x265 progressive
[20:01:10 CEST] <grublet> interlacing is such an annoyance
[20:02:01 CEST] <ZexaronS> stuff at 29 fps i just recoded to 30, at 23.bleh i just put to 24, 59 to 60
[20:02:33 CEST] <alexpigment> ZexaronS: moved everything to x265? that sounds like a huge pain. What was the impetus to spend all that time and potentially lose quality in the process? Just space?
[20:03:50 CEST] <ZexaronS> I took days off and did something else while it was recoding one thing for 12 hours
[20:04:05 CEST] <alexpigment> I suppose that's fair
[20:04:24 CEST] <alexpigment> I usually leave everything in the source format unless I have to convert it for compatibility
[20:04:31 CEST] <alexpigment> Saves me time and saves quality :)
[20:05:03 CEST] <alexpigment> Then again, I probably have 40TB of data right next to me, so there's a tradeoff ;)
[20:05:26 CEST] <ZexaronS> time solves it: with more time it'll retain quality, but indeed it's a pain. With x265, encode speed and quality are decided by the implementation, not by what ISO specifies ("should take 50% more computation"); that's all worthless if the code isn't as optimized as x264's
[20:05:47 CEST] <alexpigment> Very true
[20:06:27 CEST] <alexpigment> Also, you're kinda limited to playing back the content only on hardware that specifically has hardware h265 decoding
[20:06:50 CEST] <alexpigment> I suppose if you always know the hardware you're playing it on, that's not a huge issue
[20:09:13 CEST] <ZexaronS> Well, the compatibility thing is, I'm archiving so it survives another 20-30 years. Who knows what can happen; there are already troubles with MPC-HC. Better to move this to a safer format, versus uncovering old HDDs 20 years from now just to find them in the original formats, when in the future you might not be able to even find or run an old app anymore, since things are just moving so fast and stuff gets erased from history
[20:09:57 CEST] <alexpigment> You make a valid point, certainly
[20:10:29 CEST] <ZexaronS> so I thought, if it's in x265, there should still be decode/play support 20 years later
[20:10:32 CEST] <alexpigment> If you've got random FLVs and MOVs and AVIs in various codecs, it makes sense to move to something more accepted and standardized
[20:10:59 CEST] <alexpigment> We'll see. If technology stopped right now though, h264 would be a safer bet
[20:11:15 CEST] <alexpigment> But yes, it's probably safe to assume that H265 will continue to be a common standard going forward
[20:11:25 CEST] <ZexaronS> and if it doesn't have interlaced/odd things, then it would be a bit easier
[20:11:33 CEST] <alexpigment> I agree with that
[20:12:27 CEST] <alexpigment> I guess I just always stick with a format that has a physical component
[20:12:59 CEST] <alexpigment> Because in 20-30 years, we might not know how to deal with interlacing or non-integer frame rates on systems, but I'll be able to find a Blu-ray player and a TV that supports it
[20:13:06 CEST] <alexpigment> So no problem there ;)
[20:13:32 CEST] <alexpigment> which goes back to exactly why I have to deal with 1080i all the damn time
[20:13:49 CEST] <alexpigment> It's the highest resolution (spatial and temporal) that this short-sighted format allows
[20:14:03 CEST] <alexpigment> And of course almost all TV is in 1080i
[20:14:10 CEST] <alexpigment> so I archive TV recordings to Blu-ray
[20:14:21 CEST] <alexpigment> And in most cases that requires zero transcoding
[20:14:30 CEST] <ZexaronS> H265 might not be popular, but if it's ISO/IEC then govts/agencies/national archives would probably have those long-term systems with that support, so something should still survive. But then you have hardcore community members who keep all that old stuff safe and sound; the problem is, that could be sprinkled all over the world, and unless they put it on the internet I wouldn't know about it
[20:16:07 CEST] <ZexaronS> the NASA McDonald's Apollo tapes story is an example: like only one company in the US still knew how to operate the machines and was able to get the original sources digitized http://www.thelivingmoon.com/47john_lear/02files/Lunar_Orbiter_Tapes_Found.html
[20:16:11 CEST] <alexpigment> Yeah true
[20:17:10 CEST] <alexpigment> This looks like a pretty cool read
[20:17:43 CEST] <alexpigment> Also, I'm a bit of an archivist, so I have a lot of hardware that I'm hoarding for various scenarios
[20:18:18 CEST] <alexpigment> I have like 4 Panasonic AG-1980P VCRs (arguably the best ever made) and 2 equivalent-level JVCs
[20:18:32 CEST] <alexpigment> a D-VHS high definition VCR
[20:18:48 CEST] <alexpigment> several multi-format (PAL, SECAM, NTSC) VCRs
[20:19:00 CEST] <alexpigment> Beta players
[20:19:21 CEST] <alexpigment> lots of dvd recorders that I don't really use anymore except as a pass through for my capture setup
[20:19:33 CEST] <ZexaronS> It's quite mysterious when you think about it. I knew about this story for several years, and I revisit it like 2 times per year as it always pops up in discussions, like this one. I never said this before, but I have another idea now ... why would the tapes be placed there, and machines to play them ...
[20:20:09 CEST] <alexpigment> Yeah, it's really ridiculous when you read it as a headline
[20:20:50 CEST] <ZexaronS> I think they were ordered to destroy them / throw them in the trash, and the low-level folks just put it in the McDonald's ... without saying anything, hoping someone would find it decades in the future
[20:21:04 CEST] <ZexaronS> that's one theory
[20:24:04 CEST] <alexpigment> Maybe this particular McDonalds was a front/coverup for the illuminati...
[20:24:08 CEST] <alexpigment> (joking of course)
[21:14:42 CEST] <BtbN> alexpigment, well, now I actually hit a driver bug.
[21:15:00 CEST] <BtbN> There is an API which tells you the slice offsets. Which correctly tells me there are two slices when doing an interlaced encode.
[21:15:10 CEST] <BtbN> But the offset for the second one does not get populated.
[21:34:39 CEST] <Alid> Hellllooooo
[21:34:54 CEST] <Alid> I need help
[21:35:46 CEST] <Alid> Is there anyone here?
[21:41:20 CEST] <Alid> Can someone help me? I'm an Android developer.
[21:42:03 CEST] <JEEB> just ask instead of doing metaquestions
[22:13:49 CEST] <kerio> there's always someone somewhere that can help you
[22:19:23 CEST] <AliD1> I can compile ffmpeg for android by this file with no errors:
[22:19:33 CEST] <AliD1> https://github.com/Buzgus/MyExploits/blob/master/ffmpeg_builder_Ok.sh
[22:20:44 CEST] <AliD1> but when I add --enable-libmp3lame flag I get error.
[22:20:54 CEST] <AliD1> Error Message Is:
[22:21:07 CEST] <JEEB> that configure line is so out of date and overcomplicated it's not even funny
[22:21:24 CEST] <AliD1> ERROR: libmp3lame >= 3.98.3 not found
[22:21:24 CEST] <AliD1> If you think configure made a mistake, make sure you are using the latest
[22:21:24 CEST] <AliD1> version from Git.  If the latest version fails, report the problem to the
[22:21:24 CEST] <AliD1> ffmpeg-user at ffmpeg.org mailing list or IRC #ffmpeg on irc.freenode.net.
[22:21:24 CEST] <AliD1> Include the log file "config.log" produced by configure as this will help
[22:21:25 CEST] <AliD1> solve the problem.
[22:21:25 CEST] <AliD1> Makefile:2: config.mak: No such file or directory
[22:21:26 CEST] <AliD1> Makefile:67: /common.mak: No such file or directory
[22:21:51 CEST] <JEEB> you have to build LAME first, and if it's more than a single line post it in a pastebin like service in the future
[22:21:54 CEST] <JEEB> and link here
[22:22:16 CEST] <AliD1> I have lame sources
[22:22:39 CEST] <JEEB> AliD1: you can stop overriding the sysroot as soon as you create a standalone toolchain with the make_standalone_toolchain.py script :P
[22:22:44 CEST] <AliD1> and I compiled it to .a file by using Android NDK
[22:23:02 CEST] <JEEB> (simplifies a fuckload of things because the tools know the sysroot by default then)
[22:23:36 CEST] <JEEB> AliD1: you configured and set a prefix and installed the things under a prefix? in that case you should have a .pc file created by LAME in <prefix>/lib/pkgconfig
[22:24:04 CEST] <JEEB> if that is so, PKG_CONFIG_LIBDIR=<prefix>/lib/pkgconfig <configure line for FFmpeg>
[22:24:21 CEST] <JEEB> that way pkg-config will help you pick the correct parameters to build FFmpeg with LAME :P
[22:27:53 CEST] <AliD1> I also get a warning about pkg-config:
[22:27:58 CEST] <AliD1> WARNING: /home/ali/Android/Sdk/ndk-bundle/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/bin/arm-linux-androideabi-pkg-config not found, library detection may fail.
[22:28:46 CEST] <JEEB> yes, it means that a specific pkg-config for the cross prefix was not found. it will still attempt to use the global one
[22:29:00 CEST] <JEEB> also goddamnit you people not putting your toolchains into PATH :D
[22:29:10 CEST] <JEEB> you're doing this shit so fucking hard to yourselves
[22:29:23 CEST] <JEEB> > not making a standalone toolchain > not putting the toolchain into your PATH
[22:29:55 CEST] <JEEB> it seems like someone did a bad guide at some point about using it like that, and after that everyone just parrots that shit :)
[22:30:07 CEST] <JEEB> oh well, hope you're happy
[22:31:47 CEST] <AliD1> Bro, I'm not a Linux lover. I'm forced to use it because of my asshole computer (RAM: 2.8 :((((((((((((( ) so I don't know what to do and I just copy and paste :)
[22:35:31 CEST] <JEEB> the concept of PATH is common between lunix and windows, among other things
[22:36:44 CEST] <JEEB> anyways, you need to build LAME with the build system with a set prefix, so you can install the headers and libraries into a specific place together with the pkg-config .pc file that contains the required compiler/linker parameters for LAME
[22:37:19 CEST] <JEEB> and then you have to set PKG_CONFIG_LIBDIR to that location (directory) of that installed pc file when configuring something that's using it
[22:37:54 CEST] <AliD1> there is no .pc between compiled lame files...
[22:38:10 CEST] <JEEB> then you built it in a bad way
[22:38:37 CEST] <AliD1> I have many object codes and a .a file
[22:39:12 CEST] <JEEB> it sounds like you used a hacked up NDK specific build thing, instead of the LAME configure script and friends which is the standard build system
[22:39:55 CEST] <JEEB> because when configuring LAME properly you configure the prefix with --prefix=/path/to/your/build/prefix/
[22:40:06 CEST] <JEEB> and then you do make for build and make install to install it to the configured prefix
[22:40:13 CEST] <JEEB> the last step should also generate a .pc file
[22:40:30 CEST] <AliD1> I used this help:
[22:40:31 CEST] <JEEB> so that you would have a lame .pc file under /path/to/your/build/prefix/lib/pkgconfig
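Putting the steps JEEB describes together as one sketch (the prefix path, LAME version, and host triplet are assumptions; it also assumes, as stated above, that the install drops a .pc file under <prefix>/lib/pkgconfig):

```shell
PREFIX="$HOME/android-prefix"

# 1) Build LAME with its own configure script, cross-compiling for Android;
#    the standalone toolchain's bin/ is assumed to be in PATH already
cd lame-3.100
./configure --prefix="$PREFIX" --host=arm-linux-androideabi \
            --disable-shared --enable-static
make && make install        # headers and libmp3lame.a land under $PREFIX

# 2) Point FFmpeg's configure at the installed prefix via pkg-config
cd ../ffmpeg
PKG_CONFIG_LIBDIR="$PREFIX/lib/pkgconfig" ./configure --enable-libmp3lame ...
```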
[22:40:32 CEST] <AliD1> http://zhgeaits.me/android/2016/06/17/android-ffmpeg.html
[22:40:59 CEST] <JEEB> LOL
[22:40:59 CEST] <JEEB> yea
[22:41:08 CEST] <JEEB> that completely goes around LAME's build system
[22:41:13 CEST] <JEEB> you wanted shit, you got shit
[22:43:05 CEST] <AliD1> Also I found this:
[22:43:08 CEST] <AliD1> https://github.com/gaurav5670/Compile-ffmpeg-for-android-through-command-line
[22:43:10 CEST] <JEEB> this is how you generate a normal toolchain with NDK
[22:43:11 CEST] <JEEB> https://github.com/mpv-android/buildscripts/blob/master/download.sh#L44..L65
[22:43:30 CEST] <JEEB> then you add the bin/ directory of that toolchain into PATH
[22:43:58 CEST] <JEEB> then suddenly you don't have to a) use the sysroot overrides all the time b) specify the full path to tools
[22:44:13 CEST] <JEEB> so suddenly the normal build systems start working much better/simpler for you :P
[22:45:04 CEST] <AliD1> Bro thank you for this help but I can't download (wget dl.google.com) because of SANCTIONS!
[22:45:30 CEST] <JEEB> you already have the NDK archive around anyways
[22:45:55 CEST] <JEEB> don't just copy-paste but understand what the fuck those things mean
[22:46:38 CEST] <AliD1> But also you should know that Internet speed is fucking us in Iran. It's 20 kb/s
[22:47:16 CEST] <JEEB> you already have NDK around so I don't think that matters right now
[22:47:26 CEST] <JEEB> since you just use the script included with your NDK download
[22:47:42 CEST] <JEEB> afterwards after you add the standalone toolchain's bin/ directory to PATH you should be able to call `arm-linux-androideabi-gcc` or `arm-linux-androideabi-clang`
[22:47:57 CEST] <JEEB> without specifying the long path :P
[22:48:31 CEST] <AliD1> OK, I'm going to test that. I will come back here soon.
[22:48:53 CEST] <JEEB> thus the minimum FFmpeg configuration becomes (for ARMv7) `--prefix=/your/prefix --arch=armv7 --cpu=armv7-a --enable-cross-compile --target-os=android --cross-prefix=arm-linux-androideabi-`
[22:49:07 CEST] <JEEB> which is much shorter than what that random script had :P
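That minimal configuration, written out as a full build sequence (the prefix and toolchain paths are assumptions):

```shell
export PATH="$HOME/android-toolchain-armv7/bin:$PATH"   # standalone toolchain from the NDK
./configure --prefix="$HOME/ffmpeg-android" \
            --arch=armv7 --cpu=armv7-a \
            --enable-cross-compile --target-os=android \
            --cross-prefix=arm-linux-androideabi-
make -j4 && make install
```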
[22:56:08 CEST] <AliD1> I got this error: https://paste.ee/p/mBjpY
[22:57:14 CEST] <JEEB> why are you running that script since what you need is just using the script that you already have in your NDK directory?
[22:57:38 CEST] <JEEB> not the script I linked, I specifically highlighted lines of it that are useful to you
[23:00:31 CEST] <AliD1> Should I save the selected part of the code as a .sh file and then run it?
[23:00:54 CEST] <JEEB> no
[23:00:59 CEST] <AliD1> ?
[23:02:27 CEST] <JEEB> you have under `build/tools/` in your NDK a script called `make_standalone_toolchain.py`, run it and check the parameters you can give to it. the script lines I pointed to create two toolchains and you can also use them as hints
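What JEEB is pointing at, sketched out (the NDK path variable and API level 21 are assumptions; `make_standalone_toolchain.py` ships with the NDK under `build/tools/`):

```shell
# Create a standalone ARMv7 toolchain from the installed NDK
python "$ANDROID_NDK/build/tools/make_standalone_toolchain.py" \
    --arch arm --api 21 --install-dir "$HOME/android-toolchain-armv7"

# With its bin/ in PATH, the cross tools resolve without full paths
export PATH="$HOME/android-toolchain-armv7/bin:$PATH"
arm-linux-androideabi-clang --version
```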
[23:17:06 CEST] <AliD1> Bro, check this: https://paste.ee/p/nW4qr
[23:18:06 CEST] <JEEB> yea, you didn't stop at your first error, which is that for API version 14 you can't build an arm64 toolchain. it seemed to get the armv7 one done, though?
[23:18:25 CEST] <AliD1> yes
[23:18:27 CEST] <JEEB> (do really check how old of an API version you need)
[23:18:45 CEST] <JEEB> https://source.android.com/source/build-numbers
[23:19:16 CEST] <JEEB> basically the newer API sysroot you use, the newer android version is required by your built things, but on the other hand you get various bug fixes by google :P
[23:19:20 CEST] <JEEB> (and new APIs/features)
[23:19:35 CEST] <AliD1> my minimum is 14 because of my project codes
[23:19:41 CEST] <JEEB> ok
[23:19:47 CEST] <JEEB> or well, maximum
[23:19:53 CEST] <JEEB> you mean?
[23:19:59 CEST] <AliD1> No
[23:20:37 CEST] <AliD1> about 100% of people will be able to use my app
[23:20:43 CEST] <JEEB> yes
[23:20:57 CEST] <JEEB> I meant that you have a reason why you are targeting that old API version
[23:21:26 CEST] <AliD1> I have no problem with higher APIs.
[23:21:29 CEST] <JEEB> it's just also important to not set the thing too low because you lose features/bug fixes
[23:22:03 CEST] <AliD1> There is no problem. I can use 21
[23:22:22 CEST] <JEEB> yea, it just means that devices under 21 can't use the binaries :)
[23:22:39 CEST] <JEEB> I use 21 myself because my opengl code would just not run on 4.4.x devices or older
[23:25:08 CEST] <AliD1> Great. Thank you for your help. Bro, is this OK? Do I have all the files? https://paste.ee/p/tCzEx
[23:25:45 CEST] <JEEB> that's still not the one you created
[23:26:03 CEST] <JEEB> anyways, I'm out
[23:26:10 CEST] <AliD1> yes I can have arm64 too
[23:27:15 CEST] <AliD1> Now,what should I do?
[23:27:29 CEST] <AliD1> Compiling Lame?
[00:00:00 CEST] --- Sun Sep  3 2017
