[Ffmpeg-devel-irc] ffmpeg.log.20171013

burek burek021 at gmail.com
Sat Oct 14 03:05:01 EEST 2017


[00:49:56 CEST] <dylanmach1> hello, i am having trouble compiling and making a static bin of ffmpeg on macOS from the git
[01:46:54 CEST] <DocHopper> JEEB: Working on changing some of FFMPEG's features, and now I need to start compiling.  I'm looking at the guide here: https://trac.ffmpeg.org/wiki/CompilationGuide/MinGW but it is over my head.  Any more basic resources?
[02:19:46 CEST] <nanke> Hello, may I ask one question about ffmpeg transcoding ?
[02:29:00 CEST] <DHE> just ask your question
[02:34:19 CEST] <nanke> Thank you. I am working with ffmpeg to transcode live streams. On some live sources I get audio loss after the stream has been playing for a few hours (sometimes 3-4, sometimes after 10 hours). Also, sometimes the audio is ok but the picture is gone... If somebody can help me solve this, I will pay for the time spent on it. Thanks
[03:43:04 CEST] <Johnjay> nanke: well there goes my bluff
[03:43:12 CEST] <Johnjay> normally when i don't know the answer I say "I'll help you if you PAY me"
[03:43:18 CEST] <Johnjay> since nobody would ever do that. XD
[05:45:25 CEST] <Djuice> hello can anyone help me with this error:
[05:45:27 CEST] <Djuice>   -> Running ffmpeg configure script. Please wait...
[05:45:27 CEST] <Djuice> ERROR: libflite not found
[05:47:22 CEST] <Djuice> flite1 1.4-2 (8, 2.29) [installed]
[05:52:41 CEST] <atomnuker> don't enable flite
[05:54:20 CEST] <Djuice> what does flite do
[05:55:31 CEST] <atomnuker> text to voice synth
[05:59:58 CEST] <Djuice> thanks atomnuker
[06:44:36 CEST] <pankuu> how can i compress mp4 video size
[06:45:58 CEST] <pankuu> can anyone help me
[08:17:34 CEST] <Johnjay> ah I just saw that guy asking about minGW compilation
[08:18:19 CEST] <Johnjay> It's actually pretty easy as long as you get all the packages downloaded
[08:43:49 CEST] <rabbe> does WebRTC work in iphone/ipad now?
[08:57:05 CEST] <rabbe> how can i get VLC to read rtmp like ffplay -fflags nobuffer ?
[09:16:55 CEST] <stevenliu> rabbe: reference->input/codec->network
[09:18:51 CEST] <thebombzen> if zscale can't find a path between colorspaces, it'll segfault. is this a wrapper issue or a known zimg bug?
[09:20:39 CEST] <furq> uh
[09:20:48 CEST] <furq> zscale segfaults a lot but i've not had it segfault there before
[09:21:06 CEST] <furq> are you passing aligned data
[09:21:11 CEST] <thebombzen> ffmpeg.c, idk
[09:21:22 CEST] <furq> what's the filterchain
[09:22:53 CEST] <thebombzen> ffmpeg -f lavfi -i testsrc -vf zscale,format=yuv420p -f null -
[09:23:04 CEST] <thebombzen> testsrc is rgb, it'll segfault unless you specify the matrix
[09:23:07 CEST] <thebombzen> this command segfaults
[09:23:22 CEST] <thebombzen> this does not: ffmpeg -f lavfi -i testsrc -vf zscale=matrix=709,format=yuv420p -f null -
[09:23:36 CEST] <furq> fun
[09:23:55 CEST] <thebombzen> however, if you specify the matrix just to be safe, it'll segfault if the in colorspace isn't tagged
[09:24:01 CEST] <furq> the first one doesn't segfault here
[09:24:06 CEST] <furq> it just prints no path between colorspaces a lot
[09:24:17 CEST] <thebombzen> I build zimg from git master, hm.
[09:24:21 CEST] <thebombzen> built
[09:24:26 CEST] <furq> so do i but i've not rebuilt in a while
[09:25:38 CEST] <thebombzen> nope, still segfaults if I use the Arch version
[09:25:45 CEST] <thebombzen> ffmpeg is git master as well
[09:28:32 CEST] <thebombzen> I'll run it through valgrind
[09:37:17 CEST] <rabbe> stevenliu: huh? :) you mean in VLC settings?
[09:37:29 CEST] <stevenliu> yes
[09:38:08 CEST] <rabbe> under input / codecs -> demuxers -> rtp / rtsp?
[09:38:31 CEST] <thebombzen> furq: valgrind reports it's an alignment issue
[09:38:31 CEST] <thebombzen> https://0x0.st/CQx.log
[09:38:31 CEST] <rabbe> or rtp
[09:38:34 CEST] <stevenliu> maybe just chose low
[09:38:37 CEST] <stevenliu> choose low
[09:39:02 CEST] <furq> weird
[09:39:33 CEST] <thebombzen> I'll ask in #ffmpeg-devel
[09:39:52 CEST] <furq> yeah whatever that is must be new in zimg or ffmpeg
[09:40:05 CEST] <furq> it doesn't happen in my freebsd ports build with the latest releases of both
[09:40:30 CEST] <rabbe> which one is it, RTP or RTP/RTSP?
[09:47:41 CEST] <stevenliu> mhmm, just try whatever
[09:47:42 CEST] <stevenliu> :D
[09:48:13 CEST] <stevenliu> We do not use VLC to play rtmp; we use ijkplayer, flashplayer, ffplay
[10:43:13 CEST] <rabbe> is it a good idea trying to force speed = 1x for realtime streaming?
[10:53:06 CEST] <stevenliu> No
[10:55:18 CEST] <rabbe> >1 gives me slow motion?
[10:55:50 CEST] <rabbe> in the receiving end (using ffplay -fflags nobuffer rtmp://...)
[10:55:51 CEST] <stevenliu> We do low latency this way: cache 3-4 GOPs on the RTMP server. When a player requests an rtmp stream, we send all the cached GOPs to the player, but delete all the audio data and rewrite all the video timestamps to 0; the newest frame on the RTMP server gets timestamp 0 + 1/TB and is sent to the player. Then latency is low and the first picture appears very fast.
[10:58:17 CEST] <rabbe> is that all using ffmpeg or some other software?
[11:00:42 CEST] <rabbe> i want to stream with ffmpeg to nginx and then receive on a mobile device using vlc or something (preferably just the browser, but vlc is ok i guess). as realtime as i can get
[11:01:35 CEST] <rabbe> i also need to record, maybe i can use the nginx functionality for that, but the video should be uploaded to a certain place somehow
[11:02:31 CEST] <rabbe> realtime receivers will be inside the same wireless network
[11:02:42 CEST] <rabbe> upload should be to internet
[11:03:55 CEST] <rabbe> other tech than nginx is ok.. i'm open for suggestions
[11:03:59 CEST] <stevenliu> My solution is ffmpeg+SRS+ffplay: mobile phone (javacv/yasea), PC (OBS/FFmpeg), Server: SRS, Player: ffplay/flashplayer/ijkplayer
[11:04:50 CEST] <rabbe> javacv/yasea is javascript?
[11:05:22 CEST] <stevenliu> be used to make games (COC, ...)
[11:06:15 CEST] <rabbe> i dont need rtmp either.. but maybe rtmp is good?
[11:06:32 CEST] <stevenliu> no, javacv and yasea are SDKs for mobile phones, used to capture the camera or screen, encode the frames to h264, capture microphone audio to aac, and publish the data to an rtmp server
[11:06:41 CEST] <rabbe> webrtc is supposed to be good also?
[11:07:02 CEST] <rabbe> i need as little to be installed on the mobile devices as possible
[11:07:15 CEST] <stevenliu> if you want low latency and only a few people watching the stream, maybe WebRTC+Janus/licode is the better way
[11:07:18 CEST] <rabbe> audio is not important
[11:07:34 CEST] <rabbe> yeah, not too many viewers
[11:07:51 CEST] <stevenliu> Maybe you can try use WebRTC+Janus/licode
[11:08:03 CEST] <rabbe> difficult to set up?
[11:08:07 CEST] <stevenliu> looks like video meeting
[11:08:18 CEST] <rabbe> one-to-many style
[11:08:29 CEST] <stevenliu> there are lots of docs and blog posts covering how to install them
[11:09:16 CEST] <stevenliu> one-to-many style, but how many is "many"? 3? 30? 300? 3000? 30000? 300000?
[11:09:17 CEST] <rabbe> free solution?
[11:10:41 CEST] <rabbe> i guess no more than 3 viewers max.. one for TV maybe, then ipad or similar. then they will use the recorded material
[11:11:07 CEST] <stevenliu> Ah, you can try WebRTC,
[11:11:16 CEST] <stevenliu> That's free
[11:11:25 CEST] <stevenliu> you can find all the project on github
[11:11:44 CEST] <rabbe> you mean webrtc-experiments?
[11:11:49 CEST] <rabbe> .com or something
[11:12:06 CEST] <rabbe> that seemed to have some recordRTC
[11:12:40 CEST] <stevenliu> https://github.com/webrtc
[11:12:59 CEST] <furq> rabbe: if you want to be able to play it in the browser with low latency then webrtc is pretty much the only thing
[11:13:07 CEST] <furq> rtmp needs flash and hls/dash have horrendous latency
[11:15:22 CEST] <rabbe> yeah.. rtmp is working ok now if i use ffplay, but in browser i don't know.. i guess i could use vlc on mobile devices, but i haven't been able to get the same performance as ffplay -nobuffer yet
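For reference, a low-latency ffplay invocation along the lines discussed above might look like this (the rtmp URL is a placeholder; the flags beyond -fflags nobuffer are suggestions to experiment with, not a guaranteed recipe):

```shell
# -fflags nobuffer disables input buffering, -flags low_delay reduces decoder
# delay, and a small -probesize shortens the startup probing phase.
ffplay -fflags nobuffer -flags low_delay -probesize 32768 rtmp://server/live/stream
```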
[11:15:54 CEST] <stevenliu> browser? mobile phone browser?
[11:15:55 CEST] <rabbe> but does iphone handle webrtc now with ios11?
[11:16:05 CEST] <rabbe> i mean, anyone tried it?
[11:16:41 CEST] <rabbe> yup, either mobile browser or vlc on mobile.. u have any other solutions?
[11:17:27 CEST] <rabbe> i did see some rtmp player on appstore.. don't know if it exists for android
[11:18:12 CEST] <rabbe> but if webrtc is a better protocol than rtmp, i'd rather use that if it works
[11:18:34 CEST] <stevenliu> yes
[11:19:01 CEST] <rabbe> but does webrtc specify what kind of codecs should be used?
[11:19:12 CEST] <JEEB> yes, it's a limitation on top of RTP
[11:19:24 CEST] <JEEB> "use RTP with this and this and this in this way and you're ok to call it WebRTC"
[11:19:48 CEST] <rabbe> ok
[11:20:59 CEST] <rabbe> also i will need some way of turning on and off recording.. like some rest api
[11:21:24 CEST] <rabbe> solution suggestions are welcome, guys
[11:23:06 CEST] <stevenliu> In China, lots of platforms use rtmp to share the screen for game streaming, one-to-many with player counts > 10000, and lots of platforms just develop the webui; all the players and servers use a CDN, and the whole multimedia solution is a standard CDN service....   WebRTC is used for one-to-one connections, e.g. two people playing games together use WebRTC, and one person sharing their image to many viewers uses rtmp
[11:24:08 CEST] <rabbe> okay
[11:25:42 CEST] <stevenliu> WebRTC to WebRTC to RTMP , this is the way
[11:27:56 CEST] <JEEB> the problem with RTMP is that it will not be developed further
[11:28:15 CEST] <JEEB> we can just standardize on X over HTTP for christ's sake if we want something like that over TCP.
[11:28:18 CEST] <JEEB> :)
[11:28:33 CEST] <rabbe> so webrtc seems to be the way to go here?
[11:28:37 CEST] <JEEB> it depends
[11:28:43 CEST] <JEEB> WebRTC is for low-latency UDP scenarios
[11:28:51 CEST] <JEEB> because, well, RTP is UDP
[11:29:26 CEST] <JEEB> then you have a lot of infrastructure built around other TCP based protocols, which is why RTMP is often utilized for some things. it's not a technical thing, it's a "we already have this stuff around" kind of thing
[11:30:16 CEST] <rabbe> yeah, the mobile device + tv (somehow) should receive the latest camera (gopro) image.. but the recording should be smooth
[11:30:46 CEST] <rabbe> i guess mobile receiving could be choppy, as long as its showing realtime
[11:31:02 CEST] <stevenliu> yes, RTMP is not being developed anymore, but it is a good protocol :D
[11:31:05 CEST] <rabbe> but recording should be a nice video
[11:32:11 CEST] <stevenliu> or try to use websocket ?
[11:32:18 CEST] <BtbN> WebRTC is a mess
[11:32:21 CEST] <JEEB> stevenliu: I don't think the protocol itself is good. it's just a TCP based thing that people have tooling for. I don't think anyone actually uses the messages in it, for example
[11:32:25 CEST] <JEEB> BtbN: yup
[11:32:26 CEST] <BtbN> good luck getting a stable solution implemented
[11:32:41 CEST] <JEEB> not enough tooling and the clients don't tell you why something failed
[11:32:59 CEST] <stevenliu> mhmm haha
[11:33:13 CEST] <stevenliu> that is why akamai won't support flv and rtmp CDN
[11:33:37 CEST] <JEEB> or well, stevenliu - what I say is that I don't think the RTMP protocol is good or bad - it's really irrelevant since you're pushing stuff over TCP, that's all basically
[11:33:47 CEST] <BtbN> They won't support non-http CDN because they already have one for http, and frankensteining everything else on top of HTTP is cheaper for them
[11:33:49 CEST] <JEEB> you could be doing the same without the RTMP layer if you had the tools and players
[11:34:07 CEST] <stevenliu> yes, what JEEB said is right
[11:34:37 CEST] <JEEB> I've been doing live with VLC's HTTP output for example. it doesn't scale, but I can just connect to it with any HTTP client
[11:34:51 CEST] <JEEB> and I just push MPEG-TS over the pipe
[11:34:52 CEST] <JEEB> done
[11:34:54 CEST] <BtbN> If said http client supports the video format
[11:35:04 CEST] <BtbN> and browsers do not support anything that can be streamed in a useful manner
[11:35:06 CEST] <JEEB> yes, of course. I was mostly playing with mpv/VLC
[11:35:11 CEST] <JEEB> yes, I know browsers are shit in that sense :D
[11:35:20 CEST] <stevenliu> whatever
[11:35:22 CEST] <JEEB> XMLHttpRequest requiring you to buffer the whole shit
[11:35:38 CEST] <JEEB> not sure if websockets is better in that sense
[11:41:45 CEST] <stevenliu> whatever the protocol, we use rtmp or flv just because people already had flashplayer installed on the pc, and people don't want to install a VLC plugin, so we use it. now lots of people and platforms use mobile phones; if we still use rtmp or flv, maybe it's just that nobody wants to change, or it needs time. i think it needs some time, and we can use http+ts to support it. maybe it's just for historical reasons :D
[11:45:23 CEST] <stevenliu> people want to use hevc, but flv cannot carry it; those who add it to flv themselves turn it into a private protocol. making that uniform, or using http+ts, is better :D. whatever, the stream publisher and player are built by the same parties anyway, so it is not that important what protocol they use.
[11:47:17 CEST] <rabbe> lets say the computer having the capture card will "just" need to capture the video to disk at certain times, and provide only one viewer with a live stream, can i do that with ffmpeg alone? maybe send via udp?
[11:47:53 CEST] <rabbe> if i create that stream, the 'server' can also catch from that stream?
[11:50:35 CEST] <JEEB> stevenliu: yup, someone just has to make a sane pick :)
[11:50:46 CEST] <JEEB> and make the first clients, as flash is going down
[11:52:11 CEST] <stevenliu> :D  agreed
[11:52:13 CEST] <notdaniel> dash+hls :P
[11:52:18 CEST] <stevenliu> Noooo
[11:52:37 CEST] <stevenliu> Dash and HLS have long latency
[11:55:21 CEST] <stevenliu> for TV broadcast it is maybe better, but its support is not as good as http+ts or flv :D, and http+ts or flv are worse than WebRTC, but WebRTC cannot support a huge number of players, so if you're not using a PC browser, maybe http+ts or mkv is the better way?
[12:26:17 CEST] <JEEB> notdaniel: yea, that's not actual streaming, which is what you want in various cases
[15:14:53 CEST] <Nacht> Good lord, building x265 takes ages :/
[15:17:50 CEST] <klaxa> just like encoding a video with x265
[15:17:55 CEST] <Nacht> So true
[15:19:36 CEST] <Nacht> Anyone know why I keep getting this -> ERROR: x265 not found using pkg-config
[15:19:55 CEST] <Nacht> First installed it through apt-get, now I just built it myself. I definitely got it installed
[15:20:27 CEST] <JEEB> make sure you have pkg-config installed and that you have x265 in your pkg-config search path
[15:20:31 CEST] <JEEB> pkg-config --cflags x264
[15:20:33 CEST] <JEEB> pkg-config --cflags x265
[15:20:48 CEST] <JEEB> and if it isn't, PKG_CONFIG_PATH=/your/prefix/lib/pkgconfig
[15:21:11 CEST] <JEEB> (that appends that directory to the pkg-config search path)
[15:22:16 CEST] <Nacht> Cheers, ill have a look at that
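JEEB's checks above can be rolled into a quick sanity check (/your/prefix is a placeholder for wherever x265 was installed):

```shell
# Prepend the install prefix to the pkg-config search path, then ask
# pkg-config whether it can see x265 and what flags it would emit.
export PKG_CONFIG_PATH="/your/prefix/lib/pkgconfig:${PKG_CONFIG_PATH}"
pkg-config --exists x265 && echo "x265 found" || echo "x265 NOT found"
pkg-config --cflags --libs x265 || true
```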
[15:26:35 CEST] <jojva> Hi. I have a stream of YUV frames of some resolution (e.g. 1920x1080)  and want to encode them to a smaller resolution (e.g. 640x360). Do I have to use sws_scale for this, before I call avcodec_encode_video2, or is there a cleaner/faster way?
[15:29:01 CEST] <JEEB> jojva: if your stuff is already an AVFrame, then libavfilter's scale filter is what you want
[15:29:14 CEST] <JEEB> also for encoding if you are writing new code utilize the new API
[15:29:26 CEST] <JEEB> jojva: this stuff https://www.ffmpeg.org/doxygen/trunk/group__lavc__encdec.html
[15:29:36 CEST] <JEEB> has the same API for audio, video and subtitles
[15:29:45 CEST] <JEEB> or wait, no. not subittles
[15:29:52 CEST] <JEEB> since those are not AVFrames yet
[15:29:53 CEST] <JEEB> :<
[15:30:05 CEST] <jojva> JEEB: I can use the scale filter from code?
[15:30:11 CEST] <JEEB> yes, of course
[15:30:18 CEST] <jojva> And I can't use the new API, I'm working on an embedded system
[15:30:24 CEST] <jojva> JEEB: ok, thx ;)
[15:30:25 CEST] <JEEB> what does that have to do with it :D
[15:30:38 CEST] <JEEB> unless you are stuck with an old old version of FFmpeg
[15:30:43 CEST] <jojva> exactly
[15:30:50 CEST] <JEEB> my condolences then :P
[15:30:58 CEST] <JEEB> I find the send/receive API nice
[15:31:16 CEST] <Nacht> man, I have no clue where my pkg-config is installed :/
[15:31:32 CEST] <Nacht> ah wait, I fixed my locate
[15:32:39 CEST] <jojva> JEEB: Yep I'm actually working on several version of FFmpeg at the same time, with horrible #ifdef to handle avcodec_encode_video and avcodec_encode_video2. It's a mess but I don't have a choice. And yep I love the new API it's clean I've tried it at home. :)
[15:36:40 CEST] <jojva> JEEB: another question. What's the difference between libavfilter's scale filter and sws_scale?
[15:36:54 CEST] <rabbe> if i send out an rtmp stream and get ~2x speed, i get slow motion at the receiving end (ffplay -fflags nobuffer). how can i fix this?
[15:36:56 CEST] <JEEB> jojva: libavfilter is less error prone if you are already dealing with AVFrames
[15:37:09 CEST] <JEEB> swscale is used in the background for the scale filter
[15:37:16 CEST] <JEEB> it just that *you* don't have to write that code
[15:37:43 CEST] <JEEB> instead you can write generic libavfilter code which will also help you with other stuff should you later require it
[15:38:23 CEST] <jojva> well i've never handled libavfilter and i know sws_scale quite well. So i think the easy solution for me is the latter one.
[15:44:10 CEST] <JEEB> jojva: alright :)
[15:44:20 CEST] <JEEB> if you know it then sure
[15:45:49 CEST] <Nacht> wth...
[15:46:01 CEST] <Nacht> pkg-config --cflags x265 return my path to x265
[15:46:09 CEST] <Nacht> yet it still claims its not there
[15:46:44 CEST] <JEEB> check if it can link from ffbuild/config.log
[15:46:52 CEST] <JEEB> that should show the exact error it's having
[15:49:09 CEST] <Nacht> threading.cpp:(.text+0x93): undefined reference to `pthread_join'
[15:50:14 CEST] <Farmer-Fred> Hey y'all, I'm wondering if ffmpeg can convert video files to .AMV?
[15:50:38 CEST] <JEEB> Nacht: are you per chance linking to a static build?
[15:51:09 CEST] <Nacht> I am using --pkg-config-flags="--static" \
[15:51:29 CEST] <JEEB> then the pc file just lacks -lpthread :P
[15:51:33 CEST] <JEEB> or something similar
[15:52:03 CEST] <Nacht> I'm just following the standard Ubuntu Compilation tutorial, with added options
[15:52:16 CEST] <Nacht>   --enable-openssl    --enable-indevs
[15:52:22 CEST] <JEEB> yea, it just means that either x265 does somethign wrong or doesn't support your use case
[15:52:33 CEST] <Nacht> bleh
[15:52:48 CEST] <Nacht> its weird tho, I did build it with x265 in the past.
[15:52:54 CEST] <Nacht> I wonder why it isnt working now then :/
[15:53:01 CEST] <JEEB> anyways, just add the flag to Libs in the pc file
[15:53:03 CEST] <JEEB> for x265
[15:53:04 CEST] <JEEB> :P
[15:53:08 CEST] <JEEB> or switch to shared builds
[15:53:51 CEST] <Nacht> What's the difference choosing static or shared ?
[15:55:46 CEST] <JEEB> shared libraries vs static. you might have to set LD_LIBRARY_PATH=/your/prefix/lib for the things to load up
[15:55:47 CEST] <DHE> static binaries don't require external libraries like /lib/libz.so or other system libraries
[15:56:06 CEST] <DHE> but the executable is significantly larger
[15:56:21 CEST] <Nacht> Ah. I see
[15:56:44 CEST] <Nacht> So like building OpenSSL along, so you're not forced to use an ancient version (looking at you CentOS)
[16:08:25 CEST] <Nacht> cheers JEEB, the -lpthread did the thing
[16:08:54 CEST] <JEEB> basically, x265 depends on it but doesn't signal that it does
[16:09:11 CEST] <JEEB> thus fixing the pc file is the best thing, and reporting the issue to x265
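The .pc fix JEEB describes is a one-line edit to x265.pc; sketched here against a scratch stand-in file so the sed expression can be seen working (point PC at the real /your/prefix/lib/pkgconfig/x265.pc instead):

```shell
# Append -lpthread to the Libs: line; '&' in sed stands for the matched text.
PC="$(mktemp)"
printf 'Name: x265\nLibs: -L${libdir} -lx265\n' > "$PC"   # stand-in for the real file
sed -i 's/^Libs:.*/& -lpthread/' "$PC"
grep '^Libs:' "$PC"   # prints: Libs: -L${libdir} -lx265 -lpthread
```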
[16:31:09 CEST] <Nacht> Ugh, I hate building FFMPEG
[16:31:15 CEST] <Nacht> takes ages, and it always ends up error'ing
[16:34:20 CEST] <klaxa> really? it only takes a few minutes on my good laptop
[16:34:59 CEST] <Nacht> It was just running for 30mins
[16:35:04 CEST] <Nacht> Stopped on vpx :/
[16:36:53 CEST] <Nacht> Might be due to Windows Bash.
[16:37:08 CEST] <Nacht> It's good, but it's not equal to an actual Ubuntu install
[16:37:21 CEST] <JEEB> it seems like it gets better with 1709
[16:39:14 CEST] <Nacht> Now getting: (vpx_codec.c.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
[16:39:39 CEST] <JEEB> needs PIC in the library you depend on
[16:39:44 CEST] <JEEB> (not FFmpeg)
[16:40:10 CEST] <Nacht> It's the vpx lib
[17:15:05 CEST] <kepstin> Nacht: you're trying to link a static library into a shared library, and the static library was not compiled in the correct way for that. Either build ffmpeg as static, or link to the dynamic version of the vpx lib.
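A third option, hinted at by the compiler message itself, is rebuilding libvpx with position-independent code so the static library can go into shared FFmpeg libraries. A sketch, with a placeholder prefix (flag names per libvpx's own configure script, worth double-checking against your version):

```shell
# Rebuild libvpx with PIC enabled, then reinstall it.
cd libvpx
./configure --prefix=/your/prefix --enable-pic --disable-examples --disable-unit-tests
make -j"$(nproc)"
make install
```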
[18:10:09 CEST] <DocHopper> Is there a simple method for compiling the FFMPEG source code for windows?
[18:12:16 CEST] <kepstin> ... last time I did it, I cross-compiled it from a linux box :) I think you'll want to grab msys2 with mingw-w64 nowadays, building ffmpeg inside the msys2 shell should be fairly straightforwards.
[18:13:30 CEST] <SonicTheHedgehog> DocHopper: https://trac.ffmpeg.org/wiki/CompilationGuide  see the Windows sections
[18:15:57 CEST] <DocHopper> SonicTheHedgehog: I've already looked at that and it's beyond my understanding.  I think I might have found a youtube video that explains.
[18:16:56 CEST] <DocHopper> SonicTheHedgehog: That guide includes the phrase "Once you installed all the necessary packages (MinGW is the only strict requirement for building FFmpeg, git is required to update your FFmpeg source), you need to open a MinGW shell, change directory to where you checked out the FFmpeg sources, and configure and make FFmpeg the usual way."
[18:17:09 CEST] Action: SonicTheHedgehog nods
[18:17:22 CEST] <DocHopper> 8 words to explain the part I have no idea how to do.
[18:17:58 CEST] <SonicTheHedgehog> in a nutshell, MinGW is a compatibility subsystem that makes Windows behave like Linux in some ways, and you need to do the compilation from there
[18:18:14 CEST] <DHE> that sounds more like cygwin
[18:19:16 CEST] <DocHopper> Oh, so I just run the make file in a mini linux environment?
[18:19:32 CEST] <DHE> I think cygwin provides a POSIX interface while mingw is more like simply porting gnu apps (including gcc) to windows.
[18:19:54 CEST] <DHE> I've built full, proper GUI windows apps under mingw that don't have additional dependencies.
[18:19:58 CEST] <SonicTheHedgehog> yes, that's totally true, but to avoid confusing DocHopper any further I left all that out
[18:20:08 CEST] <DHE> fair enough
[18:21:12 CEST] <DocHopper> SonicTheHedgehog: Thanks for watching out for me.  I take steps one at a time. ^_^
[18:21:22 CEST] Action: SonicTheHedgehog nods
[18:22:18 CEST] <DocHopper> Never thought this project would get so complex, or that to get FFMPEG to ignore an error I would have to edit the source code and recompile.
[18:23:00 CEST] <SonicTheHedgehog> Hm, what error are you getting? Maybe there's a way around that that doesn't require you to compile your own binary?
[18:23:54 CEST] <DocHopper> I would love to find that. I need ffmpeg to load a text file for an overlay that changes 2x a second. If the file is being edited when ffmpeg tries to read it, it stops.
[18:27:07 CEST] <SonicTheHedgehog> DocHopper: https://ffmpeg.zeranoe.com/forum/viewtopic.php?t=718
[18:29:03 CEST] <DocHopper> SonicTheHedgehog:  I'm looking at the flags in the last post?
[18:30:30 CEST] <SonicTheHedgehog> DocHopper: yeah, the last post suggests using a tee
[18:34:19 CEST] <DocHopper> SonicTheHedgehog: I'll give you a brief sitrep.  I'm pulling video from 2 webcams on a latte panda, merging it with a data stream from a connected serial device (Which is in a text file created and updated using python), and saving the file as a .ts so I can stream it live OTA to a remote location.
[18:34:32 CEST] <SonicTheHedgehog> I see
[18:35:22 CEST] <DocHopper> I'm not sure how a tee would help.
[18:35:42 CEST] <SonicTheHedgehog> Hmmmmmmm
[18:48:14 CEST] <DocHopper> SonicTheHedgehog: Is there an FFMPEG bot I can add this video too if people ask about compiling in the future?  https://www.youtube.com/watch?v=3yhkX0uaQGk
[18:49:36 CEST] <BtbN> a video? About compiling? oO
[18:52:11 CEST] <DocHopper> BtbN: It's what I needed.
[18:52:40 CEST] <BtbN> A video is the worst possible way for compile instructions.
[18:53:14 CEST] <redrabbit> yeah
[18:53:34 CEST] <BtbN> well, a pure audio recording would probably be worse
[18:53:44 CEST] <redrabbit> lol
[18:53:54 CEST] <redrabbit> that would be like evil
[18:57:42 CEST] <DocHopper> Well, it's better than the official FFMPEG documentation for someone who has no idea what they're doing.
[19:03:40 CEST] <BtbN> It's just ./configure && make for starters.
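BtbN's minimal build, spelled out (paths are illustrative; options grow from here as you enable external libraries):

```shell
# Fetch the sources, configure with defaults, build, and install.
git clone https://git.ffmpeg.org/ffmpeg.git
cd ffmpeg
./configure --prefix=/usr/local
make -j"$(nproc)"
sudo make install
```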
[19:08:33 CEST] <kerio> BtbN: what about
[19:08:36 CEST] <kerio> a sculpture
[19:08:44 CEST] <thebombzen> >compilation instructions
[19:08:46 CEST] <thebombzen> >video
[19:08:49 CEST] <thebombzen> lmao
[19:11:32 CEST] <Fenrirthviti> There isn't even any audio, it's just subtitles
[19:15:58 CEST] <ayum> Hi, what's the difference between cuda and nvenc? I know nvenc is encoder, and there is a cuvid which is decoder, but what's cuda?
[19:16:39 CEST] <ayum> I tried compile ffmpeg with --enable-cuda and --enable-cuvid, but it says Unknown option "--enable-cuda"
[19:16:55 CEST] <ayum> I am using ffmpeg 3.0.2 version
[19:20:58 CEST] <DHE> cuda is the nvidia equivalent to OpenCL, but there's cuda glue for the cuvid and nvenc interfaces
[19:21:22 CEST] <DHE> so you can do things like decode and post-process frames on the GPU, then re-encode with nvenc all internally
[19:22:45 CEST] <kepstin> note that 'cuda' usually refers to stuff running on the main gpu processor, but 'cuvid' and 'nvenc' refer to dedicated video encoder/decoder logic on the chip separate from the gpu cores.
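A full-GPU pipeline of the kind DHE describes might look like this with a 2017-era ffmpeg (file names and bitrate are placeholders; decoder/encoder names per the cuvid/nvenc wrappers of that time):

```shell
# Decode H.264 with cuvid, keep frames on the GPU, re-encode with nvenc.
ffmpeg -hwaccel cuvid -c:v h264_cuvid -i input.mp4 \
       -c:v h264_nvenc -b:v 4M output.mp4
```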
[19:23:48 CEST] <ayum> okay, thanks. If I use the nvenc codec for encoding, do I also need cuda?
[19:24:28 CEST] <BtbN> 3.0 is too old for the cuda filters
[19:24:47 CEST] <ayum> I will update to latest version and try it again
[19:25:10 CEST] <kepstin> ayum: you can use the encoder hardware on nvidia cards without needing to use any of the other cuda functionality, iirc.
[19:25:43 CEST] <ayum> @kepstin, if I use cuda, can I transcode more channels?
[19:26:58 CEST] <kepstin> ayum: not sure what you mean. The number of encoder streams an nvidia card can handle is set by arbitrary driver limits on geforce cards, and the capabilities of the nvenc on quadro cards, cuda has nothing to do with it.
[19:27:38 CEST] <ayum> so I can just use nvenc, it's enough, right?
[19:27:57 CEST] <kepstin> if you just want to use the hardware encoder, yes.
[19:29:17 CEST] <ayum> @kepstin, I know that some nvidia gpu cards have a session limitation; I remember the GTX 980 can only open 2 sessions at the same time. Can you recommend an nvidia gpu card which has no session limitation, or at least can transcode more channels?
[19:29:49 CEST] <ayum> yes, my plan is to do transcoding only, no need for the cuda filters I think
[19:29:56 CEST] <kepstin> ayum: the session limit is done by drivers on the non-professional cards. If you want no session limits, get a midrange or higher quadro card.
[19:30:26 CEST] <ayum> @kepstin, thanks
[19:30:47 CEST] <kepstin> note that if you're only using the encoder, the actual gpu performance doesn't matter
[19:31:17 CEST] <kepstin> most of the cards of a particular chip generation will have the same video encoder performance (minor differences depending on clocks)
[19:31:51 CEST] <BtbN> Well, usually the lower end cards have a better decoder/encoder
[19:31:55 CEST] <BtbN> Because they come out later
[19:32:04 CEST] <BtbN> That's true for the 950 and 1050
[19:32:07 CEST] <kepstin> ayum: https://developer.nvidia.com/nvidia-video-codec-sdk#NVENCPerf has some charts showing session counts on different cards
[19:32:33 CEST] <BtbN> That chart is pointless though
[19:32:40 CEST] <BtbN> As you are limited to two streams anyway
[19:32:49 CEST] <BtbN> unless you have a Quadro or Tesla card
[19:32:58 CEST] <kepstin> that chart only lists quadro and tesla cards, yes
[19:33:51 CEST] <ayum> got it, I will check the prices of the professional cards above
[19:36:38 CEST] <FMatteo101101> Hello
[19:37:25 CEST] <FMatteo101101> I am merging 2 rtmp streams....one main stream and one stream with a translator's voice added to it
[19:37:42 CEST] <kepstin> looking at the prices, a stack of quadro p2000 cards is probably the most cost-efficient way to get a bunch of nvenc sessions, if that's what you want.
[19:37:52 CEST] <FMatteo101101> I am doing this on a nginx server
[19:38:34 CEST] <FMatteo101101> it's working fine,....until the translator's connection is lost for some reason
[19:39:19 CEST] <FMatteo101101> then I have an issue reconnecting the translator,....and viewing the stream while the translator is offline
[19:39:29 CEST] <FMatteo101101> can someone help me on this issue?
[19:40:28 CEST] <DocHopper> SonicTheHedgehog: Thanks for your help, I successfully compiled ffmpeg from the snapshot, now it's time to start modifying things!
[19:40:33 CEST] <BtbN> you'll have to re-start the merging process.
[19:40:58 CEST] <FMatteo101101> is there a way to automate this process?
[19:42:40 CEST] <DHE> maybe just a shell script?
[19:42:49 CEST] <DHE> while true; do ffmpeg ..... ; sleep 1 ; done
[19:43:01 CEST] <DHE> (works on most shells on unix)
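DHE's one-liner, fleshed out as a supervisor script (all URLs and the map/codec options are placeholders for the actual merge command; the stream layout here, video from input 0 and translator audio from input 1, is a guess):

```shell
#!/bin/sh
# Restart the merge whenever ffmpeg exits, e.g. when the translator's
# connection drops.
while true; do
    ffmpeg -i rtmp://server/live/main -i rtmp://server/live/translator \
           -map 0:v -map 1:a -c:v copy -c:a aac -f flv rtmp://server/live/out
    echo "ffmpeg exited, restarting in 1s" >&2
    sleep 1
done
```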
[19:45:17 CEST] <SonicTheHedgehog> DocHopper: yw, sorry had to step away for a while, but glad you got things working! :)
[19:49:36 CEST] <FMatteo101101> has anyone ever managed a live stream with a translator on the other side of the world?
[19:51:54 CEST] <c3r1c3-Win> FMatteo101101: Via TV broadcast, not the internet. It was fairly straight-forward, if really expensive, but doing it over the internet should be straight-forward (assuming they have an excellent connection).
[19:52:25 CEST] <FMatteo101101> well I got it working.....
[19:52:45 CEST] <FMatteo101101> but when the translator loses the connection I have issues reconnecting
[19:55:15 CEST] <DHE> sounds like you might want something more reliable than stock ffmpeg, like making an application of your own using ffmpeg that tolerates the loss and starts streaming either stock audio or silence until it reconnects
[19:55:35 CEST] <DHE> I'm not specifically aware of something existing that does that now
[19:55:57 CEST] <FMatteo101101> DHE probably you are right
[19:57:16 CEST] <FMatteo101101> any ideas of someone that could help me in this task?
[19:57:51 CEST] <c3r1c3-Win> FMatteo101101: Some video-able coders... and they tend to cost $$$.
[19:59:19 CEST] <FMatteo101101> well....if it's reasonable pricing I could consider it
[22:16:52 CEST] <tomonori_> Hi, I have a quick sync problem with the latest 3.3.4 ffmpeg: it shows the warning "Warning in encoder initialization: partial acceleration (4)" and seems to use the CPU for encoding. But in the old 3.2.4 version the quick sync encoder works great. I want to know, is it a bug in the 3.3 version?
[22:17:36 CEST] <tomonori_> Same problem with this ticket. https://trac.ffmpeg.org/ticket/6347
[23:10:16 CEST] <alexp> tomonori_: i ran into that same problem before
[23:10:47 CEST] <alexp> in my case, it was because a monitor wasn't plugged in for the integrated graphics
[23:11:09 CEST] <alexp> the difference is that earlier versions *didn't report a problem* but behaved at exactly the same speed
[23:11:54 CEST] <alexp> for my case, I use a test command to see if encoding to qsv passes or fails, but since it was only a "warning", it would technically pass and use the not-worth-using partial acceleration
[23:12:11 CEST] <alexp> so I just edited the source file to fail instead of warn
[23:12:56 CEST] <tomonori_> @alexp, I don't have another graphics card, only the integrated one
[23:13:16 CEST] <tomonori_> @alexp, and if you check the CPU usage, you will find it is actually using CPU encoding
[23:13:22 CEST] <alexp> right
[23:13:25 CEST] <alexp> and slowly, at that
[23:13:32 CEST] <alexp> have you tried updating your graphics driver?
[23:13:55 CEST] <tomonori_> @alexp, yes, in the old version 3.2.8 quick sync works well, CPU usage only 3%, but in 3.3 CPU usage is 30%
[23:14:19 CEST] <tomonori_> @alexp, sorry, not.
[23:14:52 CEST] <alexp> "sorry, not" = "i didn't update my graphics driver"?
[23:15:27 CEST] <tomonori_> @alexp, as you know, the intel media SDK depends on libva, libdrm and libmfx; if I update the driver, perhaps it will break the intel media SDK.
[23:15:49 CEST] <tomonori_> sorry, my bad english
[23:15:52 CEST] <alexp> ok, well it's possible this is just specific to linux
[23:15:57 CEST] <tomonori_> I didn't upgrade the driver
[23:16:03 CEST] <alexp> it works fine under windows, although that message is definitely shown in some cases
[23:16:25 CEST] <tomonori_> yes, so I am still using 3.2.8 version
[23:16:31 CEST] <alexp> k
[23:17:22 CEST] <alexp> unfortunately, i'm not the person who can really help. i just wanted to give you some basic ideas about why it could happen. if it worked in a previous version with the same driver, then perhaps it's a bug. or perhaps it's accessing the driver differently now and requires a driver update
[23:17:26 CEST] <tomonori_> I think it's a problem in the intel media SDK? because the error message is shown after calling MFXVideoENCODE_Init()
[23:18:13 CEST] <tomonori_> @alexp, thanks, I will try updating the driver later
[00:00:00 CEST] --- Sat Oct 14 2017


More information about the Ffmpeg-devel-irc mailing list