[Ffmpeg-devel-irc] ffmpeg.log.20170409

burek burek021 at gmail.com
Mon Apr 10 03:05:02 EEST 2017


[12:05:59 CEST] <primaeval> I have a question about hls. When reading an hls m3u8 with multiple stream variants each stream's first packet is read rather than relying on the m3u8 contents to pick the stream with the highest bandwidth. This can take 12 seconds or so. Is it a bug or a feature?
[12:10:56 CEST] <JEEB> probably a feature of the fact that it just uses the mpeg-ts demuxer behind the scenes
[12:11:07 CEST] <JEEB> which then probes all the streams available in there, including which container it is
[12:11:29 CEST] <JEEB> or well, I think it's not due to the MPEG-TS demuxer but indeed due to the fact that it requires probing of stuff that is not mpeg-ts
[12:11:57 CEST] <JEEB> (HLS does let you have raw audio as well as fragmented MP4 nowadays, and no real way of noting which it is)
[12:12:32 CEST] <JEEB> I guess the whole probing thing could be minimized better but nobody has cared enough so far
[12:15:04 CEST] <primaeval> The reason I'm asking is that Kodi Krypton can now take 12 seconds to start an HLS stream, whereas in Kodi Jarvis it took less than 2 seconds. The code to manually read the m3u and choose the variant has been removed and it now relies on ffmpeg to play the stream.
[12:17:04 CEST] <JEEB> feel free to make a bug report on the trac
[12:18:58 CEST] <primaeval> Thanks. One has been opened up but I'm not sure it is a bug. My opinion is that ffmpeg is being robust in trying to work out the stream type for transcoding. It wasn't really designed as a fast streaming engine.
[12:19:12 CEST] <JEEB> yes
[12:19:28 CEST] <JEEB> or well, the systems underneath do the probing
[12:19:31 CEST] <JEEB> of each stream
[12:19:39 CEST] <primaeval> Here is my Kodi trac bug report. http://trac.kodi.tv/ticket/17422
[12:20:10 CEST] <primaeval> Here is the ffmpeg trac bug report. https://trac.ffmpeg.org/ticket/6295
[12:20:28 CEST] <JEEB> Kodi could of course tweak the probe|analyze sizes and durations
[12:20:34 CEST] <JEEB> since it is utilizing the API
[12:20:44 CEST] <JEEB> also man, did I just write that name :V
[12:23:51 CEST] <primaeval> I've only spent a few hours looking at the ffmpeg code. Do you know if there is a way to bypass the segment reading and just use the m3u header?
[12:24:49 CEST] <JEEB> probably not
[12:24:59 CEST] <JEEB> but you can just minimize the probing time/size
[12:25:15 CEST] <JEEB> libavformat/hls*.c would be the related stuff for HLS specifically
[12:26:50 CEST] <primaeval> How do you speed up the probing? I tried a few options in the command line ffprobe but didn't find the right one.
[12:27:13 CEST] <JEEB> analyzeduration/probesize and friends
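The analyzeduration/probesize knobs mentioned here are real ffmpeg/ffprobe options. A minimal sketch of capping them to cut startup time, built as an argument list; the URL and the limit values are placeholders, not from this conversation:

```python
def build_fast_probe_cmd(url, probesize=500_000, analyzeduration_us=1_000_000):
    """Return an ffprobe command line that caps how much data is probed.

    -probesize is in bytes, -analyzeduration in microseconds; smaller values
    trade probe accuracy for faster startup.
    """
    return [
        "ffprobe",
        "-probesize", str(probesize),                 # max bytes read while probing
        "-analyzeduration", str(analyzeduration_us),  # max microseconds analyzed
        "-i", url,
    ]

cmd = build_fast_probe_cmd("https://example.com/master.m3u8")
# pass cmd to subprocess.run(...) to actually invoke ffprobe
print(cmd)
```

The same option names can be set through the libavformat API (as an options dictionary to avformat_open_input), which is what an API user like Kodi would do.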
[12:28:23 CEST] <primaeval> In your opinion do you think Kodi should still be reading the m3u header itself rather than ffmpeg and just asking ffmpeg to play the variant?
[12:29:10 CEST] <JEEB> it might be the quicker way to get rid of the complexity, but the best way in the end would be to find a resolution that makes libavformat more usable for this use case.
[12:31:22 CEST] <primaeval> I read the HLS Live Streaming specs and it looked like it should be up to the client to choose the variant because there could be options like language or camera angle. That seemed to me that Kodi should be classed as the client because the user knows which language and camera angle they want.
[12:31:46 CEST] <JEEB> yes, those are the things that lavf exports to the API client
[12:32:04 CEST] <JEEB> including then also things like what the stream actually is etc
[12:32:31 CEST] <JEEB> so in case of lavf the API client then picks its choices according to the information it receives from lavf
[12:33:00 CEST] <JEEB> what lavf does is just give the client information regarding the streams of data in the input(s)
[12:35:53 CEST] <primaeval> So you don't think lavf should make the decisions about language and camera angle itself? It should be up to Kodi?
[12:36:23 CEST] <JEEB> it should currently also be up to Kodi
[12:36:37 CEST] <JEEB> as lavf by itself does not pick anything
[12:37:08 CEST] <JEEB> you can request "give me the "best" A|V|S track" from it, but that is already a decision made by the client
[12:40:14 CEST] <primaeval> So it should really be a 2 stage process: ask av_probe_input_format first, then Kodi chooses the variant and lavf plays it with avformat_open_input ?
[12:41:16 CEST] <JEEB> I'm pretty sure probing is done after opening input :P
[12:41:55 CEST] <JEEB> also now that I think about it, how does the whole thing with streams work with HLS...
[12:42:01 CEST] <JEEB> as in, in lavf
[12:42:04 CEST] <primaeval> You can see I didn't write the code. ;)
[12:42:27 CEST] <JEEB> because usually the loop is read_packet => check the stream id and other fields to decide what to do => discard or utilize
[12:43:01 CEST] <JEEB> but... to get all packets you'd have to get all the alternative streams from the input...
[12:43:14 CEST] <JEEB> yea, this stuff goes way too meta for me on a Sunday :D
[12:43:24 CEST] <JEEB> I'm happily oblivious of these multi-stage formats
[12:43:28 CEST] <JEEB> (in lavf)
[12:43:54 CEST] <bencoh> :D
[12:44:14 CEST] <primaeval> In hls_read_header in hls.c it reads the first ts segment from each stream variant.
[12:44:42 CEST] <JEEB> pretty sure it does something format-unspecific ;)
[12:44:47 CEST] <JEEB> because hls is not only MPEG-TS
[12:47:44 CEST] <primaeval> What would happen if hls_read_header only relied on reading the m3u and took the BANDWIDTH field as its decision maker? Would it break something down the line?
[12:48:37 CEST] <JEEB> as long as you can open the right demuxer in the end
[12:48:45 CEST] <JEEB> for the input data itself
[12:51:31 CEST] <primaeval> Does that rely on any information you can't get in the header? What you get from the m3u is #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1012300,CODECS="mp4a.40.2,avc1.77.30",RESOLUTION=704x396
[12:51:50 CEST] <JEEB> yes, that doesn't contain the container
[12:52:08 CEST] <JEEB> it does tell you that you're most likely getting AAC and AVC video and you have some profile information in the integer
[12:52:14 CEST] <primaeval> So it really has to probe the streams?
[12:52:37 CEST] <JEEB> unless you are ready to fail with files without extensions, yes
[12:52:46 CEST] <JEEB> because you could do extension based guessing
[12:52:52 CEST] <JEEB> but at least you would limit the probing to the selected alternative
[12:54:04 CEST] <primaeval> So it could be optimised a bit by picking the highest bandwidth from the m3u then just probing that stream?
[12:54:45 CEST] <JEEB> yes
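The optimisation agreed on here — trust the master playlist, pick the highest BANDWIDTH, and only probe that one variant — could be sketched like this. This is a simplified parser (it ignores attribute-quoting edge cases and assumes each #EXT-X-STREAM-INF line is immediately followed by its URI); the playlist text and URLs are made-up placeholders:

```python
import re

def pick_highest_bandwidth(master_text):
    """Return the URI of the #EXT-X-STREAM-INF entry with the largest BANDWIDTH."""
    best_bw, best_uri = -1, None
    lines = [l.strip() for l in master_text.splitlines() if l.strip()]
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF:"):
            m = re.search(r"BANDWIDTH=(\d+)", line)
            # the variant URI is the next non-tag line
            if m and i + 1 < len(lines) and not lines[i + 1].startswith("#"):
                bw = int(m.group(1))
                if bw > best_bw:
                    best_bw, best_uri = bw, lines[i + 1]
    return best_uri

master = """#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1012300,RESOLUTION=704x396
http://example.com/index_1012_av-p.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=3776000,RESOLUTION=1280x720
http://example.com/index_3776_av-p.m3u8
"""
print(pick_highest_bandwidth(master))  # -> http://example.com/index_3776_av-p.m3u8
```

The chosen URI would then be the only stream handed to the demuxer for probing.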
[12:58:00 CEST] <primaeval> Do you think there would be something missing in the case where there are multiple streams with the same bandwidth, if it didn't probe?
[12:58:08 CEST] <primaeval> Here is an example.
[12:58:12 CEST] <primaeval> #EXTINF:0, tvg-id="ndr.de" tvg-logo="ndr.png" tvg-name="NDR HAMBURG"  group-title="Standard",NDR HAMBURG
http://ndr_fs-lh.akamaihd.net/i/ndrfs_hh@119223/master.m3u8
[12:58:29 CEST] <primaeval> gives
[12:58:31 CEST] <primaeval> #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=3776000,RESOLUTION=1280x720,CODECS="avc1.64001f, mp4a.40.2"
http://ndr_fs-lh.akamaihd.net/i/ndrfs_hh@119223/index_3776_av-p.m3u8?sd=10&rebase=on
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=3776000,RESOLUTION=1280x720,CODECS="avc1.64001f, mp4a.40.2"
http://ndr_fs-lh.akamaihd.net/i/ndrfs_hh@119223/index_3776_av-b.m3u8?sd=10&rebase=on
[12:59:00 CEST] <primaeval> The first variant is much quicker to load.
[12:59:55 CEST] <JEEB> you could probe them with ffprobe and see if there is any real difference
[13:00:25 CEST] <JEEB> -show_streams and -show_programs with -of json should give you a readable thing if you pipe stderr to a file
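The -show_streams, -show_programs and -of json flags suggested here are real ffprobe options. A sketch of building that command and then reading the JSON it emits; since running ffprobe needs the tool and the stream, the JSON sample below is hand-made for illustration, not captured from the NDR streams:

```python
import json

def build_inspect_cmd(url):
    """Real ffprobe flags for machine-readable stream/program info."""
    return ["ffprobe", "-of", "json", "-show_streams", "-show_programs", url]

# Illustrative output shape only; a real run would emit many more fields.
sample = '{"streams": [{"index": 0, "codec_type": "video", "codec_name": "h264"}]}'
info = json.loads(sample)
video = [s for s in info["streams"] if s["codec_type"] == "video"]
print(video[0]["codec_name"])  # -> h264
```

Comparing the parsed stream lists of two same-bandwidth variants would show whether they actually differ.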
[13:02:34 CEST] <primaeval> Sometimes the streams with the same bandwidth might be ones with audio for the hard of hearing. That is something only user preferences in Kodi would pick up on. Don't you think?
[13:09:47 CEST] <primaeval> There is a third option in Kodi's case: leaving the variant decision up to the addon or pvr component.
[13:12:32 CEST] <fritsch> how would the addon decide better?
[13:12:46 CEST] <JEEB> primaeval: well the idea is that all of the decisions would be made by the API user (client)
[13:12:57 CEST] <JEEB> so as long as the metadata is somewhere it should just be exported through lavf
[13:13:55 CEST] <JEEB> also since most clients don't do as proper probe as lavf, I bet that the flags for accessibility aren't in the MPEG-TS itself
[13:14:06 CEST] <JEEB> or if they are, they most likely are also in the playlist
[13:16:21 CEST] <primaeval> @fritsch An addon might know more about the source. For example our iplayerwww addon knows which stream is for the hard of hearing.
[13:19:43 CEST] <primaeval> What do you think I should write in the Kodi trac report? 1. Do nothing. 2. ffmpeg will change hls behaviour. 3. Put the m3u reading code back into Kodi. 4. other
[13:20:50 CEST] <JEEB> well the HLS demuxer can be optimized, so if kodi wants to move to utilizing lavf more, they should flag it on lavf's side as an issue, and/or optimize the probe parameters
[13:21:04 CEST] <JEEB> it all depends on kodi's priorities, really
[13:22:25 CEST] <primaeval> Do you mind if I paste some of this conversation back into the trac report?
[13:22:36 CEST] <JEEB> nope
[13:22:47 CEST] <JEEB> all of my idiotic rambling is more or less publicly logged here, anyways
[13:22:49 CEST] <primaeval> Thanks.
[13:23:35 CEST] <primaeval> All this so I can do channel surfing again without having to wait 12 seconds between changing channels. ;)
[13:24:54 CEST] <primaeval> What needs to be done to trigger the lavf hls optimisation? Is it something you need to request?
[13:25:13 CEST] <JEEB> - someone needs to know - someone needs to care
[13:25:15 CEST] <JEEB> it's open sores
[13:26:06 CEST] <primaeval> I care, but I'm not sure you would trust me messing around in the ffmpeg source code just yet.
[13:27:12 CEST] <JEEB> hey, it was only a few years ago when I first posted a patch :) everyone starts somewhere
[13:27:25 CEST] <JEEB> and caring about something is the best way to get results
[13:32:34 CEST] <primaeval> True. I had been avoiding Kodi Krypton due to this problem for months, assuming someone else would fix it. Sometimes you just have to clean up the kids' mess yourself. ;)
[13:40:48 CEST] <primaeval> I've got to look after the kids now. Many thanks for your help.
[15:19:16 CEST] <TikityTik> Are you not able to set audio bitrate with libvorbis?
[15:25:46 CEST] <c_14> No, you can
[16:17:52 CEST] <primaeval> Hi. JEEB: I think I've worked out what is going on with hls. When you ask ffmpeg to open an hls m3u it is like opening a satellite mux. All the streams are supposed to be able to play in parallel. Therefore ffmpeg needs to interrogate all variants in case you want to record or redistribute more than one of them at once. The m3u isn't a sequential playlist but really a parallel mux. It has to have all the streams ready to play in ca
[16:18:15 CEST] <primaeval> Maybe due to user choice of language or bandwidth limitations. Am I on the right track?
[16:18:39 CEST] <dystopia_> normally there are multiple m3u8's
[16:19:01 CEST] <dystopia_> a master one, which then links all the other ones
[16:19:24 CEST] <dystopia_> basically there should be 1x m3u8 per resolution
[16:19:49 CEST] <primaeval> Yes the hls header m3u contains a list of variant m3us for different bandwidths etc.
[16:21:02 CEST] <JEEB> primaeval: yes lavf gives you the possibility of using all of the things at once, but it's possible to optimize things for other use cases
[16:21:06 CEST] <primaeval> dystopia: Kodi is asking ffmpeg to play a hls m3u header but it can take 10 seconds to probe all the variants.
[16:21:52 CEST] <primaeval> It looks like ffmpeg is doing the right thing by probing all the streams and Kodi really should only ask to play one.
[16:22:42 CEST] <JEEB> I mean, this has just been implemented in a way that the lavf framework most easily lets you do, basically :P parse the different variants and then pass probing for all of them since that's already in the framework
[16:23:12 CEST] <JEEB> if you want to trust the playlist more I don't think most people will object to that
[16:23:35 CEST] <DHE> HLS is meant to be played by a specifically HLS-aware player. Said player should first choose which variant it wants to play, then begin playing it and it alone
[16:23:48 CEST] <DHE> (barring options like external audio streams or whatever else HLS offers)
[16:24:01 CEST] <primaeval> Someone thought that the probing should really be done in avformat_find_stream_info rather than avformat_open_input. Is that right?
[16:27:02 CEST] <primaeval> DHE: That is what I thought. There are some higher level choices that the player should make rather than leaving it up to ffmpeg.
[16:28:01 CEST] <JEEB> "rather than leaving it up to FFmpeg"
[16:28:09 CEST] <JEEB> kodi is not leaving it up to FFmpeg
[16:28:13 CEST] <JEEB> it's an API user
[16:28:42 CEST] <JEEB> and what you're finding is that there is a problem for your use case in how lavf is either used or lavf itself
[16:30:06 CEST] <JEEB> but I think I've gone over this already, the quick and dirty way is to parse the master playlist yourself, since that doesn't require improving lavf for this use case if just minimizing the probing isn't enough. and then usually the longer way is to make lavf just work. it really depends on how kodi has set its stance regarding "own stuff" vs "using lavf"
[16:31:43 CEST] <primaeval> I was just answering DHE. I think we concluded that Kodi should really ask ffmpeg to just play the substream by reading the m3u itself.
[16:32:17 CEST] <JEEB> uhh
[16:32:21 CEST] <JEEB> you're still misunderstanding I think
[16:32:35 CEST] <primaeval> Probably. :)
[16:33:18 CEST] <JEEB> yes, kodi should be picking the variants and the streams from those variants etc, which is already 100% possible. and kodi should already be doing it. I don't get where this idea of "FFmpeg doing the selection" comes from
[16:34:25 CEST] <JEEB> now, you have found the usage of lavf in Kodi problematic, which is either a usage problem or a lavf problem. most likely a lavf problem since it indeed tries to get the information on all the available streams and variants as far as I can understand, instead of implicitly trusting the master playlist
[16:34:38 CEST] <primaeval> The ffmpeg selection bit is when Kodi asks for ffmpeg to open the top level hls header m3u. Then ffmpeg chooses the highest bandwidth stream.
[16:34:44 CEST] <JEEB> ...
[16:34:56 CEST] <JEEB> lavf does not do that by itself
[16:35:02 CEST] <JEEB> let me give you an example from my 2013 code
[16:35:37 CEST] <JEEB> https://github.com/jeeb/matroska_thumbnails/blob/master/src/matroska_thumbnailer.cpp#L139..L162
[16:35:45 CEST] <primaeval> It still has to pick one variant to play doesn't it?
[16:35:51 CEST] <JEEB> KODI has to
[16:35:53 CEST] <JEEB> not lavf
[16:36:06 CEST] <JEEB> lavf has that functionality, yes. but it doesn't do jack shit unless you tell it to
[16:36:22 CEST] <JEEB> I use that functionality in my crappy thumbnailer
[16:36:36 CEST] <JEEB> but it should show you how lavf works
[16:36:46 CEST] <JEEB> and no, you don't need those helpers, you can just start reading
[16:37:04 CEST] <JEEB> you can force the mpeg-ts or aac or mp4 demuxer
[16:37:12 CEST] <JEEB> and start reading and all that jazz
[16:37:31 CEST] <JEEB> although HLS is kind of a special snowflake since it's a meta demuxer (demuxer that then opens further demuxers)
[16:37:59 CEST] <primaeval> So if you don't use av_find_best_stream will it just find the highest bandwidth stream or is it still Kodi that chooses somewhere?
[16:38:25 CEST] <DHE> kodi reads the m3u8 file, decides which variant it wants, and gives it to ffmpeg to play
[16:38:28 CEST] <DHE> probably
[16:38:40 CEST] <JEEB> that's how it seemed to do before, now it lets lavf handle HLS completely
[16:38:56 CEST] <JEEB> that's why primaeval is here to discuss the issue of multi-second probing
[16:38:58 CEST] <primaeval> DHE: that is what kodi used to do when it was fast in Jarvis. now it sends the header
[16:39:20 CEST] <JEEB> in any case, just stop with the "FFmpeg picks something" thing
[16:39:24 CEST] <primaeval> and is very slow 10s
[16:39:39 CEST] <JEEB> because if you are using the API the API client is in full charge
[16:40:10 CEST] <primaeval> so somewhere in kodi it must still be picking the variant itself?
[16:40:25 CEST] <DHE> well, since bitrate changes are supposed to happen in response to network condition changes I'd almost say ffmpeg should just be given a bitstream of the .ts or .mp4 files to parse. let the app deal with swapping streams at the same sequence numbers...
[16:41:32 CEST] <JEEB> primaeval: yes. either with calling the lavf helpers or making it manually. also streams can be added on the fly so there's a fuckload of control you could do
[16:41:47 CEST] <JEEB> DHE: that's because you find the current HLS demuxer implementation lackluster for such stuff I guess. which it very well could be.
[16:42:36 CEST] <JEEB> but as I said, this is a goddamn political/technical thing for the Kodi people. either they go the easy way out and use their own higher level parser for HLS main playlists, or improve lavf HLS demuxer for their use case
[16:42:47 CEST] <JEEB> and/or improve their lavf usage wrt probing in general
[16:43:11 CEST] <JEEB> the issues with the current HLS thing stem from the fact that it was done in the simplest way possible given the framework :P
[16:43:25 CEST] <JEEB> aka "don't care about the playlist too much and just take in the streams and believe in the streams"
[16:45:24 CEST] <primaeval> If ffmpeg didn't probe all the hls variants when the stream is opened wouldn't it stall horribly later on if you switched variants?
[16:47:06 CEST] <JEEB> well API user controls how much is probed
[16:47:12 CEST] <DHE> maybe what the HLS demuxer needs is an option to select what bitrates to select. options like "highest", "all", and "realtime". where "realtime" actually measures throughput and switches variants when throughput conditions change
[16:47:13 CEST] <JEEB> time-wise and byte-wise
[16:47:52 CEST] <primaeval> Is there any program that you know of that uses ffmpeg the right way and starts hls streams quickly?
[16:48:03 CEST] <JEEB> DHE: we're not even there yet, but sure - although in my opinion API users could handle that if you provided a way of switching between variants (I'm pretty sure this is already possible in the API)
[16:48:53 CEST] <JEEB> primaeval: I really don't know - all I know is that when I poke at mpv it doesn't load things for too long and it uses lavf generally. that still doesn't set any low probe duration/size parameters by default, though
[16:49:30 CEST] <JEEB> anyways, while it's possible that kodi isn't using the API right, I also know the HLS demuxer is a shitshow
[16:49:57 CEST] <JEEB> so if Kodi decides to continue with the "moving more and more to lavf" way, then there's plenty of low hanging fruits methinks :P
[16:50:31 CEST] <primaeval> There is a bit of buck-passing going on from what I can see. ;)
[16:50:53 CEST] <DHE> yeah. if I were to make a player today I would do all HLS parsing myself and only let ffmpeg deal with the .ts or .mp4 files
[16:52:09 CEST] <JEEB> DHE: yes, since that most likely is more simple for you than improving the HLS demuxer for your use :D
[16:53:02 CEST] <JEEB> primaeval: dunno if there's any buck-passing here. The problem in theory could be on multiple levels and be worked on on multiple levels
[16:53:53 CEST] <primaeval> Would it be possible to trust the hls header info and start playing immediately, only probing on play?
[16:53:56 CEST] <JEEB> I am not saying that Kodi is using the API wrongly (or just in a way that causes extra latency), just that it's a possibility. Also I think I've said plenty of times how the HLS demuxer in lavf is made that is simpler from the framework level
[16:54:12 CEST] <JEEB> primaeval: yes
[16:54:31 CEST] <JEEB> I mean, you just have to trust the playlist(s) to fill the initial data and gaps
[16:54:51 CEST] <JEEB> and then you pick a variant and then at that point you might have to probe
[16:55:14 CEST] <JEEB> or you trust the file names to make sense and have an easily parse'able extension (which can be yes/no/maybe with URLs)
[16:55:34 CEST] <DHE> my concern would be selecting a video and audio track that are from different variants.
[16:56:09 CEST] <JEEB> that should also be possible, although can require multiple streams to be pulled in to get all the required data
[16:57:55 CEST] <primaeval> Is there a mechanism for other codecs that can switch between a quick and dirty probe or a more robust one? Remember I'm new to the code.
[16:58:50 CEST] <JEEB> the probe is the same, you just control the size/length of it
[16:59:25 CEST] <TAFB> can I limit how much ram ffmpeg uses?
[16:59:43 CEST] <JEEB> anyways, there's quite some low-hanging fruit in the hls demuxer so you could really just focus on it :P trusting the playlist(s) more rather than going down to the stream level (stream in this case being data streams)
[17:01:16 CEST] <primaeval> If it was your app, do you really think the playlist info should be trusted?
[17:02:05 CEST] <JEEB> well I have a feeling that so many HLS implementations do it that effectively you could. of course you could always add a parameter to the demuxer that tells it to go nuclear
[17:02:22 CEST] <JEEB> I would probably probe the selected streams just in case, but that limits the probe quite a bit
[17:02:37 CEST] <JEEB> but I'm pretty sure that hls.js or something just checks the extension or something :DD
[17:03:03 CEST] <JEEB> the whole stream thing also pops up with seeking, where it's /really/ inefficient
[17:03:21 CEST] <JEEB> because instead of going backwards in the playlist it does some really funky shit
[17:03:56 CEST] <primaeval> I've seen a lot of complaints recently about seeking in Krypton. Perhaps that is it.
[17:05:51 CEST] <JEEB> I wonder if the MPEG-DASH demuxer on the mailing list is any better in that sense
[17:06:03 CEST] <JEEB> both by trusting the playlists more, as well as handling seeking better
[17:06:14 CEST] <JEEB> probably not if the guy was looking at the HLS one as a base 8)
[17:07:04 CEST] <primaeval> The MPEG-DASH playback in iplayerwww is much quicker to start than hls in krypton.
[17:07:26 CEST] <JEEB> for obvious reasons
[17:07:45 CEST] <primaeval> now I know
[17:08:13 CEST] <JEEB> but yea, for that kind of use case the HLS demuxer is so unoptimized I tend to note that it's a low hanging fruit :P
[17:08:28 CEST] <JEEB> you could do a fuckload of optimization without breaking existing use cases
[17:08:36 CEST] <primaeval> but iplayerwww does its own hls header pre-processing and only sends the variant to ffmpeg
[17:12:47 CEST] <xtina_> hey guys. i'm trying to stream audio and video to a janus webRTC gateway on a remote server from my Pi. i've just tried out gstreamer to do so, but it hogs 100% of my Pi 0's CPU and drops all the audio packets
[17:13:17 CEST] <xtina_> i've already used ffmpeg to stream audio and video to youtube from my pi zero and it worked great with 20% cpu, so i'm wondering if i can use ffmpeg to do what gstreamer is currently doing
[17:14:10 CEST] <xtina_> is it possible for ffmpeg to accomplish this? http://vpaste.net/6Zsol
[17:16:51 CEST] <xtina_> essentially the gstreamer is sending video/audio via UDP to two different ports on my remote server, can ffmpeg do that?
[17:17:25 CEST] <c_14> isn't that just a standard rtp output?
[17:17:56 CEST] <c_14> well
[17:18:16 CEST] <c_14> ffmpeg can definitely do 2 udp outputs one carrying audio and one carrying video
[17:18:36 CEST] <c_14> but I don't think it can do rtp in that particular configuration since it throws rtcp on port+1
[17:18:43 CEST] <c_14> so you'd have to leave an empty port between the video and audio stream
[17:19:13 CEST] <xtina_> oh, interesting
[17:19:28 CEST] <xtina_> c_14: on my server i can leave an empty port between video and audio
[17:19:45 CEST] <xtina_> but how do i send the video and audio to 2 different ports using ffmpeg? i can only think to write two separate ffmpeg commands?
[17:20:25 CEST] <c_14> then you can just do -map 0 rtp://host:port -map 1 rtp://host:port+2
[17:20:30 CEST] <JEEB> with cli it'd be <input><params for output 1><output 1><params for output2><output2>
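Putting c_14's advice into one command line: two RTP outputs with the audio port two above the video port, since RTCP for each stream occupies port+1. A sketch built as an argument list; the host, ports and codec choices are placeholders (what Janus actually accepts depends on its configuration):

```python
def build_rtp_cmd(src, host, video_port):
    """Build an ffmpeg command sending video and audio as two RTP outputs.

    audio goes to video_port + 2, leaving video_port + 1 free for RTCP.
    """
    audio_port = video_port + 2
    return [
        "ffmpeg", "-i", src,
        "-map", "0:v", "-c:v", "libx264", "-f", "rtp", f"rtp://{host}:{video_port}",
        "-map", "0:a", "-c:a", "libopus", "-f", "rtp", f"rtp://{host}:{audio_port}",
    ]

cmd = build_rtp_cmd("input.mp4", "example.com", 5004)
print(cmd)
```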
[17:20:49 CEST] <tefid> Hello all. When using the cuvid CUDA decoder with h264_nvenc, after encoding several files continuously, the encoding process hangs and then outputs this error: [h264_nvenc @ 0x3c66e40] Failed locking bitstream buffer: invalid param (8). The strange thing is that encoding to hevc_nvenc works just fine.
[17:21:44 CEST] <xtina_> ohh, i'll give it a shot, thank you
[17:23:04 CEST] <tefid> I could not find any useful information on the web. Might this be due to very fast encoding by ffmpeg and cuda :p ? Shall I file a bug?
[17:23:21 CEST] <JEEB> if you can make it easily replicatable, yes
[17:23:23 CEST] <JEEB> trac is for that
[17:24:46 CEST] <tefid> JEEB, thanks. I will try to make sure it can be replicated .... it always hangs, though.
[17:28:22 CEST] <tefid> Also, I feel it is related to Nvidia as well, as when it hangs, the enc percentage in nvidia-smi goes up to 50% and the whole system hangs.
[18:02:50 CEST] <tefid> https://devtalk.nvidia.com/default/topic/1003226/gpu-accelerated-libraries/continuously-using-h264-cuvid-with-h264_nvenc-makes-the-encoding-process-hang/
[18:51:59 CEST] <sonion_> I made (hoping) a dvd-video that i want to be able to play in a dvd player ... so before i burn it if i do  mplayer dvd:// -dvd-device n.dvd    where the video_ts and audio_ts dirs/files are  and it works is that a good indication that the dvd will be good?
[19:39:34 CEST] <tefid> sonion_, make an iso of it first, you can mount it and treat it like a dvd. Before burning it.
[19:39:58 CEST] <tefid> if it works in the form of an iso, then it would work on the dvd as well.
[20:03:05 CEST] <sonion_> it works after making an iso and mounting it .. and it works after burning it (growisofs calls mkisofs to make an iso)
[20:03:24 CEST] <sonion_> now the test in a 'real dvd player
[20:18:37 CEST] <sonion_> i need -dvd-video in the mkisofs command btw
[20:19:03 CEST] <sonion_> thanks
[20:51:21 CEST] <thebombzen> sonion_: If you're looking for a simple dvd, I'd use ffmpeg to generate the .mpg file. then dvdauthor to create the filesystem directory, then genisoimage (mkisofs is an alias) to create the iso
[20:51:27 CEST] <thebombzen> finally wodim to burn it
[20:52:09 CEST] <sonion_> i did use ffmpeg to make the mpeg and then dvdauthor and then groisofs  :)
[20:53:45 CEST] <sonion_> i don't have a dvd player .. and my reputation is at stake - so i've spent more time testing the product than doing it ;)
[20:55:38 CEST] <sonion_> i'm pretty sure  mplayer dvd:// -dvd-device n2.dvd     is the simplest test    cause just changing one option in ffmpeg/dvdauthor the cli at a time causes problems
[20:56:32 CEST] <sonion_> with n2.dvd being the dir created from dvdauthor
[21:09:53 CEST] <Lirk> hi all
[21:10:38 CEST] <Lirk> how can I offset audio without using 2 stream inputs?
[21:11:01 CEST] <Lirk> How can I do it with filters?
[21:12:03 CEST] <Lirk> I know about the filter "adelay", but it can only delay audio. I need to delay video
[21:13:12 CEST] <gurki> whats the difference of audio with negative delay when compared with video with positive delay?
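gurki's hint in code form: delaying video by +t is equivalent to delaying audio by -t, and the video side can be shifted within a single input using the real setpts filter. A sketch as an argument list; the 0.5 s delay and filenames are placeholders:

```python
delay_s = 0.5  # hypothetical delay, in seconds

cmd = [
    "ffmpeg", "-i", "input.mp4",
    # shift video timestamps forward by delay_s; same effect as a
    # negative audio delay, but expressed as a video filter
    "-filter_complex", f"[0:v]setpts=PTS+{delay_s}/TB[v]",
    "-map", "[v]", "-map", "0:a",
    "-c:a", "copy",   # audio untouched; video is re-encoded
    "output.mp4",
]
print(cmd)
```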
[21:17:33 CEST] <sonion_> gurki: are/were you lirk ? i was gonna tell you to tell lirk your command line ;)
[21:17:49 CEST] <gurki> sonion_: nah :)
[21:17:58 CEST] <gurki> i was actually trying to hint him towards sth obv
[21:18:31 CEST] <sonion_> :)
[21:28:40 CEST] <acovrig> I need to grab a v4l2 /dev/video input and save it to a file, but more importantly, display it live; what's the best way to do this? I'm trying this and the video is ~5s delay: ffmpeg -y -f v4l2 -r 24 -video_size 720x480 -i /dev/video0 -r 24 -q 0 -crf 20 -an -f flv del.flv -f flv - | ffplay -f flv -
[21:31:39 CEST] <gurki> well it takes time to encode and write that video to disk ...
[21:31:40 CEST] <furq> acovrig: for starters, don't use flv
[21:33:02 CEST] <furq> http://vpaste.net/XXTmI
[21:33:09 CEST] <acovrig> furq, yea, my goal is H.264, but I was using flv from ffmpeg del.flv & mplayer del.flv kinda a thing; I know it takes time to save to disk, but can't I play it live, then transcode as time allows?
[21:33:29 CEST] <furq> well your original command wasn't using h264
[21:33:40 CEST] <furq> but yeah the pipe output doesn't need to be encoded at all
[21:34:12 CEST] <furq> if you want to display the encoded output then you'll need to use the tee muxer
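The tee muxer furq mentions writes one encode to several outputs at once, so the file and the ffplay pipe see the same H.264 stream. A sketch of such a command, assembled as an argument list; the capture size and filenames mirror the command earlier in the conversation but are otherwise placeholders:

```python
# one libx264 encode, duplicated by the tee muxer:
# a file (format guessed from the .flv extension) and nut on stdout for ffplay
tee_spec = "del.flv|[f=nut]pipe:1"

cmd = [
    "ffmpeg", "-f", "v4l2", "-video_size", "720x480", "-i", "/dev/video0",
    "-c:v", "libx264", "-crf", "20", "-an",
    "-f", "tee", "-map", "0:v", tee_spec,
]
print(cmd)
# the pipe half would then be consumed by: ffplay -f nut -
```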
[21:34:18 CEST] <acovrig> furq, I still get a 2-3s delay, is my system just slow?
[21:34:27 CEST] <furq> shrug
[21:34:30 CEST] <furq> could be ffplay or v4l buffers
[21:34:53 CEST] <furq> remove the flv output and see if it still happens
[21:35:11 CEST] <furq> if it does then there's probably not much you can do with ffmpeg, it's not really optimised for this sort of thing
[21:35:46 CEST] <acovrig> I'm guessing there's much better ways for this, but I have a 5Ghz receiver -> RCA -> USB capture and was hoping to do FPV stuff with this until I get my FPV goggles
[21:36:11 CEST] <acovrig> yea, still a delay, I wonder if mplayer with dumpstream or something like that would work? use mplayer to display it, dumping the stream to ffmpeg somehow?
[21:36:34 CEST] <sonion_> cat /dev/video > file.avi  ?
[21:36:46 CEST] <furq> that's extremely not a thing
[21:37:08 CEST] <furq> acovrig: no idea
[21:37:40 CEST] <furq> you could try ffplay -fflags nobuffer
[21:38:37 CEST] <furq> but based on past conversations, ffmpeg -f v4l2 is bad at low latency
[21:39:18 CEST] <acovrig> furq, yea, I was thinking so; I can't have mplayer and ffmpeg pull from /dev/video* at once, can I? mkfifo somethingness?
[21:40:21 CEST] <furq> you can't have multiple readers on a fifo
[21:40:27 CEST] <furq> well you can, but they won't get the same data
[21:42:41 CEST] <acovrig> furq, thanks, I'll explore some other options
[21:43:16 CEST] <sonion_> acovrig:  you mentioned mplayer with dumpstream
[21:43:36 CEST] <acovrig> sonion_, yea, I'm tinkering with that now
[21:48:19 CEST] <sonion_> please paste what you find that works :)  seems something handy to know
[22:27:18 CEST] <nightlingo> hello guys
[22:28:15 CEST] <nightlingo> I have video A and video B. I want to re-encode video B so that it has the exact same encoding options as video A
[22:28:19 CEST] <nightlingo> how can I do this?
[22:30:20 CEST] <sonion_> i'd start by identifying what video a is ... mplayer -v videoa   and writing down all the video and audio information
[22:31:38 CEST] <nightlingo> sonion_ I have found several options to identify video A. using ffprobe, mediainfo and as you say, mplayer
[22:32:10 CEST] <nightlingo> sonion_ my problem is that I dont know how to translate each of those details into an ffmpeg parameter
[22:32:21 CEST] <sonion_> ffmpeg -i videoa
[22:32:49 CEST] <sonion_> what have you identified for the video?
[22:33:17 CEST] <nightlingo> sonion_ i am trying to do this programmatically, to work for any video
[22:33:19 CEST] <sonion_> what is the real names of videoa and videob
[22:33:45 CEST] <nightlingo> input22.mp4 and split22.mp4
[22:33:46 CEST] <djk> I trying to do a youtube live stream with ffmpeg and get  RTMP_ReadPacket, failed to read RTMP packet header. Any idea how I get more detail on what the error is or best way to live stream /dev/video0?
[22:34:27 CEST] <sonion_> nightlingo: they both look like mp4 :)  get the specific codec from what i have given you
[22:35:17 CEST] <nightlingo> sonion_ the codec is h264 , but I need to re-encode using the exact same settings as in video A
[22:35:27 CEST] <sonion_> djk have you looked into using rtmpdump ?
[22:35:57 CEST] <djk> I am a novice on this I have not
[22:36:09 CEST] <nightlingo> sonion_ there are countless settings that h264 might have
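There is no fully general way to recover every encoder setting from a finished file (x264 does embed its own settings in an SEI header, which mediainfo reports as "Encoding settings", but ffmpeg won't re-apply them automatically). What can be done programmatically is to map the fields ffprobe reports back onto ffmpeg flags. A sketch with a hypothetical probe_to_args helper covering just codec, resolution and frame rate (the field names follow ffprobe's -show_entries output):

```shell
# Hypothetical helper: translate a few ffprobe fields into ffmpeg flags.
probe_to_args() {
  # $1 is a space-separated "key=value" list as ffprobe would print it.
  set -- $1
  codec= w= h= fps=
  for kv in "$@"; do
    case $kv in
      codec_name=*)   codec=${kv#*=} ;;
      width=*)        w=${kv#*=} ;;
      height=*)       h=${kv#*=} ;;
      r_frame_rate=*) fps=${kv#*=} ;;
    esac
  done
  # Decoder name -> encoder name (h264 is encoded by libx264).
  [ "$codec" = h264 ] && codec=libx264
  printf -- '-c:v %s -s %sx%s -r %s\n' "$codec" "$w" "$h" "$fps"
}

# With real files you would feed it from ffprobe, e.g.:
#   probe_to_args "$(ffprobe -v quiet -show_entries \
#       stream=codec_name,width,height,r_frame_rate \
#       -of default=noprint_wrappers=1 input22.mp4 | tr '\n' ' ')"
# Here, canned values stand in for the probe output:
probe_to_args "codec_name=h264 width=1280 height=720 r_frame_rate=25/1" | tee /tmp/probe_args.txt
```

This only approximates the original encode; rate-control settings (CRF vs. bitrate, preset, profile) are not stored as plain container metadata and have to be chosen separately.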
[22:41:38 CEST] <sonion_> djk: this was pasted earlier -   might be good    for /dev/video you are doing   ffmpeg -y -f v4l2 -r 24 -video_size 720x480 -i /dev/video0 -c:v libx264 -crf 20 -an del.flv -c:v rawvideo -f nut - | ffplay -
[22:44:09 CEST] <djk> that helps prove ffmpeg will display the cam on the local box. Now to figure out how to stream to youtube rtp://
[22:49:58 CEST] <sonion_> use a browser ?
[22:52:04 CEST] <sonion_> i tried to get youtube live streaming and it is too complicated for me with ffmpeg/mplayer etc   but lots and lots in google on it though ;)
[22:52:56 CEST] <sonion_> i now wait for the 'event' to end and get it with youtube-dl   (also youtube-dl is supposed to be able to get youtube live streams ...
[22:54:22 CEST] <djk> I would like to live stream from a raspberry pi to youtube, facebook, or another major that could be the 'video distributor' and take the load. Any suggestions?
[22:55:23 CEST] <sonion_> i'd see if you can get icecast on it
[22:55:41 CEST] Action: sonion_ wants a rasp pi ... 
[22:56:42 CEST] <furq> djk: i'm not sure what you want us to suggest, that should just work
[22:57:09 CEST] <furq> i guess pastebin the command somewhere so we have something to debug
[22:58:37 CEST] <djk> sonion_'s comment about finding youtube live streaming too complicated made me wonder if he had another source that might be easier
[22:59:32 CEST] <sonion_> /msg djk furq is the one you want to have help you - do what he asks
[23:01:28 CEST] <sonion_> and don't take 20 minutes between posts
[23:01:59 CEST] <djk> furq: https://pastebin.com/hY5iFWpJ
[23:02:25 CEST] <djk> that is the output from one of the example I was using
[23:04:01 CEST] <furq> should that not be a.rtmp.youtube.com
[23:04:19 CEST] <furq> also there is no such thing as 712k mp3
[23:04:26 CEST] <furq> you should probably set that to something sensible like 192k
[23:06:09 CEST] <djk> https://gist.github.com/olasd/9841772
[23:06:22 CEST] <djk> that was the original example
[23:07:22 CEST] <furq> that is a bad example
[23:08:28 CEST] <furq> i can see at least six things wrong with that
[23:08:57 CEST] <furq> which is about average for randos' ffmpeg commands copied off the internet
[23:09:44 CEST] <sonion_> i'm surprised that the french peoples are allowed on the internet
[23:10:29 CEST] <djk> I welcome better examples. This a new space for me
[23:11:21 CEST] <sonion_> what is the youtube live event you are wanting to get ?
[23:12:28 CEST] <sonion_> maybe with the purifying 3 inches of snow we had last night (***WHAT THE F****) i can try to do youtube live streaming again
[23:13:13 CEST] <furq> oh right
[23:13:23 CEST] <furq> probably a bigger issue is that you don't have an audio source
[23:13:23 CEST] <djk> I am wanting to stream a raspberry pi webcam I have set up for a significant town event happening at a church
[23:13:54 CEST] <furq> if you just want blank audio then add -f lavfi -i anullsrc
[23:14:05 CEST] <furq> otherwise you'll need to add a microphone input or something
[23:14:31 CEST] <djk> there is audio on the camera; it would be curious what it picks up, but it's not key
[23:14:35 CEST] <sonion_> djk how much time do you have to get it working? :)
[23:15:06 CEST] <djk> lol like anything not enough
[23:15:08 CEST] <furq> http://vpaste.net/6TgMX
[23:15:10 CEST] <furq> that should work
[23:15:18 CEST] <djk> event is after Easter
[23:15:22 CEST] <furq> ideally you'd use aac audio but it doesn't really matter with null audio
[23:15:38 CEST] <furq> and you're using pre-3.0 ffmpeg so you probably don't have a worthwhile aac encoder anyway
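The vpaste link above has since expired. A command along the lines discussed here (anullsrc for silent audio, x264 video, FLV over RTMP to YouTube's ingest endpoint) might look like the following sketch; the bitrate, preset and GOP values are assumptions, not the exact paste, and [KEY] stands for the stream key from the YouTube dashboard:

```shell
# Sketch: capture /dev/video0, add silent stereo audio, push to YouTube.
# On ffmpeg < 3.0 (e.g. the 2.6.9 mentioned here) the native aac encoder
# also needs "-strict experimental"; with null audio the codec barely matters.
ffmpeg -f v4l2 -framerate 24 -video_size 720x480 -i /dev/video0 \
       -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
       -c:v libx264 -preset veryfast -b:v 1500k -pix_fmt yuv420p -g 48 \
       -c:a aac -b:a 128k \
       -f flv "rtmp://a.rtmp.youtube.com/live2/[KEY]"
```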
[23:16:39 CEST] <djk> Still get
[23:16:39 CEST] <djk> RTMP_ReadPacket, failed to read RTMP packet header
[23:16:39 CEST] <djk> rtmp://a.rtmp.youtube.com/live2/[KEY]: Unknown error occurred
[23:17:16 CEST] <djk> ffmpeg version 2.6.9
[23:17:33 CEST] <djk> that is the most current apt package I found
[23:17:55 CEST] <furq> i take it the url is correct
[23:18:00 CEST] <sonion_> since you guys are bragging about versions   FFmpeg version UNKNOWN, Copyright (c) 2000-2011 the FFmpeg developers built on Mar  9 2011 16:53:30 with gcc 4.4.3
[23:18:34 CEST] <sonion_> utah
[23:18:48 CEST] <djk> oh not bragging not in the least
[23:19:26 CEST] <djk> I will double check but that is the URL from the YouTube live stream page
[23:20:41 CEST] <sonion_> the [KEY] identifies your page?  don't you need your own youtube page?
[23:21:43 CEST] <djk> oh maybe I am misunderstanding and I need to put the url of my server in there on youtube (grrr)
[23:22:53 CEST] <sonion_> is this what you are trying to do ?
[23:22:53 CEST] <sonion_> to send stream from a rasp pi recording a live event to a youtube channel
[23:23:52 CEST] <sonion_> then people will use ?? their browser to watch it?  i thought you wanted to use ffmpeg etc to watch a live stream from a youtube channel
[23:24:03 CEST] <djk> right i want to send the live /dev/video0 on the rPi to youtube live stream
[23:25:06 CEST] <djk> the rPi and local network connection certainly can't handle the load, and I want to stream to youtube to be the 'video distributor'
[23:25:17 CEST] <djk> does that make sense?
[23:25:31 CEST] <sonion_> if you have bandwidth on the rpi?  i'd still use icecast and broadcast yourself
[23:26:08 CEST] <sonion_> i can use mplayer and get it
[23:26:26 CEST] <djk> but people would be viewing directly from the rPi that way, correct?
[23:26:32 CEST] <sonion_> yes
[23:27:33 CEST] <sonion_> that is what i couldn't do with cli tools: watch a live event on youtube
[23:29:30 CEST] <sonion_> the icecast uses apache server   so curl/mplayer anything can get the stream
[23:30:14 CEST] <djk> I have no clue about the volume but there's a good chance of hundreds viewing simultaneously, and the up speed on the local network is ~25mbps
[23:30:36 CEST] <sonion_> yea let youtube handle it ...
[23:31:08 CEST] <sonion_> 101 :)  i'm gonna stay till you get it and then i'll be one of them watching
[23:31:14 CEST] <djk> so now to figure out how to do that
[23:32:23 CEST] <sonion_> i'm guessing furq gave you good code - i think you just need to figure out where you are sending the stream to
[23:32:47 CEST] <sonion_> can you do a local test to see if you are sending out from /dev/video ?
[23:34:50 CEST] <djk> yes furq suggestion popped a window with the live feed
[23:34:50 CEST] <djk> ffmpeg -y -f v4l2 -r 24 -video_size 720x480 -i /dev/video0 -c:v libx264 -crf 20 -an del.flv -c:v rawvideo -f nut - | ffplay -
[23:37:46 CEST] <djk> I am slowly understanding things: I need to set up on the rPi an rtp stream that youtube pulls from, and put that in the server url form
[23:39:03 CEST] <djk> as I said, I'm a real novice on this and on ffmpeg
[23:40:15 CEST] <sonion_> that is pretty fantastic - how is the resolution?
[23:40:47 CEST] <sonion_> how do you know it is an rtp stream they want?
[23:40:50 CEST] <djk> I messaged you earlier did you get it?
[23:41:00 CEST] <sonion_> yes and i answered utah
[23:41:44 CEST] <djk> oh I missed if you replied in the separate chat
[23:44:01 CEST] <djk> the youtube example references it rtp://a.....
[23:44:40 CEST] <djk> furq: does what I am trying to do make sense to you?
[23:45:21 CEST] <sonion_> djk it looks like you use javascript? i use dillo as a browser ... no javascript
[23:46:22 CEST] <djk> right the HawkEye uses javascript hence can do the youtube live with it
[23:48:54 CEST] <sonion_> what is the url you are supposed to send it to? are you sure you can send to it? is that what the key is?
[23:50:39 CEST] <djk> If I am understanding youtube live correctly, you stand up a server that is doing an rtp stream and put its url into the youtube live server url field, but I may be wrong
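For what it's worth, YouTube Live's standard flow is the other way around from what djk describes: YouTube does not pull a stream from your server; the encoder pushes to YouTube's RTMP ingest URL, with the stream key copied from the dashboard. A minimal sketch of the push model (STREAM_KEY is a placeholder, recording.flv a stand-in source):

```shell
# Push model: ffmpeg connects OUT to YouTube's ingest endpoint, so
# nothing on the Pi needs to accept inbound connections.
# -re paces a file input at its native frame rate (not needed for live capture).
ffmpeg -re -i recording.flv -c copy -f flv \
       "rtmp://a.rtmp.youtube.com/live2/STREAM_KEY"
```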
[23:52:52 CEST] <sonion_> i don't know about sending up to youtube .. maybe furq does ... but you get the video from a regular youtube page
[00:00:00 CEST] --- Mon Apr 10 2017



More information about the Ffmpeg-devel-irc mailing list