[Ffmpeg-devel-irc] ffmpeg.log.20190121

burek burek021 at gmail.com
Tue Jan 22 03:05:02 EET 2019


[00:08:44 CET] <mateothegreat> g1itch: you *may* need to pass the stream path (at the end of the url)
[00:08:58 CET] <mateothegreat> I have to do that with some of the older rtsp cameras connected to my platform
[00:47:43 CET] <g1itch> mateothegreat this one doesn't have a stream path i don't think - at least when i plug it into shinobi / vlc it doesn't require one
[00:51:07 CET] <sn00ker> hi all
[00:51:18 CET] <sn00ker> i want to convert a movie in two passes
[00:52:48 CET] <markweston> Hi, I want to play a movie with ffplay but left/right stereo audio is out of balance; How do I fix that? "ffplay -af amix movie.mp4" makes it mute
[00:52:57 CET] <sn00ker> and then i see this
[00:52:57 CET] <sn00ker> 00000350  29 10 ca 79 5e 0f 3e d3 c9 ad a8 ae dc b6 2d f7 )..y^.>.......-.
[00:52:57 CET] <sn00ker> 00000360  2b 62 42 69 60 19 32 10 fc 1e 77 44 e9 55 04 61 +bBi`.2...wD.U.a
[00:53:49 CET] <markweston> with error "Input pad "input0" with type audio of the filter instance "Parsed_amix_0" of amix not connected to any source"
[00:58:06 CET] <markweston> Maybe try amerge instead?
[00:59:54 CET] <Hello71> use mpv?
[01:00:45 CET] <markweston> Actually I know why it's bad. It uses 5.1 audio
[01:07:00 CET] <markweston> so I need to amerge it to mono/stereo
[01:07:15 CET] <markweston> how can I do that with -af and not with -filter_complex?
[01:11:31 CET] <markweston> FOUND IT
[01:11:32 CET] <markweston> https://superuser.com/questions/852400/properly-downmix-5-1-to-stereo-using-ffmpeg
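For reference, the downmix that link describes can be sketched as follows. The pan coefficients are the commonly cited ones from the linked answer, not authoritative, and movie.mp4 is a placeholder; the snippet only assembles and prints the command so you can run it yourself.

```shell
# Downmix 5.1 to stereo with the pan filter: front-center is mixed into
# both channels, surrounds are attenuated. Print the ffplay invocation.
PAN='pan=stereo|FL=FC+0.30*FL+0.30*BL|FR=FC+0.30*FR+0.30*BR'
CMD="ffplay -af \"$PAN\" movie.mp4"
echo "$CMD"
```

The same `-af` value works with ffmpeg for transcoding instead of playback.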
[01:52:55 CET] <ariyasu> the default crf value for x264 is 23 and x265 is 28
[01:53:15 CET] <ariyasu> would these values produce an encode of comparable quality?
[01:53:30 CET] <ariyasu> not factoring in bitrate and filesizes
[02:10:12 CET] <relaxed> encode some samples and decide for yourself
[02:10:41 CET] <maxrazer> Does ffmpeg have a good waveform visual generator? The examples I saw didn't look that good. I'm looking for a program. I know Adobe After Effects can do it. I also just learned of a linux program called Natron, which maybe can.
[02:12:13 CET] <maxrazer> I want to take a video file and replace the video with a background image and a waveform animation as output.
[02:36:51 CET] <ariyasu> yeah im doing that relaxed, but x265 encodes at 0.01x realtime so it takes a long time to see the results
[02:40:04 CET] <Hello71> are you setting -preset placebo or something
[02:50:17 CET] <ariyasu> -preset slow
[03:24:47 CET] <sn00ker> hi all
[03:27:49 CET] <friendofafriend> Hello, sn00ker.
[03:28:47 CET] <sn00ker> I'm at my wits' end with ffmpeg. I want to send /dev/video0 to an RTMP server. ffmpeg does send, but the video cannot be played
[03:30:47 CET] <sn00ker> Dequeued v4l2 buffer contains 339968 bytes, but 338040 were expected. Flags: 0x00000001.
[03:30:52 CET] <sn00ker> hhmmpppffff...
[04:07:01 CET] <trysten> silly question, where is the documentation for the -b option? I looked all through the man page. like ffmpeg -b:v 1M
[04:32:42 CET] <hendry> hi, I'm trying to encode https://media.dev.unee-t.com/2019-01-21/new-user-flow.mp4 for the Web, i.e. add -movflags +faststart
[04:33:21 CET] <hendry> but when I do that, the process seems to fail https://media.dev.unee-t.com/2019-01-21/new-user-flow.mp4.log
[04:33:40 CET] <hendry> so what's wrong with my source and how do I fix it so it can be playable on the Web please?
[04:37:06 CET] <pink_mist> whoah, first sn00ker and now hendry ... who's next? ronnie? murphy? higgins? =)
[04:37:19 CET] <pink_mist> (sorry, no idea about your question)
[05:16:47 CET] <furq> trysten: https://www.ffmpeg.org/ffmpeg-codecs.html#Codec-Options
[05:18:37 CET] <furq> hendry: i don't see any errors in that log, what's it actually doing
[05:19:05 CET] <furq> bear in mind -movflags faststart will write the entire file twice and then delete one, so make sure you're not out of disk space or anything like that
[05:23:27 CET] <hendry> furq: i'm not out of space
[05:23:43 CET] <hendry> perhaps there is an issue with the source file: https://media.dev.unee-t.com/2019-01-21/new-user-flow.mp4 ?
[05:31:08 CET] <trysten> ah, if I had only looked in man ffmpeg-codecs. thank you
[05:36:13 CET] <ariyasu> why are you using 4:4:4 out of curiosity
[06:53:59 CET] <hendry> ariyasu: i didn't create the source, just downloaded it from a site
[13:56:01 CET] <aristaware> Hi! I'm trying to join several little clips from my webcam (Yi home). Each clip lasts 1 minute. The problem here is that the camera has motion detection and stops recording till it detects new movement. So I have several <=1' clips and for some minutes there's no video. What I would want is to join them all, but fill the gaps between two videos with the last frame of the video preceding the gap.
[13:56:10 CET] <aristaware> Thanks!
[14:10:31 CET] <fella> you mean, you have a 1min clip recorded at 6pm and the next one at say 6am; instead of appending them you would rather have 12hrs of freeze image in between?
[14:11:44 CET] <aristaware> Yeah
[14:12:22 CET] <aristaware> That would be an extreme case; usually the gap is no more than 30-60 minutes
[14:14:46 CET] <fella> ffmpeg -i INPUT -ss 00:00:59.000 -vframes 1 last_frame.png; ffmpeg -loop 1 -i last_frame.png -c:v ENCODER -t 30 -pix_fmt yuv420p OUTPUT
[14:14:59 CET] <fella> ^^ try sth like that
[14:16:14 CET] <fella> takes one picture/frame at a given time
[14:16:33 CET] <fella> then makes a movie looping over that frame
[14:17:25 CET] <aristaware> That would serve me to "extend" the video before the gap an arbitrary amount of time, right?
[14:17:47 CET] <fella> should do, yes
[14:18:36 CET] <fella> that's in the 2nd ffmpeg (-t nnn)
[14:19:08 CET] <sn00ker> hi all
[14:19:10 CET] <fella> gosh, pardon my typing ... smartphone :(
[14:19:44 CET] <aristaware> But, if I understood the command correctly, it would take the frame at 0:59.000, right?
[14:20:06 CET] <fella> 59seconds
[14:20:19 CET] <aristaware> Not the very last frame
[14:20:41 CET] <sn00ker> I want to create two or three virtual video devices with v4l. I want to put these together using ffmpeg and send them to an RTMP server. is that possible? How do I do that with the sound? Can I transfer it virtually?
[14:21:03 CET] <fella> thing is, the last frame isn't necessarily a full frame
[14:21:06 CET] <sn00ker> on the virtual video devices I will then send videos and pictures and mix them with ffmpeg
[14:21:33 CET] <fella> you would need to search backwards for the last I-frame
[14:22:31 CET] <aristaware> @fella searching the web, I found some forum posts that suggested using overlays
[14:24:16 CET] <aristaware> That makes sense?
[14:24:20 CET] <fella> i don't get that idea, but that doesn't mean it's wrong
[14:24:51 CET] <aristaware> They said something like that the overlay would extend the last frame for the duration of the video
[14:25:37 CET] <aristaware> Anyway, my "naive" approximation to the problem was to create a video for each gap and join all the videos
[14:25:56 CET] <aristaware> Would your method require re-encoding the video?
[14:26:31 CET] <aristaware> I expect the resulting video to be 8-10h at 1080p
[14:26:39 CET] <aristaware> B&W
[14:26:43 CET] <fella> no, if they are the same format you could stick them together
[14:27:25 CET] <aristaware> Even the videos created from extracted frames?
[14:27:29 CET] <fella> https://trac.ffmpeg.org/wiki/Concatenate
[14:28:16 CET] <fella> well, after that it's a movie/video/clip like all the others
[14:31:48 CET] <aristaware> Mmmm
[14:31:51 CET] <aristaware> I see
[14:32:55 CET] <aristaware> And the videos created from the extracted frames would be encoded with the same codecs and options as the original clips?
[14:34:53 CET] <fella> no, you have to set them yourself - hence the 'ENCODER' placeholder
[14:37:02 CET] <fella> aristaware: i'd suggest you just give it a try ... and come back with the error(s) you get if it fails ;)
[14:38:19 CET] <aristaware> You're right! Sorry for so many questions. I thought ENCODER was a ffmpeg token or something. Thank you very much!
[14:40:13 CET] <fella> np, yw :)
[14:51:12 CET] <g1itch> i have an IP camera i'm using ffmpeg to stream, but it looks like the video is fine but the audio pitch is like half of what it should be? what option can i use to increase the audio pitch 2x?
[14:52:06 CET] <furq> g1itch: what sample rate is the audio
[14:52:25 CET] <g1itch> can i determine that from the ffmpeg output?
[14:52:39 CET] <furq> it should mention it in there yeah
[14:52:42 CET] <g1itch> one sec
[14:53:25 CET] <g1itch> https://paste.w00t.cloud/nijesehodo.sql
[14:53:45 CET] <g1itch> so from what i can tell, the audio is either extremely low pitch OR it's like it's delayed? slowed down maybe?
[14:53:48 CET] <furq> -af asetrate=16000
[14:54:05 CET] <g1itch> thanks will give that a try!
[14:54:18 CET] <g1itch> ever heard of this kind of issue with rtsp streams?
[14:55:31 CET] <g1itch> and i imagine it wouldn't be a slowed down issue - that wouldn't really make sense in a live stream right?
[14:57:54 CET] <g1itch> if that is an issue, can i change the tempo of the live stream?
[14:58:47 CET] <furq> -af atempo
[14:59:17 CET] <furq> but that wouldn't work very well with a live stream
[15:05:46 CET] <g1itch> gotcha
[15:06:06 CET] <g1itch> will test the asetrate. i would find it very unusual for a live stream to have a tempo issue with the audio, right?
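The asetrate suggestion above would look something like this in full. The RTSP URL is a placeholder, and chaining aresample back to a standard rate is an extra assumption so downstream players still see a normal sample rate.

```shell
# Print an ffmpeg command that doubles perceived pitch/speed of 8 kHz
# audio: asetrate relabels the samples as 16 kHz, aresample converts
# the result back to a standard output rate.
IN='rtsp://camera.example/stream'
CMD="ffmpeg -i $IN -af asetrate=16000,aresample=44100 -c:v copy out.flv"
echo "$CMD"
```

Note asetrate raises pitch and tempo together, which is exactly what's wanted if the source audio really is running at half speed.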
[15:13:45 CET] <GTest1989> Hello, anyone available to help?
[15:15:14 CET] <sn00ker> nobody have an answer for me?
[15:39:21 CET] <GTest1989> Anyone here to help with a Decklink issue?
[15:52:39 CET] <GTest1989> Anyone available to assist?
[15:57:14 CET] <DHE> GTest1989: just say what your problem is and someone will reply at their earliest convenience
[16:17:35 CET] <th3_v0ice> How can I extract every frame from a video rescale it to 90p and save as raw yuv file? This doesnt work : -i test.mp4 -s 128x90 -pix_fmt yuv420p -c:v rawvideo test_%05d.yuv
[16:24:59 CET] <DHE> th3_v0ice: try adding "-f image2" just before the output filename
[16:26:52 CET] <g1itch> so is anyone willing to help troubleshoot a streaming issue with me? it's an rtsp stream via ffmpeg, but ffmpeg and VLC (i tried multiple platforms to make sure the issue was consistent) have the audio really low - i can't tell if the pitch is just off or if the audio is somehow being played back at a slower rate?
[16:27:16 CET] <g1itch> i tried asetrate=16000 (ffmpeg says the audio sample rate is 8000) and i tried atempo=2.0 but neither of them seem to affect the stream
[16:28:31 CET] <BtbN> lower rate? Too low? I don't follow.
[16:28:42 CET] <BtbN> So is it too quiet, or otherwise messed up?
[16:29:12 CET] <g1itch> messed up - like if someone talks their voices are extremely deep
[16:33:22 CET] <th3_v0ice> DHE: Everything seems to be just gray color, but it is producing individual files
[16:38:25 CET] <th3_v0ice> DHE: It doesn't matter if I scale it or not.
[16:42:54 CET] <th3_v0ice> I will use python to compare bytes, thanks for the help :)
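For anyone hitting the same thing, a sketch of the per-frame raw YUV extraction with DHE's `-f image2` suggestion. 160x90 is assumed here to keep 16:9 at 90p (substitute 128x90 if that was intentional); the filenames are placeholders.

```shell
# Print an ffmpeg command that scales every frame to 160x90 and writes
# each one as a separate raw planar yuv420p file via the image2 muxer.
CMD='ffmpeg -i test.mp4 -vf scale=160:90 -pix_fmt yuv420p -c:v rawvideo -f image2 test_%05d.yuv'
echo "$CMD"
# each output file should be 160*90*3/2 bytes of planar YUV
BYTES_PER_FRAME=$(( 160 * 90 * 3 / 2 ))
echo "$BYTES_PER_FRAME bytes per frame"
```

If the frames look gray in a raw-YUV viewer, double-check the viewer is told the same resolution and pixel format the files were written with.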
[17:38:26 CET] <sn00ker> i want to re-encode a movie in 2-pass mode
[17:38:35 CET] <sn00ker> but the filesize stays the same
[17:42:20 CET] <kepstin> not sure what you mean. when you're encoding in 2-pass mode, you set a bitrate - and the filesize is bitrate × time
[17:42:37 CET] <kepstin> so you ask the encoder to make a file of a specific size, and it tries to get as close as possible
[17:43:05 CET] <kepstin> if you want a different file size, calculate a bitrate that would result in the desired size
[17:43:20 CET] <DHE> 2pass exists to provide best possible quality with a file size (and/or bitrate) budget
[17:44:36 CET] <kepstin> if you want the encoder to make the file as small as possible while maintaining a consistent quality, then you should be using (single-pass) crf mode with x264 (or x265 i guess)
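kepstin's filesize = bitrate × time relation, worked through as a quick calculation (all numbers are example values):

```shell
# Work backwards from a target file size to the -b:v value for a 2-pass
# encode: total bits divided by duration, minus the audio's share.
TARGET_MiB=700           # desired output size
DURATION_S=5400          # 90-minute movie
AUDIO_KBITS=128          # audio bitrate in kbit/s
total_kbits=$(( TARGET_MiB * 8192 ))                      # MiB -> kbit
video_kbits=$(( total_kbits / DURATION_S - AUDIO_KBITS )) # kbit/s for video
echo "use -b:v ${video_kbits}k"
```

With these numbers the result is 933, i.e. `-b:v 933k`; container overhead eats a little of the budget, so real output lands slightly over the video+audio total.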
[17:44:44 CET] <sn00ker> https://nopaste.linux-dev.org/?1191602
[17:44:47 CET] <sn00ker> i do this
[17:45:39 CET] <sn00ker> and datei.mp4 and output.mp4 have the same filesize
[17:45:54 CET] <sn00ker> datei.mp4 has a 1200k bitrate
[17:47:52 CET] <kepstin> i'd assume that either your input is actually not 1200k, or the audio track is just really big.
[17:48:36 CET] <sn00ker> fflogger, the command is running again. i'll wait for it to finish
[17:48:58 CET] <sn00ker> In the meantime, I can look at another problem. what else can ffmpeg do?
[17:48:59 CET] <kepstin> but yeah, you'd probably be better served by using a single-pass "crf" mode encode unless you really need an exact target bitrate
[17:50:10 CET] <sn00ker> I would like to create two /dev/video devices with v4l and then write to them with ffmpeg. Is that possible?
[17:50:46 CET] <sn00ker> From these two streams, I would like to make a single stream again. So both /dev/video devices as input and then an RTMP as output. Is that possible?
[17:51:32 CET] <kepstin> yes, ffmpeg can read from multiple v4l inputs, combine them into a single video using filters, then stream output to rtmp.
[17:51:55 CET] <kepstin> But note that ffmpeg sometimes has issues with realtime streaming multiple inputs - they might not be synchronized.
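A sketch of the pipeline kepstin describes, assuming two same-height devices placed side by side with hstack; the server URL and encoder settings are placeholders.

```shell
# Print an ffmpeg command that reads two v4l2 devices, stacks them
# horizontally into one picture, encodes with x264, and pushes the
# result to an RTMP server as FLV.
CMD='ffmpeg -f v4l2 -i /dev/video0 -f v4l2 -i /dev/video1 \
  -filter_complex "[0:v][1:v]hstack=inputs=2[v]" -map "[v]" \
  -c:v libx264 -preset veryfast -g 50 -f flv rtmp://server.example/live/key'
echo "$CMD"
```

vstack or xstack work the same way for other layouts; as noted above, the two live inputs may still drift out of sync.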
[17:52:07 CET] <sn00ker> OK. and then I can send a picture on a dev, on a device a video and ffmpeg mixes me?
[17:53:09 CET] <kepstin> I can't understand what you just wrote.
[17:53:23 CET] <sn00ker> I don't really want to mix both ... I just want a permanent rtmp stream that sends nothing if I write nothing to the video device, and sends the video when it is there, but the connection should be preserved
[17:54:02 CET] <kepstin> sn00ker: The ffmpeg cli tool cannot do that, it is not designed to dynamically start/stop inputs.
[17:54:16 CET] <sn00ker> hmpf..
[17:54:21 CET] <sn00ker> im german.. i hate english
[17:54:30 CET] <sn00ker> I can not explain that correctly
[17:54:47 CET] <kepstin> sn00ker: It is possible to write a custom application that does that using ffmpeg libraries. Or you could use a dedicated streaming tool like OBS.
[17:55:10 CET] <sn00ker> OBS isn't possible from the cli
[17:55:53 CET] <sn00ker> ffmpeg should join /dev/video0 and /dev/video1 for me and send to an RTMP server, whether anything is coming in on /dev/video or not
[17:58:47 CET] <kepstin> If ffmpeg tries to read a frame from an input, but no frame is available, then ffmpeg will stop and wait. It will stop sending output.
[17:59:27 CET] <sn00ker> yes and that is exactly my problem.
[17:59:50 CET] <sn00ker> how and with what are such things produced? such streams, I mean? but on the console, not graphically
[18:00:21 CET] <kepstin> You can write a custom tool to do it. Most people probably use GUI tools like OBS.
[18:00:44 CET] <kepstin> Sometimes you can stop ffmpeg & start a different ffmpeg, and the gap is small enough that the stream looks ok
[18:01:48 CET] <sn00ker> No. That would also disconnect and re-establish the rtmp connection, and that's exactly what should be prevented. The stream should remain permanently connected whether there is input or not; during that time ffmpeg should send an empty stream
[18:02:04 CET] <kepstin> ffmpeg cli does not support that. You need a different tool.
[18:03:11 CET] <sn00ker> Great. So back to torturing Google. It can't be that no software exists for this - all the streaming providers, all the TV providers, they must use some software too
[18:03:33 CET] <sn00ker> does ffmpeg stop the connection immediately when no frame comes?
[18:11:23 CET] <kepstin> ffmpeg CLI is a single-threaded batch processing tool. It does a simple loop - read, decode, filter, encode, write. If the read blocks or the write blocks, then ffmpeg will stall.
[18:11:44 CET] <kepstin> The ffmpeg libraries (libavcodec, etc.) can be used to write a proper realtime streaming tool
[18:14:57 CET] <kepstin> I don't know what software streaming providers or TV providers use, but the key part is to have a video mixer that generates continuous output.
[18:19:50 CET] <sn00ker> can i use this?
[18:19:51 CET] <sn00ker> https://nopaste.linux-dev.org/?1191603
[18:19:58 CET] <sn00ker> but with /dev/video
[18:22:42 CET] <kepstin> sn00ker: that idea is ok, but the main problem is that you have to detect when the device starts/stops and change the ffmpeg input command to switch between reading from the device and generating blank video.
[18:24:21 CET] <sn00ker> I thought so. I'll run two loops. These loops will permanently send a picture to /dev/video0 and /dev/video1, so frames are always there on both. I merge these together with ffmpeg into the stream
[18:24:23 CET] <zerodefect> In the C-API, is it possible to get to the 708 caption data that is embedded in the user_data of the elementary stream?
[18:25:18 CET] <sn00ker> when I want to stream, a script detects the connection and terminates the loop for videoX; as soon as my program disconnects, the loop starts again and a still picture is sent
[19:06:17 CET] <kevinnn> Can anyone link me to a very simple windows desktop duplication API c++ example?
[19:06:46 CET] <kevinnn> I am pulling my hair out over here trying to figure out the convoluted example windows has on their site
[19:07:35 CET] <TiZ> Hi there. I'm having a great deal of trouble capturing my desktop and encoding with VAAPI at a stable 30 FPS. I've compiled ffmpeg and its components all from source. I've tried with both 4.1 and master. I made a script to generate a ffmpeg command line and capture the output. The command and the output are here: https://pastebin.com/bG4WKuH8 (The ffmpeg pastebin doesn't exist anymore.) What am I doing wrong?
[19:21:32 CET] <remus> Ok - I took a recording of the RTSP stream, extracted the audio and did some manual adjustments of it - it looks like the audio in the stream is not only a lower pitch but it's also slowed down. Does that make sense for a live streaming IP camera?
[20:50:43 CET] <dryft> Evening people, I'm using ffmpeg to split 5.1 audio into individual wavs. This works fine but I'd like to specify a region with "-ss" & "-to" to avoid the huge file sizes, however ffmpeg seems to ignore them. Is it not possible with this action?
[20:52:11 CET] <dryft> here's the cmd https://0x0.st/sFc4.txt
[20:53:10 CET] <DHE> order of parameters matters. that only applies -ss and -to to the first .wav file written
[20:53:22 CET] <DHE> you'll have to either repeat them for each .wav file, or apply them to the input itself instead
[20:54:31 CET] <dryft> silly me
[20:54:36 CET] <dryft> ty
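Concretely, the input-side variant DHE describes: moving -ss/-to before -i trims the input once, so every output inherits the region. The timestamps and filenames are placeholders, and channelsplit is assumed as the 5.1-splitting method since the original command isn't shown here.

```shell
# Print the corrected command shape: -ss/-to as *input* options apply
# the trim before demuxing, hence to all six output wavs at once.
CMD='ffmpeg -ss 00:01:00 -to 00:02:00 -i input.mkv \
  -filter_complex "channelsplit=channel_layout=5.1[FL][FR][FC][LFE][BL][BR]" \
  -map "[FL]" FL.wav -map "[FR]" FR.wav -map "[FC]" FC.wav \
  -map "[LFE]" LFE.wav -map "[BL]" BL.wav -map "[BR]" BR.wav'
echo "$CMD"
```

The alternative DHE mentions, repeating -ss/-to before each output file, also works but trims each output independently.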
[21:38:18 CET] <th3_v0ice> Is mp4 muxer doing anything to the I frames?
[22:34:03 CET] <kepstin> th3_v0ice: I'd assume that it's putting them into the file, and possibly marking their locations into the index if appropriate.
[22:34:41 CET] Action: kepstin is being a bit silly, and isn't familiar with the issue you're having.
[22:40:03 CET] <th3_v0ice> For some reason two mp4 files have a one-byte difference in all I frames, and because of that the frame is not properly decoded.
[22:41:39 CET] <kepstin> what codec? what command is being used to create the files?
[00:00:00 CET] --- Tue Jan 22 2019


More information about the Ffmpeg-devel-irc mailing list